Feed aggregator

Change Decimal pointer

Tom Kyte - Thu, 2016-12-08 12:06
Hi, please tell me how I can replace the decimal separator dot(.) with comma(,) in an Oracle database permanently. I want to change it in the database. Also, no separator is required for numeric values. All decimal fields of the table will accept comma(,) i...
Categories: DBA Blogs

How to use Clob parameter inside dynamic sql statement to execute immediate

Tom Kyte - Thu, 2016-12-08 12:06
Hello, Ask Tom team! Here is my problem. I have a package procedure with some parameters: procedure updt(p_col varchar2, p_value clob) is v_update_clob clob; begin v_update_clob := 'update table set :p_col=:p_value where cond = ''some_conditi...
Categories: DBA Blogs

Video - Live Webinar - Master Class - ADF Bindings Explained

Andrejus Baranovski - Thu, 2016-12-08 09:47
Posting the 2-hour video recording of yesterday's webinar - Master Class - ADF Bindings Explained. Thanks to everyone for attending. It was a large crowd, nearly 100 attendees live. I enjoyed answering all of your questions; hopefully this will be useful in your daily work.


Webinar topics:

1. ADF Bindings overview. Why ADF Bindings are required and how they are useful
2. Drill down into ADF Bindings. Explanation of how a binding object is executed, from the UI fragment down to the Page Definition
3. ADF Binding types explanation. Information about the different bindings generated when using JDeveloper wizards. What happens with ADF Bindings when using LOV, table, ADF Query, and Task Flow wizards.
4. Declarative ADF binding access with expressions
5. Programmatic ADF binding access from managed beans
6. ADF binding sharing and access from ADF Task Flows. How to create a binding layer for Task Flow method call or router activities.
7. Best practices for ADF Bindings
8. Your questions

Download webinar ADF 12c app from GitHub - ADFAltaApp

Oracle Application Container Cloud Service Now Supports Clustered Applications!

I am happy to let you know that, starting from December 2016, Oracle AcCS supports clustering, so you can deploy your clustered applications to Oracle AcCS with standard scaling of...

Categories: DBA Blogs

The Rittman Mead Open Source Project

Rittman Mead Consulting - Thu, 2016-12-08 08:00

We have a strong innovation spirit at Rittman Mead, with all staff encouraged to use technology to its best advantage in order to do things with the software that haven't been done before. Some of these projects may be 'scratching the itch' of a repeated manual task that should be automated. Others use technology to extend the capabilities of the tools, or to write new ones to fill gaps that have been identified.

At Rittman Mead we pride ourselves on sharing our knowledge with the BI/DI community, both 'offline' at conferences and online through our blog. Today we are excited to extend this further with the open-source release, over the next few days and weeks, of some key code projects:

  • insights - a JavaScript API/framework for building a new frontend for OBIEE on top of the OBIEE web service interface, as described here
  • vpp - "Visual Plugin Pack" - innovative visualisation capabilities to use natively within OBIEE
  • obi-enhanced-usage-tracking - the ability to track and audit user behaviour per-click, as described here

They will be available shortly on the Rittman Mead GitHub repository, all released under the MIT license.

These projects are in addition to existing code that we have shared with the community over the years, including the obi-metrics-agent tool and the popular OBIEE 11g Linux service script.

We're very excited about opening up these projects to the community, and would be delighted to see forks and pull-requests as people build and expand on them. It should go without saying, but these are contributed 'as is'; any bugs and problems you find we will happily receive a pull request for :-)

If you would like help implementing and extending these for your own project, we would be delighted to offer services in doing so - just get in touch to find out more.

Over the next few weeks keep an eye on the blog for more information about each project, and future ones.

Categories: BI & Warehousing

An Oracle DVD story of... DVDs

Rittman Mead Consulting - Thu, 2016-12-08 07:00

Have you ever wondered what the trend in movie releases has been for the past few decades? Comparing the number of Sci-Fi releases vs. Romantic Comedy releases? Me too, which is why I've taken my first look at Oracle Data Visualization Desktop (DVD) to spot trends between these two movie genres - Sci-Fi and Romantic Comedy.

For this post, I found an interesting dataset from IMDB.com on Kaggle.com, listing a smattering of movies since the early 1900s to sort and analyze. For this example, I will contrast the number of releases for the two movie genres and look for any possible relationship between them.
If you haven't installed the application yet, take a quick look at Matthew Walding's post for a good introduction. Oracle's DVD installer is fairly quick and simple, and you'll be creating visualizations in no time.

So, once the DVD application is running, we can create our first project:

Or, alternatively...


 Next, we'll need a data source:

And, we'll import the CSV-formatted file I downloaded from Kaggle.com earlier:


 Select the "movie_metadata.csv" file to import:

And, change the Name to "IMDB Movies A" for clarification:

After the file has been imported, we see a problem:

Clicking "More Detail", the following screen displays the detail we can use to troubleshoot the query error:

For troubleshooting, I used the highlighted value and found the problem is with the "budget" column, which requires a datatype change from "Integer" to "Double":

The next task is to create a method of identifying a specific movie genre. However, as you can see, all genre labels for each movie are stored in a pipe-delimited value in the genres column:

So, for this demonstration, I've chosen to add a calculated column for each genre I want to analyze, locating the desired string within the pipe-delimited value, under the assumption that the same value ("Sci-Fi", for instance) is recorded with the same characters in every occurrence of each pipe-delimited value.

For the first column (data element), I chose the functions LOCATE and SIGN to provide a simple logical indicator (0 or 1) that can be aggregated (summed) easily.

Click the "Validate" button to verify syntax:

The LOCATE function returns a positive integer where the expression "Sci-Fi" is located in a given string, the genres data element in this case. The SIGN function subsequently returns either a 0, 1, or -1, depending on the sign of the resulting integer from the LOCATE function. 1 (one) indicates yes, this movie release includes a Sci-Fi label for genre. 0 (zero) indicates a missing Sci-Fi label for genre, for example.
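For reference, here is a sketch of the expression behind this first column; the column name and exact argument order are assumptions based on the description above:

genre_SciFi_Ind = SIGN(LOCATE('Sci-Fi', genres))

Since LOCATE returns 0 when the substring is absent, SIGN collapses the result to a clean 0/1 indicator.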

Here is the new column, appended to our existing dataset:


Next, I will create another column to identify the Romantic Comedy genre - genre_RomCom_Ind, as follows:
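As a sketch, assuming (my assumption, not necessarily the exact formula used here) that a movie counts as a Romantic Comedy only when both the Romance and Comedy labels appear in the genres value:

genre_RomCom_Ind = SIGN(LOCATE('Romance', genres)) * SIGN(LOCATE('Comedy', genres))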

With the two new data columns, our dataset is expanded accordingly:


Now, let's create the visualizations:


Let's create a bar graph for each of our new Indicator columns, starting with the Sci-Fi genre:


We'll create a filter to include data only for title years between 1977 and 2015:


Afterward, our initial graph appears as follows, with a default aggregate summing all genre_SciFi_Ind values (0 or 1) for each title year:


Now, let's add a similar bar graph for all Romantic Comedy (genre_RomCom_Ind) releases. Notice, the same filter for title year will be applied to this new graph:


Next, I'll change the labels for each graph, providing proper context for the visualization:


We can also change the aggregate method used for the graph, when necessary:


For an added touch, let's add a trend line to the bar graph for even easier viewing:


And now, our graph appears as follows:


Applying similar modifications to our Sci-Fi Releases graph and displaying both graphs together on Canvas 1, we have the following:


Next, we'll add this Canvas to an Insight, select Narrate, and add our own description of any interesting comparisons we can identify:


Viewing the two graphs side by side, we notice one interesting outlier: in 2010, the number of Romantic Comedy releases outnumbered Sci-Fi releases by 21 movies. On this Insight, I can enter a description (narration) beneath the graphs highlighting this departure from the plotted trend line, as shown below. Another interesting, and unexpected, trend is the decreasing number of Romantic Comedy releases after 2008. But do these two graphs display an obvious relationship, or correlation, between the two genres, either positive or negative? Looking at each trend line between 1990 and 2008, both genres show a generally increasing number of releases, which leaves the question of correlation somewhat inconclusive, although the period after 2008 seems to indicate an inverse relationship.


At this point, the project can be saved (with a new title), exported, and/or printed:


In this post, I've demonstrated a basic example as an introduction to Oracle Data Visualization Desktop, using IMDB movie data to visually quantify the number of movie releases over a given period, 1977 to 2015, and creating custom calculations and dynamic visualizations for our particular measures.

One note I would add: I did not attempt to eliminate any overlapping indicators in the newly added columns, genre_SciFi_Ind and genre_RomCom_Ind; they were minimal. Although it is possible for a movie to be labeled as all three - Sci-Fi, Romance, and Comedy - it did not distort the overall trend. And these graphs now raise questions: can we know what influences the release of Sci-Fi movies and their increasing popularity? Are Romantic Comedy movies truly decreasing in popularity, or did the studios choose to decrease the number of RomCom releases because of the surge in Sci-Fi releases? We all understand the limits of studios' investment capital, but must the RomCom genre suffer because of the Sci-Fi genre? If so, why? I realize this is a simplified view of the trend, but it does lend itself to more scrutiny among other genres as well.

Categories: BI & Warehousing

Oracle TAN Function with Examples

Complete IT Professional - Thu, 2016-12-08 05:00
In this article, I’ll explain the Oracle TAN function, what it is, and show you some examples. Purpose of the Oracle TAN Function The purpose of the Oracle TAN function is to calculate the tangent of a number. The tangent is the ratio of the length of the side opposite the angle to the length of the […]
Categories: Development

Can I do it with PostgreSQL? – 7 – Partitioning

Yann Neuhaus - Thu, 2016-12-08 04:19

PostgreSQL supports tables up to 32TB. Do you want to be the one responsible for managing such a table? I guess not. Usually you start to partition your tables when they grow very fast and consume hundreds of gigabytes. Can PostgreSQL do this? Do you know what table inheritance is? No? PostgreSQL implements partitioning by using table inheritance and constraint exclusion. Sounds strange? Let’s have a look …

As usual I am running the currently latest version of PostgreSQL:

postgres@pgbox:/home/postgres/ [PG961] psql postgres
psql (9.6.1 dbi services build)
Type "help" for help.

(postgres@[local]:5439) [postgres] > select version();
                                                          version                                                           
----------------------------------------------------------------------------------------------------------------------------
 PostgreSQL 9.6.1 dbi services build on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-4), 64-bit
(1 row)

Time: 0.513 ms
(postgres@[local]:5439) [postgres] > 

So, what is table inheritance? In PostgreSQL you can do things like this:

(postgres@[local]:5439) [postgres] > create table databases ( name varchar(10), vendor varchar(10) );
CREATE TABLE
Time: 20.477 ms
(postgres@[local]:5439) [postgres] > create table databases_rdbms ( rdbms boolean ) inherits (databases);
CREATE TABLE
Time: 20.080 ms
(postgres@[local]:5439) [postgres] > create table databases_nosql ( nosql boolean ) inherits (databases);
CREATE TABLE
Time: 22.048 ms

What we’ve done here is create three tables in total. The "databases_rdbms" and "databases_nosql" tables inherit from the "databases" table. What does that mean? Let’s insert some data into the tables that inherit from the "databases" table:

(postgres@[local]:5439) [postgres] > insert into databases_rdbms values ('PostgreSQL','Community',true);
INSERT 0 1
Time: 20.215 ms
(postgres@[local]:5439) [postgres] > insert into databases_rdbms values ('MariaDB','MariaDB',true);
INSERT 0 1
Time: 1.666 ms
(postgres@[local]:5439) [postgres] > insert into databases_nosql values ('MongoDB','MongoDB',true);
INSERT 0 1
Time: 1.619 ms
(postgres@[local]:5439) [postgres] > insert into databases_nosql values ('Cassandra','Apache',true);
INSERT 0 1
Time: 0.833 ms

Note that we did not insert any data into the “databases” table, but when we query the “databases” table we get this result:

(postgres@[local]:5439) [postgres] > select * from databases;
    name    |  vendor   
------------+-----------
 PostgreSQL | Community
 MariaDB    | MariaDB
 MongoDB    | MongoDB
 Cassandra  | Apache
(4 rows)

All the data from all child tables has been retrieved (of course without the additional column on the child tables). We can still query the child tables:

(postgres@[local]:5439) [postgres] > select * from databases_rdbms;
    name    |  vendor   | rdbms 
------------+-----------+-------
 PostgreSQL | Community | t
 MariaDB    | MariaDB   | t
(2 rows)

Time: 0.224 ms
(postgres@[local]:5439) [postgres] > select * from databases_nosql;
   name    | vendor  | nosql 
-----------+---------+-------
 MongoDB   | MongoDB | t
 Cassandra | Apache  | t
(2 rows)

But when we query “only” on the master table there is no result:

(postgres@[local]:5439) [postgres] > select * from only databases;
 name | vendor 
------+--------
(0 rows)

Of course for this specific example it would be better to add an additional column to the master table which specifies if a database is a NoSQL database or not. This is just to show how it works. There is a good example for another use case in the documentation.
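For illustration, such a redesigned master table could simply be (a sketch, not part of the demo):

(postgres@[local]:5439) [postgres] > create table databases ( name varchar(10), vendor varchar(10), nosql boolean );
CREATE TABLE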

What does all this have to do with partitioning? When you want to partition your tables in PostgreSQL you’ll do exactly the same thing:

(postgres@[local]:5439) [postgres] > create table log_data ( id int, some_data varchar(10), ts date );
CREATE TABLE
(postgres@[local]:5439) [postgres] > create table log_data_2016() inherits ( log_data );
CREATE TABLE
(postgres@[local]:5439) [postgres] > create table log_data_2015() inherits ( log_data );
CREATE TABLE

We want to partition our log data by year, so we create a child table for each year we know we have data for. What we additionally need is a check constraint on each of the child tables:
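The constraints can be added like this (a sketch; the date ranges match the \d+ output below):

(postgres@[local]:5439) [postgres] > alter table log_data_2016 add constraint cs1 check ( ts >= '2016-01-01'::date and ts < '2017-01-01'::date );
ALTER TABLE
(postgres@[local]:5439) [postgres] > alter table log_data_2015 add constraint cs1 check ( ts >= '2015-01-01'::date and ts < '2016-01-01'::date );
ALTER TABLE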

(postgres@[local]:5439) [postgres] > \d+ log_data_2016
                             Table "public.log_data_2016"
  Column   |         Type          | Modifiers | Storage  | Stats target | Description 
-----------+-----------------------+-----------+----------+--------------+-------------
 id        | integer               |           | plain    |              | 
 some_data | character varying(10) |           | extended |              | 
 ts        | date                  |           | plain    |              | 
Check constraints:
    "cs1" CHECK (ts >= '2016-01-01'::date AND ts < '2017-01-01'::date)
Inherits: log_data

(postgres@[local]:5439) [postgres] > \d+ log_data_2015
                             Table "public.log_data_2015"
  Column   |         Type          | Modifiers | Storage  | Stats target | Description 
-----------+-----------------------+-----------+----------+--------------+-------------
 id        | integer               |           | plain    |              | 
 some_data | character varying(10) |           | extended |              | 
 ts        | date                  |           | plain    |              | 
Check constraints:
    "cs1" CHECK (ts >= '2015-01-01'::date AND ts < '2016-01-01'::date)
Inherits: log_data

This guarantees that each child table only gets data for a specific year. So far so good. But how does PostgreSQL know that inserts into the master table should get routed to the corresponding child table? This is done by using a trigger:

(postgres@[local]:5439) [postgres] > CREATE OR REPLACE FUNCTION log_data_insert_trigger()
RETURNS TRIGGER AS $$
BEGIN
  IF ( NEW.ts >= DATE '2015-01-01' AND
       NEW.ts < DATE '2016-01-01' ) THEN
    INSERT INTO log_data_2015 VALUES (NEW.*);
  ELSIF ( NEW.ts >= DATE '2016-01-01' AND
          NEW.ts < DATE '2017-01-01' ) THEN
    INSERT INTO log_data_2016 VALUES (NEW.*);
  ELSE
    RAISE EXCEPTION 'Date out of range!';
  END IF;
  RETURN NULL;
END;
$$
LANGUAGE plpgsql;

CREATE TRIGGER insert_log_data_trigger
BEFORE INSERT ON log_data
FOR EACH ROW EXECUTE PROCEDURE log_data_insert_trigger();

From now on, inserts against the master table go to the corresponding child table:

(postgres@[local]:5439) [postgres] > insert into log_data values ( 1, 'aaaa', date('2016.03.03'));
INSERT 0 0
(postgres@[local]:5439) [postgres] > insert into log_data values ( 2, 'aaaa', date('2015.03.03'));
INSERT 0 0
(postgres@[local]:5439) [postgres] > select * from log_data;
 id | some_data |     ts     
----+-----------+------------
  1 | aaaa      | 2016-03-03
  2 | aaaa      | 2015-03-03
(2 rows)
(postgres@[local]:5439) [postgres] > select * from log_data_2015;
 id | some_data |     ts     
----+-----------+------------
  2 | aaaa      | 2015-03-03
(1 row)

(postgres@[local]:5439) [postgres] > select * from log_data_2016;
 id | some_data |     ts     
----+-----------+------------
  1 | aaaa      | 2016-03-03
(1 row)

Selects against the master table that filter on the ts column now scan only the relevant child table:

(postgres@[local]:5439) [postgres] > explain analyze select * from log_data where ts = date ('2016.03.03');
                                                  QUERY PLAN                                                   
---------------------------------------------------------------------------------------------------------------
 Append  (cost=0.00..23.75 rows=7 width=46) (actual time=0.006..0.006 rows=1 loops=1)
   ->  Seq Scan on log_data  (cost=0.00..0.00 rows=1 width=46) (actual time=0.002..0.002 rows=0 loops=1)
         Filter: (ts = '2016-03-03'::date)
   ->  Seq Scan on log_data_2016  (cost=0.00..23.75 rows=6 width=46) (actual time=0.004..0.004 rows=1 loops=1)
         Filter: (ts = '2016-03-03'::date)
 Planning time: 0.131 ms
 Execution time: 0.019 ms
(7 rows)
(postgres@[local]:5439) [postgres] > explain analyze select * from log_data where ts = date ('2015.03.03');
                                                  QUERY PLAN                                                   
---------------------------------------------------------------------------------------------------------------
 Append  (cost=0.00..23.75 rows=7 width=46) (actual time=0.007..0.007 rows=1 loops=1)
   ->  Seq Scan on log_data  (cost=0.00..0.00 rows=1 width=46) (actual time=0.002..0.002 rows=0 loops=1)
         Filter: (ts = '2015-03-03'::date)
   ->  Seq Scan on log_data_2015  (cost=0.00..23.75 rows=6 width=46) (actual time=0.004..0.004 rows=1 loops=1)
         Filter: (ts = '2015-03-03'::date)
 Planning time: 0.102 ms
 Execution time: 0.019 ms
(7 rows)

Of course you can create indexes on the child tables as well. This is basically how partitioning works in PostgreSQL. To be honest, it is not the most beautiful way to do partitioning and it can become tricky to manage. But as always there are projects that assist you, e.g. pg_partman or pg_pathman.

Wouldn’t it be nice to have a SQL syntax for table partitioning? Exactly this was committed yesterday and will probably arrive in PostgreSQL 10 next year. The development documentation already describes the syntax:

[ PARTITION BY { RANGE | LIST } ( { column_name | ( expression ) } [ COLLATE collation ] [ opclass ] [, ... ] ) ]
[ WITH ( storage_parameter [= value] [, ... ] ) | WITH OIDS | WITHOUT OIDS ]
[ ON COMMIT { PRESERVE ROWS | DELETE ROWS | DROP } ]
[ TABLESPACE tablespace_name ]
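As a sketch of where this is heading, the log_data example could then be written declaratively, based on the development documentation at the time of writing (details may change before release):

CREATE TABLE log_data ( id int, some_data varchar(10), ts date )
PARTITION BY RANGE (ts);

CREATE TABLE log_data_2016 PARTITION OF log_data
    FOR VALUES FROM ('2016-01-01') TO ('2017-01-01');
CREATE TABLE log_data_2015 PARTITION OF log_data
    FOR VALUES FROM ('2015-01-01') TO ('2016-01-01');

No triggers and no hand-written check constraints: tuple routing is handled by the server.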
 

Cet article Can I do it with PostgreSQL? – 7 – Partitioning est apparu en premier sur Blog dbi services.

OEL 7 – Project Quotas on Oracle Homes with XFS on Oracle Linux 7

Yann Neuhaus - Thu, 2016-12-08 02:43

User and group quotas have existed for quite a while for all kinds of file systems, like ext4 or vxfs and maybe many others.

However, for my use case, I need quotas for different directories on the same file system. My mount point is /u01, and in that file system I have different Oracle Homes which belong to the same user, oracle. However, I do not want my 11.2.0.4 Oracle Home to influence my 12.1.0.2 Oracle Home in terms of file system usage: e.g. if the 11.2.0.4 home is core dumping, it should not fill up the space of the 12.1.0.2 home, and the other way around. But how can I do that?

The idea is to create XFS project quotas, and this is how it works.

First of all, we need to enable project quotas on the XFS file system.  /u01 is currently mounted with XFS default options, which are (rw,relatime,attr2,inode64,noquota). As you can see, the default is “noquota”.

[root@dbidg01 ~]# mount | grep u01
/dev/mapper/vg_ora_u01-lv_ora_u01 on /u01 type xfs (rw,relatime,attr2,inode64,noquota)

We can enable project quotas by adding the “prjquota” mount option to the /etc/fstab and afterwards remounting the file system.
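The /etc/fstab entry would then look something like this (a sketch based on the mount output above; the device name and options will differ per system):

/dev/mapper/vg_ora_u01-lv_ora_u01  /u01  xfs  rw,relatime,attr2,inode64,prjquota  0 0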


Unfortunately, the remount option does not work with the XFS filesystem. I can remount the file system, but my new option "prjquota" is not reflected.

[root@dbidg01 ~]# mount -o remount,rw,relatime,attr2,inode64,prjquota /u01
[root@dbidg01 ~]# mount | grep u01
/dev/mapper/vg_ora_u01-lv_ora_u01 on /u01 type xfs (rw,relatime,attr2,inode64,noquota)

So I have to do a umount and mount. Not a good thing from my point of view, because it means that I cannot enable project quotas online for my /u01 directory, where I have different Oracle Homes located. In other words, I need to shut down all Oracle databases.

[root@dbidg01 ~]# umount /u01
[root@dbidg01 ~]# mount /u01
[root@dbidg01 ~]# mount | grep u01
/dev/mapper/vg_ora_u01-lv_ora_u01 on /u01 type xfs (rw,relatime,attr2,inode64,prjquota)

Ok. Now it looks better. The next step is to define unique project IDs for the different directory hierarchies in the /etc/projects file.

For example, to set a project ID of 11 for the directory hierarchy /u01/app/oracle/product/11.2.0, and a project ID of 12 for the directory hierarchy /u01/app/oracle/product/12.1.0, we can do the following.

# echo "11:/u01/app/oracle/product/11.2.0" >> /etc/projects
# echo "12:/u01/app/oracle/product/12.1.0" >> /etc/projects

[root@dbidg01 ~]# cat /etc/projects
11:/u01/app/oracle/product/11.2.0
12:/u01/app/oracle/product/12.1.0

If you don’t want to work with IDs, you can map project names to the project IDs in the /etc/projid file. This makes reporting quota usage much easier, as we will see later.

For example, to map the project name oracle11gR2 to the project with ID 11, and the project name oracle12cR1 to ID 12, do the following.

# echo "oracle11gR2:11" >> /etc/projid
# echo "oracle12cR1:12" >> /etc/projid

Now use the project subcommand of xfs_quota to define a managed tree in the XFS file system for the different projects.

For example, to define a managed tree in the /u01 file system for the project oracle11gR2, which corresponds to the directory hierarchy /u01/app/oracle/product/11.2.0, do the following.

# xfs_quota -x -c 'project -s oracle11gR2' /u01

[root@dbidg01 etc]# xfs_quota -x -c 'project -s oracle11gR2' /u01
Setting up project oracle11gR2 (path /u01/app/oracle/product/11.2.0)...
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/bin/lbuilder
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/jdk/jre/lib/amd64/server/libjsig.so
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/lib/libagtsh.so
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/lib/libodm11.so
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/lib/libclntsh.so.10.1
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/lib/libocci.so
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/lib/libclntsh.so
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/lib/liborasdk.so
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/lib/liborasdkbase.so
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/network/admin/listener.ora
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/network/admin/sqlnet.ora
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/network/admin/tnsnames.ora
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/network/admin/ldap.ora
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/precomp/public/SQLCA.H
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/precomp/public/ORACA.H
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/precomp/public/SQLDA.H
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/precomp/public/SQLCA.COB
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/precomp/public/ORACA.COB
Processed 1 (/etc/projects and cmdline) paths for project oracle11gR2 with recursion depth infinite (-1).

The same applies to project oracle12cR1.

# xfs_quota -x -c 'project -s oracle12cR1' /u01

[root@dbidg01 etc]# xfs_quota -x -c 'project -s oracle12cR1' /u01
Setting up project oracle12cR1 (path /u01/app/oracle/product/12.1.0)...
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/bin/lbuilder
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/javavm/admin/classes.bin
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/javavm/admin/cbp.jar
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/javavm/admin/libjtcjt.so
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/javavm/lib/security/US_export_policy.jar
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/javavm/lib/security/cacerts
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/javavm/lib/security/java.security
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/javavm/lib/security/local_policy.jar
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/javavm/lib/jce.jar
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/javavm/lib/sunjce_provider.jar
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/jdk/bin/ControlPanel
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/jdk/jre/bin/ControlPanel
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/jdk/jre/javaws/javaws
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/jdk/jre/lib/amd64/server/libjsig.so
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/lib/libagtsh.so
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/lib/libjavavm12.a
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/lib/libodm12.so
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/lib/libocci.so
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/lib/libclntshcore.so
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/lib/libclntsh.so
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/lib/libclntsh.so.10.1
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/lib/libclntsh.so.11.1
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/network/admin/listener.ora
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/network/admin/sqlnet.ora
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/network/admin/ldap.ora
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/network/admin/tnsnames.ora
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/precomp/public/SQLCA.H
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/precomp/public/ORACA.H
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/precomp/public/SQLDA.H
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/precomp/public/SQLCA.COB
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/precomp/public/ORACA.COB
Processed 1 (/etc/projects and cmdline) paths for project oracle12cR1 with recursion depth infinite (-1).

Now the fun part begins, and we can use the limit subcommand to set limits on disk usage for the different projects. My 11.2.0.4 Oracle home is currently 5.7G in size, and the 12.1.0.2 Oracle home is 7.1G. For the 11g home I want to configure a soft limit of 8G and a hard limit of 12G, and for the 12c home a soft limit of 10G and a hard limit of 16G.

oracle@dbidg01:/u01/app/oracle/product/ [rdbms112] du -hs 11.2.0
5.7G    11.2.0

oracle@dbidg01:/u01/app/oracle/product/ [rdbms112] du -hs 12.1.0
7.1G    12.1.0

# xfs_quota -x -c 'limit -p bsoft=8g bhard=12g oracle11gR2' /u01
# xfs_quota -x -c 'limit -p bsoft=10g bhard=16g oracle12cR1' /u01

[root@dbidg01 ~]# xfs_quota -x -c 'report -p' /u01
Project quota on /u01 (/dev/mapper/vg_ora_u01-lv_ora_u01)
                               Blocks
Project ID       Used       Soft       Hard    Warn/Grace
---------- --------------------------------------------------
#0            8776636          0          0     00 [--------]
oracle11gR2    5883604    8388608   12582912     00 [--------]
oracle12cR1    7415292   10485760   16777216     00 [--------]

[root@dbidg01 ~]# xfs_quota -x -c 'report -h -p' /u01
Project quota on /u01 (/dev/mapper/vg_ora_u01-lv_ora_u01)
                        Blocks
Project ID   Used   Soft   Hard Warn/Grace
---------- ---------------------------------
#0           8.4G      0      0  00 [------]
oracle11gR2   5.6G     8G    12G  00 [------]
oracle12cR1   7.1G    10G    16G  00 [------]

The quotas are immediately visible to the df command. df on /u01 shows the full file system size, which is 50G; however, if I navigate to my 11g or 12c home, I see the soft limit I configured beforehand.

[root@dbidg01 product]# df -h /u01
Filesystem                         Size  Used Avail Use% Mounted on
/dev/mapper/vg_ora_u01-lv_ora_u01   50G   22G   29G  43% /u01

[root@dbidg01 ~]# cd /u01/app/oracle/product/11.2.0
[root@dbidg01 11.2.0]# df -h .
Filesystem                         Size  Used Avail Use% Mounted on
/dev/mapper/vg_ora_u01-lv_ora_u01  8.0G  5.7G  2.4G  71% /u01

[root@dbidg01 ~]# cd /u01/app/oracle/product/12.1.0
[root@dbidg01 12.1.0]# df -h .
Filesystem                         Size  Used Avail Use% Mounted on
/dev/mapper/vg_ora_u01-lv_ora_u01   10G  7.1G  3.0G  71% /u01

Conclusion

Project quotas with XFS are a quite cool feature. Limiting file system usage per project gives you huge flexibility. Setting quotas on individual Oracle Homes is one thing, but you could also limit file system usage for your Oracle databases in the DIAG directory based on ORACLE_SID, so that /u01/app/oracle/diag/rdbms/<SID1> can never fill up /u01/app/oracle/diag/rdbms/<SID2>. The only drawback I see is that enabling project quotas is not an online operation: you need to umount and mount the file system to activate this feature.
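As a sketch of that idea, with hypothetical SIDs and project IDs, the setup follows exactly the same pattern:

# echo "21:/u01/app/oracle/diag/rdbms/sid1" >> /etc/projects
# echo "22:/u01/app/oracle/diag/rdbms/sid2" >> /etc/projects
# echo "diag_sid1:21" >> /etc/projid
# echo "diag_sid2:22" >> /etc/projid
# xfs_quota -x -c 'project -s diag_sid1' /u01
# xfs_quota -x -c 'limit -p bsoft=2g bhard=4g diag_sid1' /u01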

Have fun with XFS quotas.

Cheers,
William

 

Cet article OEL 7 – Project Quotas on Oracle Homes with XFS on Oracle Linux 7 est apparu en premier sur Blog dbi services.

Updated: Why Does EBS Integration with Oracle Access Manager Require Oracle Internet Directory?

Steven Chan - Thu, 2016-12-08 02:05

[The article applies to EBS 12.1 and 12.2; it is an updated version of this older article that covered EBS 12.0 and 11i]

The E-Business Suite has its own security and user-management capabilities.  You can use the E-Business Suite's native features to authenticate users, authorize users (i.e. assign responsibilities to them), and manage your EBS user repository.  The majority of E-Business Suite system administrators simply use these built-in capabilities for enabling access to the E-Business Suite.

When EBS built-in capabilities aren't enough

Some organisations have third-party user authentication systems in place.  These include CA Netegrity SiteMinder, Windows Kerberos, and others.  These organisations frequently use third-party LDAP directory solutions such as Microsoft Active Directory, OpenLDAP, and others. 

We don't certify the E-Business Suite with those third-party products directly, and we don't have any plans to do so.  This article is intended to explain why Oracle Internet Directory (OID) is required when integrating with Oracle Access Manager (OAM). 

OAM and OID are mandatory for third-party integration

Oracle Internet Directory and Oracle Access Manager are mandatory requirements when integrating third-party authentication products directly with the E-Business Suite.

It is not possible to integrate the E-Business Suite directly with Microsoft Active Directory, Windows Kerberos, or CA Netegrity SiteMinder.

It's possible to integrate the E-Business Suite with those third-party solutions via Oracle Access Manager and Oracle Internet Directory.  See these articles:

Before going on, I'd recommend reading those third-party integration articles. If you don't have those concepts under your belt, the rest of this article isn't going to make much sense.

[Architecture diagram: Oracle Access Manager, Oracle Internet Directory, and the E-Business Suite AccessGate and WebGate]

Why does EBS require OID with OAM?

Oracle Access Manager itself doesn't require Oracle Internet Directory.  However, Oracle Internet Directory is a mandatory requirement when Oracle Access Manager is integrated with the E-Business Suite.

Why?  The short answer is that the E-Business Suite has hardcoded dependencies on Oracle Internet Directory for this configuration. These dependencies mean that you cannot replace Oracle Internet Directory with any third-party LDAP directory for this particular configuration. 

There are two cases of hardcoded dependencies on Oracle Internet Directory:

1. Reliance on Oracle GUIDs

From the articles linked above, you know that user authentication is handled by Oracle Access Manager, and user authorization is handled by the E-Business Suite itself.  This means that there are two different user namespaces. 

These namespaces must be linked and coordinated somehow, to ensure that a particular user logging in via Oracle Access Manager is the same user represented within the E-Business Suite's own internal FNDUSER repository.

We associate externally-managed Oracle Access Manager users with internally-managed E-Business Suite users via a Global Unique Identifier (GUID).  These Global Unique Identifiers are generated exclusively by Oracle Internet Directory. 

The E-Business Suite has hardcoded functions to handle the mapping of these Global Unique Identifiers between Oracle Access Manager and the E-Business Suite.  These mapping functions are specific to Oracle Internet Directory; it isn't possible to replace Oracle Internet Directory with a generic third-party LDAP directory and still preserve this functionality.

2. Synchronous user account creation

The E-Business Suite is predominantly used internally within an organisation.  Certain E-Business Suite application modules can be made visible to users outside of an organisation.  These include iStore, iRecruitment, iSupplier, and other application modules where the users aren't necessarily restricted to an organisation's own employees.

Users of some of those application modules expect to be able to register for a new account and use it immediately.  This makes sense.  If you're posting job openings via iRecruitment, potential applicants shouldn't need to hold off on submitting their resumes while your E-Business Suite sysadmin creates an account manually, assigns EBS responsibilities, and emails them the account login details. They'll be long gone before that happens.

This means that EBS application modules that support self-registration must create user accounts synchronously.  A new account must be created within the E-Business Suite and the externalized directory at the same time, on demand.

The E-Business Suite has hardcoded dependencies upon Oracle Internet Directory function calls that handle these synchronous account creation tasks.  These function calls are specific to Oracle Internet Directory; it isn't possible to replace Oracle Internet Directory with a generic third-party LDAP directory and still preserve this functionality.

Related Articles


      Categories: APPS Blogs

      Retrieving DBMS_OUTPUT.put_line from JDBC?

      Tom Kyte - Wed, 2016-12-07 17:46
      I'm using a Java application to process an incoming feed. The app inserts data into some Oracle tables, then runs an Oracle stored procedure. The Java application is called from a UNIX script, connects to Oracle using Oracle's thin client driver an...
      Categories: DBA Blogs

      index issue with our partitioned table ora14196 and question if plan OK?

      Tom Kyte - Wed, 2016-12-07 17:46
      Hello Tom, I have an index issue with our partitioned table. This table cw_tb_zvlist11 has about 500 million rows inserted and the insert process is still active. I see that unfortunately we created the primary index as a nonunique one. Now I tr...
      Categories: DBA Blogs

      adding column with default value 0

      Tom Kyte - Wed, 2016-12-07 17:46
      Hi Tom, I have read a lot of the posts in which you have discussed the subject of adding a column with some default value. Thanks a lot for such a great service you render to the Oracle community. Unfortunately one of my friends informed me some bugs are as...
      Categories: DBA Blogs

      Is there another option for Next Value For

      Tom Kyte - Wed, 2016-12-07 17:46
      Oracle Database 11g Release 11.2.0.1.0 - 64bit Production PL/SQL Release 11.2.0.1.0 - Production CORE 11.2.0.1.0 Production TNS for 64-bit Windows: Version 11.2.0.1.0 - Production NLSRTL Version 11.2.0.1.0 - Production How would I go abo...
      Categories: DBA Blogs

      Using ADO and HeatMap in the Utilities ILM Solution

      Anthony Shorten - Wed, 2016-12-07 16:19

      The ILM features of the Oracle Database are used in the Oracle Utilities ILM capability to implement the technical side of the solution. In Oracle 12, two new facilities were added to the already available ILM features to make the implementation of ILM easier. These features are Automatic Data Optimization (ADO) and Heat Map.

      The Heat Map feature allows Oracle itself to track the use of blocks and segments in your database. Every time a program or user touches a row in the database, such as with a SELECT, UPDATE or DELETE SQL statement, Heat Map records that it was touched. This information is important as it profiles the actual usage of the data in your database, and it can be used by Automatic Data Optimization. Heat Map is disabled by default and requires a database initialization parameter to be changed.
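      For example, enabling it is a single initialization parameter change. A sketch, assuming a 12c database (HEAT_MAP is the documented parameter; tracked usage can then be queried from views such as DBA_HEAT_MAP_SEGMENT):

      ALTER SYSTEM SET HEAT_MAP = ON SCOPE=BOTH;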

      Automatic Data Optimization is a facility where DBAs can set ILM rules, known as policies, to perform certain ILM actions on the data. For example: if the data has not been touched (according to Heat Map data) within X months, then COMPRESS it to save space; if ILM_ARCH_SW is set to Y, move the data to partition X. There are a lot of combinations and facilities in the ADO rules to give DBAs flexibility in their rules. ADO lets DBAs specify the rules and then supplies a procedure that can be scheduled, at the convenience of the site, to implement them.
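      As a sketch of what such a policy looks like in SQL, with a hypothetical table name (see the ADO documentation for the full rule grammar):

      -- compress segments not accessed for 3 months, based on Heat Map data
      ALTER TABLE fact_log ILM ADD POLICY
        ROW STORE COMPRESS ADVANCED SEGMENT
        AFTER 3 MONTHS OF NO ACCESS;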

      ADO and Heat Map are powerful data management tools that DBAs should get used to. They allow simple specification of rules and use features in the database to let you manage your data.

      For more information about Heat Map and ADO refer to the following information:

      Extended Stats

      Jonathan Lewis - Wed, 2016-12-07 09:54

      After my Masterclass on indexes at the UKOUG Tech2016 conference this morning I got into a conversation about creating extended stats on a table. I had pointed out in the masterclass that each time you dropped an index you really ought to be prepared to create a set of extended stats (specifically a column group) on the list of columns that had defined the index just in case the optimizer had been using the distinct_keys statistic from the index to help it calculate cardinalities.
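      As a sketch, dropping an index on hypothetical columns (col1, col2) of table T1 might be followed by:

      select
              dbms_stats.create_extended_stats(
                      ownname         => user,
                      tabname         => 'T1',
                      extension       => '(col1, col2)'
              )
      from    dual;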

      Unfortunately there is a limit on the number of column groups (or any other type of extended stats) you can have on a table and that limit is the larger of 20 and ceiling(number of columns / 10) – so you typically run into a problem if you want to take defensive action after dropping more than twenty (multi-column) indexes. (And you wonder how Oracle’s adaptive dynamic stats process that silently creates column groups overnight handles the problem of needing far more column groups than are allowed.)

      The conversation led on to the oddity that the column count includes the virtual columns representing the column groups so, for example, if you have 253 columns in your table you can create 26 column groups; but if you have 26 column groups that means you have a total of 279 columns, so you can actually create a total of 28 groups (an extra 2); but if you create those two column groups you now have a total of 281 columns in the table which means you’re allowed a total of 29 column groups so you can add one more column group for a total of 282 columns. Here’s some code (which I’ve run only on 11.2.0.4) to play with – to keep things very simple I’ve generated some trivial extended stats rather than column groups:

      
      rem
      rem     Script:         extended_stats_limit2.sql
      rem     Author:         Jonathan Lewis
      rem     Dated:          Dec 2016
      rem
      
      drop table t1 purge;
      create table t1 (c001 number);
      
      begin
              for i in 2..253 loop
                      execute immediate
                      'alter table t1 add (c' || to_char(i,'FM000') || ' number)';
              end loop;
      end;
      /
      
      desc t1
      
      prompt  ============================================================================================
      prompt  This will raise an error on the 30th addition
      prompt  ORA-20008: Number of extensions in table TEST_USER.T1 already reaches the upper limit (28.2)
      prompt  ============================================================================================
      
      declare
              ext_string varchar2(20);
      begin
              for i in 1..30 loop
                      ext_string := '(c001 + ' || i || ')';
                      dbms_output.put_line(
                              dbms_stats.create_extended_stats(
                                      ownname         => user,
                                      tabname         => 'T1',
                                      extension       => ext_string
                              )
                      );
              end loop;
      end;
      /
      
      column column_name format a32
      
      select
              column_name, hidden_column, virtual_column, segment_column_id, internal_column_id
      from
              user_tab_cols
      where
              table_name = 'T1'
      order by
              internal_column_id
      ;
      
      

      This code results in a table with 253 segment columns and 29 hidden, virtual columns (with names like SYS_STU0#$2X$X1M4NFZVM2O_5A3FC) representing the extended stats. What if I want more extended stats? There is no limit on virtual columns in general, beyond the inherent table limit of 1,000 columns in total, so what if I create a few virtual columns (another 39, say, taking my total column count to 321): would this allow me to increase the number of extended stats to 33 – and if so, what would happen if I then dropped the virtual columns:

      
      prompt  ============================================
      prompt  Now we add some virtual columns after which
      prompt  we will be able to add more extended stats
      prompt  and drop the virtual columns
      prompt  ============================================
      
      begin
              for i in 1..39 loop
                      execute immediate
                      'alter table t1 add (virt' || to_char(i,'fm000') ||
                              ' generated always as ( c002 + ' || i || ') virtual)'
                      ;
              end loop;
      end;
      /
      
      select
              column_name, hidden_column, virtual_column, segment_column_id, internal_column_id
      from
              user_tab_cols
      where
              table_name = 'T1'
      order by
              internal_column_id
      ;
      
      prompt  ============================================================================================
      prompt  We can now get up to 33 extended stats
      prompt  This will raise an error on the attempt to add the 34th set
      prompt  ORA-20008: Number of extensions in table TEST_USER.T1 already reaches the upper limit (32.5)
      prompt  ============================================================================================
      
      declare
              ext_string varchar2(20);
      begin
              for i in 30..34 loop
                      ext_string := '(c001 + ' || i || ')';
                      dbms_output.put_line(
                              dbms_stats.create_extended_stats(
                                      ownname         => user,
                                      tabname         => 'T1',
                                      extension       => ext_string
                              )
                      );
              end loop;
      end;
      /
      
      select
              column_name, hidden_column, virtual_column, segment_column_id, internal_column_id
      from
              user_tab_cols
      where
              table_name = 'T1'
      order by
              internal_column_id
      ;
      
      
      select
              column_name, internal_column_id
      from
              user_tab_cols
      where
              table_name = 'T1'
      and     hidden_column = 'YES'
      and     virtual_column = 'YES'
      order by
              internal_column_id
      ;
      
      prompt  ============================
      prompt  Now drop the virtual columns
      prompt  ============================
      
      begin
              for r in (
                      select column_name from user_tab_cols
                      where  column_name like 'VIRT%'
              ) loop
                      execute immediate
                      'alter table t1 drop column ' || r.column_name;
              end loop;
      end;
      /
      
      select
              column_name, internal_column_id
      from
              user_tab_cols
      where
              table_name = 'T1'
      and     virtual_column = 'YES'
      order by
              internal_column_id
      ;
      
      

      When I ran this code I ended up with a table consisting of 286 columns, of which 253 were my original columns and 33 – with internal column ids of 254 to 286 inclusive – were the extended stats. It seems there is a way to bypass the limit if you really want to – though I’m not sure I’d really want to do it on a production system.

      Left as Exercise for the Reader:

      Create a table with 5 real columns and the 26 column groups needed to represent all (multi-column) combinations of those five columns. (Remember that the order of columns in a column group is not really significant). (The 26 groups consist of: 1 x 5 column, 5 x 4 column, 10 x 3 column, 10 x 2 column – this may remind some of you of binomial expansions, others may remember it as a row from Pascal’s triangle, you could also view it as a particular subset of the binary representations of the integers from 1 to 31.)

       


      UKOUG 2016 DAY 4

      Yann Neuhaus - Wed, 2016-12-07 09:29


      Today is the last day at UKOUG in Birmingham. The first session I attended this morning was presented by Julian Dyke, about installing and upgrading Oracle 12c Release 2 Grid Infrastructure and RAC.

      He had the opportunity to test the installation and upgrade phases during five days at Oracle last spring. The following tests were done:

      single instance: install 12.2.0.1, create a database with DBCA, upgrade 12.1.0.2 to 12.2.0.1 with DBUA

      RAC: install 12.2.0.1 Grid Infrastructure, install 12.2.0.1 RDBMS software, create ASM disk groups (ASMCA), create a 12.2.0.1 RAC database (DBCA), upgrade 12.1.0.2 Grid Infrastructure to 12.2.0.1 (gridSetup.sh), upgrade the 12.1.0.2 RAC database to 12.2.0.1.

      He showed us the main screenshots describing the installation phases and told us that they did not encounter many problems during the installation or upgrade phases. To upgrade the Grid Infrastructure, it is important to run the CVU connected as the grid user, for example:

      runcluvfy.sh -src_crshome=/u00/app/12.1.0.2 -dest_crshome=/u00/app/12.2.0.1
       -dest_version=12.2.0.1 -fixupnoexec
      
      

      Then after you have the possibility to resolve any issues detected using the generated fixup script.

      In his opinion, DBUA is sufficiently robust to use for most upgrades, especially when the upgrade concerns non-critical databases, databases with fast recovery times, or databases on virtual machines. By the way, he also mentioned that Oracle still recommends using scripts for upgrades of large or business-critical databases.

      He encountered some issues concerning the upgrade phase for Grid Infrastructure, in particular with the memory_target parameter setting: because the ASM and GIMR instances use more memory than in 12.1.0.2, he received the classic ORA-00845 error message. He also encountered problems with invalid objects and had to extend the root file system of his virtual machine.

      Then I attended Franck Pachot’s session about statistics gathering, best practices and the Statistics Advisor:


      His session described his findings and recommendations about how to gather statistics, with a lot of technical demonstrations done on the Cloud. A lot of cases were shown, for example volatile tables, preferences for partitioned tables, and index statistics gathering.

      He showed us the Oracle 12c Release 2 Statistics Advisor, which might be a useful tool; I will check if it is available in Enterprise Manager 13.2.

      He finished his session by giving us his own recommendations: use the automatic job for most of the tables, customize statistics gathering for volatile tables, gather statistics for tables that you load, and, importantly, customize the maintenance window for the statistics gathering job.

      Finally I wanted to attend the OEM round table, but unfortunately the session was canceled :=((


      Well, this was a very interesting week with a lot of exchanges and shared experiences with other Oracle DBAs. Hope to come back to UKOUG next year!

      Cet article UKOUG 2016 DAY 4 est apparu en premier sur Blog dbi services.

      Is Your Job At Risk Of Automation?

      David Haimes - Wed, 2016-12-07 08:15

      Last week I blogged about Automated Accountants, which discussed chatbot use cases rather than full automation of a person. However, the concept of automating what knowledge workers do today, to the point where we really do have a fully automated accountant, is something I have been researching too.

      I had a Twitter discussion with Vinnie Mirchandani (@dealarchitect) about automation, as he is authoring a book on the subject. I agree with him that dirty and dangerous jobs are the first candidates for automation, but I also believe knowledge jobs are good candidates too, and it will happen faster than many expect. In the world of finance we have seen continual adoption of technology, moving us from paper-based ledgers to highly automated cloud accounting software with integrated reporting, social networks, mobile, etc. I think the pace of change will accelerate because new technology is now so easy to adopt in the cloud and innovations are rolled out faster than ever before. So automation will happen more quickly for those corporations that are already adopting these cloud-based business applications.

      I came across an interesting article on the BBC entitled Will a Robot Take My Job?, which has some nice tools to let you search for your job and determine how likely it is to be automated. Despite my earlier assertions, I was surprised that accounting professions were so high on the list of those likely to be automated. You can see my summary graphic below.

      [Summary graphic: likelihood of automation for selected professions]

      The full report from Oxford University’s Martin School is worth a read, as it goes into a lot of detail about the skills that are hard for a machine to replicate and those that are easier, so you can understand the reasoning behind the ratings. My profession of software development has a pretty low chance of being automated; I have to come up with original ideas and negotiate, and these are things that are harder (at the moment) for machines to automate, but it might just be a matter of time.

      This will not happen in one shot; over time more and more tasks will be automated, which is good because it gives us some time to think about the much bigger issue: what do we do when so much of what we as a workforce do is automated? The common wisdom is that workers will gradually move on to higher-value tasks that cannot easily be automated and drive greater and greater value. There is a lot of evidence to support this, but that discussion probably deserves a blog post of its own.


      Categories: APPS Blogs

      Oracle 12cR2: Statistics Advisor

      Yann Neuhaus - Wed, 2016-12-07 04:48

      Today at #ukoug_tech16 12:30 in hall 11A I’ll talk about Statistics Gathering Best Practice & 12cR2 Statistics Advisor
      Rather than taking the Optimizer Statistics Advisor rules one by one, I’ll show the things to take care of (some people may call them best practices) when gathering statistics, and mention the corresponding Statistics Advisor rule for each.
      If you need a reference about all rules, you can get it from V$STATS_ADVISOR_RULES

      09:41:19 SQL> select to_char(rule_id,99)||' '||description||' ('||name||')' from V$STATS_ADVISOR_RULES where rule_id>0 order by rule_id;
      TO_CHAR(RULE_ID,99)||' '||DESCRIPTION||' ('||NAME||')'
      ----------------------------------------------------------------------------------------------------

      I’ll explain them briefly here.

      You should always enable the automatic statistics gathering job. You may manage special cases manually, but do not disable it.
      1 Use Auto Job for Statistics Collection (UseAutoJob)
      For sure if the job does not complete successfully, the advisor detects the probable cause.
      2 Auto Statistics Gather Job should complete successfully (CompleteAutoJob)
      Statistics history may save your life in case of a regression. But make sure the retention is not too large and that purging occurs, or SYSAUX will grow.
      3 Maintain Statistics History (MaintainStatsHistory)
      The faster the statistics gathering runs, the more statistics you can gather. Use all your server resources for it.
      4 Use Concurrent preference for Statistics Collection (UseConcurrent)
      Default options for global preferences are what the optimizer developers think are the best for most cases.
      5 Use Default Preference for Stats Collection (UseDefaultPreference)
      Humm.. this one was introduced before the decision not to activate SPD by default
      (see http://blog.dbi-services.com/oracle-12cr2-optimizer-adaptive-statistics/)
      6 SQL Plan Directives should not be disabled (TurnOnSQLPlanDirective)
      Setting statistics manually may be used as a workaround but not the general case
      7 Avoid Set Statistics Procedures (AvoidSetProcedures)
      When you run dbms_stats.gather_…_stats manually, default options are what the optimizer developers think are the best for most cases.
      8 Use Default Parameters in Statistics Collection Procedures (UseDefaultParams)
      And in those cases, better to run it for a schema so that you are sure to include newly created tables
      9 Use gather_schema_stats procedure (UseGatherSchemaStats)
      You waste time and ressources if you gather statistics in a addition to what is done with online statistics gathering
      10 Avoid inefficient statistics operation sequences (AvoidInefficientStatsOprSeq)
      You waste time and ressources if you gather statistics when nothing has changed
      11 Avoid unnecessary statistics collection (AvoidUnnecessaryStatsCollection)
      You need statistics for all tables
      12 Avoid objects with stale or no statistics (AvoidStaleStats)
      Statistics gathered before bulk inserts will be immediately stale
      13 Do not gather statistics right before bulk DML (GatherStatsAfterBulkDML)
      You don’t want the automatic statistics gathering run on a table between a truncate and an insert
      14 Statistics for objects with volatile data should be locked (LockVolatileTable)
      But let it run for tables with no massive change
      15 Statistics for objects with non-volatile should not be locked (UnlockNonVolatileTable)
      16 Statistics of dependent objects should be consistent (MaintainStatsConsistency)

      Better truncate, make indexes unusable, and insert /*+ append */ than drop and recreate the table (which removes statistics).
      17 Avoid drop and recreate object seqauences (AvoidDropRecreate)
      Statistics advisor may detect when incremental statistics gathering is me efficient for partitioned tables
      18 Statistics should be maintained incrementally when it is beneficial (UseIncremental)
      19 Statistics should not be maintained incrementally when it is not beneficial (NotUseIncremental)

      Stale statistics may lead to under-estimation because of linear decay
      20 Avoid Out of Range Histogram endpoints (AvoidOutOfRange)
      Large tables can be scanned in parallel, recommendation is default degree
      21 Use Auto Degree for statistics collection (UseAutoDegree)
      As we have seen about global preferences, table preference should be default for most cases (rolling invalidation, auto sample size, auto histogram size)
      22 Use Default Object Preference for statistics collection (UseDefaultObjectPreference)
      And for sure dbms_stats is the way to gather statistics for the optimizer. ANALYZE is deprecated for that since 91
      23 Avoid using analyze table commands for statistics collection (AvoidAnalyzeTable)

Those are only my interpretations. 12.2 is new (and cloud first) and I’ve not observed all those recommendations yet, but they are properly described by the advisor.
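If you want to run the advisor yourself, everything goes through the DBMS_STATS task API: CREATE_ADVISOR_TASK, EXECUTE_ADVISOR_TASK and REPORT_ADVISOR_TASK. Here is a minimal sketch, assuming the default task parameters and using the task name MY_TASK that appears in the report below:

SQL> variable t varchar2(128)
SQL> variable e varchar2(128)
SQL> -- create the advisor task, execute it, then print the text report
SQL> exec :t := dbms_stats.create_advisor_task('MY_TASK');
SQL> exec :e := dbms_stats.execute_advisor_task('MY_TASK');
SQL> set long 1000000 pagesize 0
SQL> select dbms_stats.report_advisor_task('MY_TASK') from dual;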
      This is the kind of output we can get:

      ----------------------------------------------------------------------------------------------------
      GENERAL INFORMATION
      -------------------------------------------------------------------------------
       
      Task Name : MY_TASK
      Execution Name : EXEC_52
      Created : 12-07-16 11:31:40
      Last Modified : 12-07-16 11:32:37
      -------------------------------------------------------------------------------
      SUMMARY
      -------------------------------------------------------------------------------
      For execution EXEC_52 of task MY_TASK, the Statistics Advisor has 6
      finding(s). The findings are related to the following rules: USECONCURRENT,
      AVOIDSETPROCEDURES, USEDEFAULTPARAMS, USEGATHERSCHEMASTATS, AVOIDSTALESTATS,
      UNLOCKNONVOLATILETABLE. Please refer to the finding section for detailed
      information.
      -------------------------------------------------------------------------------
      FINDINGS
      -------------------------------------------------------------------------------
      Rule Name: UseConcurrent
      Rule Description: Use Concurrent preference for Statistics Collection
      Finding: The CONCURRENT preference is not used.
       
      Recommendation: Set the CONCURRENT preference.
      Example:
      dbms_stats.set_global_prefs('CONCURRENT', 'ALL');
      Rationale: The system's condition satisfies the use of concurrent statistics
      gathering. Using CONCURRENT increases the efficiency of statistics
      gathering.
      ----------------------------------------------------
      ...

       

      Cet article Oracle 12cR2: Statistics Advisor est apparu en premier sur Blog dbi services.

      OEL 7 – How to disable IPv6 on Oracle Linux 7

      Yann Neuhaus - Wed, 2016-12-07 02:36

In case you are not interested in IPv6, you can use the following HowTo to disable it on Oracle Linux 7. Unless you have something very special on your system, these 10 steps should do it.

      1. First of all, check if IPv6 is active at all
      2. Add the disable_ipv6 = 1 entries to the /etc/sysctl.conf file
      3. Disable IPv6 in all /etc/sysconfig/network-scripts/ifcfg-* files, e.g.
      4. Disable IPv6 in /etc/sysconfig/network
5. Remove the "::1" line from the /etc/hosts file
6. Remove the "restrict -6" lines from the /etc/ntp.conf file
      7. Add ipv6.disable=1 to the GRUB_CMDLINE_LINUX entry in the /etc/default/grub file
      8. Regenerate a GRUB configuration file and overwrite the existing one
      9. Reboot the server
      10. Confirm if IPV6 is disabled

       

      First of all, check if IPv6 is active at all

      [root@dbidg01 ~]# /sbin/ip -6 addr
      1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536
          inet6 ::1/128 scope host
             valid_lft forever preferred_lft forever
      2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qlen 1000
          inet6 fe80::ad02:9b6a:bf40:5a3a/64 scope link
             valid_lft forever preferred_lft forever
      3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qlen 1000
          inet6 fe80::a00:27ff:feb8:3544/64 scope link
             valid_lft forever preferred_lft forever

       

      Add the disable_ipv6 = 1 entries to the /etc/sysctl.conf file

      #-- Disable IPv6
      net.ipv6.conf.all.disable_ipv6 = 1
      net.ipv6.conf.default.disable_ipv6 = 1
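
These settings only take effect once the kernel parameters are reloaded. To apply them immediately, without waiting for the reboot in step 9, you can reload the file; sysctl echoes each setting it applies:

[root@dbidg01 ~]# sysctl -p
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1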

       

      Disable IPv6 in all /etc/sysconfig/network-scripts/ifcfg-* files, e.g.

      cat /etc/sysconfig/network-scripts/ifcfg-enp0s3 | grep IPV6INIT
      IPV6INIT=no
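
If you have several interfaces, a small loop saves some editing. This is just a sketch, assuming each ifcfg file already contains an IPV6INIT=yes line to flip:

[root@dbidg01 ~]# for f in /etc/sysconfig/network-scripts/ifcfg-*; do sed -i 's/^IPV6INIT=yes/IPV6INIT=no/' "$f"; done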

       

      Disable IPv6 in /etc/sysconfig/network

      cat /etc/sysconfig/network | grep NETWORKING_IPV6
      NETWORKING_IPV6=no
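
If the NETWORKING_IPV6 entry does not exist yet in the file, it can be appended; a minimal sketch:

[root@dbidg01 ~]# grep -q '^NETWORKING_IPV6' /etc/sysconfig/network || echo 'NETWORKING_IPV6=no' >> /etc/sysconfig/network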

       

      Remove the following line from the /etc/hosts file

      ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
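
You can delete it with an editor, or with a one-liner like the following sketch, which removes every line starting with ::1:

[root@dbidg01 ~]# sed -i '/^::1/d' /etc/hosts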

       

      Remove the following line from the /etc/ntp.conf

      cat /etc/ntp.conf | egrep ' -6'
      restrict -6 default kod nomodify notrap nopeer noquery
      restrict -6 ::1
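
Commenting these lines out has the same effect as removing them. A sketch, assuming the ntpd service is in use and may be restarted on your system:

[root@dbidg01 ~]# sed -i '/^restrict -6/s/^/#/' /etc/ntp.conf
[root@dbidg01 ~]# systemctl restart ntpd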

       

      Add ipv6.disable=1 to the GRUB_CMDLINE_LINUX entry in the /etc/default/grub file

      [root@dbidg01 /]# cat /etc/default/grub | grep GRUB_CMDLINE_LINUX
      GRUB_CMDLINE_LINUX="ipv6.disable=1 crashkernel=auto rd.lvm.lv=ol/root rd.lvm.lv=ol/swap rhgb quiet numa=off transparent_hugepage=never"

       

      Regenerate a GRUB configuration file and overwrite the existing one

      [root@dbidg01 /]# grub2-mkconfig -o /boot/grub2/grub.cfg
      Generating grub configuration file ...
      Found linux image: /boot/vmlinuz-4.1.12-61.1.19.el7uek.x86_64
      Found initrd image: /boot/initramfs-4.1.12-61.1.19.el7uek.x86_64.img
      Found linux image: /boot/vmlinuz-4.1.12-61.1.18.el7uek.x86_64
      Found initrd image: /boot/initramfs-4.1.12-61.1.18.el7uek.x86_64.img
      Found linux image: /boot/vmlinuz-3.10.0-514.el7.x86_64
      Found initrd image: /boot/initramfs-3.10.0-514.el7.x86_64.img
      Found linux image: /boot/vmlinuz-0-rescue-547c48bd53614a2ca2d16909b3c14419
      Found initrd image: /boot/initramfs-0-rescue-547c48bd53614a2ca2d16909b3c14419.img
      done
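
Note: on a UEFI system the active GRUB configuration file usually lives under /boot/efi rather than /boot/grub2. The exact path below is an assumption; check where your system keeps it before overwriting anything:

[root@dbidg01 /]# grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg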

       

      Reboot the server

      init 6

       

      Confirm if IPV6 is disabled

      [root@dbidg01 ~]# /sbin/ip -6 addr
      [root@dbidg01 ~]# lsmod | grep -i v6

       

If the ip and lsmod commands do not return anything, then you have successfully disabled IPv6.

Cheers, William

      Cet article OEL 7 – How to disable IPv6 on Oracle Linux 7 est apparu en premier sur Blog dbi services.
