Feed aggregator

Node.js on Oracle Linux: It's So Easy

Christopher Jones - Mon, 2018-09-10 18:39
This is a quick placeholder cross-post of the 'so I can find it again' category: Sergio Leunissen from our Linux group has a blog post on using Node.js and node-oracledb on Oracle Linux.  You can find the post here.  The Oracle Linux RPM packages make it all so easy!

Increasing Maximum Web Service Requests in Oracle APEX 18.1

Dimitri Gielis - Mon, 2018-09-10 15:29
While running our final tests of APEX Office Print (AOP) 18.1 we hit "ORA-20000: Issue calling Main AOP Service (REST call: ): ORA-20001: You have exceeded the maximum number of web service requests per workspace."


When you log in to the INTERNAL workspace and navigate to a workspace, there's a setting for Maximum Web Service Requests. The default value is 1000 requests per 24 hours (rolling window).

Considering that AOP has, next to hundreds of server-side tests, also around 500 automated tests that run through APEX, we hit this limit after the second full run. After setting the value to 20000, we were able to continue our final testing :)

I guess the chances are small that you will hit the limit in a normal APEX app, but if you do, it's easy to fix by setting a higher value for your workspace.
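
For context, every outbound REST call made through the APEX web service API counts toward this 24-hour quota. A minimal sketch of such a call; the endpoint URL here is hypothetical:

declare
    l_response clob;
begin
    -- each such call counts as one web service request against the
    -- workspace's 24h rolling quota; beyond the limit it raises the
    -- ORA-20001 error quoted above
    l_response := apex_web_service.make_rest_request(
                      p_url         => 'https://aop.example.com/api/print',  -- hypothetical endpoint
                      p_http_method => 'POST');
end;
/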
Categories: Development

Alternative for greatest function to improve performance

Tom Kyte - Mon, 2018-09-10 12:06
Hi Tom, I have a view, which has date_p, date_p_c. Now I have an SSAS cube which has many partitions based on date. I need to fetch the data based on the latest date. Monthly partition brings data from beginning of year to prior month. Daily pa...
Categories: DBA Blogs

Column default values in Oracle 12C

Tom Kyte - Mon, 2018-09-10 12:06
Hi Tom, We are converting Oracle tables to partitioned tables. To do this we are using Oracle Exchange Partition. Database version: 12c. Steps we are following. 1. Original table --existing table <code> create table tab1(col1 integer, ...
Categories: DBA Blogs

CONNECT BY with Subquery in START WITH

Tom Kyte - Mon, 2018-09-10 12:06
Hi Tom, ehm, Chris of course, after migrating our production database to 12.2.0.1, I have found out that there is a problem regarding CONNECT BY with a subquery in the START WITH clause. FYI: the LiveSQL link didn't work when I copied it, here is ...
Categories: DBA Blogs

getting xml file as result, and updating other table when executing stored procedure

Tom Kyte - Mon, 2018-09-10 12:06
Hi everyone, I'm new to PL/SQL. I have a functionality like: when I read the data from multiple tables as one XML file (each record as one XML), I have to update another table. For that I have written a procedure, and in that procedure I'm calling a func...
Categories: DBA Blogs

Average wait time for Log File Sync and Log File Parallel Write wait events increasing daily

Tom Kyte - Mon, 2018-09-10 12:06
Hi Team, First of all, thanks for all the awesome work you are doing! We are facing an issue with Log File Sync wait events. Pasted below is a detailed write-up. It would be great if you can help us by sharing your views. Thanks again for al...
Categories: DBA Blogs

PDB lockdown with Oracle 18.3.0.0

Yann Neuhaus - Mon, 2018-09-10 11:01

The PDB lockdown feature offers you the possibility to restrict the operations and functionality available from within a PDB, which can be very useful from a security perspective.

Several new features have been added in Oracle 18.3.0.0:

  • You now have the possibility to create PDB lockdown profiles in the application root, as in the CDB root. This allows more precise access control for the applications associated with an application container.
  • You can create a PDB lockdown profile from another PDB lockdown profile.
  • Three default PDB lockdown profiles have been added: PRIVATE_DBAAS, SAAS and PUBLIC_DBAAS.
  • v$lockdown_rules is a new view that displays the contents of a PDB lockdown profile (a quick query sketch follows this list).
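
For instance, from inside a PDB you can list the rules currently enforced. A minimal sketch, assuming the documented columns of this view:

SQL> select rule_type, rule, clause, clause_option, status
     from   v$lockdown_rules;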

Let’s run some tests.

First, we create a lockdown profile from the CDB (as we did with Oracle 12.2):

SQL> create lockdown profile psi;

Lockdown Profile created.

We alter the lockdown profile to disable every ALTER SYSTEM SET statement on the PDB side except ALTER SYSTEM SET OPEN_CURSORS:

SQL> alter lockdown profile PSI disable statement=('ALTER SYSTEM') 
clause=('SET') OPTION ALL EXCEPT=('open_cursors');

Lockdown Profile altered.

Then we enable the lockdown profile:

SQL> alter system set PDB_LOCKDOWN=PSI;

System altered.

We can check the pdb_lockdown parameter value from the CDB side:

SQL> show parameter pdb_lockdown

NAME				     TYPE	 VALUE
------------------------------------ ----------- -------
pdb_lockdown			     string	 PSI

What happens on the PDB side?

SQL> alter session set container=pdb;

Session altered.

SQL> alter system set cursor_sharing='FORCE';
alter system set cursor_sharing='FORCE'
*
ERROR at line 1:
ORA-01031: insufficient privileges

SQL> alter system set optimizer_mode='FIRST_ROWS_10';
alter system set optimizer_mode='FIRST_ROWS_10'
*
ERROR at line 1:
ORA-01031: insufficient privileges

This is a good feature, allowing a greater degree of separation between the different PDBs of the same instance.

We can create a lockdown profile disabling partitioned tables creation:

SQL> connect / as sysdba
Connected.
SQL> create lockdown profile psi;

Lockdown Profile created.

SQL> alter lockdown profile psi disable option=('Partitioning');

Lockdown Profile altered.

SQL> alter system set pdb_lockdown ='PSI';

System altered.

On the CDB side, we can create partitioned tables:

SQL> create table emp (name varchar2(10)) partition by hash(name);

Table created.

On the PDB side we cannot create partitioned tables:

SQL> alter session set container = pdb;

Session altered.

SQL> show parameter pdb_lockdown

NAME                                 TYPE        VALUE
------------------------------------ ----------- -------
pdb_lockdown                         string      APP
SQL> create table emp (name varchar2(10)) partition by hash(name);
create table emp (name varchar2(10)) partition by hash(name)
*
ERROR at line 1:
ORA-00439: feature not enabled: Partitioning

We now have the possibility to create a lockdown profile from another one.

Remember that the APP pdb lockdown profile disables partitioned table creation; we can create a new APP_HR lockdown profile from the APP profile and add new restrictions to it:

SQL> create lockdown profile app_hr from app;

Lockdown Profile created.

With the APP_HR lockdown profile it will not be possible to run ALTER SYSTEM FLUSH SHARED_POOL:

SQL> alter lockdown profile app_hr disable STATEMENT = ('ALTER SYSTEM') 
clause = ('flush shared_pool');

Lockdown Profile altered.

We can query the dba_lockdown_profiles view:

SQL> SELECT profile_name, rule_type, rule, status 
     FROM   dba_lockdown_profiles order by 1;

PROFILE_NAME               RULE_TYPE    RULE            STATUS
-------------------------- ------------ --------------- --------
APP                        OPTION       PARTITIONING    DISABLE
APP_HR                     STATEMENT    ALTER SYSTEM    DISABLE
APP_HR                     OPTION       PARTITIONING    DISABLE

SQL> alter system set pdb_lockdown=app_hr;

System altered.

SQL> alter session set container=pdb;

Session altered.

SQL> alter system flush shared_pool ;
alter system flush shared_pool
*
ERROR at line 1:
ORA-01031: insufficient privileges

If we reset pdb_lockdown to APP, we can now flush the shared pool:

SQL> alter system set pdb_lockdown=app;

System altered.

SQL> alter system flush shared_pool ;

System altered.

We can now create lockdown profiles in the application root, so let’s create an application PDB:

SQL> CREATE PLUGGABLE DATABASE apppsi 
AS APPLICATION CONTAINER ADMIN USER app_admin IDENTIFIED BY manager
file_name_convert=('/home/oracle/oradata/DB18', 
'/home/oracle/oradata/DB18/apppsi');  

Pluggable database created.

SQL> show pdbs

    CON_ID CON_NAME			  OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
	 2 PDB$SEED			  READ ONLY  NO
	 3 PDB				  READ WRITE NO
	 4 APPPSI			  MOUNTED

We open the application PDB:

SQL> alter pluggable database apppsi open;

Pluggable database altered.

We connect to the application container:

SQL> alter session set container=apppsi;

Session altered.

We have the possibility to create a lockdown profile:

SQL> create lockdown profile apppsi;

Lockdown Profile created.

And to disable some features:

SQL> alter lockdown profile apppsi disable option=('Partitioning');

Lockdown Profile altered.

But there is a problem if we try to enable the profile:

SQL> alter system set pdb_lockdown=apppsi;
alter system set pdb_lockdown=apppsi
*
ERROR at line 1:
ORA-65208: Lockdown profile APPPSI does not exist.

And, surprise: we cannot create a partitioned table either:

SQL> create table emp (name varchar2(10)) partition by hash(name);
create table emp (name varchar2(10)) partition by hash(name)
*
ERROR at line 1:
ORA-00439: feature not enabled: Partitioning

Let’s do some more tests: we alter the lockdown profile like this:

SQL> alter lockdown profile apppsi disable statement=('ALTER SYSTEM') 
clause = ('flush shared_pool');

Lockdown Profile altered.

SQL> alter system flush shared_pool;
alter system flush shared_pool
*
ERROR at line 1:
ORA-01031: insufficient privileges

In fact we cannot use SYS to test lockdown profiles in the application root; we have to use an application user granted privileges such as CREATE LOCKDOWN PROFILE and ALTER LOCKDOWN PROFILE in the application container. So, after creating an APPUSER in the application root (a sketch of such a user follows):
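
The user creation itself is not shown here; a minimal sketch might look like the following. The user name, password and exact list of grants are assumptions, so adjust them to your needs:

SQL> connect / as sysdba
SQL> alter session set container = apppsi;
SQL> create user appuser identified by appuser;
SQL> grant create session, create table, unlimited tablespace,
     create lockdown profile, alter lockdown profile, alter system
     to appuser;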

SQL> connect appuser/appuser@apppsi

SQL> create lockdown profile appuser_hr;

Lockdown Profile created.

SQL> alter lockdown profile appuser_hr disable option=('Partitioning');

Lockdown Profile altered.

And now it works fine:

SQL> alter system set pdb_lockdown=appuser_hr;

System altered.

SQL> create table emp (name varchar2(10)) partition by hash (name);
create table emp (name varchar2(10)) partition by hash (name)
*
ERROR at line 1:
ORA-00439: feature not enabled: Partitioning

And now, can we re-enable the Partitioning option for the APPUSER_HR profile in the APP root?

SQL> alter lockdown profile appuser_hr enable option = ('Partitioning');

Lockdown Profile altered.

SQL> create table emp (name varchar2(10)) partition by hash (name);
create table emp (name varchar2(10)) partition by hash (name)
*
ERROR at line 1:
ORA-00439: feature not enabled: Partitioning

It does not work as expected: the lockdown profile has been updated, but as before we cannot create a partitioned table.

Let’s do another test with the statement option: we alter the lockdown profile in order to disable all ALTER SYSTEM SET statements except for OPEN_CURSORS:

SQL> alter lockdown profile appuser_hr disable statement=('ALTER SYSTEM') 
clause=('SET') OPTION ALL EXCEPT=('open_cursors');

Lockdown Profile altered.

SQL> alter system set open_cursors=500;

System altered.

This is a normal behavior.

Now we alter the lockdown profile in order to disable alter system flush shared_pool:

SQL> alter lockdown profile appuser_hr disable STATEMENT = ('ALTER SYSTEM') 
clause = ('flush shared_pool');

Lockdown Profile altered.

SQL> alter system flush shared_pool;
alter system flush shared_pool
*
ERROR at line 1:
ORA-01031: insufficient privileges

That’s fine :=)

Now we enable the statement:

SQL> alter lockdown profile appuser_hr enable STATEMENT = ('ALTER SYSTEM') 
clause = ('flush shared_pool');

Lockdown Profile altered.

SQL> alter system flush shared_pool;
alter system flush shared_pool
*
ERROR at line 1:
ORA-01031: insufficient privileges

And again this is not possible …

Let’s try in the CDB root:

SQL> connect / as sysdba
Connected.

SQL> alter lockdown profile app disable statement =('ALTER SYSTEM') 
clause=('SET') OPTION ALL EXCEPT=('open_cursors');

Lockdown Profile altered.

SQL> alter session set container=pdb;

Session altered.

SQL> alter system set cursor_sharing='FORCE';
alter system set cursor_sharing='FORCE'
*
ERROR at line 1:
ORA-01031: insufficient privileges

The behavior is correct; let’s try to enable it:

SQL> connect / as sysdba
Connected.

SQL> alter lockdown profile app enable statement=('ALTER SYSTEM') 
clause ALL;

Lockdown Profile altered.

SQL> alter session set container=pdb;

Session altered.

SQL> alter system set cursor_sharing='FORCE';

System altered.

This is correct again; it seems the enable operation does not work correctly in the APP root …

In conclusion, the new lockdown profile features are powerful and will be very useful for security purposes. They allow DBAs to restrict, with finer granularity, users’ rights to only what they need to access. But we have to be careful: with PDB lockdown profiles we can easily end up with very complicated database administration.


This article PDB lockdown with Oracle 18.3.0.0 appeared first on Blog dbi services.

Perry Ellis International, Inc. Implements Oracle Retail Cloud to Personalize Global Customer Experience

Oracle Press Releases - Mon, 2018-09-10 10:46
Press Release
Perry Ellis International, Inc. Implements Oracle Retail Cloud to Personalize Global Customer Experience
Expanded Relationship with Oracle Enables PEI to Gain 360-Degree View of Over 1 Million Customers in Less Than 7 Weeks

Redwood Shores, Calif.—Sep 10, 2018

Global apparel fashion house Perry Ellis International, Inc. (PEI) has deployed Oracle Retail Customer Engagement Cloud Services to better personalize the shopping experience for customers across the United States and United Kingdom.  PEI’s brands Perry Ellis®, An Original Penguin by Munsingwear®, and Cubavera® manage a large international footprint with their own retail stores and e-commerce channels. These multiple points of engagement for consumers created complexity that made it difficult to understand consumer behavior. Oracle Retail Customer Engagement provides PEI with a comprehensive view of shopping behavior of over 1 million loyalty members and a platform to leverage that information to personalize customer brand interactions across all touch points.
 
“Our goal with our digital transformation is to focus on the experience we provide for our customers. Customer Engagement was an important step in this transformation. We were able to quickly integrate to our legacy systems. Now we can move forward with our next generation Point of Service with the confidence to deliver a superior in-store experience for our customers and associates,” said Luis Paez, chief information officer, PEI. “We recognize Oracle’s continued investment in retail-specific solutions while providing lower cost of ownership with the new cloud service. For us the value is the robust functionality, scalability and speed to market.”
 
“Having a holistic perspective of how consumers engage with your brand is critical to develop experiences necessary to compete and thrive in the retail community,” said Mike Webster, senior vice president and general manager, Oracle Retail. “With cloud technology, brands can begin innovating and refocusing their efforts to amplifying the customer experience at an unprecedented rate. This implementation represents the fastest go-live in the cloud to date for Oracle Retail Customer Engagement.”
 
PEI and Oracle Retail have a long-standing relationship having previously implemented Oracle Retail Merchandising System, Oracle Retail Price Management, Oracle Retail Sales Audit, Oracle Retail Allocation, Oracle Retail Store Inventory Management, Oracle Retail Point of Service and Oracle Retail Central Office. The interoperability compelled PEI to replace their existing customer relationship management system and upgrade to the latest Oracle Retail Cloud release to support their loyalty program that rewards and retains customers for their purchases while optimizing margins. PEI is currently migrating to the latest version of Oracle Retail Xstore Point-of-Service.
 
The implementation process was guided by IT leadership from PEI with a clear strategy that begins with executive sponsorship and concludes with rigorous testing of the solutions. This process was supported by BTM Global, a Gold-level member of Oracle Partner Network (OPN).  BTM Global has provided a full range of systems integration services including functional and technical design work, custom integration points, interface designs, development and testing, and scripting through training and user-focused testing for PEI for multiple Oracle solutions. PEI decided to implement Oracle Retail Customer Engagement Cloud Services after experiencing operational efficiencies from upgrading the enterprise suite of Oracle Retail Merchandise Operations Management in just six months with BTM Global.
 
"In complex projects with fast timelines, collaboration and trust between the retailer and integration partner are required," said Tom Schoen, Chief Executive Officer, BTM Global. "This unique project was successfully and efficiently launched because of our history with Perry Ellis International. We were honored to be chosen once again by them as their implementation partner." 
Contact Info
Matt Torres
Oracle
4155951584
matt.torres@oracle.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

About Perry Ellis International
Perry Ellis International, Inc. is a leading designer, distributor and licensor of a broad line of high quality men's and women's apparel, accessories and fragrances. The company's collection of dress and casual shirts, golf sportswear, sweaters, dress pants, casual pants and shorts, jeans wear, active wear, dresses and men's and women's swimwear is available through all major levels of retail distribution. The company, through its wholly owned subsidiaries, owns a portfolio of nationally and internationally recognized brands, including: Perry Ellis®, An Original Penguin by Munsingwear®, Laundry by Shelli Segal®, Rafaella®, Cubavera®, Ben Hogan®, Savane®, Grand Slam®, John Henry®, Manhattan®, Axist®, Jantzen® and Farah®. The company enhances its roster of brands by licensing trademarks from third parties, including: Nike® for swimwear, and Callaway®, PGA TOUR®, and Jack Nicklaus® for golf apparel and Guy Harvey® for performance fishing and resort wear.  Additional information on the company is available at http://www.pery.com.
 
About Oracle Retail:
Oracle provides retailers with a complete, open, and integrated suite of best-of-breed business applications, cloud services, and hardware that are engineered to work together and empower commerce. Leading fashion, grocery, and specialty retailers use Oracle solutions to anticipate market changes, simplify operations and inspire authentic brand interactions. For more information, visit our website at www.oracle.com/retail.
Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Matt Torres

  • 4155951584

Stats time

Jonathan Lewis - Mon, 2018-09-10 07:37

I wrote a note a couple of years ago explaining how I used to get a rough idea (with some errors) of how much time was spent in the overnight stats collection by each object. One of the nice little enhancements that appeared in 12c was the appearance of a couple of functions that can report information about this type of thing, and more. These are the dbms_stats functions report_stats_operations() and report_single_stats_operation(), with the following definitions:


function report_stats_operations(
        detail_level  varchar2                  default 'TYPICAL',
        format        varchar2                  default 'TEXT', 
        latestN       number                    default null,
        since         timestamp with time zone  default null,
        until         timestamp with time zone  default null,
        auto_only     boolean                   default false,
        container_ids dbms_utility.number_array default dbms_stats.NULL_NUMTAB
) return clob;

function report_single_stats_operation(
        opid         number,
        detail_level varchar2 default 'TYPICAL', 
        format       varchar2 default 'TEXT', 
        container_id number   default null
) return clob;

As you can see, there are lots of options for generating the report of stats operations, and you can check the manuals or $ORACLE_HOME/rdbms/admin/dbmsstat.sql for information about how to use them. One of the simplest options would be to run from SQL*Plus:

set long 1000000

set pagesize    0
set linesize  255
set trimspool on

column text_line format a254

select
        dbms_stats.report_stats_operations(
                since => sysdate - 3
        ) text_line
from dual
;

Of course you wouldn’t be able to pick the option that limits the report to just the auto gather stats jobs (auto_only => true) this way, as SQL doesn’t have a boolean type, so you would have to write a little PL/SQL wrapper to capture just those details.
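
A minimal sketch of such a wrapper; the three-day window is arbitrary, and dbms_output forces us to truncate the CLOB to a single 32,767-byte line:

set serveroutput on

declare
        l_report        clob;
begin
        l_report := dbms_stats.report_stats_operations(
                since     => systimestamp - interval '3' day,
                auto_only => true
        );
        -- dbms_output.put_line() is limited to 32767 bytes, so print just the start
        dbms_output.put_line(dbms_lob.substr(l_report, 32767, 1));
end;
/

Here’s a sample of the (rather wide) output: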



------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Operation Id | Operation             | Target                             | Start Time          | End Time            | Status      | Total Tasks | Successful Tasks | Failed Tasks | Active Tasks |
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 25811        | purge_stats           |                                    | 08-SEP-18           | 08-SEP-18           | COMPLETED   | 0           | 0                | 0            | 0            |
|              |                       |                                    | 01.47.37.764146 PM  | 01.47.38.405437 PM  |             |             |                  |              |              |
|              |                       |                                    | +01:00              | +01:00              |             |             |                  |              |              |
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 25810        | purge_stats           |                                    | 08-SEP-18           | 08-SEP-18           | COMPLETED   | 0           | 0                | 0            | 0            |
|              |                       |                                    | 01.47.35.827284 PM  | 01.47.37.763926 PM  |             |             |                  |              |              |
|              |                       |                                    | +01:00              | +01:00              |             |             |                  |              |              |
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 25809        | gather_database_stats | AUTO                               | 08-SEP-18           | 08-SEP-18           | COMPLETED   | 285         | 282              | 3            | 0            |
|              | (auto)                |                                    | 01.46.31.672033 PM  | 01.47.35.826873 PM  |             |             |                  |              |              |
|              |                       |                                    | +01:00              | +01:00              |             |             |                  |              |              |
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 25807        | gather_table_stats    | TEST_USER.T1                       | 08-SEP-18           | 08-SEP-18           | COMPLETED   | 1           | 1                | 0            | 0            |
|              |                       |                                    | 12.59.57.704111 PM  | 12.59.57.822695 PM  |             |             |                  |              |              |
|              |                       |                                    | +01:00              | +01:00              |             |             |                  |              |              |

etc.

You’ll notice in this little sample that operation 25809 is an (auto) gather_database_stats operation which ran 285 tasks, failing on 3 and succeeding on 282 – so let’s run the “single stats operation” report to find out more:

select
        dbms_stats.report_single_stats_operation(25809) text_line
from dual
;

-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Operation Id | Operation                    | Target | Start Time                      | End Time                        | Status    | Total Tasks | Successful Tasks | Failed Tasks | Active Tasks |
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 25809        | gather_database_stats (auto) | AUTO   | 08-SEP-18 01.46.31.672033 PM    | 08-SEP-18 01.47.35.826873 PM    | COMPLETED | 285         | 282              | 3            | 0            |
|              |                              |        | +01:00                          | +01:00                          |           |             |                  |              |              |
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|                                                                                                                                                                                                     |
| --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
|                                                                                              T A S K S                                                                                              |
| --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
|    ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------    |
|    | Target                                                         | Type            | Start Time                          | End Time                            | Status                     |    |
|    ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------    |
|    | SYS.RECYCLEBIN$                                                | TABLE           | 08-SEP-18 01.46.50.719791 PM +01:00 | 08-SEP-18 01.46.51.882418 PM +01:00 | COMPLETED                  |    |
|    ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------    |
|    | SYS.RECYCLEBIN$_OBJ                                            | INDEX           | 08-SEP-18 01.46.51.273134 PM +01:00 | 08-SEP-18 01.46.51.773297 PM +01:00 | COMPLETED                  |    |
|    ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------    |
|    | SYS.RECYCLEBIN$_TS                                             | INDEX           | 08-SEP-18 01.46.51.777032 PM +01:00 | 08-SEP-18 01.46.51.787730 PM +01:00 | COMPLETED                  |    |
|    ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------    |
...
...
...
|    | SYS.WRH$_SEG_STAT_PK.WRH$_SEG_ST_3089296639_5150               | INDEX PARTITION | 08-SEP-18 01.47.35.409615 PM +01:00 | 08-SEP-18 01.47.35.483637 PM +01:00 | COMPLETED                  |    |
|    ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------    |
|    | SYS.X$LOGMNR_CONTENTS                                          | TABLE           | 08-SEP-18 01.47.35.520504 PM +01:00 | 08-SEP-18 01.47.35.696953 PM +01:00 | FAILED                     |    |
|    ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------    |
|    | SYS.X$LOGMNR_REGION                                            | TABLE           | 08-SEP-18 01.47.35.699253 PM +01:00 | 08-SEP-18 01.47.35.722545 PM +01:00 | FAILED                     |    |
|    ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------    |
|    | SYS.X$DRC                                                      | TABLE           | 08-SEP-18 01.47.35.725003 PM +01:00 | 08-SEP-18 01.47.35.801384 PM +01:00 | FAILED                     |    |
|    ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------    |
|                                                                                                                                                                                                     |
|                                                                                                                                                                                                     |
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

I’ve trimmed out most of the 285 entries, of course, showing that the last three in the list failed, but with no indication of why they failed. Fortunately we could have called the report with “detail_level => ‘ALL'” – so let’s see what that gives us:

select
        dbms_stats.report_single_stats_operation(
                opid         => 25809,
                detail_level => 'ALL'
        ) text_line
from dual
;

------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Operation | Operation             | Target | Start Time      | End Time        | Status    | Total    | Successful | Failed   | Active   | Job Name | Session  | Additional Info             |
| Id        |                       |        |                 |                 |           | Tasks    | Tasks      | Tasks    | Tasks    |          | Id       |                             |
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 25809     | gather_database_stats | AUTO   | 08-SEP-18       | 08-SEP-18       | COMPLETED | 285      | 282        | 3        | 0        |          | 250      | Parameters: [block_sample:  |
|           | (auto)                |        | 01.46.31.672033 | 01.47.35.826873 |           |          |            |          |          |          |          | FALSE] [cascade: NULL]      |
|           |                       |        | PM +01:00       | PM +01:00       |           |          |            |          |          |          |          | [concurrent: FALSE]         |
|           |                       |        |                 |                 |           |          |            |          |          |          |          | [degree:                    |
|           |                       |        |                 |                 |           |          |            |          |          |          |          | DEFAULT_DEGREE_VALUE]       |
|           |                       |        |                 |                 |           |          |            |          |          |          |          | [estimate_percent:          |
|           |                       |        |                 |                 |           |          |            |          |          |          |          | DEFAULT_ESTIMATE_PERCENT]   |
|           |                       |        |                 |                 |           |          |            |          |          |          |          | [granularity:               |
|           |                       |        |                 |                 |           |          |            |          |          |          |          | DEFAULT_GRANULARITY]        |
|           |                       |        |                 |                 |           |          |            |          |          |          |          | [method_opt:                |
|           |                       |        |                 |                 |           |          |            |          |          |          |          | DEFAULT_METHOD_OPT]         |
|           |                       |        |                 |                 |           |          |            |          |          |          |          | [no_invalidate:             |
|           |                       |        |                 |                 |           |          |            |          |          |          |          | DBMS_STATS.AUTO_INVALIDATE] |
|           |                       |        |                 |                 |           |          |            |          |          |          |          | [reporting_mode: FALSE]     |
|           |                       |        |                 |                 |           |          |            |          |          |          |          | [stattype: DATA]            |
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|                                                                                                                                                                                                     |
| --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
|                                                                                              T A S K S                                                                                              |
| --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
|       --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------        |
|       | Target       | Type         | Start Time   | End Time     | Status    | Rank  | Job Name | Estimated    | Batching     | Histogram    | Extended     | Reason Code  | Additional   |        |
|       |              |              |              |              |           |       |          | Cost         | Info         | Columns      | Stats        |              | Info         |        |
|       --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------        |
|       | SYS.RECYCLEB | TABLE        | 08-SEP-18 01 | 08-SEP-18 01 | COMPLETED | 1     |          | N/A          | N/A          |              |              | stale stats  |              |        |
|       | IN$          |              | .46.50.71979 | .46.51.88241 |           |       |          |              |              |              |              |              |              |        |
|       |              |              | 1 PM +01:00  | 8 PM +01:00  |           |       |          |              |              |              |              |              |              |        |
|       --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------        |
...
...
...
|       --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------        |
|       | SYS.X$DRC    | TABLE        | 08-SEP-18 01 | 08-SEP-18 01 | FAILED    | 151   |          | N/A          | N/A          |              |              | no stats     | ORA-20000:   |        |
|       |              |              | .47.35.72500 | .47.35.80138 |           |       |          |              |              |              |              |              | Unable to    |        |
|       |              |              | 3 PM +01:00  | 4 PM +01:00  |           |       |          |              |              |              |              |              | analyze      |        |
|       |              |              |              |              |           |       |          |              |              |              |              |              | TABLE "SYS". |        |
|       |              |              |              |              |           |       |          |              |              |              |              |              | "X$DRC", log |        |
|       |              |              |              |              |           |       |          |              |              |              |              |              | miner or     |        |
|       |              |              |              |              |           |       |          |              |              |              |              |              | data guard   |        |
|       |              |              |              |              |           |       |          |              |              |              |              |              | must be      |        |
|       |              |              |              |              |           |       |          |              |              |              |              |              | started      |        |
|       |              |              |              |              |           |       |          |              |              |              |              |              | before       |        |
|       |              |              |              |              |           |       |          |              |              |              |              |              | analyzing    |        |
|       |              |              |              |              |           |       |          |              |              |              |              |              | this fixed   |        |
|       |              |              |              |              |           |       |          |              |              |              |              |              | table"       |        |
|       --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------        |
|                                                                                                                                                                                                     |
|                                                                                                                                                                                                     |
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------




So we can now see that stats collection failed on the one object I’ve left in the extract because it’s an X$ object that only exists when LogMiner is running. You’ll notice that we also get some information about things like input parameters to calls and the reasons why objects were selected (“stale stats” for the first item in this list).

It’s a great convenience – but it’s always possible to grumble: I’d rather like to see the elapsed time for each operation, or even a filter to limit the report to any operation that took more than X seconds. However, if I want to do a quick check on a client site I’d rather not have to type in the code to query the base tables by hand.
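
For a do-it-yourself version in the meantime, the operation history is exposed in the dba_optstat_operation_tasks view, so a rough “slowest tasks first” query is possible. A sketch, assuming the view’s start_time/end_time timestamp columns are populated (the 10-second threshold is arbitrary):

select
        opid, target, target_type, status,
        extract(day    from (end_time - start_time)) * 86400 +
        extract(hour   from (end_time - start_time)) *  3600 +
        extract(minute from (end_time - start_time)) *    60 +
        extract(second from (end_time - start_time))            elapsed_seconds
from
        dba_optstat_operation_tasks
where
        end_time - start_time > numtodsinterval(10, 'second')
order by
        elapsed_seconds desc
;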

Connecting to Azure SQL Managed Instance from on-premise network

Yann Neuhaus - Mon, 2018-09-10 06:26

A couple of weeks ago, I wrote about my first immersion into SQL Server managed instances (SQL MIs), a new deployment model of Azure SQL Database which provides near 100% compatibility with the latest SQL Server on-premises database engine. In the previous blog post, to test a connection to this new service, I installed an Azure virtual machine on the same VNET (172.16.0.0/16) including SQL Server Management Studio. For testing purposes we don’t need more, but in a real production scenario chances are your Azure SQL MI will be part of your on-premise network within a more complex Azure network topology including VNETs, Express Route or VPN S2S. Implementing such an infrastructure won’t likely be your concern if you are a database administrator. But you need to be aware of the underlying connectivity components and how to diagnose possible issues, or how to interact with your network team, in order to avoid being under pressure and feeling the wrath of your application users too quickly :)

So, I decided to implement this kind of infrastructure in my lab environment, but if you’re not a network guru (like me) you will likely face some difficulties configuring some of the components, especially when it comes to the VPN S2S. In addition, you have to understand several new notions about Azure networking before hoping to see your infrastructure work correctly. As an old sysadmin, I admit it was a great opportunity to turn my network learning into a concrete use case. Let’s first set the initial context. Here is my lab environment, which I’ve been using for a while for different purposes such as internal testing and event presentations. It addresses a lot of testing scenarios including multi-subnet architectures with SQL Server FCIs and SQL Server availability groups.

blog 142 - 1 - lab environment

Obviously, some static routes are already set up to allow network traffic between my on-premise subnets. As you guessed, the game consisted of extending this on-premise network to my SQL MI network on Azure. As a reminder, SQL MI is not reachable from a public endpoint; you may connect only from an internal network (either directly from Azure or from your on-premise network). As said previously, one of my biggest challenges was to configure my remote access server as a VPN server to communicate with my SQL MI Azure network. Fortunately, there are plenty of pointers on the internet that may help you to achieve this task.  This blog post is a good walk-through, by the way. In my context, you will note I also had to apply special settings to my home router in order to allow IPsec Passthrough, as well as to add my RRAS server internal IP (192.168.0.101) to the DMZ. I also used the IKEv2 VPN protocol and a pre-shared key for authentication between my gateways on-premise and on Azure. The VPN S2S configuration is environment-specific, and this is probably why doing a presentation to a customer or at events is so difficult, especially if you’re outside of your own network.

Anyway, let’s talk about the Azure side configuration. My Azure network topology is composed of two distinct VNETs as follows:

blog 142 - 2 - VNET configuration

The connection between my on-premise and my Azure networks is defined as shown below:

$vpnconnection = Get-AzureRmVirtualNetworkGatewayConnection -ResourceGroupName dbi-onpremises-rg 
$vpnconnection.Name
$vpnconnection.VirtualNetworkGateway1.Id
$vpnconnection.LocalNetworkGateway2.Id 

dbi-vpn-connection
/subscriptions/xxxx/resourceGroups/dbi-onpremises-rg/providers/Microsoft.Network/virtualNetworkGateways/dbi-virtual-network-gw
/subscriptions/xxxx/resourceGroups/dbi-onpremises-rg/providers/Microsoft.Network/localNetworkGateways/dbi-local-network-gw

 

The first VNET (172.17.x.x) is used as a hub virtual network and owns my gateway. The second one (172.16.x.x) is the SQL MI VNET:

Get-AzureRmVirtualNetwork | Where-Object { $_.Name -like '*-vnet' } | % {

    Get-AzureRmVirtualNetworkSubnetConfig -VirtualNetwork $_ | Select Name, AddressPrefix
} 

Name          AddressPrefix  
----          -------------  
default       172.17.0.0/24  
GatewaySubnet 172.17.1.0/24  
vm-net        172.16.128.0/17
sql-mi-subnet 172.16.0.0/24

 

My azure gateway subnet (GatewaySubnet) is part of the VPN connectivity with the related gateway connections:

$gatewaycfg = Get-AzureRmVirtualNetworkGatewayConnection -ResourceGroupName dbi-onpremises-rg -Name dbi-vpn-connection 
$gatewaycfg.VirtualNetworkGateway1.Id
$gatewaycfg.LocalNetworkGateway2.Id 

/subscriptions/xxxx/resourceGroups/dbi-onpremises-rg/providers/Microsoft.Network/virtualNetworkGateways/dbi-virtual-network-gw
/subscriptions/xxxx/resourceGroups/dbi-onpremises-rg/providers/Microsoft.Network/localNetworkGateways/dbi-local-network-gw

 

The dbi-local-network-gw local gateway includes the following address prefixes, which correspond to my local lab environment network:

$gatewaylocal = Get-AzureRMLocalNetworkGateway -ResourceGroupName dbi-onpremises-rg -Name dbi-local-network-gw 
$gatewaylocal.LocalNetworkAddressSpace.AddressPrefixes 

192.168.0.0/16
192.168.5.0/24
192.168.40.0/24

 

Note that I’ve chosen a static configuration, but my guess is that I could turn to the BGP protocol instead to make things more dynamic. I will talk quickly about BGP and routing issues at the end of the write-up. But at this stage, some configuration steps are still missing before I can hope to reach my SQL MI instance from my lab environment network. Indeed, although my VPN connection status was OK, I was only able to reach my dbi-onpremises-vnet VNET, and I needed a way to connect to the sql-mi-vnet VNET. So, I had to turn on both virtual network peering and the gateway transit mechanism. By the way, when you peer two VNETs, Azure automatically routes traffic between them.

blog 142 - 3 - VNET configuration peering

Here is the peering configuration I applied to my dbi-onpremises-vnet VNET (first VNET):

Get-AzureRmVirtualNetworkPeering -ResourceGroupName dbi-onpremises-rg -VirtualNetworkName dbi-onpremises-vnet | `
Select-Object VirtualNetworkName, PeeringState, AllowVirtualNetworkAccess, AllowForwardedTraffic, AllowGatewayTransit, UseRemoteGateways 

$peering = Get-AzureRmVirtualNetworkPeering -ResourceGroupName dbi-onpremises-rg -VirtualNetworkName dbi-onpremises-vnet 
Write-Host "Remote virtual network peering"
$peering.RemoteVirtualNetwork.Id 

VirtualNetworkName        : dbi-onpremises-vnet
PeeringState              : Connected
AllowVirtualNetworkAccess : True
AllowForwardedTraffic     : True
AllowGatewayTransit       : True
UseRemoteGateways         : False

Remote virtual network peering
/subscriptions/xxxxx/resourceGroups/sql-mi-rg/providers/Microsoft.Network/virtualNetworks/sql-mi-vnet

 

And here is the peering configuration of my sql-mi-vnet VNET (second VNET):

Get-AzureRmVirtualNetworkPeering -ResourceGroupName sql-mi-rg -VirtualNetworkName sql-mi-vnet | `
Select-Object VirtualNetworkName, PeeringState, AllowVirtualNetworkAccess, AllowForwardedTraffic, AllowGatewayTransit, UseRemoteGateways 

$peering = Get-AzureRmVirtualNetworkPeering -ResourceGroupName sql-mi-rg -VirtualNetworkName sql-mi-vnet
Write-Host "Remote virtual network peering"
$peering.RemoteVirtualNetwork.Id 

VirtualNetworkName        : sql-mi-vnet
PeeringState              : Connected
AllowVirtualNetworkAccess : True
AllowForwardedTraffic     : True
AllowGatewayTransit       : False
UseRemoteGateways         : True

Remote virtual network peering
/subscriptions/xxxxx/resourceGroups/dbi-onpremises-rg/providers/Microsoft.Network/virtualNetworks/dbi-onpremises-vnet

 

Note that to allow traffic coming from my on-premise network to go through my first VNET (dbi-onpremises-vnet) to the destination of the second one (sql-mi-vnet), I needed to enable settings such as Allow Gateway Transit, Allow Forwarded Traffic and Use Remote Gateways on the concerned networks.

At this stage, I still faced a weird issue: I was able to connect to a virtual machine installed on the same VNET as my SQL MI, but had no luck with the SQL instance. In addition, the psping command output confirmed my connectivity issue, letting me think it was a routing issue.

blog 142 - 5 - psping command output

Routes from my on-premise network seemed to be well configured, as shown below. The VPN is a dial-up internet connection in my case.

blog 142 - 6 - local route

I also got confirmation from the Microsoft support team that my on-premise network packets were correctly sent through my Azure VPN gateway (particular thanks to Filipe Bárrios, support engineer, Azure Networking). In fact, I was stuck for a couple of days without being able to figure out exactly what was happening. Furthermore, checking effective routes did not seem to be a viable option in my case because there is no explicit network interface with SQL MI. Please feel free to comment if I am wrong on this point. Fortunately, I found a PowerShell script provided by Jovan Popovic (MSFT) which seems to have put me on the right track:

blog 142 - 4 - VNET PGP configuration

Referring to the Microsoft documentation, it seems BGP route propagation could be very helpful in my case:

Support transit routing between your on-premises networks and multiple Azure VNets

BGP enables multiple gateways to learn and propagate prefixes from different networks, whether they are directly or indirectly connected. This can enable transit routing with Azure VPN gateways between your on-premises sites or across multiple Azure Virtual Networks.

After enabling the corresponding option in the SQL MI route table and opening the SQL MI ports in my firewall, the connection was finally successful.

blog 142 - 8 - sql mi route bgp

blog 142 - 7 - psping output 2

Hope it helps!

See you

This article Connecting to Azure SQL Managed Instance from on-premise network appeared first on Blog dbi services.

Convert function not working with CLOB datatype giving garbage value

Tom Kyte - Sun, 2018-09-09 17:46
We have some data of type LOB and varchar2 which we need to export to another application which is CSV-supported, i.e. it reads data in the form of CSV only. It's a legacy application, so we need to export data from the Oracle database in CSV format and need to s...
Categories: DBA Blogs

Partitioning -- 4 : Row Movement

Hemant K Chitale - Sun, 2018-09-09 10:06
Do you expect Primary Keys to be updatable?  Some argue that Primary Key values should be immutable; that is, once assigned, a Primary Key should not be modified.

What about Partition Keys ?  Would you allow a Partition Key to be updated ?

Let me take the SALES_DATA table again :

SQL> desc sales_data
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 SALE_ID                                   NOT NULL NUMBER
 SALE_DATE                                          DATE
 INVOICE_NUMBER                                     VARCHAR2(21)
 CUSTOMER_ID                                        NUMBER
 PRODUCT_ID                                         NUMBER
 SALE_VALUE                                         NUMBER

SQL> insert into sales_data
2 values (sales_data_seq.nextval,
3 to_date('09-SEP-2019','DD-MON-YYYY'),
4 'INV320001X',
5 45,
6 52,
7 10000)
8 /

1 row created.

SQL> commit;

Commit complete.

SQL>


After the INSERT, I realise that the year in the SALE_DATE is wrong -- it is 2019 instead of 2018.  I need to update the row to set the year to 2018.
(Since the SALES_DATA table is partitioned to have a separate Partition for each year, this row has gone into the P_2019 Partition).

SQL> select * from sales_data
2 where invoice_number='INV320001X' and customer_id=45;

   SALE_ID SALE_DATE INVOICE_NUMBER        CUSTOMER_ID PRODUCT_ID SALE_VALUE
---------- --------- --------------------- ----------- ---------- ----------
    320001 09-SEP-19 INV320001X                     45         52      10000

SQL> select * from sales_data partition (P_2019);

   SALE_ID SALE_DATE INVOICE_NUMBER        CUSTOMER_ID PRODUCT_ID SALE_VALUE
---------- --------- --------------------- ----------- ---------- ----------
    320001 09-SEP-19 INV320001X                     45         52      10000

SQL>
SQL> update sales_data
2 set sale_date = to_date('09-SEP-2018','DD-MON-YYYY')
3 where sale_id=320001
4 /
update sales_data
*
ERROR at line 1:
ORA-14402: updating partition key column would cause a partition change


SQL>


I encounter an error.  Oracle does not like updating a Partition Key value such that the row would have to move to a different Partition --- from the P_2019 Partition to the P_2018 Partition.

How would I allow updates that result in a row moving to a different Partition ?

SQL> alter table sales_data enable row movement;

Table altered.

SQL> update sales_data
2 set sale_date = to_date('09-SEP-2018','DD-MON-YYYY')
3 where sale_id=320001
4 /

1 row updated.

SQL> commit;

Commit complete.

SQL> select * from sales_data partition (P_2019);

no rows selected

SQL> select * from sales_data partition (P_2018)
2 where sale_id=320001
3 /

   SALE_ID SALE_DATE INVOICE_NUMBER        CUSTOMER_ID PRODUCT_ID SALE_VALUE
---------- --------- --------------------- ----------- ---------- ----------
    320001 09-SEP-18 INV320001X                     45         52      10000

SQL>


The ALTER TABLE ... ENABLE ROW MOVEMENT is a DDL command (it needs to be issued only once to allow any number of subsequent updates to the table's rows) that allows a row to move from one Partition to another Partition.  In this case, the row moved from P_2019 to P_2018.

Moving rows from one Partition to another Partition is expensive.  Each row moved in such a manner results in
(a) marking deletion of the row from the original Partition
(b) physically inserting the *entire* row (irrespective of the length of the row) into the new Partition -- not just the SALE_DATE value but every column has to be written into a block in the new Partition
(c) updating *every* index (Global or Local) on the Table

That is why it is not a good design to have frequently updated Partition Keys resulting in a row moving from one Partition to another.  You may have to reconsider the Partitioning definition or data and transaction flow in the application.

(Do you know where else ENABLE ROW MOVEMENT is required ?  There are other cases, not related to Partitioning, where you may have to ENABLE ROW MOVEMENT for a table; one example follows.  By default when you CREATE a Table, ROW MOVEMENT is not enabled unless you explicitly enable it).
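
One such case, for what it's worth, is segment shrink.  A quick sketch with a hypothetical table T in an ASSM tablespace; ORA-10636 is the error Oracle raises when Row Movement is disabled:

SQL> create table t (x number);

Table created.

SQL> alter table t shrink space;
alter table t shrink space
*
ERROR at line 1:
ORA-10636: ROW MOVEMENT is not enabled

SQL> alter table t enable row movement;

Table altered.

SQL> alter table t shrink space;

Table altered.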



Categories: DBA Blogs

Oracle 18c: Cluster With Oracle ASM Filter Driver

Yann Neuhaus - Sat, 2018-09-08 17:32

During the installation of Oracle Grid Infrastructure, you can optionally enable automated installation and configuration of Oracle ASM Filter Driver for your system with the Configure ASM Filter Driver check box on the Create ASM Disk Group wizard page. When you enable the Configure ASM Filter Driver box, an automated process for Oracle ASMFD is launched during Oracle Grid Infrastructure installation.

If Oracle ASMLIB exists on your Linux system, then deinstall Oracle ASMLIB before installing Oracle Grid Infrastructure, so that you can choose to install and configure Oracle ASMFD during an Oracle Grid Infrastructure installation.
In this blog, I install a two-node Oracle 18c cluster using Oracle ASMFD. Below are the disks we will use.

[root@rac18ca ~]# ls -l /dev/sd[d-f]
brw-rw----. 1 root disk 8, 48 Sep  8 22:09 /dev/sdd
brw-rw----. 1 root disk 8, 64 Sep  8 22:09 /dev/sde
brw-rw----. 1 root disk 8, 80 Sep  8 22:09 /dev/sdf
[root@rac18ca ~]#

[root@rac18cb ~]# ls -l /dev/sd[d-f]
brw-rw----. 1 root disk 8, 48 Sep  8 22:46 /dev/sdd
brw-rw----. 1 root disk 8, 64 Sep  8 22:46 /dev/sde
brw-rw----. 1 root disk 8, 80 Sep  8 22:46 /dev/sdf
[root@rac18cb ~]#

We assume that all prerequisites are in place (public IP, private IP, SCAN, shared disks, …). Also, we will not show all screenshots.
The first step is to unzip the Oracle software into the ORACLE_HOME of the Grid Infrastructure.

unzip -d /u01/app/oracle/18.0.0.0/grid LINUX.X64_180000_grid_home.zip

Next, we use the ASMCMD afd_label command to provision the disk devices for use with Oracle ASM Filter Driver, as follows.

[root@rac18ca ~]# export ORACLE_HOME=/u01/app/oracle/18.0.0.0/grid
[root@rac18ca ~]# export ORACLE_BASE=/tmp                                       
[root@rac18ca ~]# /u01/app/oracle/18.0.0.0/grid/bin/asmcmd afd_label VOTOCR /dev/sde --init
[root@rac18ca ~]# /u01/app/oracle/18.0.0.0/grid/bin/asmcmd afd_label DATA /dev/sdd --init
[root@rac18ca ~]# /u01/app/oracle/18.0.0.0/grid/bin/asmcmd afd_label DIVERS /dev/sdf --init
[root@rac18ca ~]#

Then we can use the ASMCMD afd_lslbl command to verify that the devices have been marked for use with Oracle ASMFD.

[root@rac18ca network-scripts]# /u01/app/oracle/18.0.0.0/grid/bin/asmcmd afd_lslbl /dev/sde
--------------------------------------------------------------------------------
Label                     Duplicate  Path
================================================================================
VOTOCR                                /dev/sde
[root@rac18ca network-scripts]# /u01/app/oracle/18.0.0.0/grid/bin/asmcmd afd_lslbl /dev/sdd
--------------------------------------------------------------------------------
Label                     Duplicate  Path
================================================================================
DATA                                  /dev/sdd
[root@rac18ca network-scripts]# /u01/app/oracle/18.0.0.0/grid/bin/asmcmd afd_lslbl /dev/sdf
--------------------------------------------------------------------------------
Label                     Duplicate  Path
================================================================================
DIVERS                                /dev/sdf
[root@rac18ca network-scripts]#

Now that disks are initialized for ASMFD, we can start the installation.

[oracle@rac18ca grid]$ ./gridSetup.sh

We will not show all the pictures.


In the next window, we can choose the disks for the OCR and Voting files. We also check the Configure Oracle ASM Filter Driver box.


Then we continue the installation. We will have to run the orainstRoot.sh and root.sh scripts; these steps are not shown here.
At the end of the installation, we can verify the status of the cluster:

[oracle@rac18cb ~]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [18.0.0.0.0]

[oracle@rac18ca ~]$ crsctl status resource -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       rac18ca                  STABLE
               ONLINE  ONLINE       rac18cb                  STABLE
ora.DG_DATA.dg
               ONLINE  ONLINE       rac18ca                  STABLE
               ONLINE  ONLINE       rac18cb                  STABLE
ora.DG_VOTOCR.dg
               ONLINE  ONLINE       rac18ca                  STABLE
               ONLINE  ONLINE       rac18cb                  STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac18ca                  STABLE
               ONLINE  ONLINE       rac18cb                  STABLE
ora.net1.network
               ONLINE  ONLINE       rac18ca                  STABLE
               ONLINE  ONLINE       rac18cb                  STABLE
ora.ons
               ONLINE  ONLINE       rac18ca                  STABLE
               ONLINE  ONLINE       rac18cb                  STABLE
ora.proxy_advm
               ONLINE  ONLINE       rac18ca                  STABLE
               ONLINE  ONLINE       rac18cb                  STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac18ca                  STABLE
ora.MGMTLSNR
      1        OFFLINE OFFLINE                               STABLE
ora.asm
      1        ONLINE  ONLINE       rac18ca                  Started,STABLE
      2        ONLINE  ONLINE       rac18cb                  Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       rac18ca                  STABLE
ora.mgmtdb
      1        OFFLINE OFFLINE                               STABLE
ora.qosmserver
      1        ONLINE  ONLINE       rac18ca                  STABLE
ora.rac18ca.vip
      1        ONLINE  ONLINE       rac18ca                  STABLE
ora.rac18cb.vip
      1        ONLINE  ONLINE       rac18cb                  STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       rac18ca                  STABLE
--------------------------------------------------------------------------------
[oracle@rac18ca ~]$

We can also check that ASMFD is enabled:

[oracle@rac18ca ~]$ asmcmd afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
DATA                        ENABLED   /dev/sdd
DIVERS                      ENABLED   /dev/sdf
VOTOCR                      ENABLED   /dev/sde
[oracle@rac18ca ~]$


[oracle@rac18ca ~]$ asmcmd dsget
parameter:/dev/sd*, AFD:*
profile:/dev/sd*,AFD:*
[oracle@rac18ca ~]$

[oracle@rac18ca ~]$ asmcmd lsdsk
Path
AFD:DATA
AFD:DIVERS
AFD:VOTOCR
[oracle@rac18ca ~]$
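
The same can be cross-checked from the ASM instance in SQL (a minimal sketch, assuming a SYSASM connection; with ASMFD in place, the LIBRARY column references the AFD library instead of "System"):

SQL> select name, path, library
  2  from v$asm_disk
  3  order by name
  4  /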

Conclusion
In this blog, we have seen how to install a cluster using ASMFD.


The article Oracle 18c: Cluster With Oracle ASM Filter Driver appeared first on Blog dbi services.

Partitioning -- 3d : Partial Indexing (in 11g)

Hemant K Chitale - Sat, 2018-09-08 10:24
Oracle 12c has introduced a new feature called "Partial Index" whereby only selected Partitions of a Table are indexed.  This is useful, for example, where you have a large historical table and you know that older Partitions are infrequently accessed and no longer need to be indexed.  For such tables, you can afford to "lose" the index for these older Partitions.
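
(For reference, the 12c syntax looks like this -- a minimal sketch of the 12c feature, not applicable in 11.2, using a hypothetical third index on CUSTOMER_ID : mark the Partitions you no longer want indexed with INDEXING OFF and create the index with the INDEXING PARTIAL clause.)

SQL> alter table sales_data modify partition P_2016 indexing off;

Table altered.

SQL> create index sales_data_lcl_ndx_3
  2  on sales_data (customer_id) local indexing partial;

Index created.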

How would you do this in 11.2 ?

Let me go back to the SALES_DATA table with data from 2016 to 2018 populated.  This is the status of the index partition segments :

SQL> l
1 select segment_name, partition_name, bytes/1024
2 from user_segments
3 where segment_type = 'INDEX PARTITION'
4 and segment_name in
5 (select index_name
6 from user_indexes
7 where table_name = 'SALES_DATA')
8* order by 1,2
SQL> /

SEGMENT_NAME                   PARTITION_NA BYTES/1024
------------------------------ ------------ ----------
SALES_DATA_LCL_NDX_1           P_2016             3072
SALES_DATA_LCL_NDX_1           P_2017             3072
SALES_DATA_LCL_NDX_1           P_2018             3072
SALES_DATA_LCL_NDX_2           P_2016               64
SALES_DATA_LCL_NDX_2           P_2017               64
SALES_DATA_LCL_NDX_2           P_2018               64

6 rows selected.

SQL>


So, if I now want to "unindex" the year 2016 partition (P_2016) of the SALES_DATA table, I can :

SQL> show parameter deferred_segment_creation

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
deferred_segment_creation            boolean     TRUE

SQL> alter index SALES_DATA_LCL_NDX_1 modify partition P_2016 unusable;

Index altered.

SQL> alter index SALES_DATA_LCL_NDX_2 modify partition P_2016 unusable;

Index altered.

SQL>
SQL> l
1 select segment_name, partition_name, bytes/1024
2 from user_segments
3 where segment_type = 'INDEX PARTITION'
4 and segment_name in
5 (select index_name
6 from user_indexes
7 where table_name = 'SALES_DATA')
8* order by 1,2
SQL> /

SEGMENT_NAME                   PARTITION_NA BYTES/1024
------------------------------ ------------ ----------
SALES_DATA_LCL_NDX_1           P_2017             3072
SALES_DATA_LCL_NDX_1           P_2018             3072
SALES_DATA_LCL_NDX_2           P_2017               64
SALES_DATA_LCL_NDX_2           P_2018               64

SQL>
SQL> select count(*) from sales_data partition (P_2016);

  COUNT(*)
----------
    100000

SQL>



You will notice that although the P_2016 Partition in the Table has data, the corresponding Index Partition no longer has a segment -- no space is allocated to it  (although the logical definition of the index exists).  This is possible with the "deferred_segment_creation" parameter set to TRUE in 11g.

In fact, you will notice that although the table has Partitions for 2019 and 2020 and MAXVALUE, corresponding Index Partition Segments do not exist (because no data has been inserted into those Table Partitions yet) !

SQL> select partition_name           
2 from user_tab_partitions
3 where table_name = 'SALES_DATA'
4 order by partition_position
5 /

PARTITION_NA
------------
P_2016
P_2017
P_2018
P_2019
P_2020
P_MAXVALUE

6 rows selected.

SQL>
SQL> select index_name, partition_name, status
2 from user_ind_partitions
3 where index_name in
4 (select index_name
5 from user_indexes
6 where table_name = 'SALES_DATA')
7 order by index_name, partition_position
8 /

INDEX_NAME                     PARTITION_NA STATUS
------------------------------ ------------ --------
SALES_DATA_LCL_NDX_1           P_2016       UNUSABLE
SALES_DATA_LCL_NDX_1           P_2017       USABLE
SALES_DATA_LCL_NDX_1           P_2018       USABLE
SALES_DATA_LCL_NDX_1           P_2019       USABLE
SALES_DATA_LCL_NDX_1           P_2020       USABLE
SALES_DATA_LCL_NDX_1           P_MAXVALUE   USABLE
SALES_DATA_LCL_NDX_2           P_2016       UNUSABLE
SALES_DATA_LCL_NDX_2           P_2017       USABLE
SALES_DATA_LCL_NDX_2           P_2018       USABLE
SALES_DATA_LCL_NDX_2           P_2019       USABLE
SALES_DATA_LCL_NDX_2           P_2020       USABLE
SALES_DATA_LCL_NDX_2           P_MAXVALUE   USABLE

12 rows selected.

SQL>


This behaviour is a consequence of "deferred_segment_creation".
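
Should the P_2016 Partition need to be indexed again later (say, for year-end reporting), rebuilding the Index Partitions recreates the segments :

SQL> alter index SALES_DATA_LCL_NDX_1 rebuild partition P_2016;

Index altered.

SQL> alter index SALES_DATA_LCL_NDX_2 rebuild partition P_2016;

Index altered.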

Note : If a Partitioned Index is a Unique / Primary Key Index, do NOT attempt to set an Index Partition to UNUSABLE.  UNUSABLE status would prevent INSERTs into the table.
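
(As an illustration -- a sketch with a hypothetical table T whose Primary Key index partition has been marked UNUSABLE -- the INSERT fails outright :)

SQL> insert into t (id, val) values (1, 'x');
insert into t (id, val) values (1, 'x')
*
ERROR at line 1:
ORA-01502: index 'HEMANT.T_PK' or partition of such index is in unusable state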



Categories: DBA Blogs

Documentum – Silent Install – xPlore IndexAgent

Yann Neuhaus - Sat, 2018-09-08 08:00

In previous blogs, we silently installed the Documentum binaries (CS), a docbroker (+ licence(s) if needed), several repositories (here and here), D2 and finally the xPlore binaries & Dsearch. This blog is the last one of this series on Documentum silent installation, and it covers how to install an xPlore IndexAgent for the existing docbase/repository created previously.

So let’s start, as always, with the preparation of the properties file:

[xplore@full_text_server_01 ~]$ vi /tmp/xplore_install/FT_IA_Installation.properties
[xplore@full_text_server_01 ~]$ cat /tmp/xplore_install/FT_IA_Installation.properties
### Silent installation response file for Indexagent
INSTALLER_UI=silent
KEEP_TEMP_FILE=true

### Installation parameters
common.installLocation=/opt/xPlore
COMMON.DCTM_USER_DIR_WITH_FORWARD_SLASH=/opt/xPlore
common.64bits=true
COMMON.JAVA64_HOME=/opt/xPlore/java64/JAVA_LINK

### Configuration mode
indexagent.configMode.create=1
indexagent.configMode.upgrade=0
indexagent.configMode.delete=0
indexagent.configMode.create.migration=0

### Other configurations
indexagent.ess.host=full_text_server_01.dbi-services.com
indexagent.ess.port=9300

indexagent.name=Indexagent_Docbase1
indexagent.FQDN=full_text_server_01.dbi-services.com
indexagent.instance.port=9200
indexagent.instance.password=ind3x4g3ntAdm1nP4ssw0rd

indexagent.docbase.name=Docbase1
indexagent.docbase.user=dmadmin
indexagent.docbase.password=dm4dm1nP4ssw0rd

indexagent.connectionBroker.host=content_server_01.dbi-services.com
indexagent.connectionBroker.port=1489

indexagent.globalRegistryRepository.name=gr_docbase
indexagent.globalRegistryRepository.user=dm_bof_registry
indexagent.globalRegistryRepository.password=dm_b0f_reg1s7ryP4ssw0rd

indexagent.storage.name=default
indexagent.local_content_area=/opt/xPlore/wildfly9.0.1/server/DctmServer_Indexagent_Docbase1/data/Indexagent_Docbase1/export

common.installOwner.password=ind3x4g3ntAdm1nP4ssw0rd

[xplore@full_text_server_01 ~]$


A short description of these properties:

  • INSTALLER_UI: The mode to use for the installation, here it is obviously silent
  • KEEP_TEMP_FILE: Whether or not you want to keep the temporary files created by the installer. These files are generated under the /tmp folder. I usually keep them because I want to be able to check them if something went wrong
  • common.installLocation: The path you installed xPlore on. I put here /opt/xPlore but you can use whatever you want
  • COMMON.DCTM_USER_DIR_WITH_FORWARD_SLASH: Same value as “common.installLocation” for linux but for Windows, you need to change double back-slash with forward slash
  • common.64bits: Whether or not the below mentioned java is a 32 or 64 bits
  • COMMON.JAVA64_HOME: The path of the JAVA_HOME that has been installed with the binaries. If you installed xPlore under /opt/xPlore, then this value should be: /opt/xPlore/java64/JAVA_LINK
  • indexagent.configMode.create: Whether or not you want to install an IndexAgent (binary value)
  • indexagent.configMode.upgrade: Whether or not you want to upgrade an IndexAgent (binary value)
  • indexagent.configMode.delete: Whether or not you want to delete an IndexAgent (binary value)
  • indexagent.configMode.create.migration: This isn’t used anymore in recent installer versions, and I still don’t know what its purpose was before… In any case, set this to 0 ;)
  • indexagent.ess.host: The Fully Qualified Domain Name of the primary Dsearch this new IndexAgent will be linked to
  • indexagent.ess.port: The port that the primary Dsearch is using
  • indexagent.name: The name of the IndexAgent to be installed. The default name is usually Indexagent_<docbase_name>
  • indexagent.FQDN: The Fully Qualified Domain Name of the current host the IndexAgent is being installed on
  • indexagent.instance.port: The port that the IndexAgent is/will be using (HTTP)
  • indexagent.instance.password: The password to be used for the new IndexAgent JBoss admin
  • indexagent.docbase.name: The name of the docbase/repository that this IndexAgent is being installed for
  • indexagent.docbase.user: The name of an account on the target docbase/repository that has the permissions needed to configure the objects (updating the dm_server_config, dm_ftindex_agent_config, and so on)
  • indexagent.docbase.password: The password of the above-mentioned account
  • indexagent.connectionBroker.host: The Fully Qualified Domain Name of the target docbroker/connection broker that is aware of the “indexagent.docbase.name” docbase/repository. This will be used in the dfc.properties
  • indexagent.connectionBroker.port: The port of the target docbroker/connection broker that is aware of the “indexagent.docbase.name” docbase/repository. This will be used in the dfc.properties
  • indexagent.globalRegistryRepository.name: The name of the GR repository
  • indexagent.globalRegistryRepository.user: The name of the BOF Registry account created on the CS inside the GR repository. This is usually something like “dm_bof_registry”
  • indexagent.globalRegistryRepository.password: The password used by the BOF Registry account
  • indexagent.storage.name: The name of the storage location to be created. The default one is “default”. If you intend to create new collections, you might want to give it a more meaningful name
  • indexagent.local_content_area: The path to be used to store the content temporarily on the file system. The value I used above is the default one but you can put it wherever you want. If you are using a multi-node, this path needs to be accessible from all nodes of the multi-node so you can put it under the “ess.data_dir” folder for example
  • common.installOwner.password: The password of the xPlore installation owner. I assume this is only used on Windows environments for the service setup because on Linux, I always set a dummy password and there is no issue


Once the properties file is ready, make sure that the Dsearch this IndexAgent is linked to is running (http(s)://<indexagent.ess.host>:<indexagent.ess.port>/dsearchadmin) and that the Global Registry repository (gr_docbase) as well as the target repository (Docbase1) are running. Then you can install the Documentum IndexAgent in silent mode using the following command:

[xplore@full_text_server_01 ~]$ /opt/xPlore/setup/indexagent/iaConfig.bin LAX_VM "/opt/xPlore/java64/JAVA_LINK/bin/java" -f /tmp/xplore_install/FT_IA_Installation.properties


This concludes the series about Documentum silent installation. Other components, such as the Process Engine, also support silent installation, but they usually require only a few parameters (or even none), which is why they are not covered here.


The article Documentum – Silent Install – xPlore IndexAgent appeared first on Blog dbi services.

Differences in charts in IRs, IGs, and Charts

Tom Kyte - Sat, 2018-09-08 05:26
Hi, can you please explain the architectural reason for the differences between charts in IR, IG and chart regions? For example: the UI, series (they can be specified in an IG chart but not in an IR chart), types of charts (in IR we have only 4 char...
Categories: DBA Blogs

How to use partial indexes for PK/UK keys

Tom Kyte - Fri, 2018-09-07 11:06
LiveSQL link: pasting the link into the LiveSQL Link field did not work, but the link itself does: https://livesql.oracle.com/apex/livesql/s/g78ij8jxinq4409pp8dbz3owo Hello all, we have a BI solution (Oracle 12c + ODI11). We have a lot of partitioned tables (daily and mont...
Categories: DBA Blogs

Step by Step: Ansible Role To Setup Oracle ACFS On Multiple Nodes

Pakistan's First Oracle Blog - Thu, 2018-09-06 17:46
This post contains step-by-step instructions for creating an Ansible role acfssetup to set up Oracle ASM Cluster Filesystem (ACFS) on multiple nodes of a cluster. This assumes that Grid Infrastructure 12.1.0.2.0 is already installed on the nodes and that ASM is working fine. It also assumes that Ansible is installed on a controller host, with SSH equivalency set up between the root and oracle users.




Step 1: Create directory structure for the role acfssetup:


$ cd /etc/ansible/roles
$ mkdir acfssetup
$ cd acfssetup
$ mkdir files handlers meta templates tasks vars

Step 2: Create the Tasks (/etc/ansible/roles/acfssetup/tasks/main.yml):

---

- name: Install ACFS/ADVM modules on the nodes
  become_user: "{{ superuser }}"
  environment:
    ORACLE_SID: "{{ asm_instance }}"
    ORACLE_HOME: "{{ gi_home_path }}"
  shell: "{{ gi_home_path }}/bin/acfsroot install"
  tags: acfs

- name: Start the ACFS modules on the nodes
  become_user: "{{ superuser }}"
  shell: "{{ gi_home_path }}/bin/acfsload start"
  tags: acfs

- name: Enable the ACFS modules on the nodes
  become_user: "{{ superuser }}"
  shell: "{{ gi_home_path }}/bin/acfsroot enable"
  tags: acfs

- name: As oracle user, create an ASM volume for ACFS (on one node only)
  run_once: true
  become_user: "{{ gi_owner }}"
  environment:
    ORACLE_SID: "{{ asm_instance }}"
    ORACLE_HOME: "{{ gi_home_path }}"
  shell: "{{ gi_home_path }}/bin/asmcmd volcreate -G {{ asm_dg_name }} -s {{ acfs_vol_size }} {{ acfs_vol_name }}"
  tags: acfs

- name: As oracle user, look up the device name of the new volume
  run_once: true
  become_user: "{{ gi_owner }}"
  environment:
    ORACLE_SID: "{{ asm_instance }}"
    ORACLE_HOME: "{{ gi_home_path }}"
  shell: "{{ gi_home_path }}/bin/asmcmd volinfo -G {{ asm_dg_name }} {{ acfs_vol_name }} | grep Device | sed 's/.*://'"
  register: device_name
  tags: acfs

- name: As oracle user, create the filesystem on the volume which was just created
  run_once: true
  become_user: "{{ gi_owner }}"
  shell: "/sbin/mkfs -t acfs {{ device_name.stdout | trim }}"
  tags: acfs

- name: As root, create an empty directory on each node to house the file system
  become_user: "{{ superuser }}"
  shell: "mkdir -p /{{ acfs_mount_name }}/{{ acfs_vol_name }}; chown root:{{ gi_group }} /{{ acfs_mount_name }}; chmod 770 /{{ acfs_mount_name }}; chown -R {{ gi_owner }}:{{ gi_group }} /{{ acfs_mount_name }}/{{ acfs_vol_name }}; chmod 775 /{{ acfs_mount_name }}/{{ acfs_vol_name }}"
  tags: acfs

- name: As root, set up the file system to be auto mounted by clusterware
  run_once: true
  become_user: "{{ superuser }}"
  shell: "{{ gi_home_path }}/bin/srvctl add volume -volume {{ acfs_vol_name }} -diskgroup {{ asm_dg_name }} -device {{ device_name.stdout | trim }}; {{ gi_home_path }}/bin/srvctl add filesystem -device {{ device_name.stdout | trim }} -path /{{ acfs_mount_name }}/{{ acfs_vol_name }} -diskgroup {{ asm_dg_name }} -user {{ gi_owner }} -fstype ACFS -description \"ACFS General Purpose Mount\""
  tags: acfs
Step 3: Create the Variables (/etc/ansible/roles/acfssetup/vars/main.yml):

ver: "12.1.0.2.0"
superuser: root
asm_instance: +ASM
asm_dg_name: DATA
acfs_vol_name: ACFSVOL1
acfs_vol_size: 10G
acfs_mount_name: acfsmounts
device_name: default([])
gi_owner: oracle
gi_group: oinstall
gi_base_path: "/u01/app/oracle"
gi_home_path: "{{ gi_base_path }}/product/{{ ver |
regex_replace('^(.*)\\.(.*)\\.(.*)\\.(.*)$', '\\1.\\2.\\3') }}/grid"
gi_home_name: "OraGI{{ ver | regex_replace('^(.*)\\.(.*)\\.(.*)\\.(.*)$', '\\1\\2')}}"

Step 4: Configure Ansible host file (/etc/ansible/hosts)

node1 ansible_host=node1.foo.com
node2 ansible_host=node2.foo.com

Step 5: Create the skeleton Playbook (/etc/ansible/acfs.yml):

---
- hosts: all
  become: true
  roles:
    - acfssetup

Step 6: Run the playbook

$ ansible-playbook acfs.yml
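
Once the playbook has run, the new volume can be cross-checked from the ASM instance in SQL (a minimal sketch, assuming a SYSASM connection on one of the nodes):

SQL> select volume_name, volume_device, state, usage
  2  from v$asm_volume
  3  where volume_name = 'ACFSVOL1'
  4  /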
Categories: DBA Blogs

How to sort in alphabetical order and get the 2nd ranked rows

Tom Kyte - Thu, 2018-09-06 17:06
Hi Tom, How do I sort data in alphabetical order and assign ranks (the ranks must be assigned in alphabetical order)? For example, I have data and want to assign ranks like this; for that, I used ORDER BY, <code>A ADAMS-1 ALLEN-2 B B...
Categories: DBA Blogs
