Feed aggregator

[BLOG] Oracle GoldenGate: How to Handle Tables Without Primary Keys/Unique Indexes

Online Apps DBA - Tue, 2018-09-11 07:44

Visit: https://k21academy.com/goldengate25 & know all about: 1) Common Queries & Errors faced in Oracle GoldenGate 2) How To Troubleshoot the Errors 3) How to Handle Tables Without Primary Keys or Unique Indexes With Oracle GoldenGate […]

The post [BLOG] Oracle GoldenGate: How to Handle Tables Without Primary Keys/Unique Indexes appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Database link queries

Tom Kyte - Tue, 2018-09-11 06:26
create or replace procedure P_POP_ILC (P_POL_NO VARCHAR2 DEFAULT NULL) is cursor c1 is select * from rsds_locn_exposure@dmn_rsk_150 where RLE_ULM_NO = NVL(P_POL_NO , RLE_ULM_NO); begin DELETE FROM IDS_LOCN_CLM@dmn_rsk_150; comm...
Categories: DBA Blogs

Oracle 12c directory object feature. Is there an equivalent to utl_file_dir='*'?

Tom Kyte - Tue, 2018-09-11 06:26
Ok so I am trying to get some things with Oracle to work. I am a T-SQL guy, and new to Oracle. The guy who used to manage the Oracle side of things retired. One of our products is pretty old, but still has a large customer base. Some of those customers ...
Categories: DBA Blogs

generate test data automagically

Tom Kyte - Tue, 2018-09-11 06:26
Often I am sent only a description of the tables I need to create, and some business logic to put into stored packages, but no flat files to load test data. That is the reason for my thoughts of a dynamic procedure, which should (depending on column typ...
Categories: DBA Blogs

No matching authentication protocol

Tom Kyte - Tue, 2018-09-11 06:26
Hello All, I am trying to connect to the database and am getting the error below when connecting to the Oracle DB: Can't get the Connection for specified properties; java.sql.SQLException: ORA-28040: No matching authentication protocol...
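For context: ORA-28040 usually means the client's authentication protocol is older than the minimum the server accepts. A common remedy (a sketch only; verify the appropriate version value for your client, and note that affected passwords may need to be reset afterwards) is to relax the minimum in the server's sqlnet.ora:

SQLNET.ALLOWED_LOGON_VERSION_SERVER=11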
Categories: DBA Blogs

PII search using HCI

Yann Neuhaus - Tue, 2018-09-11 05:04

In a previous blog, we described how to install Hitachi Content Intelligence, the Hitachi Vantara solution for data indexing and search. In this blog post, we will see how we can use Hitachi Content Intelligence to perform a basic search for personally identifiable information (PII).

Data Connections

HCI allows you to connect to multiple data sources using the default data connectors. The first step is to create a data connection. By default, the following data connectors are available:

[Image: available HCI data connectors]

For our example, we will simply use the Local File System connector as the data repository. Note that the directory must be within the HCI installation directory.

Below is the data connection configuration for our PII demo.

[Image: HCI data connection configuration]

After adding the information, click on Test, then click on Create.

A new data connection will appear in your dashboard.

Processing Pipelines
After creating the data connection, we will build a processing pipeline for our PII example.
Click on Processing Pipelines > Create a Pipeline. Enter a name for your pipeline (optionally a description) and click on Create.
Click on Add Stages, and create your desired pipeline. For PII search we will use the following pipeline.
[Image: HCI PII pipeline]
After building your pipeline, you can test it by clicking on the Test Pipeline button at the top right of your page.
Index Collections
We can now create an index collection to specify how we want to index the data set.
First, click on Create Index under Index Collections. Create an HCI index and use the schemaless option.
[Image: HCI index configuration]
Content Classes
Then you should create content classes to extract the desired information from your data set. For our PII example, we will create three content classes: one each for American Express and Visa credit card numbers, and one for Social Security numbers (illustrative pattern examples follow the screenshots below).
[Image: HCI content classes]
For American Express credit cards, you should add the following pattern.
[Image: American Express pattern]
Pattern for Visa credit card.
[Image: Visa pattern]
Pattern for Social Security Number.
[Image: Social Security number pattern]
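For reference, content-class patterns of this kind are typically regular expressions along the following lines (illustrative examples only; the exact expressions used in the screenshots may differ):

American Express: 3[47][0-9]{13}
Visa:             4[0-9]{12}([0-9]{3})?
SSN:              [0-9]{3}-[0-9]{2}-[0-9]{4}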
Start your workflow
When all steps are completed, you can start your workflow and wait until it finishes.
[Image: starting the HCI workflow]
HCI Search
Use the HCI Search application to visualize the results.
https://<hci_instance_ip>:8888
Select your index name in the search field and navigate through the results.
You can also display the results as charts and graphs.
[Image: HCI search results displayed as graphs]
This demo is also available on the Hitachi Vantara Community website: https://community.hitachivantara.com/thread/13804-pii-workflows
 

The post PII search using HCI appeared first on Blog dbi services.

OpenWorld 2018

Jim Marion - Mon, 2018-09-10 20:46

With just over a month until OpenWorld, it is time to finalize travel reservations and surf the content catalog. As always, the session catalog is loaded with great sessions from all of our favorite presenters: psadmin.io, Presence of IT, SpearMC, Cedar UK (Graham Smith), Oracle, Smart ERP and so on. I am definitely looking forward to hearing from customers and partners at the panel sessions on this year's agenda. If you have room on your agenda, I would love to have you in my session, Getting the Most Out of PeopleSoft PeopleTools: Tips and Techniques, on Monday, Oct 22 at 3:45 PM in room 3016. I have spent this entire year investigating Fluid and Event Mapping and can't wait to share some new tips.

Are you presenting? If so, leave a note in the comments to help promote your session. This year's catalog is quite exhaustive. Help us find the best sessions of the conference by letting us know what you are presenting.

Node.js on Oracle Linux: It's So Easy

Christopher Jones - Mon, 2018-09-10 18:39
This is a quick placeholder cross-post of the 'so I can find it again' category: Sergio Leunissen from our Linux group has a blog post on using Node.js and node-oracledb on Oracle Linux. You can find the post here. The Oracle Linux RPM packages make it all so easy!

Increasing Maximum Web Service Requests in Oracle APEX 18.1

Dimitri Gielis - Mon, 2018-09-10 15:29
While running our final tests of APEX Office Print (AOP) 18.1 we hit "ORA-20000: Issue calling Main AOP Service (REST call: ): ORA-20001: You have exceeded the maximum number of web service requests per workspace."


When you log in to the Internal Workspace and navigate to a workspace, there's a setting for Maximum Web Service Requests. The default value is 1000 requests per 24 hours (rolling window).

Given that AOP has, next to hundreds of server-side tests, around 500 automated tests that run through APEX, we hit this limit after the second full run. After setting the value to 20000, we were able to continue our final testing :)

I guess the chances are small you will hit the limit in a normal APEX app, but if you do, it's easy to fix by setting a higher value for your workspace.
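If you prefer to script the change instead of using the Internal Workspace UI, the instance-level default can also be raised through the APEX administration API. A minimal sketch, assuming the MAX_WEBSERVICE_REQUESTS instance parameter name and a connection as a user with APEX administrator privileges:

begin
    apex_instance_admin.set_parameter(
        p_parameter => 'MAX_WEBSERVICE_REQUESTS',
        p_value     => '20000'
    );
    commit;
end;
/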
Categories: Development

Alternative for greatest function to improve performance

Tom Kyte - Mon, 2018-09-10 12:06
Hi Tom, I have a view which has date_p, date_p_c. Now I have an SSAS cube which has many partitions based on date. I need to fetch the data based on the latest date. The monthly partition brings data from the beginning of the year to the prior month. Daily pa...
Categories: DBA Blogs

Column default values in Oracle 12C

Tom Kyte - Mon, 2018-09-10 12:06
Hi Tom, We are converting Oracle tables to partitioned tables. To do this we are using Oracle Exchange Partition. Database version 12c. Steps we are following: 1. Original table -- existing table: create table tab1(col1 integer, ...
Categories: DBA Blogs

CONNECT BY with Subquery in START WITH

Tom Kyte - Mon, 2018-09-10 12:06
Hi Tom, ehm, Chris of course, after migrating our production database to 12.2.0.1, I have found out that there is a problem regarding CONNECT BY with a subquery in the START WITH clause. FYI: the LiveSQL link didn't work when I copied it, here is ...
Categories: DBA Blogs

getting xml file as result, and updating other table when executing stored procedure

Tom Kyte - Mon, 2018-09-10 12:06
Hi everyone, I'm new to PL/SQL. I have a requirement where, when I read the data from multiple tables as one XML file (each record as one XML), I have to update another table. For that I have written a procedure, and in that procedure I'm calling func...
Categories: DBA Blogs

Average wait time for Log File Sync and Log File Parallel Write wait events increasing daily

Tom Kyte - Mon, 2018-09-10 12:06
Hi Team, First of all, thanks for all the awesome work you are doing! We are facing an issue with Log File Sync wait events. Pasted below is a detailed write-up. It would be great if you can help us by sharing your views. Thanks again for al...
Categories: DBA Blogs

PDB lockdown with Oracle 18.3.0.0

Yann Neuhaus - Mon, 2018-09-10 11:01

The PDB lockdown feature offers the possibility to restrict operations and functionality available from within a PDB, which can be very useful from a security perspective.

Some new features have been added in Oracle 18.3.0.0:

  • You can create PDB lockdown profiles in the application root, as in the CDB root. This allows more precise access control for the applications associated with an application container.
  • You can create a PDB lockdown profile from another PDB lockdown profile.
  • Three default PDB lockdown profiles have been added: PRIVATE_DBAAS, SAAS and PUBLIC_DBAAS.
  • v$lockdown_rules is a new view allowing you to display the contents of a PDB lockdown profile (see the example query just after this list).
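
As a quick illustration, once a profile is active you can query the new view from within the PDB to see which rules are in force (a sketch; adjust the column list to your needs):

SQL> select rule_type, rule, clause, status from v$lockdown_rules;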

Let’s make some tests:

First, we create a lockdown profile from the CDB root (as we could already do in Oracle 12.2):

SQL> create lockdown profile psi;

Lockdown Profile created.

We alter the lockdown profile to disable every ALTER SYSTEM SET statement on the PDB side except those setting open_cursors:

SQL> alter lockdown profile PSI disable statement=('ALTER SYSTEM') 
clause=('SET') OPTION ALL EXCEPT=('open_cursors');

Lockdown Profile altered.

Then we enable the lockdown profile:

SQL> alter system set PDB_LOCKDOWN=PSI;

System altered.

We can check the pdb_lockdown parameter value from the CDB side:

SQL> show parameter pdb_lockdown

NAME				     TYPE	 VALUE
------------------------------------ ----------- -------
pdb_lockdown			     string	 PSI

From the PDB side, what happens?

SQL> alter session set container=pdb;

Session altered.

SQL> alter system set cursor_sharing='FORCE';
alter system set cursor_sharing='FORCE'
*
ERROR at line 1:
ORA-01031: insufficient privileges

SQL> alter system set optimizer_mode='FIRST_ROWS_10';
alter system set optimizer_mode='FIRST_ROWS_10'
*
ERROR at line 1:
ORA-01031: insufficient privileges

This is a good feature, allowing a greater degree of separation between the different PDBs of the same instance.

We can create a lockdown profile disabling partitioned table creation:

SQL> connect / as sysdba
Connected.
SQL> create lockdown profile psi;

Lockdown Profile created.

SQL> alter lockdown profile psi disable option=('Partitioning');

Lockdown Profile altered.

SQL> alter system set pdb_lockdown ='PSI';

System altered.

On the CDB side, we can create partitioned tables:

SQL> create table emp (name varchar2(10)) partition by hash(name);

Table created.

On the PDB side we cannot create partitioned tables:

SQL> alter session set container = pdb;

Session altered.

SQL> show parameter pdb_lockdown

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------
pdb_lockdown                         string      APP
SQL> create table emp (name varchar2(10)) partition by hash(name);
create table emp (name varchar2(10)) partition by hash(name)
*
ERROR at line 1:
ORA-00439: feature not enabled: Partitioning

We now have the possibility to create a lockdown profile from another one:

Remember that we have the PDB lockdown profile app disabling partitioned table creation; we can create a new app_hr lockdown profile from the app profile and add further restrictions to the app_hr one:

SQL> create lockdown profile app_hr from app;

Lockdown Profile created.

With the app_hr lockdown profile, it will no longer be possible to run alter system flush shared_pool:

SQL> alter lockdown profile app_hr disable STATEMENT = ('ALTER SYSTEM') 
clause = ('flush shared_pool');

Lockdown Profile altered.

We can query the dba_lockdown_profiles view:

SQL> SELECT profile_name, rule_type, rule, status 
     FROM   dba_lockdown_profiles order by 1;

PROFILE_NAME    RULE_TYPE    RULE            STATUS
------------    ---------    ------------    -------
APP             OPTION       PARTITIONING    DISABLE
APP_HR          STATEMENT    ALTER SYSTEM    DISABLE
APP_HR          OPTION       PARTITIONING    DISABLE

SQL> alter system set pdb_lockdown=app_hr;

System altered.

SQL> alter session set container=pdb;

Session altered.

SQL> alter system flush shared_pool ;
alter system flush shared_pool
*
ERROR at line 1:
ORA-01031: insufficient privileges

If we reset pdb_lockdown to app, we can now flush the shared pool:

SQL> alter system set pdb_lockdown=app;

System altered.

SQL> alter system flush shared_pool ;

System altered.

We can now create lockdown profiles in the application root, so let's create an application container:

SQL> CREATE PLUGGABLE DATABASE apppsi 
AS APPLICATION CONTAINER ADMIN USER app_admin IDENTIFIED BY manager
file_name_convert=('/home/oracle/oradata/DB18', 
'/home/oracle/oradata/DB18/apppsi');  

Pluggable database created.

SQL> show pdbs

    CON_ID CON_NAME			  OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
	 2 PDB$SEED			  READ ONLY  NO
	 3 PDB				  READ WRITE NO
	 4 APPPSI			  MOUNTED

We open the application PDB:

SQL> alter pluggable database apppsi open;

Pluggable database altered.

We connect to the application container:

SQL> alter session set container=apppsi;

Session altered.

We have the possibility to create a lockdown profile:

SQL> create lockdown profile apppsi;

Lockdown Profile created.

And to disable some features:

SQL> alter lockdown profile apppsi disable option=('Partitioning');

Lockdown Profile altered.

But there is a problem if we try to enable the profile:

SQL> alter system set pdb_lockdown=apppsi;
alter system set pdb_lockdown=apppsi
*
ERROR at line 1:
ORA-65208: Lockdown profile APPPSI does not exist.

And, surprisingly, we cannot create a partitioned table:

SQL> create table emp (name varchar2(10)) partition by hash(name);
create table emp (name varchar2(10)) partition by hash(name)
*
ERROR at line 1:
ORA-00439: feature not enabled: Partitioning

Let’s do some more tests: we alter the lockdown profile like this:

SQL> alter lockdown profile apppsi disable statement=('ALTER SYSTEM') 
clause = ('flush shared_pool');

Lockdown Profile altered.

SQL> alter system flush shared_pool;
alter system flush shared_pool
*
ERROR at line 1:
ORA-01031: insufficient privileges

In fact, we cannot use SYS to test lockdown profiles in the application root; we have to use an application user with privileges such as CREATE LOCKDOWN PROFILE and ALTER LOCKDOWN PROFILE in the application container.
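
A minimal sketch of creating such a user might look like this (the user name and the exact privilege list are assumptions; adapt them to your own security policy):

SQL> create user appuser identified by appuser;
SQL> grant create session, create lockdown profile, alter lockdown profile to appuser;
SQL> grant alter system to appuser;

So, after creating the appuser in the application root, we reconnect as that user: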

SQL> connect appuser/appuser@apppsi

SQL> create lockdown profile appuser_hr;

Lockdown Profile created.

SQL> alter lockdown profile appuser_hr disable option=('Partitioning');

Lockdown Profile altered.

And now it works fine:

SQL> alter system set pdb_lockdown=appuser_hr;

System altered.

SQL> create table emp (name varchar2(10)) partition by hash (name);
create table emp (name varchar2(10)) partition by hash (name)
*
ERROR at line 1:
ORA-00439: feature not enabled: Partitioning

And can we now re-enable the Partitioning option for the appuser_hr profile in the application root?

SQL> alter lockdown profile appuser_hr enable option = ('Partitioning');

Lockdown Profile altered.

SQL> create table emp (name varchar2(10)) partition by hash (name);
create table emp (name varchar2(10)) partition by hash (name)
*
ERROR at line 1:
ORA-00439: feature not enabled: Partitioning

It does not work as expected: the lockdown profile has been updated, but as before, we still cannot create a partitioned table.

Let’s do another test with the statement option: we later the lockdown profile in order to disable all alter system set statements except with open_cursors:

SQL> alter lockdown profile appuser_hr disable statement=('ALTER SYSTEM') 
clause=('SET') OPTION ALL EXCEPT=('open_cursors');

Lockdown Profile altered.

SQL> alter system set open_cursors=500;

System altered.

This is normal behavior.

Now we alter the lockdown profile in order to disable alter system flush shared_pool:

SQL> alter lockdown profile appuser_hr disable STATEMENT = ('ALTER SYSTEM') 
clause = ('flush shared_pool');

Lockdown Profile altered.

SQL> alter system flush shared_pool;
alter system flush shared_pool
*
ERROR at line 1:
ORA-01031: insufficient privileges

That’s fine :=)

Now we enable the statement:

SQL> alter lockdown profile appuser_hr enable STATEMENT = ('ALTER SYSTEM') 
clause = ('flush shared_pool');

Lockdown Profile altered.

SQL> alter system flush shared_pool;
alter system flush shared_pool
*
ERROR at line 1:
ORA-01031: insufficient privileges

And again this is not possible …

Let’s try in the CDB root:

SQL> connect / as sysdba
Connected.

SQL> alter lockdown profile app disable statement =('ALTER SYSTEM') 
clause=('SET') OPTION ALL EXCEPT=('open_cursors');

Lockdown Profile altered.

SQL> alter session set container=pdb;

Session altered.

SQL> alter system set cursor_sharing='FORCE';
alter system set cursor_sharing='FORCE'
*
ERROR at line 1:
ORA-01031: insufficient privileges

The behavior is correct; let's try to enable it:

SQL> connect / as sysdba
Connected.

SQL> alter lockdown profile app enable statement=('ALTER SYSTEM') 
clause ALL;

Lockdown Profile altered.

SQL> alter session set container=pdb;

Session altered.

SQL> alter system set cursor_sharing='FORCE';

System altered.

This is correct again; it seems it does not work correctly in the application root…

In conclusion, the new lockdown profile features are powerful and will be very useful for security purposes. They allow DBAs to restrict users' rights, at a fine granularity, to only what they need to access. But we have to be careful: with PDB lockdown profiles it is easy to end up with very complicated database administration.

The post PDB lockdown with Oracle 18.3.0.0 appeared first on Blog dbi services.

Perry Ellis International, Inc. Implements Oracle Retail Cloud to Personalize Global Customer Experience

Oracle Press Releases - Mon, 2018-09-10 10:46
Press Release
Perry Ellis International, Inc. Implements Oracle Retail Cloud to Personalize Global Customer Experience Expanded Relationship with Oracle Enables PEI to gain 360 Degree View of Over 1 Million Customers in Less Than 7 Weeks

Redwood Shores, Calif.—Sep 10, 2018

Global apparel fashion house Perry Ellis International, Inc. (PEI) has deployed Oracle Retail Customer Engagement Cloud Services to better personalize the shopping experience for customers across the United States and United Kingdom.  PEI’s brands Perry Ellis®, An Original Penguin by Munsingwear®, and Cubavera® manage a large international footprint with their own retail stores and e-commerce channels. These multiple points of engagement for consumers created complexity that made it difficult to understand consumer behavior. Oracle Retail Customer Engagement provides PEI with a comprehensive view of shopping behavior of over 1 million loyalty members and a platform to leverage that information to personalize customer brand interactions across all touch points.
 
“Our goal with our digital transformation is to focus on the experience we provide for our customers. Customer Engagement was an important step in this transformation. We were able to quickly integrate to our legacy systems. Now we can move forward with our next generation Point of Service with the confidence to deliver a superior in-store experience for our customers and associates,” said Luis Paez, chief information officer, PEI. “We recognize Oracle’s continued investment in retail-specific solutions while providing lower cost of ownership with the new cloud service. For us the value is the robust functionality, scalability and speed to market.”
 
“Having a holistic perspective of how consumers engage with your brand is critical to develop experiences necessary to compete and thrive in the retail community,” said Mike Webster, senior vice president and general manager, Oracle Retail. “With cloud technology, brands can begin innovating and refocusing their efforts to amplifying the customer experience at an unprecedented rate. This implementation represents the fastest go-live in the cloud to date for Oracle Retail Customer Engagement.”
 
PEI and Oracle Retail have a long-standing relationship, having previously implemented Oracle Retail Merchandising System, Oracle Retail Price Management, Oracle Retail Sales Audit, Oracle Retail Allocation, Oracle Retail Store Inventory Management, Oracle Retail Point of Service and Oracle Retail Central Office. The interoperability compelled PEI to replace their existing customer relationship management system and upgrade to the latest Oracle Retail Cloud release to support their loyalty program, which rewards and retains customers for their purchases while optimizing margins. PEI is currently migrating to the latest version of Oracle Retail Xstore Point-of-Service.
 
The implementation process was guided by IT leadership from PEI with a clear strategy that begins with executive sponsorship and concludes with rigorous testing of the solutions. This process was supported by BTM Global, a Gold-level member of Oracle Partner Network (OPN).  BTM Global has provided a full range of systems integration services including functional and technical design work, custom integration points, interface designs, development and testing, and scripting through training and user-focused testing for PEI for multiple Oracle solutions. PEI decided to implement Oracle Retail Customer Engagement Cloud Services after experiencing operational efficiencies from upgrading the enterprise suite of Oracle Retail Merchandise Operations Management in just six months with BTM Global.
 
"In complex projects with fast timelines, collaboration and trust between the retailer and integration partner are required," said Tom Schoen, Chief Executive Officer, BTM Global. "This unique project was successfully and efficiently launched because of our history with Perry Ellis International. We were honored to be chosen once again by them as their implementation partner." 
Contact Info
Matt Torres
Oracle
4155951584
matt.torres@oracle.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

About Perry Ellis International
Perry Ellis International, Inc. is a leading designer, distributor and licensor of a broad line of high quality men's and women's apparel, accessories and fragrances. The company's collection of dress and casual shirts, golf sportswear, sweaters, dress pants, casual pants and shorts, jeans wear, active wear, dresses and men's and women's swimwear is available through all major levels of retail distribution. The company, through its wholly owned subsidiaries, owns a portfolio of nationally and internationally recognized brands, including: Perry Ellis®, An Original Penguin by Munsingwear®, Laundry by Shelli Segal®, Rafaella®, Cubavera®, Ben Hogan®, Savane®, Grand Slam®, John Henry®, Manhattan®, Axist®, Jantzen® and Farah®. The company enhances its roster of brands by licensing trademarks from third parties, including: Nike® for swimwear, and Callaway®, PGA TOUR®, and Jack Nicklaus® for golf apparel and Guy Harvey® for performance fishing and resort wear.  Additional information on the company is available at http://www.pery.com.
 
About Oracle Retail:
Oracle provides retailers with a complete, open, and integrated suite of best-of-breed business applications, cloud services, and hardware that are engineered to work together and empower commerce. Leading fashion, grocery, and specialty retailers use Oracle solutions to anticipate market changes, simplify operations and inspire authentic brand interactions. For more information, visit our website at www.oracle.com/retail.
Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Matt Torres

  • 4155951584

Stats time

Jonathan Lewis - Mon, 2018-09-10 07:37

I wrote a note a couple of years ago explaining how I used to get a rough idea (with some errors) of how much time was spent in the overnight stats collection by each object. One of the nice little enhancements in 12c was the appearance of a couple of functions that can report information about this type of thing, and more. These are the dbms_stats functions report_stats_operations() and report_single_stats_operation(), with the following definitions:


function report_stats_operations(
        detail_level  varchar2                  default 'TYPICAL',
        format        varchar2                  default 'TEXT', 
        latestN       number                    default null,
        since         timestamp with time zone  default null,
        until         timestamp with time zone  default null,
        auto_only     boolean                   default false,
        container_ids dbms_utility.number_array default dbms_stats.NULL_NUMTAB
) return clob;

function report_single_stats_operation(
        opid         number,
        detail_level varchar2 default 'TYPICAL', 
        format       varchar2 default 'TEXT', 
        container_id number   default null
) return clob;

As you can see, there are lots of options for generating the report of stats operations, and you can check the manuals or $ORACLE_HOME/rdbms/admin/dbmsstat.sql for information about how to use them. One of the simplest options would be to run from SQL*Plus:

set long 1000000

set pagesize    0
set linesize  255
set trimspool on

column text_line format a254

select
        dbms_stats.report_stats_operations(
                since => sysdate - 3
        ) text_line
from dual
;

Of course you wouldn't be able to pick the option that limits the report to just the auto gather stats jobs (auto_only => true), as SQL doesn't have a boolean type, so you would have to write a little PL/SQL wrapper to capture just those details.
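A minimal sketch of such a wrapper (assuming the report fits in the 32K that dbms_output will accept; for very large reports you'd print the clob in chunks):

set serveroutput on

declare
        v_report clob;
begin
        v_report := dbms_stats.report_stats_operations(
                since     => systimestamp - interval '3' day,
                auto_only => true
        );
        dbms_output.put_line(v_report);
end;
/

Back to the SQL*Plus query above: here's a sample of its (rather wide) output: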



------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Operation Id | Operation             | Target                             | Start Time          | End Time            | Status      | Total Tasks | Successful Tasks | Failed Tasks | Active Tasks |
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 25811        | purge_stats           |                                    | 08-SEP-18           | 08-SEP-18           | COMPLETED   | 0           | 0                | 0            | 0            |
|              |                       |                                    | 01.47.37.764146 PM  | 01.47.38.405437 PM  |             |             |                  |              |              |
|              |                       |                                    | +01:00              | +01:00              |             |             |                  |              |              |
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 25810        | purge_stats           |                                    | 08-SEP-18           | 08-SEP-18           | COMPLETED   | 0           | 0                | 0            | 0            |
|              |                       |                                    | 01.47.35.827284 PM  | 01.47.37.763926 PM  |             |             |                  |              |              |
|              |                       |                                    | +01:00              | +01:00              |             |             |                  |              |              |
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 25809        | gather_database_stats | AUTO                               | 08-SEP-18           | 08-SEP-18           | COMPLETED   | 285         | 282              | 3            | 0            |
|              | (auto)                |                                    | 01.46.31.672033 PM  | 01.47.35.826873 PM  |             |             |                  |              |              |
|              |                       |                                    | +01:00              | +01:00              |             |             |                  |              |              |
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 25807        | gather_table_stats    | TEST_USER.T1                       | 08-SEP-18           | 08-SEP-18           | COMPLETED   | 1           | 1                | 0            | 0            |
|              |                       |                                    | 12.59.57.704111 PM  | 12.59.57.822695 PM  |             |             |                  |              |              |
|              |                       |                                    | +01:00              | +01:00              |             |             |                  |              |              |

etc.

You’ll notice in this little sample that operation 25809 is an (auto) gather_database_stats operation which ran 285 tasks, failing on 3 and succeeding on 282 – so lets run the “single stats operation” report to find out more.


-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Operation Id | Operation                    | Target | Start Time                      | End Time                        | Status    | Total Tasks | Successful Tasks | Failed Tasks | Active Tasks |
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 25809        | gather_database_stats (auto) | AUTO   | 08-SEP-18 01.46.31.672033 PM    | 08-SEP-18 01.47.35.826873 PM    | COMPLETED | 285         | 282              | 3            | 0            |
|              |                              |        | +01:00                          | +01:00                          |           |             |                  |              |              |
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|                                                                                                                                                                                                     |
| --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
|                                                                                              T A S K S                                                                                              |
| --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
|    ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------    |
|    | Target                                                         | Type            | Start Time                          | End Time                            | Status                     |    |
|    ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------    |
|    | SYS.RECYCLEBIN$                                                | TABLE           | 08-SEP-18 01.46.50.719791 PM +01:00 | 08-SEP-18 01.46.51.882418 PM +01:00 | COMPLETED                  |    |
|    ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------    |
|    | SYS.RECYCLEBIN$_OBJ                                            | INDEX           | 08-SEP-18 01.46.51.273134 PM +01:00 | 08-SEP-18 01.46.51.773297 PM +01:00 | COMPLETED                  |    |
|    ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------    |
|    | SYS.RECYCLEBIN$_TS                                             | INDEX           | 08-SEP-18 01.46.51.777032 PM +01:00 | 08-SEP-18 01.46.51.787730 PM +01:00 | COMPLETED                  |    |
|    ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------    |
...
...
...
|    | SYS.WRH$_SEG_STAT_PK.WRH$_SEG_ST_3089296639_5150               | INDEX PARTITION | 08-SEP-18 01.47.35.409615 PM +01:00 | 08-SEP-18 01.47.35.483637 PM +01:00 | COMPLETED                  |    |
|    ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------    |
|    | SYS.X$LOGMNR_CONTENTS                                          | TABLE           | 08-SEP-18 01.47.35.520504 PM +01:00 | 08-SEP-18 01.47.35.696953 PM +01:00 | FAILED                     |    |
|    ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------    |
|    | SYS.X$LOGMNR_REGION                                            | TABLE           | 08-SEP-18 01.47.35.699253 PM +01:00 | 08-SEP-18 01.47.35.722545 PM +01:00 | FAILED                     |    |
|    ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------    |
|    | SYS.X$DRC                                                      | TABLE           | 08-SEP-18 01.47.35.725003 PM +01:00 | 08-SEP-18 01.47.35.801384 PM +01:00 | FAILED                     |    |
|    ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------    |
|                                                                                                                                                                                                     |
|                                                                                                                                                                                                     |
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

I’ve trimmed out most of the 285 entries, of course, showing that the last three in the list failed; but with no indication why they failed. Fortunately we could have called the report with “detail_level => ‘ALL'” – so let’s see what that gives us:

select
        dbms_stats.report_single_stats_operation(
                opid         => 25809,
                detail_level => 'ALL'
        ) text_line
from dual
;

------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Operation | Operation             | Target | Start Time      | End Time        | Status    | Total    | Successful | Failed   | Active   | Job Name | Session  | Additional Info             |
| Id        |                       |        |                 |                 |           | Tasks    | Tasks      | Tasks    | Tasks    |          | Id       |                             |
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 25809     | gather_database_stats | AUTO   | 08-SEP-18       | 08-SEP-18       | COMPLETED | 285      | 282        | 3        | 0        |          | 250      | Parameters: [block_sample:  |
|           | (auto)                |        | 01.46.31.672033 | 01.47.35.826873 |           |          |            |          |          |          |          | FALSE] [cascade: NULL]      |
|           |                       |        | PM +01:00       | PM +01:00       |           |          |            |          |          |          |          | [concurrent: FALSE]         |
|           |                       |        |                 |                 |           |          |            |          |          |          |          | [degree:                    |
|           |                       |        |                 |                 |           |          |            |          |          |          |          | DEFAULT_DEGREE_VALUE]       |
|           |                       |        |                 |                 |           |          |            |          |          |          |          | [estimate_percent:          |
|           |                       |        |                 |                 |           |          |            |          |          |          |          | DEFAULT_ESTIMATE_PERCENT]   |
|           |                       |        |                 |                 |           |          |            |          |          |          |          | [granularity:               |
|           |                       |        |                 |                 |           |          |            |          |          |          |          | DEFAULT_GRANULARITY]        |
|           |                       |        |                 |                 |           |          |            |          |          |          |          | [method_opt:                |
|           |                       |        |                 |                 |           |          |            |          |          |          |          | DEFAULT_METHOD_OPT]         |
|           |                       |        |                 |                 |           |          |            |          |          |          |          | [no_invalidate:             |
|           |                       |        |                 |                 |           |          |            |          |          |          |          | DBMS_STATS.AUTO_INVALIDATE] |
|           |                       |        |                 |                 |           |          |            |          |          |          |          | [reporting_mode: FALSE]     |
|           |                       |        |                 |                 |           |          |            |          |          |          |          | [stattype: DATA]            |
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|                                                                                                                                                                                                     |
| --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
|                                                                                              T A S K S                                                                                              |
| --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
|       --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------        |
|       | Target       | Type         | Start Time   | End Time     | Status    | Rank  | Job Name | Estimated    | Batching     | Histogram    | Extended     | Reason Code  | Additional   |        |
|       |              |              |              |              |           |       |          | Cost         | Info         | Columns      | Stats        |              | Info         |        |
|       --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------        |
|       | SYS.RECYCLEB | TABLE        | 08-SEP-18 01 | 08-SEP-18 01 | COMPLETED | 1     |          | N/A          | N/A          |              |              | stale stats  |              |        |
|       | IN$          |              | .46.50.71979 | .46.51.88241 |           |       |          |              |              |              |              |              |              |        |
|       |              |              | 1 PM +01:00  | 8 PM +01:00  |           |       |          |              |              |              |              |              |              |        |
|       --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------        |
...
...
...
|       --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------        |
|       | SYS.X$DRC    | TABLE        | 08-SEP-18 01 | 08-SEP-18 01 | FAILED    | 151   |          | N/A          | N/A          |              |              | no stats     | ORA-20000:   |        |
|       |              |              | .47.35.72500 | .47.35.80138 |           |       |          |              |              |              |              |              | Unable to    |        |
|       |              |              | 3 PM +01:00  | 4 PM +01:00  |           |       |          |              |              |              |              |              | analyze      |        |
|       |              |              |              |              |           |       |          |              |              |              |              |              | TABLE "SYS". |        |
|       |              |              |              |              |           |       |          |              |              |              |              |              | "X$DRC", log |        |
|       |              |              |              |              |           |       |          |              |              |              |              |              | miner or     |        |
|       |              |              |              |              |           |       |          |              |              |              |              |              | data guard   |        |
|       |              |              |              |              |           |       |          |              |              |              |              |              | must be      |        |
|       |              |              |              |              |           |       |          |              |              |              |              |              | started      |        |
|       |              |              |              |              |           |       |          |              |              |              |              |              | before       |        |
|       |              |              |              |              |           |       |          |              |              |              |              |              | analyzing    |        |
|       |              |              |              |              |           |       |          |              |              |              |              |              | this fixed   |        |
|       |              |              |              |              |           |       |          |              |              |              |              |              | table"       |        |
|       --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------        |
|                                                                                                                                                                                                     |
|                                                                                                                                                                                                     |
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------




So we can now see that stats collection failed on the one object I've left in the extract because it's an X$ object that only exists when LogMiner is running. You'll notice that we also get some information about things like input parameters to calls and the reasons why objects were selected ("stale stats" in the first item in this list).

It’s a great convenience – but it’s always possible to grumble: I’d rather like to see the elapsed time for each operation, or even a filter to limit the report to any operation that took more than X seconds. However, if I want to do a quick check on a client site I’d rather not have to type in the code to query the base tables by hand.

[BLOG] Troubleshooting: Concurrent Manager: System Hold, Fix Manager Before Resetting Counters

Online Apps DBA - Mon, 2018-09-10 06:57

Do you know how to handle the Concurrent Manager issue? Visit: https://k21academy.com/appsdba29 and learn about: ✔What the Concurrent Manager issue is ✔What the cause of this issue is ✔How to fix the issue & much more… […]

The post [BLOG] Troubleshooting: Concurrent Manager: System Hold, Fix Manager Before Resetting Counters appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Connecting to Azure SQL Managed Instance from on-premise network

Yann Neuhaus - Mon, 2018-09-10 06:26

A couple of weeks ago, I wrote about my first immersion into SQL Server managed instances (SQLMIs), a new deployment model of Azure SQL Database which provides near 100% compatibility with the latest SQL Server on-premises database engine. In the previous blog post, to test a connection to this new service, I installed an Azure virtual machine, including SQL Server Management Studio, on the same VNET (172.16.0.0/16). For testing purposes we don't need more, but in a real production scenario chances are your Azure SQL MI will be part of your on-premises network, within a more complex Azure network topology including VNETs, ExpressRoute or a VPN S2S as well. Implementing such an infrastructure will likely not be your concern if you are a database administrator. But you need to be aware of the underlying connectivity components and how to diagnose possible issues, or how to interact with your network team, in order to avoid quickly finding yourself under pressure and feeling the wrath of your application users :)

So, I decided to implement this kind of infrastructure in my lab environment, but if, like me, you're not a network guru, you will likely face some difficulties configuring certain components, especially when it comes to the VPN S2S. In addition, you have to understand several new Azure networking notions before you can hope to see your infrastructure work correctly. As an old sysadmin, I admit it was a great opportunity to turn my network learning into a concrete use case. Let's first set the initial context. Here is the lab environment I've been using for a while for different purposes, such as internal testing and event presentations. It covers a lot of testing scenarios, including multi-subnet architectures with SQL Server FCIs and SQL Server availability groups.

[Image: lab environment]

Obviously, some static routes are already set up to allow network traffic between my on-premises subnets. As you guessed, the game consisted in extending this on-premises network to my SQL MI network on Azure. As a reminder, SQL MI is not reachable from a public endpoint; you may connect only from an internal network (either directly from Azure or from your on-premises network). As said previously, one of my biggest challenges was to configure my remote access server as a VPN server to communicate with my SQL MI Azure network. Fortunately, there are plenty of pointers on the internet that may help you achieve this task. This blog post is a good walk-through, by the way. In my context, you will note I also had to apply special settings to my home router in order to allow IPsec passthrough, as well as to add my RRAS server's internal IP (192.168.0.101) to the DMZ. I also used the IKEv2 VPN protocol and a pre-shared key for authentication between my on-premises and Azure gateways. The VPN S2S configuration is environment-specific, and this is probably why doing a presentation for a customer or at events is so difficult if you're outside of your own network.

Anyway, let’s talk about the Azure side configuration. My Azure network topology is composed of two distinct VNETs as follows:

[Image: VNET configuration]

The connection between my on-premises and Azure networks is defined as shown below:

$vpnconnection = Get-AzureRmVirtualNetworkGatewayConnection -ResourceGroupName dbi-onpremises-rg 
$vpnconnection.Name
$vpnconnection.VirtualNetworkGateway1.Id
$vpnconnection.LocalNetworkGateway2.Id 

dbi-vpn-connection
/subscriptions/xxxx/resourceGroups/dbi-onpremises-rg/providers/Microsoft.Network/virtualNetworkGateways/dbi-virtual-network-gw
/subscriptions/xxxx/resourceGroups/dbi-onpremises-rg/providers/Microsoft.Network/localNetworkGateways/dbi-local-network-gw

 

The first VNET (172.17.x.x) is used as the hub virtual network and owns my gateway. The second one (172.16.x.x) is the SQL MI VNET:

Get-AzureRmVirtualNetwork | Where-Object { $_.Name -like '*-vnet' } | % {

    Get-AzureRmVirtualNetworkSubnetConfig -VirtualNetwork $_ | Select Name, AddressPrefix
} 

Name          AddressPrefix  
----          -------------  
default       172.17.0.0/24  
GatewaySubnet 172.17.1.0/24  
vm-net        172.16.128.0/17
sql-mi-subnet 172.16.0.0/24

 

My Azure gateway subnet (GatewaySubnet) is part of the VPN connectivity, with the related gateway connections:

$gatewaycfg = Get-AzureRmVirtualNetworkGatewayConnection -ResourceGroupName dbi-onpremises-rg -Name dbi-vpn-connection 
$gatewaycfg.VirtualNetworkGateway1.Id
$gatewaycfg.LocalNetworkGateway2.Id 

/subscriptions/xxxx/resourceGroups/dbi-onpremises-rg/providers/Microsoft.Network/virtualNetworkGateways/dbi-virtual-network-gw
/subscriptions/xxxx/resourceGroups/dbi-onpremises-rg/providers/Microsoft.Network/localNetworkGateways/dbi-local-network-gw

 

The dbi-local-network-gw local gateway includes the following address prefixes, which correspond to my local lab network:

$gatewaylocal = Get-AzureRMLocalNetworkGateway -ResourceGroupName dbi-onpremises-rg -Name dbi-local-network-gw 
$gatewaylocal.LocalNetworkAddressSpace.AddressPrefixes 

192.168.0.0/16
192.168.5.0/24
192.168.40.0/24

 

Note that I've chosen a static configuration, but my guess is that I could use the BGP protocol instead to make things more dynamic. I will talk briefly about using BGP and routing issues at the end of the write-up. But at this stage, some configuration steps are still missing before I can hope to reach my SQL MI instance from my lab network. Indeed, although my VPN connection status is OK, I was only able to reach my dbi-onpremises-vnet VNET, and I needed a way to connect to the sql-mi-vnet VNET. So, I had to turn on both virtual network peering and the gateway transit mechanism. When two VNETs are peered, Azure automatically routes traffic between them, by the way.

[Image: VNET peering configuration]

Here is the peering configuration I applied to my dbi-onpremises-vnet VNET (the first VNET):

Get-AzureRmVirtualNetworkPeering -ResourceGroupName dbi-onpremises-rg -VirtualNetworkName dbi-onpremises-vnet | `
Select-Object VirtualNetworkName, PeeringState, AllowVirtualNetworkAccess, AllowForwardedTraffic, AllowGatewayTransit, UseRemoteGateways 

$peering = Get-AzureRmVirtualNetworkPeering -ResourceGroupName dbi-onpremises-rg -VirtualNetworkName dbi-onpremises-vnet 
Write-Host "Remote virtual network peering"
$peering.RemoteVirtualNetwork.Id 

VirtualNetworkName        : dbi-onpremises-vnet
PeeringState              : Connected
AllowVirtualNetworkAccess : True
AllowForwardedTraffic     : True
AllowGatewayTransit       : True
UseRemoteGateways         : False

Remote virtual network peering
/subscriptions/xxxxx/resourceGroups/sql-mi-rg/providers/Microsoft.Network/virtualNetworks/sql-mi-vnet

 

And here is the peering configuration of my sql-mi-vnet VNET (the second VNET):

Get-AzureRmVirtualNetworkPeering -ResourceGroupName sql-mi-rg -VirtualNetworkName sql-mi-vnet | `
Select-Object VirtualNetworkName, PeeringState, AllowVirtualNetworkAccess, AllowForwardedTraffic, AllowGatewayTransit, UseRemoteGateways 

$peering = Get-AzureRmVirtualNetworkPeering -ResourceGroupName sql-mi-rg -VirtualNetworkName sql-mi-vnet
Write-Host "Remote virtual network peering"
$peering.RemoteVirtualNetwork.Id 

VirtualNetworkName        : sql-mi-vnet
PeeringState              : Connected
AllowVirtualNetworkAccess : True
AllowForwardedTraffic     : True
AllowGatewayTransit       : False
UseRemoteGateways         : True

Remote virtual network peering
/subscriptions/xxxxx/resourceGroups/dbi-onpremises-rg/providers/Microsoft.Network/virtualNetworks/dbi-onpremises-vnet

 

Note that to allow traffic coming from my on-premises network to go through my first VNET (dbi-onpremises-vnet) to the second one (sql-mi-vnet), I need to enable configuration settings such as Allow Gateway Transit, Allow Forwarded Traffic and Use Remote Gateways on the networks concerned.

At this stage, I still faced a weird issue: I was able to connect to a virtual machine installed on the same VNET as my SQL MI, but had no luck with the SQL instance. In addition, the psping command output confirmed my connectivity problem, which made me suspect a routing issue.

[Image: psping command output]

Routes from my on-premises network seemed to be well configured, as shown below. The VPN goes over a dial-up internet connection in my case.

[Image: local routes]

I also got confirmation from the Microsoft support team (particular thanks to Filipe Bárrios, support engineer, Azure Networking) that my on-premises network packets were correctly sent through my Azure VPN gateway. In fact, I was stuck for a couple of days without figuring out exactly what was happening. Furthermore, checking effective routes seemed not to be a viable option in my case because there is no explicit network interface with SQL MI. Please feel free to comment if I'm wrong on this point. Fortunately, I found a PowerShell script provided by Jovan Popovic (MSFT) which put me on the right track:

[Image: VNET BGP configuration]

Referring to the Microsoft documentation, it seems BGP route propagation could be very helpful in my case:

Support transit routing between your on-premises networks and multiple Azure VNets

BGP enables multiple gateways to learn and propagate prefixes from different networks, whether they are directly or indirectly connected. This can enable transit routing with Azure VPN gateways between your on-premises sites or across multiple Azure Virtual Networks.

After enabling the corresponding option in the SQL MI route table and opening the SQL MI ports in my firewall, the connection was finally successful.

[Image: SQL MI route table with BGP route propagation enabled]

[Image: psping output after the fix]

Hope it helps!

See you


The post Connecting to Azure SQL Managed Instance from on-premise network appeared first on Blog dbi services.
