Feed aggregator

Adaptive Plans and SQL Baselines

Yann Neuhaus - Tue, 2016-06-21 12:44

I recently encountered an issue with Adaptive Plans and SPM. The documentation says that they work perfectly together, but I remembered a presentation from Nelson Calero at UKOUG TECH15 mentioning strange behavior. I reproduced the issue and share the test case here, as you may encounter it in 12.1, leading to regressions when you capture SQL Plan Baselines.

Cleanup

With all those adaptive features, you need to start clean if you want a reproducible test case:
SQL> -- drop tables
SQL> drop table DEMO1;
Table dropped.
SQL> drop table DEMO2;
Table dropped.
SQL>
SQL> whenever sqlerror exit failure
SQL>
SQL> -- drop all sql plan baselines
SQL> set serveroutput on long 100000 longc 100000
SQL> exec for i in (select sql_handle, plan_name, accepted, origin, created from dba_sql_plan_baselines) loop dbms_output.put_line(''||dbms_spm.drop_sql_plan_baseline(sql_handle=>i.sql_handle,plan_name=>i.plan_name)); end loop;
1
 
PL/SQL procedure successfully completed.
 
SQL> set serveroutput off
SQL> select 0/0 from dba_sql_plan_baselines;
no rows selected
SQL>
SQL> -- flush shared pool
SQL> alter system flush shared_pool;
System altered.
SQL> select 0/0 from v$sql where sql_id='agw7bn072730a';
no rows selected

Create the tables

SQL> -- create two tables with few rows for L=1 and lot of rows for L=15
SQL> create table DEMO2 (id constraint PK1 primary key,l) as select rownum,floor(log(2,rownum)) from xmltable('1 to 100000');
Table created.
 
SQL> create table DEMO1 as select * from DEMO2;
Table created.

Run the query


SQL> -- run a join for the few rows case
SQL> alter session set statistics_level=all;
Session altered.
SQL> select count(DEMO2.l) from DEMO1 join DEMO2 using(id) where DEMO1.l=1;
 
COUNT(DEMO2.L)
--------------
3

And here is the adaptive plan:

SQL> select * from table(dbms_xplan.display_cursor(null,null,format=>'allstats last +adaptive +outline'));
 
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------
SQL_ID agw7bn072730a, child number 0
-------------------------------------
select count(DEMO2.l) from DEMO1 join DEMO2 using(id) where DEMO1.l=1
 
Plan hash value: 2870612662
 
-------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads |
-------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 203 | 183 |
| 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.01 | 203 | 183 |
|- * 2 | HASH JOIN | | 1 | 5882 | 3 |00:00:00.01 | 203 | 183 |
| 3 | NESTED LOOPS | | 1 | 5882 | 3 |00:00:00.01 | 203 | 183 |
| 4 | NESTED LOOPS | | 1 | | 3 |00:00:00.01 | 200 | 183 |
|- 5 | STATISTICS COLLECTOR | | 1 | | 3 |00:00:00.01 | 195 | 179 |
| * 6 | TABLE ACCESS FULL | DEMO1 | 1 | 5882 | 3 |00:00:00.01 | 195 | 179 |
| * 7 | INDEX UNIQUE SCAN | PK1 | 3 | | 3 |00:00:00.01 | 5 | 4 |
| 8 | TABLE ACCESS BY INDEX ROWID| DEMO2 | 3 | 1 | 3 |00:00:00.01 | 3 | 0 |
|- 9 | TABLE ACCESS FULL | DEMO2 | 0 | 100K| 0 |00:00:00.01 | 0 | 0 |
-------------------------------------------------------------------------------------------------------------
 
Outline Data
-------------
 
/*+
BEGIN_OUTLINE_DATA
INDEX(@"SEL$58A6D7F6" "DEMO2"@"SEL$1" ("DEMO2"."ID"))
NLJ_BATCHING(@"SEL$58A6D7F6" "DEMO2"@"SEL$1")
USE_NL(@"SEL$58A6D7F6" "DEMO2"@"SEL$1")
IGNORE_OPTIM_EMBEDDED_HINTS
OPTIMIZER_FEATURES_ENABLE('12.2.0.1')
DB_VERSION('12.2.0.1')
ALL_ROWS
OUTLINE_LEAF(@"SEL$58A6D7F6")
MERGE(@"SEL$1")
OUTLINE(@"SEL$2")
OUTLINE(@"SEL$1")
FULL(@"SEL$58A6D7F6" "DEMO1"@"SEL$1")
LEADING(@"SEL$58A6D7F6" "DEMO1"@"SEL$1" "DEMO2"@"SEL$1")
END_OUTLINE_DATA
*/
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - access("DEMO1"."ID"="DEMO2"."ID")
6 - filter("DEMO1"."L"=1)
7 - access("DEMO1"."ID"="DEMO2"."ID")
 
Note
-----
- this is an adaptive plan (rows marked '-' are inactive)

It’s an adaptive plan: the HASH JOIN was the initial choice, but the first execution activated the NESTED LOOPS.
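
As a side note, in 12.1.0.2 and later you can check from v$sql whether the adaptive plan of a cursor has been resolved. A quick sketch, using the sql_id from above:

select sql_id, child_number, is_resolved_adaptive_plan
from v$sql where sql_id='agw7bn072730a';
-- 'Y' once the final plan has been chosen, 'N' while unresolved,
-- NULL when the cursor has no adaptive plan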

SQL Baseline Capture

SQL> alter session set optimizer_capture_sql_plan_baselines=true;
Session altered.
 
SQL> select count(DEMO2.l) from DEMO1 join DEMO2 using(id) where DEMO1.l=1;
 
COUNT(DEMO2.L)
--------------
3
 
SQL> alter session set optimizer_capture_sql_plan_baselines=false;
Session altered.

Here is the SQL Baseline:

SQL> select sql_handle, plan_name, accepted, origin, created from dba_sql_plan_baselines;
 
SQL_HANDLE PLAN_NAME ACC ORIGIN CREATED
------------------------------ ---------------------------------------- --- ----------------------------- ---------------------------------------------------------------------------
SQL_4c1b404640b73a81 SQL_PLAN_4s6u08t0bffn1e47b6a4d YES AUTO-CAPTURE 28-MAY-16 09.13.04.000000 PM

and its plan:

SQL> select plan_table_output from dba_sql_plan_baselines,table(dbms_xplan.display_sql_plan_baseline(sql_handle, plan_name, format=>'+adaptive'));
 
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
 
--------------------------------------------------------------------------------
SQL handle: SQL_4c1b404640b73a81
SQL text: select count(DEMO2.l) from DEMO1 join DEMO2 using(id) where DEMO1.l=1
--------------------------------------------------------------------------------
 
--------------------------------------------------------------------------------
Plan name: SQL_PLAN_4s6u08t0bffn1e47b6a4d Plan id: 3833293389
Enabled: YES Fixed: NO Accepted: YES Origin: AUTO-CAPTURE
Plan rows: From dictionary
--------------------------------------------------------------------------------
 
Plan hash value: 740165205
 
-------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 16 | 108 (2)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 16 | | |
| * 2 | HASH JOIN | | 5882 | 94112 | 108 (2)| 00:00:01 |
| * 3 | TABLE ACCESS FULL| DEMO1 | 5882 | 47056 | 54 (2)| 00:00:01 |
| 4 | TABLE ACCESS FULL| DEMO2 | 100K| 781K| 54 (2)| 00:00:01 |
-------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - access("DEMO1"."ID"="DEMO2"."ID")
3 - filter("DEMO1"."L"=1)
 
Note
-----
- this is an adaptive plan (rows marked '-' are inactive)

Unfortunately, the baseline captured only the ‘initial’ plan with the HASH JOIN.
This is not what is documented in Maria Colgan’s paper:

“SPM plan capture and Adaptive Plans: When automatic plan capture is enabled and a SQL statement that has an adaptive plan is executed, only the final plan used will be captured in the SQL plan baseline.”

Run with baseline

SQL> alter session set statistics_level=all;
Session altered.
 
SQL> select count(DEMO2.l) from DEMO1 join DEMO2 using(id) where DEMO1.l=1;
 
COUNT(DEMO2.L)
--------------
3
 
SQL> select * from table(dbms_xplan.display_cursor(null,null,format=>'allstats last +adaptive'));
 
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------
SQL_ID agw7bn072730a, child number 1
-------------------------------------
select count(DEMO2.l) from DEMO1 join DEMO2 using(id) where DEMO1.l=1
 
Plan hash value: 740165205
 
------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 390 | | | |
| 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.01 | 390 | | | |
|* 2 | HASH JOIN | | 1 | 5882 | 3 |00:00:00.01 | 390 | 2545K| 2545K| 826K (0)|
|* 3 | TABLE ACCESS FULL| DEMO1 | 1 | 5882 | 3 |00:00:00.01 | 195 | | | |
| 4 | TABLE ACCESS FULL| DEMO2 | 1 | 100K| 100K|00:00:00.01 | 195 | | | |
------------------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - access("DEMO1"."ID"="DEMO2"."ID")
3 - filter("DEMO1"."L"=1)
 
Note
-----
- SQL plan baseline SQL_PLAN_4s6u08t0bffn1e47b6a4d used for this statement

This confirms that the SQL baseline forces the initial HASH JOIN plan. It’s a bug that should be fixed in 12.2, so for the moment be very careful when you want to fix an adaptive plan with SQL Plan Baselines: your goal is to stabilize on the optimal plan once you have it, but the result may be a regression to a bad plan.
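
A possible workaround until then is to load the resolved plan manually from the cursor cache, once a first execution has settled on the final join method. Here is a minimal sketch; the sql_id and plan_hash_value are the ones from the example above, and it is worth verifying on your own version that the final plan, and not the initial one, is what gets loaded:

set serveroutput on
declare
  n pls_integer;
begin
  n := dbms_spm.load_plans_from_cursor_cache(
         sql_id          => 'agw7bn072730a',
         plan_hash_value => 2870612662);
  dbms_output.put_line(n||' plan(s) loaded');
end;
/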

 

Cet article Adaptive Plans and SQL Baselines est apparu en premier sur Blog dbi services.

Rittman Mead at KScope16

Rittman Mead Consulting - Tue, 2016-06-21 10:01

June is the perfect month: summer begins, major football (and futbol) tournaments are in full swing, and of course, KScope16 is on! Rittman Mead have participated in many of the past KScope conferences and will continue that tradition this year with a wide range of presentation and training topics across the Business Intelligence, Data Warehousing, Big Data, and Advanced Analytics subject areas. This year the event is held in Chicago at the Sheraton Grand Chicago, a great location right on the river and a short distance from sights such as the Navy Pier and Magnificent Mile. And, as always, there will be some great networking opportunities throughout the week.

[Image: Chicago, host city of KScope16]

This year, we’re pleased to congratulate our own Becky Wagner (of Becky’s BI Apps Corner) on her graduation from the ODTUG Leadership Program. This is the third class to complete the program, which aims to build leadership skills, confidence, and work/life balance skills, amongst many other things, throughout the 9 month class duration. Well done, Becky! You’ll be able to find her and the rest of the 2016 Leadership class at the Career Community/Leadership Program Information Breakfast – Monday, June 27th 7:30AM at the Sheraton hotel main dining area. They will be kicking off the application season for the 2017 Leadership Program and providing further information about the new ODTUG career program as well.

The rest of the Rittman Mead attendees will be presenting one or more sessions during the week.

[Image: KScope16 Rittman Mead session schedule]

The week kicks off with my talk Monday on GoldenGate and Kafka integration using the new Oracle GoldenGate for Big Data release 12.2. The topic of this session, real-time data streaming, provides a great solution for the challenging task of ingesting data from multiple different sources, and different structures, and making that data readily available for others in the organization to use. By combining GoldenGate, Oracle’s premier data replication technology, and Apache Kafka, the latest open-source streaming and messaging system for big data, we can implement a fast, durable, and scalable solution.

[Image: GoldenGate to Kafka]

Tuesday morning, Charles Elliott, Managing Director of Rittman Mead in the US, will begin the first of his two-part hands on lab sessions on Free Form Data Visualizations. This hands-on introduction to all things visual in OBIEE’s new Visual Analyzer (VA) will allow attendees to see the top reporting features within the current release while also understanding the core differences between OBIEE’s previous reporting platform and what’s available now. In addition to creating some of the most eye-catching visualizations the new tool has to offer, this session will highlight new OBIEE 12c features such as mapping, advanced analytics, and an overview of the brand-new web catalog. Whether you are just interested in what’s new in free-form Oracle BI visualizations or preparing for an upgrade, this session is for you!

During the lunch and learn sessions you can find me moderating the BI and Data Warehousing panel and Mark Rittman will be moderating the Big Data and Advanced Analytics crew. Grab a lunch and join us!

The remainder of Rittman Mead sessions fall on Wednesday, with some interesting topics around Big Data Discovery, OBIEE branding and Essbase integration, and the Kimball ETL Subsystems. Also, the second day of the Free Form Data Visualizations HOL will be held on Wednesday afternoon.

It all starts with Mark Rittman presenting “OBIEE 12c: What’s New for Integration and Reporting Against EPM Sources”. He’ll dig into interesting new features around OBIEE 12c and Essbase integration, including improvements around using Essbase as a query acceleration layer and general improvements around usability and RPD/Outline integration. Come to this session to see what they are, how they affect OBIEE/Essbase integration, and where this initiative may be going over time.

Following Mark and his dive into OBIEE and Essbase on Wednesday will be my session, “A Walk Through the Kimball ETL Subsystems with Oracle Data Integration”, with a focus on how to implement the ETL Subsystems using Oracle Data Integration solutions. As you may know, Ralph Kimball, dimensional modeling and data warehousing expert and founder of The Kimball Group, spent much of his career working to build an enterprise data warehouse methodology that can meet analytical reporting needs. His book, “The Data Warehouse ETL Toolkit,” is a guide for many ETL developers. This session will walk you through his ETL Subsystem categories, describing how the Oracle Data Integration products (Oracle Data Integrator, GoldenGate, & Enterprise Data Quality) are perfectly suited for the Kimball approach.

After lunch, Andy Rocha and Pete Tamisin will dive into “How to Brand and Own Your OBIEE Interface: Past, Present, and Future”. In this presentation, they will explore the different techniques in branding and customizing your OBIEE environment in different versions: 11.1.1.7, 11.1.1.9, and 12c. What that really means is they will talk about how to brand the interface in a thoughtful way… not just throwing the company’s colors on the screen, but thinking about what will actually make the application more attractive and useful for the user community.

[Image: Oracle Big Data Discovery]

Closing out the Rittman Mead Wednesday sessions is Mark Rittman with his unique look at machine learning and advanced visualizations using Oracle Big Data Discovery (BDD). Mark will use the BDD Shell, pySpark, and Jupyter to ingest, process and run machine learning routines on IoT and smart home data, which he has been collecting over the past year. Oracle Big Data Discovery provides a graphical environment for cataloging, searching, and displaying data from Hadoop and other sources along with some basic data wrangling features and display options. In this session, he’ll look at how BDD can be extended using the aforementioned APIs to cover more advanced predictive modeling, machine learning, and data science tasks, and how the graphics API can be used to add custom D3 advanced visualizations for your datasets and analyses. I’m definitely looking forward to this one!

We’re all looking forward to attending the event in Chicago this year and can’t wait for next week! If you’d like to get together to discuss any of the above topics we’re presenting, or anything data integration or BI related, feel free to reach out via email (info@rittmanmead.com), twitter (@mRainey) or drop a note in the comments below. Hope to see you there!

The post Rittman Mead at KScope16 appeared first on Rittman Mead Consulting.

Categories: BI & Warehousing

DEADLINE EXTENDED! Win a FREE PASS to Oracle OpenWorld 2016: Oracle Cloud Platform Innovation Awards - Submit Nominations by June 30th 2016!

WebCenter Team - Tue, 2016-06-21 09:45


Call for Nominations:
Oracle Cloud Platform Innovation 2016

Are you using Oracle Cloud Platform to deliver unique business value? If so, submit a nomination today for the 2016 Oracle Excellence Awards for Oracle Cloud Platform Innovation. These highly coveted awards honor customers and their partners for their cutting-edge solutions using Oracle Cloud Platform. Winners are selected based on the uniqueness of their business case, business benefits, level of impact relative to the size of the organization, complexity and magnitude of implementation, and the originality of architecture.


Customer Winners receive a free pass to Oracle OpenWorld 2016 in San Francisco (September 18-22) and will be honored during a special event at OpenWorld.

Our 2016 Award Categories are:

To be considered for this award, complete the online nomination forms and submit before June 30th, 2016. For any questions email: innovation-cloud-platform_ww_grp@oracle.com

NOTE: The deadline to submit all nominations is 5pm Pacific on June 30th, 2016. Customers don't have to be in production to submit a nomination and nominations are for both Cloud and on-premise solutions.


ORA-01775: looping chain of synonyms

Yann Neuhaus - Tue, 2016-06-21 07:13

This error message is misleading. You may encounter it when you expect ORA-00942: table or view does not exist. Let’s explain.

I’m connected as SCOTT and create a PUBLIC SYNONYM for an object that does not exist:

SQL> create public synonym MONEY for NOTHING;
Synonym created.

No error message.
Only when I query it do I get an error message telling me that there is no table or view behind it:

SQL> select * from NOTHING;
select * from NOTHING
*
ERROR at line 1:
ORA-00942: table or view does not exist

Let’s do the same but call it BONUS instead of MONEY:

SQL> create public synonym BONUS for NOTHING;
Synonym created.
 
SQL> select * from BONUS;
no rows selected

No error here. Why? Because I have a table called BONUS, so the name is resolved with the table and the synonym is not even tried.

I’ll now drop that synonym and create it for the table BONUS. Same name for the public synonym and for the table.

SQL> drop public synonym BONUS;
Synonym dropped.
 
SQL> create public synonym BONUS for BONUS;
Synonym created.

As user SCOTT, when I query BONUS the name is resolved as the table:

SQL> show user
USER is "SCOTT"
SQL> select * from BONUS;
no rows selected

As another user, when I query BONUS the name is resolved as the synonym, which finally reads SCOTT.BONUS:

SQL> show user
USER is "SCOTT"
SQL> select * from BONUS;
no rows selected

In 12c it is easy to see the final query:

SQL> variable c clob
SQL> exec dbms_utility.expand_sql_text('select * from BONUS',:c);
PL/SQL procedure successfully completed.
 
SQL> print c
 
C
----------------------------------------------------------------------------------------------------------
SELECT "A1"."ENAME" "ENAME","A1"."JOB" "JOB","A1"."SAL" "SAL","A1"."COMM" "COMM" FROM "SCOTT"."BONUS" "A1"

But now, what happens when we drop the table?

SQL> drop table SCOTT.BONUS;
Table dropped.

Do you expect a ORA-00942: table or view does not exist?

SQL> select * from BONUS;
select * from BONUS
*
ERROR at line 1:
ORA-01775: looping chain of synonyms

Here is the ‘looping chain of synonyms’. I ask for BONUS. The name resolution first checks for an object in my schema, but there is none:

SQL> select object_type from user_objects where object_name='BONUS';
no rows selected

Then it looks for a public synonym, and there is one:

SQL> select object_type from all_objects where owner='PUBLIC' and object_name='BONUS';
 
OBJECT_TYPE
-----------------------
SYNONYM

So we check what it is a synonym for:

SQL> select table_owner,table_name from all_synonyms where owner='PUBLIC' and synonym_name='BONUS';
 
TABLE_OWNER TABLE_NAME
------------ ------------
SCOTT BONUS

And this is where it gets interesting. Even though the column names include ‘TABLE’, a synonym can reference any kind of object. So it’s not just replacing the synonym with ‘SCOTT.BONUS’, which would raise an ORA-00942: it is doing name resolution of BONUS in the context of the user SCOTT. Something similar to:

SQL> alter session set current_schema=SCOTT;
Session altered.
SQL> select * from BONUS;

And then, what do you expect from that? There is no table named BONUS, but there is a public synonym… and you’re back to the beginning:

select * from BONUS
*
ERROR at line 1:
ORA-01775: looping chain of synonyms

Most of the time you don’t have synonyms that reference other synonyms, so you don’t really have a ‘chain’ of synonyms. Except when the synonym is the only object with that name in the namespace and it references itself: a self-reference loop. So if you see ORA-01775, check whether the referenced object is missing.
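
To hunt for such cases, here is a sketch of a query listing public synonyms whose target object no longer exists (note that a target which is itself a synonym will also show up, so review the result rather than acting on it blindly):

select s.synonym_name, s.table_owner, s.table_name
from dba_synonyms s
where s.owner='PUBLIC' and s.db_link is null
and not exists (
  select 1 from dba_objects o
  where o.owner=s.table_owner
  and o.object_name=s.table_name
  and o.object_type <> 'SYNONYM'
);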

 

Cet article ORA-01775: looping chain of synonyms est apparu en premier sur Blog dbi services.

Oracle Delivers the Power of Engineered Systems to Organizations of All Sizes

Oracle Press Releases - Tue, 2016-06-21 06:41
Press Release
Oracle Delivers the Power of Engineered Systems to Organizations of All Sizes Oracle’s expanded portfolio of database appliances are now available for enterprise and midmarket organizations, and provide customers with a pathway to the cloud

Redwood Shores, Calif.—Jun 21, 2016

Today, Oracle announced two new models of its Oracle Database Appliance (ODA), bringing all of the performance and reliability of Oracle’s Engineered Systems to small and mid-sized businesses for the first time.

In addition, as organizations seek opportunities to transition to the cloud, Oracle’s new database appliances provide a bridge between on-premise systems and the cloud. In the future, organizations that connect their ODAs to the Oracle Cloud will be able to back up their data automatically.

"Bringing in Oracle Database Appliances definitely reduced the complexity of our environment and allowed us to support our 25X data growth over the past few years," said Jacqueline Hufford-Jenson, Senior Manager, Database Administration, LifeLock, a fast-growing, mid-market company.  "Today, using the Oracle Database Appliance straight out of the box, we get significantly reduced latency, which has played a role in helping us grow as a company.”

Oracle’s portfolio of database appliances, like all Oracle Engineered Systems, enables organizations to simplify their journey to the cloud. Customers can backup or archive their critical data, and easily move their workloads to the Oracle Cloud whenever needed. Central to these new versions of the ODA is support for the Oracle Database Standard Edition 2 and Oracle Database Enterprise Edition.

“We’re excited to bring the power, simplicity and capabilities of Oracle’s Engineered Systems in at a price point that allows every organization to save time and money,” said Jim Gargan, senior vice president of Oracle Converged Infrastructure. “With the family of database appliances, Oracle offers built-in expertise for single-instance database and high-availability deployments, while providing a bridge between on-premise systems and the cloud, enabling all of our customers to capitalize on their investment.”

The Oracle Database Appliance is designed to run single-instance databases, database consolidation of multiple databases, and high-availability designs.

Added features include:

  • A starting price of $18,000 for the new, entry-level database appliance for single or small database instances;
  • Reduced time to value: customers can deploy in as little as 30 minutes;
  • NVMe Flash drives for ultimate performance and reliability;
  • Integration with Oracle Cloud, allowing businesses to back up and archive their critical data as well as migrate workloads to the cloud when ready.

“Since Oracle owns the entire stack, its systems are fully hardware-software integrated, pre-tested, and optimized from silicon to applications. They are quickly deployed, extend smoothly to Oracle’s cloud, and the various systems seem to work well together. The latest family members, Oracle Database Appliance X6-2S and X6-2M, should help small and medium-sized businesses put a fully operational entry-level Oracle database in place—apparently no screwdrivers required,” said Peter Rutten, Analyst, IDC Computing Platforms Group.

Contact Info
Katie Barron
Oracle
+1 202-904-1138
katie.barron@oracle.com
Brian Lake
Burson-Marsteller
+1 415.591.4084
Brian.lake@bm.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Katie Barron

  • +1 202-904-1138

Brian Lake

  • +1 415.591.4084

Oracle Mobile Application Accelerator (MAX): Constructing Mobile Apps for Business

As you may already know, Oracle Mobile Cloud Service 2.0 was released, and with this release we got a new capability known as Oracle Mobile Application Accelerator (MAX). This tool was built to...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Pester: Cannot bind argument to parameter ‘Actual’ because it is an empty string.

Matt Penny - Tue, 2016-06-21 03:23

I’m just getting started with Pester and I got this error:

   Cannot bind argument to parameter 'Actual' because it is an empty string.
   at line: 18 in C:\Program Files\WindowsPowerShell\Modules\pester\3.3.5\Functions\Assertions\Be.ps1

The code I’m testing is very simple – it just separates a ‘Property Name’ and a ‘Property Value’.
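
The function itself is not shown here, but based on the debug output, a minimal sketch of it could look like this (the names match the debug output; the real implementation may differ):

function get-HugoNameAndValue {
    param([string]$FrontMatterLine)
    # split on the first colon: left part is the name, right part the value
    $PositionOfFirstColon = $FrontMatterLine.IndexOf(':')
    [pscustomobject]@{
        PropertyName  = $FrontMatterLine.Substring(0, $PositionOfFirstColon)
        PropertyValue = $FrontMatterLine.Substring($PositionOfFirstColon + 1).Trim()
    }
}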

So, when it’s working it does this:

get-HugoNameAndValue -FrontMatterLine "Weighting: 103"
DEBUG: 09:15:37.6806 Start: get-HugoNameAndValue
DEBUG: - FrontMatterLine=Weighting: 103
DEBUG: - get-HugoNameAndValue.ps1: line 5
DEBUG: $PositionOfFirstColon: 9
DEBUG: $PropertyName : {Weighting}
DEBUG: $PropertyValue : { 103}
DEBUG: $PropertyValue : {103}

PropertyName PropertyValue
------------ -------------
Weighting    103          

When I ran it from Pester I got this

GetHugoNameAndValue 06/21/2016 08:45:19 $ invoke-pester
Describing get-HugoNameAndValue
DEBUG: 08:45:56.3377 Start: get-HugoNameAndValue
DEBUG: - FrontMatterLine=Weighting: 103
DEBUG: - get-HugoNameAndValue.ps1: line 5
DEBUG: $PositionOfFirstColon: 9
DEBUG: $PropertyName : {Weighting}
DEBUG: $PropertyValue : { 103}
DEBUG: $PropertyValue : {103}
 [-] returns name and value 189ms
   Cannot bind argument to parameter 'Actual' because it is an empty string.
   at line: 18 in C:\Program Files\WindowsPowerShell\Modules\pester\3.3.5\Functions\Assertions\Be.ps1
Tests completed in 189ms
Passed: 0 Failed: 1 Skipped: 0 Pending: 0

My Pester code was:

$here = Split-Path -Parent $MyInvocation.MyCommand.Path
$sut = (Split-Path -Leaf $MyInvocation.MyCommand.Path).Replace(".Tests.", ".")
. "$here\$sut"

Describe "get-HugoNameAndValue" {
    It "returns name and value" {
        $Hugo = get-HugoNameAndValue -FrontMatterLine "Weighting: 103"
        $value = $Hugo.Value
        $value | Should Be '103'
    }
}

The problem here was simply that I’d got the name of the property wrong. It was ‘PropertyValue’, not just ‘Value’.
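
An easy way to avoid this kind of guesswork is to ask the returned object for its actual property names, for example:

get-HugoNameAndValue -FrontMatterLine "Weighting: 103" | Get-Member -MemberType Properties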

So I changed the Pester

$here = Split-Path -Parent $MyInvocation.MyCommand.Path
$sut = (Split-Path -Leaf $MyInvocation.MyCommand.Path).Replace(".Tests.", ".")
. "$here\$sut"

Describe "get-HugoNameAndValue" {
    It "returns name and value" {
        $Hugo = get-HugoNameAndValue -FrontMatterLine "Weighting: 103"
        $value = $Hugo.PropertyValue
        $value | Should Be '103'
    }
}

…and then it worked:

invoke-pester
Describing get-HugoNameAndValue
DEBUG: 09:22:21.2291 Start: get-HugoNameAndValue
DEBUG: - FrontMatterLine=Weighting: 103
DEBUG: - get-HugoNameAndValue.ps1: line 5
DEBUG: $PositionOfFirstColon: 9
DEBUG: $PropertyName : {Weighting}
DEBUG: $PropertyValue : { 103}
DEBUG: $PropertyValue : {103}
 [+] returns name and value 99ms
Tests completed in 99ms
Passed: 1 Failed: 0 Skipped: 0 Pending: 0

Categories: DBA Blogs

database option - multi tenant part 2

Pat Shuff - Tue, 2016-06-21 02:07
Yesterday we looked at the concept behind multi-tenant and talked about the economics and operational benefits of using the option. Today we are going to look at examples and technical details. Fortunately, this is a hot topic and there are a ton of interviews, tutorials, and videos showing you how to do this. The best place to start is Oracle Technology Network - multitenant. This site lists seven offerings from Oracle Education, six online tutorials, and four video demos. Unfortunately, most books are a little light on this topic and cover it briefly in a chapter buried near the end. The most recent books cover this topic directly: two of them are pre-orders and the third is only a few months old. The other books talk about it as an abstract term with little or no examples. Safari Books does not have many titles that cover this subject because the topic is so new and few books have been published on it.

The Oracle Base Blog has a series of postings about multitenant and does a really good job of showing diagrams and sample code. There is a significant amount of information at this site (24 posts) looking at the subject in depth. I normally provide a variety of links to other bloggers but I think that this work is good enough to deserve top billing by itself.

Internal to Oracle, the GSE Demo pages have a few demos relating to multi-tenant:

  • PaaS - Data Management (Solutions Demo) has a hands on tutorial in the cloud
  • DB12c Multi-Tenant Workshop by William Summerhill is on retriever.us.oracle.com
  • Oracle Database 12c multitenant and in-memory workshop using OPC by Ramulu Posham on retriever.us.oracle.com

For the rest of this blog I am going to go through the workshop by Ramulu Posham, because it is the most complete and does everything 100% in the cloud. We could do this on the Oracle Public Cloud using DBaaS, or with a database installed on IaaS from Oracle, Amazon, or Azure. We cannot do this on Amazon RDS, because multi-tenant is disabled and prohibited from working there.

The schedule for the workshop is

  • 9:00 - 9:15 intro
  • 9:15 -10:15 cloud intro
  • 10:30 - 2:00 multi-tenant workshop
The workshop consists of creating two pluggable database instances in the cloud and looking at pluggable database creation, cloning, and snap cloning. The assumption is that you have a public cloud account and can create two 12c databases in the cloud with less than 100 GB of disk space. You can do this on two 1 OCPU / 7.5 GB instances, but you need the High Performance or Extreme Performance Edition to get multi-tenant to work. The only software that we will need for our Windows 2012 IaaS instance is SwingBench, which helps put a load on the database and allows us to look at resource utilization for a container database and our pluggable instances.

The flow of the workshop is shown in the following slide. We are going to create two databases in the cloud and end up with both of them as pluggable databases in a single container database.

Some of the more interesting slides from the presentation are shown below. The file location slide helped me understand where resources get allocated. The redo logs, for example, are part of the container database and not of each pluggable. You set up Data Guard for all pluggables by configuring the container, and replication happens automatically. The discussion on cloning a database is interesting because you don't need to copy all of the data: you only copy the header information and reference the same data from both the original and the clone. Changes are tracked with file links as data is updated on either side. The managing slide helped me understand that there is still a DBA for each pluggable as well as a master DBA for the container; seeing that in a picture made it clearer. There are also multiple slides on resource management and shares. I pulled a representative slide as well as the summary benefits slide. This is what is covered in the first hour before the hands-on labs start.
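
Since cloning comes up several times in the workshop, here is a minimal sketch of what a local PDB clone looks like in SQL (the names are illustrative; in 12.1 the source must be opened read-only first):

-- run from the container root (CDB$ROOT)
alter pluggable database SALES01 close immediate;
alter pluggable database SALES01 open read only;
create pluggable database SALES01_COPY from SALES01;
alter pluggable database SALES01_COPY open;
-- re-open the source read write afterwards
alter pluggable database SALES01 close immediate;
alter pluggable database SALES01 open;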

To start the lab, we create a 12c High Performance instance called salessvc$GC where $GC is a substitute for each lab participant. We will use 01 as our example so we will create salessvc01.

Note that we call the database instance salessvc01, the ORACLE_SID salesc01, and the pluggable container sales01. The ORACLE_SID and the pluggable database cannot have the same name because it would confuse the listener, so those names must differ. The creation takes between 30 minutes and an hour on our demo accounts. While we are waiting, we will create a second instance named cmrsvc01 with similar parameters, using the SID crms01 and a pluggable container crm01.

Once we have the IP addresses of the two instances, we can create ssh tunnels for ports 443, 5500, 80, and 1521. This is done by creating an ssh tunnel in our putty client. The presentation material for the labs goes through all of these steps with very good screenshots. We have covered all of this before and are just summarizing the steps rather than detailing each one. Refer to previous blog entries or the workshop notes on how to do this.

The sales instance looks like the screen shots below. We configure the ports and look at the directory structure in the /u02/app/oracle/oradata directory to verify that the sales01 pluggable database was created under the container database salesc01.

Once we have the database created and ports configured we download and launch SwingBench to load data into the database and drive loads to test response time and sql performance.

We need to download SwingBench and Java 8 to execute the code properly. Once we download SwingBench we unzip it and install it with a java command.

The only true trick in the install is that we must execute lsnrctl status on the database to figure out what the listener is looking for in the connection string. We do this then type in localhost:1521 and the connection string to populate the database with SwingBench.

We repeat this process for the cmrsvc01 instance, repeat the SwingBench install, and then unplug the pluggable database containing the soe data from the cmrsvc01 container and plug it into the salessvc01 container as a second pluggable database. The big issue here is having to unplug and copy the xml file; it requires uploading the private key and allowing ssh between the two instances. Once this is done, SwingBench is run against both instances to see if performance improves or degrades with two pluggables on the same instance. The instructions do a good job of walking you through all of this.
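
The unplug/plug itself boils down to a few statements. A sketch, with illustrative file paths:

-- on the source container (cmrsvc01):
alter pluggable database CRM01 close immediate;
alter pluggable database CRM01 unplug into '/tmp/crm01.xml';
-- copy the xml file (and the datafiles) to the target host, then
-- on the target container (salessvc01):
create pluggable database CRM01 using '/tmp/crm01.xml'
  nocopy;  -- or COPY / FILE_NAME_CONVERT if the datafiles moved
alter pluggable database CRM01 open;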

Overall, this is a good workshop to go through. It describes how to create pluggable databases and how to unplug and clone PDBs. It is a good hands-on introduction and even brings in performance testing, with a sample program that generates a synthetic load.

Start problems with Nodemanager

Darwin IT - Tue, 2016-06-21 02:05
Since my previous long-running assignment, I have been involved in a few OSB 11g to 12c upgrade projects, where I have been working on automated installs. Hence my articles about automated installs, patching, and domain configuration.

When I create a new domain in 12cR2 (12.2.1) using my scripts, I'm not able to use the domain's nodemanager to start the servers. Actually, I can't even connect to it. I get it running all right, but connecting fails with a message like:
WLSTException: Error occured while performing nmConnect : Cannot connect to Node Manager. : Access to domain 'osb_domain' for user 'weblogic' denied.

When you google this, the standard solution is to change the nodemanager password on the security tab of the domain in the console, like in this article. Actually a very good suggestion, but it did not work for me. It turned out that performing an nmEnroll() did the job in this case. What you need to do is:
  • Start the admin server using the startWeblogic.sh script in the domain root.
  • Verify and correct the nodemanager settings under Environment->Machines-><your machine>, ensuring it is inline with the file nodemanager.properties in {domainHome}/nodemanager.
  • Start wlst.sh
  • Connect to the adminserver using: connect('{adminuser}', '{adminpassword}', '{adminHost}:{adminPort}')
  • Perform: nmEnroll({domainHome}, {domainHome}/nodemanager)
Here I assume the use of a per-domain nodemanager, where {domainHome}/nodemanager is the nodemanager home within the domain home.
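
Put together in wlst.sh, the fix looks like this (host, port, credentials, and paths are placeholders for your own values):

# connect to the admin server (started with startWebLogic.sh)
connect('weblogic', 'welcome1', 'adminhost.example.com:7001')
# register the domain with the per-domain nodemanager
nmEnroll('/u01/domains/osb_domain', '/u01/domains/osb_domain/nodemanager')
# after this, connecting to the nodemanager should succeed:
nmConnect('weblogic', 'welcome1', 'adminhost.example.com', '5556',
          'osb_domain', '/u01/domains/osb_domain', 'Plain')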

Then I was able to connect to the nodemanager, start the AdminServer and then the OSB Server.


At my customer, they had been struggling to configure a 'standalone' nodemanager, as they did in the 11g situation. The nodemanager can be started and connected to, but doing an nmStart of the admin server got:

wls:/nm/osb_domain> nmStart('Adminserver')
Starting server Adminserver ...
Traceback (innermost last):
File "", line 1, in ?
File "", line 188, in nmStart
File "", line 553, in raiseWLSTException
WLSTException: Error occurred while performing nmStart : Error Starting server Adminserver : Received error message from Node Manager Server: [Server start command for WebLogic server 'Adminserver' failed due to: [No such file or directory]. Please check Node Manager log and/or server 'Adminserver' log for detailed information.]. Please check Node Manager log for details.
Use dumpStack() to view the full stacktrace :
wls:/nm/osb_domain>

This is also 12cR2 (12.2.1), with a domain created with the same script. Sharp eyes may notice that the admin server name is not the default: it has a lowercase 's' instead of an uppercase one. They had been fiddling around with the naming of the admin server. What we finally did was keep the non-default naming but cleanse the server folder by removing the data, cache, and tmp folders. We also removed the logs folder to be able to see whether new logs were created when starting from the nodemanager. We configured the per-domain nodemanager and then did the same as above, performing an nmEnroll() against the domain nodemanager. After that, the 'Adminserver' was startable.

Conclusion

I hardly ever had the need to use nmEnroll(): not with a new domain, and in 11g not even when using a separate nodemanager. From colleagues I did not hear of the need to use it in 12c either. So why did I need it to solve the problems sketched here? I haven't sorted that out yet; I hope to get to the bottom of it one day. For the moment, take advantage of these experiences.


Monitoring AWS Costs

Jeff Kemp - Tue, 2016-06-21 00:38

I’ve been running my Apex sites on Amazon EC2 for many years now, and I’ve gone through a number of infrastructure upgrades and price changes over time. I have some alerts set up, e.g. if a server starts getting very busy or if my estimated charges go over a threshold. Today I got an alert saying my estimated monthly bill will be over $100, which is unusual.

One of the most useful reports in AWS is the Instance Usage Reports (found under Dashboard > Reports > EC2 Instance Usage Report). I tell it to report Cost, grouped by Instance Type, which gives me the following:

[Image: AWS EC2 Instance Usage Report, daily cost grouped by instance type]

As you can see, my daily cost was about $1.58 per day, and this shot up on the 16th (note: these rates are for the Sydney region). I was running Oracle on an m1.medium SUSE Linux instance until June 16, when I upgraded it to an m3.medium instance. I have a Reserved Instance (RI) for m1.medium, but not for m3.medium, which is why the cost has shot up. That RI will expire soon; I will purchase an m3.medium RI which will bring the cost of that instance back down to about $1 per day. Until I do that, I will be charged the “On Demand” rate of $4.63 per day.

I’m also running two t2.nano Amazon Linux instances as my frontend Apache servers. Even though they are the smallest available instance type (nano), they barely register over 1% CPU most of the time. I’ve moved all the DNS entries across to one of those nano instances now, so I will soon decommission one which will save me a few extra dollars per month.

As an Apex developer, outsourcing the hardware-related worries to AWS has been the best decision I’ve made. I’ve only suffered a couple of significant outages to date, and in both instances all my servers were still running without issue when connectivity was restored. I can spin up new instances whenever I want, e.g. to test upgrades (you might notice from the graph that I did a test upgrade on an m3.medium instance on June 14).

In case you’re wondering, the total time I needed to take down my Apex instance, take a snapshot, spin up the new instance, and swap the IP address across to it, was about 30 minutes. And that included about 10 minutes lost because I accidentally picked an incorrect option at one point. Not only that, but my upgrade also included changing from magnetic disk to SSD, which seems a bit faster. Overall I’m pretty happy with all that.


Filed under: Oracle

[EM-01901] Internal error in EM Patch Advisor for Patching using EM12cR5

Arun Bavera - Mon, 2016-06-20 23:33

I am trying to patch the DB 12.1.0.2.160419 (APR2016_PSU) on OEL 6 with Patch 20807398 from Note 2034610.1 using EM12cR5.

OMS-side patches:

  • OMS PSU APR Patch: 22570344
  • OMS System Side Bundle Patch: 23218275
  • Patch: BUG:21364690

Agent-side patches:

  • Agent-Side 12.1.0.5.160531 (May 31, 2016): Patch 23124338
  • Agent-Side Monitoring 12.1.0.8.160531 (May 31, 2016): Patch 23217639
  • Agent-Side Discovery 12.1.0.8.5 (November 30, 2015): Patch 22135601

I get the below error, which points to the OMS home, although I am analyzing Patch 20807398 on a DB home!

[Image: EM Patch Advisor error dialog]

The EM logs show the following exception:

2016-06-21 02:21:29,930 [RJob Step 115760] FATAL patch.InitSADB logp.251 - [EM-01901] Internal error in EM Patch Advisor

java.lang.NumberFormatException: For input string: "6000000000093016041902"

at java.lang.NumberFormatException.forInputString(NumberFormatException.java:48)

at java.lang.Long.parseLong(Long.java:422)

at java.lang.Long.valueOf(Long.java:525)

I have reviewed these notes:

ERROR: CHK-012: An Unexpected Exception Occurred when Patching Database (Doc ID 1951435.1)

EM 12c: Enterprise Manager 12c Cloud Control Patch Plan Fails when Checking Target Properties with Error: [EM-01901] Internal error in EM Patch A (Doc ID 1946028.1)

Solution:

Download the patch manually to the Software Library; the issue seems to be with downloading the patch automatically from EM.

Categories: Development

Go Mobile with Oracle JET As Easy As 1 2 3

Andrejus Baranovski - Mon, 2016-06-20 18:38
Oracle JET allows you to build and run hybrid mobile applications. It uses Cordova to run on mobile devices and provide access to device services. This is cool, and what is great about it is that it gets you started with mobile development in minutes. Besides all this, it is free.

I will describe the steps I followed to generate a JET hybrid mobile app and run it in a local browser (in the next posts I will describe how to run it on a simulator and on an actual device).

First of all, you must install Node.js and npm on your machine. This gives you a shell environment to execute the various commands related to JET app generation, setup, build, and deployment. Read this article to learn how to do it - Installing Node.js and updating npm.

Next, follow Step 1 and install Cordova as in Step 5 of the JET Get Started list.

You are ready to generate the JET hybrid mobile app at this point. If you are on Mac OS, don't forget to use sudo, otherwise there will be permission-related errors. Run Yeoman to generate the app:

yo oraclejet:hybrid JETMobileApp --template=navBar --platforms=ios

At this stage you can choose a predefined JET template; here I'm generating the app with the navBar template for the iOS platform. It must complete with Done, without error messages:


Make sure to navigate to the app folder with cd AppName:


Build the generated app with grunt. This is the command I used to build it for iOS (you can see from the log that it produces a *.app file, which can be deployed to a mobile device as an application):

grunt build:dev --platform=ios

It must complete with Done, without errors:


Run the application to test it in a local web browser. See more options (deploying to a simulator or device) here - Serve a Hybrid Mobile Application with Grunt. Use the grunt command:

grunt serve --platform=ios --web=true --disableLiveReload=true

I'm running it with disableLiveReload=true for a reason. It seems like live reload initialization takes long time to start. Template based JET mobile hybrid app is started (in web browser, for testing):


The generated project can be opened in NetBeans; simply select the project from its folder:


Under the src folder you will find the JET content. Try changing the text in any of the generated pages:


Rebuild and serve the application; you should see the changes deployed, in this case on the customers page:

HOWTO solve any problem recursively, PL/SQL edition…

RDBMS Insight - Mon, 2016-06-20 17:47
PROCEDURE solve (my_problem IN VARCHAR2) IS
  -- declarations added so this (almost) compiles; the helper
  -- functions are left to the reader's imagination
  my_idea     VARCHAR2(4000);
  my_code     VARCHAR2(4000);
  new_problem VARCHAR2(4000);
BEGIN
  my_idea := have_great_idea (my_problem);
  my_code := start_coding (my_idea);
  IF i_hit_complications (my_idea)
  THEN
    new_problem := the_complications (my_idea);
    solve (new_problem);
  ELSE
    NULL; -- we will never get here
  END IF;
END solve;

This abuse of recursion was inspired by @ThePracticalDev!

Categories: DBA Blogs

Cloud Integration made easy (in Cloud)

Peeyush Tugnawat - Mon, 2016-06-20 16:32

Integration has always been a critical (but challenging) requirement for an enterprise given the diverse systems and applications in a typical IT landscape.

Today, Software as a Service (SaaS) adoption is driving the need for a cloud based integration solution. When businesses run their applications in the cloud, connecting information and processes between them and to on-premise systems is critical.

Integration is also expected to match the SaaS criteria – instant provisioning, rapid development, maintenance-free and scalable. Oracle Integration Cloud Service is a pure cloud-based platform for highly productive, simplified integration in the cloud.

Oracle Integration Cloud Service (ICS) is making Cloud integration extremely easy for not only Oracle SaaS applications but also various third party SaaS applications.

See what ICS can do for you at https://cloud.oracle.com/integration

When changing CURSOR_SHARING takes effect?

Yann Neuhaus - Mon, 2016-06-20 13:15

I usually don’t advise setting CURSOR_SHARING=FORCE, but imagine your application requires it, you forgot it (or tried to do without it) on migration, and now everything is slow. You want to change it, but when does it take effect? On the next execution? The next parse? A new session?

EXACT

I have the default value where parent cursor is shared only when sql_text is the same:

SQL> show parameter cursor_sharing
 
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cursor_sharing string EXACT

And I check with a query that the predicate is not changed:

SQL> select * from dual where dummy='X';
 
D
-
X
 
SQL> select * from table(dbms_xplan.display_cursor) where plan_table_output like '%filter%';
 
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
1 - filter("DUMMY"='X')

FORCE

I change at system (=instance) level

SQL> alter system set cursor_sharing=force;
System altered.
 
SQL> show parameter cursor_sharing
 
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cursor_sharing string FORCE

I tested without session cached cursors:

SQL> alter session set session_cached_cursors=0;
Session altered.

and even from another session

SQL> connect / as sysdba
Connected.

But the statement still has its literal in the predicate:

SQL> select * from dual where dummy='X';
 
D
-
X
 
SQL> select * from table(dbms_xplan.display_cursor) where plan_table_output like '%filter%';
 
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
1 - filter("DUMMY"='X')

No invalidation, no new cursor. Same old statement.

FLUSH SHARED_POOL

Only when I flush the shared_pool can I execute the statement with literals replaced:

SQL> alter system flush shared_pool;
System altered.
 
SQL> select * from dual where dummy='X';
 
D
-
X
 
SQL> select * from table(dbms_xplan.display_cursor) where plan_table_output like '%filter%';
 
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
1 - filter("DUMMY"=:SYS_B_0)

If you fear a hard parse fest, you can flush specific cursors. I’ve documented the procedure in a previous post.
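
For completeness, here is a sketch of purging a single cursor with dbms_shared_pool instead of flushing everything. The address and hash_value below are placeholders; take the real ones from v$sqlarea for your statement:

select address, hash_value from v$sqlarea where sql_id='&sql_id';
-- pass 'address,hash_value' to the purge procedure, 'C' meaning cursor:
exec sys.dbms_shared_pool.purge('00000000859E39A0,1514859500','C');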

Autotrace

As a side note, do not rely on autotrace for that

SQL> set autotrace on explain
SQL> select * from dual where dummy='X';
 
D
-
X
 
Execution Plan
----------------------------------------------------------
Plan hash value: 272002086
 
--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 2 | 2 (0)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| DUAL | 1 | 2 | 2 (0)| 00:00:01 |
--------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
1 - filter("DUMMY"='X')

Just one more thing that is special with autotrace…

Conclusion

I don’t know exactly how cursor_sharing=force is managed. I thought that the literal replacement occurred before searching for the parent cursor. Don’t hesitate to comment here if you know the ‘why’ behind this behavior. My goal here was just to test what has to be done for a change of CURSOR_SHARING to take effect immediately.

 

The article When changing CURSOR_SHARING takes effect? appeared first on Blog dbi services.

ASM iostats

Yann Neuhaus - Mon, 2016-06-20 12:15

A few screenshots and a link here. Sysadmins do not like ASM because they don’t have the tools they like to manage the disks. For example, they don’t want to run SQL queries to check performance, and asmcmd iostat is quite limited. Here is a nice way to get I/O statistics easily from the command line.

The perl script is from Bertrand Drouvot (do not miss his twitter profile picture) and is easily downloadable from his blog:
https://bdrouvot.wordpress.com/2013/10/04/asm-metrics-are-a-gold-mine-welcome-to-asm_metrics-pl-a-new-utility-to-extract-and-to-manipulate-them-in-real-time/

It only runs queries against the ASM instance, so there is no risk.
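
Under the hood it samples the cumulative ASM I/O counters at intervals to derive rates. A minimal sketch of that kind of query, to be run on the ASM instance (assuming the usual V$ASM_DISK_IOSTAT columns):

SQL> -- cumulative I/O per disk group since instance startup
SQL> select g.name as diskgroup,
  2         sum(i.reads + i.writes) as total_ios,
  3         sum(i.bytes_read + i.bytes_written) as total_bytes,
  4         sum(i.read_time + i.write_time) as total_io_time
  5  from v$asm_disk_iostat i
  6  join v$asm_diskgroup g on g.group_number = i.group_number
  7  group by g.name;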

In order to show the relevance, I took screenshots from this script and from the XtremIO console on a system where all ASM disks, and only them, are on the XtremIO brick, so you can compare the statistics from the storage array with those from the ASM instance.

Bandwidth

[Screenshot: bandwidth, asm_metrics.pl vs. the XtremIO console]

IOPS

[Screenshot: IOPS, asm_metrics.pl vs. the XtremIO console]

Latency

[Screenshot: latency, asm_metrics.pl vs. the XtremIO console]

 

The article ASM iostats appeared first on Blog dbi services.

SQL Server 2016 AlwaysOn: distributed availability groups

Yann Neuhaus - Mon, 2016-06-20 10:49

This time, I will talk about distributed availability groups. What’s that? In short, a group of availability groups. Sounds good, right? But in which cases might we need such an architecture? Firstly, let’s say that a distributed availability group runs on top of two distinct availability groups, meaning that they reside on two distinct WSFCs, each with its own quorum and voting management. Referring to the Microsoft documentation here, we may think that this new feature will mainly be used in DR scenarios, but I’m not sure I understand Microsoft on this sentence:

You want the data to continually replicate to the DR site, but you don’t want a potential network problem or issue at the DR site to bring down your primary site

Indeed, we don’t need this new feature to prevent a network problem or an issue at the DR site from bringing down the primary site. We may simply exclude the cluster nodes at the DR site from voting. Maybe I missed something here, and I will probably come back to this when I get more information.

Moreover, I cannot imagine my customers using an additional cluster just for DR purposes. Even supposing the DR site is costless from a SQL Server license perspective, we would need to maintain a “complex” architecture (a WSFC) only for that.

After discussing with another French MVP, Christophe Laporte (@Conseilit), we began to draw some pictures of potential scenarios where using a DAG may be a good solution. Firstly, consider a customer that has many applications. Some of them may run on the primary site and others on the DR site because there is a high latency between the two datacenters. In this specific context, you may implement one availability group in each datacenter for HA purposes and add a distributed availability group for DR recovery, as pictured below.

[Diagram: DAG scenario, one availability group per datacenter]

Let’s now say a big company bought another business that includes an existing datacenter infrastructure with availability groups used by the acquired company. They may want to introduce HA + DR by using a distributed availability group on both sides without performing any big changes. Of course, we have to keep in mind the cost of such an architecture …

Another probable advantage of distributed availability groups is that the primary replica only has to send log blocks to one replica: the primary of the other availability group. Let’s imagine a traditional implementation with 2 synchronous replicas at the primary site and 4 or maybe 6 asynchronous replicas at the DR site used for scale-out reporting purposes. In this case, even if all the read-only replicas are asynchronous, the failure of one of them may impact the primary replica, because the transaction log file won’t be truncated by log backup operations until we fix the issue.
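
A quick way to spot that situation on the primary, as a sketch (the database name is illustrative): the log_reuse_wait_desc column reports AVAILABILITY_REPLICA while a replica is holding up log truncation.

-- check what is preventing transaction log truncation
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'WideWorldImporters';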

[Diagram: DAG scenario, primary site with multiple asynchronous DR replicas]

We could have up to 6 replicas leading to transaction log management issues in this specific context. Let’s now change the game by including all of the read-only replicas in one dedicated availability group at the DR site, itself included in a DAG. The failure of one read-only replica at the DR site then only impacts the primary of the DR availability group.

 

[Diagram: DAG scenario, read-only replicas grouped in a DR availability group]

I believe that other scenarios are possible, and we will discover some of them through experience. Please feel free to comment or add your thoughts.

Ok, it’s time to implement our first DAG. In my lab environment I provisioned two additional virtual machines and then configured another WSFC that includes them. So I finally get an environment with two WSFCs of two nodes each. The first one is already installed and used for direct seeding (see the direct seeding post). We will also leverage direct seeding when implementing the DAG, in order to replicate the WideWorldImporters database between the two availability groups.

But before installing the DAG itself, let’s install the second availability group.

:CONNECT WIN20123SQL16\SQL16

USE master;

-- primary replica
CREATE AVAILABILITY GROUP [testGrp2]   
FOR    
REPLICA ON 
N'WIN20123SQL16\SQL16' 
WITH (ENDPOINT_URL = N'TCP://WIN20123SQL16.dbi-services.test:5022',  
    FAILOVER_MODE = MANUAL,  
    AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,   
    BACKUP_PRIORITY = 50,   
    SECONDARY_ROLE(ALLOW_CONNECTIONS = ALL),   
    SEEDING_MODE = AUTOMATIC)
,   
N'WIN20124SQL16\SQL16' 
WITH (ENDPOINT_URL = N'TCP://WIN20124SQL16.dbi-services.test:5022',   
    FAILOVER_MODE = MANUAL,   
    AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,   
    BACKUP_PRIORITY = 50,   
    SECONDARY_ROLE(ALLOW_CONNECTIONS = ALL),   
    SEEDING_MODE = AUTOMATIC);   
GO  

ALTER AVAILABILITY GROUP [testGrp2]    
ADD LISTENER 'lst-testgrp2' 
( 
	WITH IP ( ('192.168.5.121', '255.255.255.0') ) , 
	PORT = 1433);    
GO 

:CONNECT WIN20124SQL16\SQL16

USE master;

-- secondary replica
ALTER AVAILABILITY GROUP [testGrp2] JOIN;
ALTER AVAILABILITY GROUP [testGrp2] GRANT CREATE ANY DATABASE;
GO

And finally the distributed availability group itself:

:CONNECT WIN20121SQL16\SQL16

USE master;

-- Primary cluster 
--DROP AVAILABILITY GROUP [distributedtestGrp]  
CREATE AVAILABILITY GROUP [distributedtestGrp]  
WITH (DISTRIBUTED)   
AVAILABILITY GROUP ON  
'testGrp' 
WITH    
(   
	LISTENER_URL = 'tcp://lst-testgrp:5022',    
	AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,   
	FAILOVER_MODE = MANUAL,   
    SEEDING_MODE = AUTOMATIC   
),   
'testGrp2' 
WITH    
(   
	LISTENER_URL = 'tcp://lst-testgrp2:5022',   
    AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,   
    FAILOVER_MODE = MANUAL,   
    SEEDING_MODE = AUTOMATIC   
);    
GO   

:CONNECT WIN20123SQL16\SQL16

USE master;

-- secondary cluster
ALTER AVAILABILITY GROUP [distributedtestGrp]   
JOIN   
AVAILABILITY GROUP ON  
'testGrp' 
WITH    
(   
	LISTENER_URL = 'tcp://lst-testgrp:5022',    
    AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,   
    FAILOVER_MODE = MANUAL,   
    SEEDING_MODE = AUTOMATIC   
),   
'testGrp2' 
WITH    
(   
	LISTENER_URL = 'tcp://lst-testgrp2:5022',   
    AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,   
    FAILOVER_MODE = MANUAL,   
    SEEDING_MODE = AUTOMATIC   
);    
GO

You may notice the special DISTRIBUTED syntax. In addition, the replicas are now the listeners of each availability group.

[Screenshot: the DAG in SSMS]

One interesting thing here is that we can’t manage the DAG from SSMS: all options are greyed out. So let’s have a look at some DMVs.

select 
	ag.name as group_name,
	ag.is_distributed,
	ar.replica_server_name as replica_name,
	ar.endpoint_url,
	ar.availability_mode_desc,
	ar.failover_mode_desc,
	ar.primary_role_allow_connections_desc as allow_connections_primary,
	ar.secondary_role_allow_connections_desc as allow_connections_secondary,
	ar.seeding_mode_desc as seeding_mode
from sys.availability_replicas as ar
join sys.availability_groups as ag
	on ar.group_id = ag.group_id;
go

 

[Screenshot: DAG configuration from the DMVs]

For a DAG, the replica names are the availability group names.

We may get information about direct seeding between the two availability groups in the same manner as previously:

 

select 
	ag.name as aag_name,
	ar.replica_server_name,
	d.name as database_name,
	has.current_state,
	has.failure_state_desc as failure_state,
	has.error_code,
	has.performed_seeding,
	has.start_time,
	has.completion_time,
	has.number_of_attempts
from sys.dm_hadr_automatic_seeding as has
join sys.availability_groups as ag
	on ag.group_id = has.ag_id
join sys.availability_replicas as ar
	on ar.replica_id = has.ag_remote_replica_id
join sys.databases as d
	on d.group_database_id = has.ag_db_id

 

[Screenshot: automatic seeding information for WideWorldImporters]

 

select 
	local_physical_seeding_id,
	remote_physical_seeding_id,
	local_database_name,
	@@servername as local_machine_name,
	remote_machine_name,
	role_desc as [role],
	transfer_rate_bytes_per_second,
	transferred_size_bytes / 1024 as transferred_size_KB,
	database_size_bytes / 1024 as database_size_KB,
	DATEADD(mi, DATEDIFF(mi, GETUTCDATE(), GETDATE()), start_time_utc) as start_time,
	DATEADD(mi, DATEDIFF(mi, GETUTCDATE(), GETDATE()), estimate_time_complete_utc) as estimate_time_complete,
	total_disk_io_wait_time_ms,
	total_network_wait_time_ms,
	is_compression_enabled
from sys.dm_hadr_physical_seeding_stats

[Screenshot: physical seeding statistics for WideWorldImporters]

Ok, let’s perform a basic test that consists of creating a table and inserting data into the WideWorldImporters database from each listener.

:CONNECT lst-testgrp

use [WideWorldImporters];

create table dbo.test (id int identity);
go

insert dbo.test default values;
go

:CONNECT lst-testgrp2

use [WideWorldImporters];

create table dbo.test (id int identity);
go

insert dbo.test default values;
go

 

Connecting to lst-testgrp…
(1 row(s) affected)
Disconnecting connection from lst-testgrp…

Connecting to lst-testgrp2…
Msg 3906, Level 16, State 2, Line 14
Failed to update database “WideWorldImporters” because the database is read-only.
Msg 3906, Level 16, State 2, Line 18
Failed to update database “WideWorldImporters” because the database is read-only.
Disconnecting connection from lst-testgrp2…

Ok, as expected, the availability group at the DR site is in read-only mode. I’m not able to create or update anything from there.
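
To double-check which side currently holds the primary role, here is a sketch against the replica state DMV, to be run locally on a replica of each availability group (the rows flagged is_distributed describe the DAG itself):

SELECT ag.name, ag.is_distributed, ars.role_desc
FROM sys.availability_groups AS ag
JOIN sys.dm_hadr_availability_replica_states AS ars
    ON ars.group_id = ag.group_id
WHERE ars.is_local = 1;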

Let’s perform one last test after switching over the DAG from testGrp to testGrp2. From the primary availability group:

ALTER AVAILABILITY GROUP [distributedtestGrp] FORCE_FAILOVER_ALLOW_DATA_LOSS;

 

[Screenshot: the DAG in SSMS after failover]

 

This time I cannot insert data from the first availability group testGrp:

Connecting to lst-testgrp…
Msg 976, Level 14, State 1, Line 2
The target database, ‘WideWorldImporters’, is participating in an availability group and is currently not accessible for queries. Either data movement is suspended or the availability replica is not enabled for read access. To allow read-only access to this and other databases in the availability group, enable read access to one or more secondary availability replicas in the group.  For more information, see the ALTER AVAILABILITY GROUP statement in SQL Server Books Online.
Disconnecting connection from lst-testgrp…

Connecting to lst-testgrp2…
(1 row(s) affected)
Disconnecting connection from lst-testgrp2…

In this blog we have introduced two new features shipped with SQL Server 2016 availability groups. At first glance, they seem to be pretty cool features and will extend the scope of availability group capabilities. I think we will see the pros and cons over time, through our other tests and implementations at customer sites. If so, other blog posts will be coming soon.

Stay tuned!

The article SQL Server 2016 AlwaysOn: distributed availability groups appeared first on Blog dbi services.

SQL Server 2016 AlwaysOn: Add DBs in AGs with direct seeding

Yann Neuhaus - Mon, 2016-06-20 10:26

In this blog I would like to talk about two new features shipped with SQL Server 2016 for AlwaysOn availability groups. The first one concerns the new direct seeding capability and the second one introduces distributed availability groups (DAG). I will talk about the second one in a next blog post.

First of all, let’s talk about direct seeding. At first glance, this feature seems very interesting because it greatly simplifies the process of adding a highly available database. Indeed, with previous versions, adding a database to an availability group required extra steps from the database administrator to make it highly available, like backup/restore operations to initialize the secondaries. Think of a software vendor that wants to install or add a database (SharePoint, for instance).

Enabling the seeding mode for the concerned replicas reduces the amount of work needed to add a database by automating the initialization process a little more. We just have to add the database to the availability group and we are done: no extra backup/restore tasks, no need to configure a file share for backup/restore, and no manual initialization. The CREATE and ALTER AVAILABILITY GROUP syntax includes a new per-replica parameter named SEEDING_MODE that has two settings: MANUAL and AUTOMATIC. The former means that we will use the legacy method to initialize databases on the secondaries (by using backup/restore), while the latter uses the new automated initialization method, which seeds the database data across the replication network.
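
For an availability group that already exists, the seeding mode can also be switched per replica afterwards; a minimal sketch, reusing the replica name from the demo below:

-- run on the primary: switch one replica to automatic seeding
ALTER AVAILABILITY GROUP [testGrp]
MODIFY REPLICA ON N'WIN20122SQL16\SQL16'
WITH (SEEDING_MODE = AUTOMATIC);
GO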
Let’s have a quick demo of this new feature. I will configure a basic availability group with two replicas (WIN20121SQL16\SQL16 and WIN20122SQL16\SQL16). You may notice the new parameter SEEDING_MODE = AUTOMATIC, meaning that we will use the new automated method for initializing databases in this availability group. You may also notice that we no longer need to create a “dummy” database before creating an availability group.

On the primary:

-- primary replica
CREATE AVAILABILITY GROUP [testGrp]   
FOR   
REPLICA ON 
N'WIN20121SQL16\SQL16' 
WITH (ENDPOINT_URL = N'TCP://WIN20121SQL16.dbi-services.test:5022',  
    FAILOVER_MODE = AUTOMATIC,  
    AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,   
    BACKUP_PRIORITY = 50,   
    SECONDARY_ROLE(ALLOW_CONNECTIONS = NO),   
    SEEDING_MODE = AUTOMATIC)
,   
N'WIN20122SQL16\SQL16' 
WITH (ENDPOINT_URL = N'TCP://WIN20122SQL16.dbi-services.test:5022',   
    FAILOVER_MODE = AUTOMATIC,   
    AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,   
    BACKUP_PRIORITY = 50,   
    SECONDARY_ROLE(ALLOW_CONNECTIONS = NO),   
    SEEDING_MODE = AUTOMATIC);   
GO

System views and DMVs have been updated accordingly. For example, sys.availability_replicas provides a new seeding_mode column.

select 
	ag.name as group_name,
	ar.replica_server_name as replica_name,
	ar.endpoint_url,
	ar.availability_mode_desc,
	ar.failover_mode_desc,
	ar.primary_role_allow_connections_desc as allow_connections_primary,
	ar.secondary_role_allow_connections_desc as allow_connections_secondary,
	ar.seeding_mode_desc as seeding_mode
from sys.availability_replicas as ar
join sys.availability_groups as ag
	on ar.group_id = ag.group_id;
go

[Screenshot: availability group and replica configuration]

Let’s complete the configuration of the availability group by granting the availability group itself the extra permission to create databases on the secondaries.

ALTER AVAILABILITY GROUP [testGrp] JOIN;
ALTER AVAILABILITY GROUP [testGrp] GRANT CREATE ANY DATABASE;
GO

At this point we just have to add the new WideWorldImporters database to the testGrp availability group and our job is over!

-- primary replica add database WideWorldImporters to testGrp
ALTER AVAILABILITY GROUP [testGrp] ADD DATABASE [WideWorldImporters];
GO

The system will then seed the database data to the secondary. We may get information about the seeding progress from two new DMVs: sys.dm_hadr_automatic_seeding and sys.dm_hadr_physical_seeding_stats.

The sys.dm_hadr_automatic_seeding DMV gives information about successful or failed database seedings and the corresponding error messages. The sys.dm_hadr_physical_seeding_stats DMV provides information about currently running seeding operations, like the estimated completion time and related statistics about I/O and network usage.

Here is what I found after adding the database to the availability group:

select 
	ag.name as aag_name,
	ar.replica_server_name,
	d.name as database_name,
	has.current_state,
	has.failure_state_desc as failure_state,
	has.error_code,
	has.performed_seeding,
	has.start_time,
	has.completion_time,
	has.number_of_attempts
from sys.dm_hadr_automatic_seeding as has
join sys.availability_groups as ag
	on ag.group_id = has.ag_id
join sys.availability_replicas as ar
	on ar.replica_id = has.ag_remote_replica_id
join sys.databases as d
	on d.group_database_id = has.ag_db_id

 

[Screenshot: seeding sessions information]

select 
	local_physical_seeding_id,
	remote_physical_seeding_id,
	local_database_name,
	@@servername as local_machine_name,
	remote_machine_name,
	role_desc as [role],
	transfer_rate_bytes_per_second,
	transferred_size_bytes / 1024 as transferred_size_KB,
	database_size_bytes / 1024 as database_size_KB,
	DATEADD(mi, DATEDIFF(mi, GETUTCDATE(), GETDATE()), start_time_utc) as start_time,
	DATEADD(mi, DATEDIFF(mi, GETUTCDATE(), GETDATE()), estimate_time_complete_utc) as estimate_time_complete,
	total_disk_io_wait_time_ms,
	total_network_wait_time_ms,
	is_compression_enabled
from sys.dm_hadr_physical_seeding_stats

 

[Screenshot: seeding progress information]

The WideWorldImporters database is not that big by default (roughly 455MB). The estimated duration to seed the database is approximately 1’14’’.

Let’s check that estimate against the transfer rate, reported as ~42MB/s (the transfer_rate_bytes_per_second column), and the total size of data to transfer: 455 / 42 ≈ 11 seconds to transfer all the data.

Let’s now have a look at the SQL Server error log on the secondary. We can see information related to the data seeding operation.

[Screenshot: SQL Server error log on the secondary during seeding]

The seeding operation occurred between 19:15:53 and 19:16:08, so approximately 15 seconds, which is not far from our previous math. The estimated duration based on the start and estimated completion times seems less accurate than the transfer throughput, but I need to perform further tests and investigations before coming to any conclusion.

In short, a pretty cool feature, but my guess is that direct seeding may not be suitable for all scenarios. One customer example that comes to mind is big databases running inside availability groups. I had the opportunity to migrate SAP databases (~1TB) to SQL Server 2014 and availability groups: direct seeding would not be the right choice in this case. Indeed, direct seeding does not compress the data stream by default (compression of the seeding stream can be turned on with trace flag 9567), and we can easily imagine that seeding all the data across the network may take a long time. We can change this default behaviour with that trace flag, but at the cost of high CPU consumption, which is probably not a good idea if we plan to use the application immediately after adding the SAP database to the availability group. In addition, using log stream compression may hurt performance with a heavy OLTP workload because it introduces latency by design. There is a tradeoff to find here … Therefore the backup/restore process seems to be the best option, by playing with optimization techniques like compression, increasing the number of backup media files, or changing the MAXTRANSFERSIZE and BUFFERCOUNT parameter values.
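
As a sketch, that trace flag is enabled instance-wide before adding the database, and can be disabled afterwards (the CPU overhead mentioned above is to be tested):

-- enable compression of the direct seeding data stream (global)
DBCC TRACEON (9567, -1);
GO
-- ... add the database to the availability group here ...
DBCC TRACEOFF (9567, -1);
GO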

Another scenario that comes to mind concerns migrations involving thousands of databases at the same time. What is the fastest way to migrate all these databases? Adding them to the availability group and letting the system seed the data across the network to each secondary, or copying the database files to each secondary and choosing to skip the initial data synchronization? At this point I need to investigate further before I can answer.
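
For reference, the copy-and-skip-synchronization path looks roughly like this sketch (names and paths are illustrative):

-- on each secondary: restore a recent backup without recovery
RESTORE DATABASE [MyDB] FROM DISK = N'\\backupshare\MyDB.bak' WITH NORECOVERY;
RESTORE LOG [MyDB] FROM DISK = N'\\backupshare\MyDB.trn' WITH NORECOVERY;
-- then join it to the availability group, bypassing any data seeding
ALTER DATABASE [MyDB] SET HADR AVAILABILITY GROUP = [testGrp];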

Another important thing is that direct seeding is not part of the availability group wizard GUI. You have probably noticed that I didn’t use the wizard in this blog post, and this is intended behaviour. A Connect item has already been created by Brent Ozar here. Please feel free to vote!

See you!

The article SQL Server 2016 AlwaysOn: Add DBs in AGs with direct seeding appeared first on Blog dbi services.
