Feed aggregator

IGNORE_ROW_ON_DUPKEY_INDEX-like hint for foreign key constraint violations.

Tom Kyte - Wed, 2017-04-05 02:06
Hi, like we have the IGNORE_ROW_ON_DUPKEY_INDEX hint, which tells Oracle to ignore duplicate rows during my insert, do we have any hint to ignore records that violate foreign key constraints and load the rest? For example: if I have ...
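There is no direct equivalent hint for foreign key violations, but DML error logging gets close to the effect the question asks for. Below is a hedged sketch, not from the thread itself and with made-up table names: offending rows are diverted into an error table instead of failing the whole insert.

-- create the error log table once (defaults to ERR$_CHILD_TAB), then load
exec dbms_errlog.create_error_log('CHILD_TAB')
insert into child_tab
select * from staging_tab
log errors into err$_child_tab ('fk load') reject limit unlimited;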
Categories: DBA Blogs

How to replace line breaks with tab spaces in all the columns of the database

Tom Kyte - Wed, 2017-04-05 02:06
We have one database with 10 tables of 10 columns each, and 5 columns in each table contain line-break data (\n). I want the data in those 5 columns to have the line breaks replaced with spaces. I want it in a single query at sch...
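As a hedged sketch of the kind of statement involved (the table and column names are invented for illustration, not taken from the thread): nested REPLACE calls strip both carriage return and line feed for one column, and a query against the data dictionary could generate one such UPDATE per column for the whole schema.

update my_table
   set col1 = replace(replace(col1, chr(13), ''), chr(10), ' ')
 where instr(col1, chr(10)) > 0;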
Categories: DBA Blogs

Extra line at the end of file while calculating checksum in oracle

Tom Kyte - Wed, 2017-04-05 02:06
Hi Tom, This is the file generated by my procedure: <code> "FV","3.0" "FH","20170404","001","1","loftpaypal@loft.com" "BH","1","USD","loftpaypal@loft.com" "1","RF","03F359879C231283P","USD","13.99","","591111895" "2","RF","0FA889952A037821...
Categories: DBA Blogs

Cannot "select *" table from other Server with schema name prefix

Tom Kyte - Wed, 2017-04-05 02:06
Hi Tom, My department has 2 Oracle database servers in different cities, let's say db1 and db2. Data exchange between the databases only uses Oracle jobs which either refresh materialized views or do delete-inserts on tables. A few days ago there was ne...
Categories: DBA Blogs

ERR_CONNECTION_REFUSED Error.

Tom Kyte - Wed, 2017-04-05 02:06
Hi friends. I am unable to connect to localhost for APEX version 4.2. I am getting the error "ERR_CONNECTION_REFUSED" when I open localhost in Chrome (http://localhost:8080/apex/apex_admin). [This site can't be reached, localhost refused to ...
Categories: DBA Blogs

Literals with two single quotes unexpectedly working

Tom Kyte - Wed, 2017-04-05 02:06
Hi Tom, after many years using Oracle, I fortuitously found an amazing statement, with strange literals accepted in SQL and PL/SQL, although their syntax is incorrect. The test case is in the LiveSQL link. I tested it on the 10.2.0.4 and 12.1.0.2 versions...
Categories: DBA Blogs

Specify values that are allowed in comma delimited string using regex

Tom Kyte - Wed, 2017-04-05 02:06
Hello, I want to know if it is possible to specify what kind of values are allowed when I insert a comma-separated string, e.g. the values that are allowed: subcat1,subcat2,subcat3,subcat4. I have the table project with the column subcategories and ...
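One hedged sketch of an approach (assuming the project table and subcategories column from the question, and that repeated values are acceptable): a check constraint with REGEXP_LIKE that only accepts a comma-separated list built from the allowed subcategory names.

alter table project add constraint chk_subcategories check (
  regexp_like(subcategories, '^(subcat[1-4])(,subcat[1-4])*$')
);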
Categories: DBA Blogs

oracle interval partitioning 11.2.0.4

Tom Kyte - Wed, 2017-04-05 02:06
Hi, I have 4 tables in 4 schemas with monthly interval partitioning: Schemas: C, D, F, G Tables: T1, T2, T3, T4 So, I have these 4 tables in each schema (C, D, F, G). On April 1st, Oracle failed to create partitions (April) for the same table (tab...
Categories: DBA Blogs

How to Perform a Security Audit of an Oracle Database Training in Athens, Greece

Pete Finnigan - Wed, 2017-04-05 02:06
I will be teaching my two-day class How to Perform a Security Audit of an Oracle Database in Athens, Greece on May 16th and 17th 2017, organised by Oracle University. This is a great class that helps you understand....[Read More]

Posted by Pete On 04/04/17 At 02:45 PM

Categories: Security Blogs

Google Big Query and Oracle Smart Scan

Pakistan's First Oracle Blog - Tue, 2017-04-04 23:26
Marveling at technology is my pastime, and lately there are two technologies which have truly made me say 'Simply wow.' One is Google's Big Query and the other is Oracle's Exadata Smart Scan.

I have been managing data in different databases long enough to appreciate how critical it is for the client to get results out of their data as fast as possible. It's all about returning results after issuing a query or clicking a button.

End users and developers don't really care how many terabytes of data are there. DBAs and data architects might love to boast about the humongous volumes of data they store and support, but there is nothing to write home about if that data cannot be retrieved as quickly as possible.

When I first migrated a 9 TB database to Oracle Exadata a few years back and ran a heavy report for the first time, it returned results in a jiffy and my jaw dropped. This report used to take at least 70 minutes without Smart Scan. I had to bring in the developer to double-check whether the results were correct. Oracle's Exadata Smart Scan is phenomenal.

I had a similar jaw-dropping experience yesterday when I saw Google Cloud Platform's Big Query in action during an OnBoard session in Sydney. A SQL query with a regex was run on a multi-terabyte dataset with lots of sorting, and it returned the results from the cloud in a few seconds. The best thing about Big Query is that all-familiar SQL is used and nothing fancy is needed. You get your petabytes of data warehouse in Google's cloud and then use SQL to analyze that dataset. The sweet part is the agility and transparency with which that data is returned.

Simply beautiful.
Categories: DBA Blogs

Bushy Join Trees in Oracle 12.2

Pythian Group - Tue, 2017-04-04 10:05

Multiple optimizer features were introduced in the Oracle Database 12.2 release. Bushy join trees are the one I'll be demonstrating in this post.

First, I’ll create four tables with two indexes:

create table t1 as select rownum n1, rownum n2 from dual connect by  level <= 1000000;
create table t2 as select rownum n1, rownum n2 from dual connect by  level <= 100;
create table t3 as select rownum n1, rownum n2 from dual connect by  level <= 1000000;
create table t4 as select rownum n1, rownum n2 from dual connect by  level <= 100;

create index idx_t1 on t1(n1);
create index idx_t3 on t3(n1);

Now, I’ll run the next query:

select * from t1, t2, t3, t4 where t1.n1 = t2.n1 and t3.n1 = t4.n1 and t1.n2=t3.n2;

The most efficient "bushy" execution plan for this query looks like the following:

    -----HJ----- 
    |           |
    |           |
---NL---    ---NL---
|      |    |       |
T1     T2   T3      T4

This plan joins tables T1 and T2 with a nested loop using the idx_t1 index, joins tables T3 and T4 with a nested loop using the idx_t3 index, and finally joins the results of the two previous joins with a hash join.
But Oracle has never been able to generate such an execution plan automatically. You had to rewrite the query with subqueries and a bunch of hints in order to force this kind of execution plan, along the lines of the sketch below.
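A hedged sketch of that pre-12.2 workaround (not taken from the original post; the hint choices are illustrative): wrap each nested-loop pair in an inline view, prevent view merging, and hash-join the two views.

select /*+ leading(v1 v2) use_hash(v2) */ *
  from (select /*+ no_merge leading(t2 t1) use_nl(t1) index(t1 idx_t1) */
               t1.n1, t1.n2, t2.n1 t2_n1, t2.n2 t2_n2
          from t1, t2
         where t1.n1 = t2.n1) v1,
       (select /*+ no_merge leading(t4 t3) use_nl(t3) index(t3 idx_t3) */
               t3.n1, t3.n2, t4.n1 t4_n1, t4.n2 t4_n2
          from t3, t4
         where t3.n1 = t4.n1) v2
 where v1.n2 = v2.n2;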

The following example shows the typical execution plan that Oracle can generate:

Execution Plan
----------------------------------------------------------
Plan hash value: 1007837908

-----------------------------------------------------------------------------------------
| Id  | Operation		       | Name	| Rows	| Bytes | Cost (%CPU)| Time	|
-----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT	       |	|     1 |    32 |   888   (1)| 00:00:01 |
|*  1 |  HASH JOIN		       |	|     1 |    32 |   888   (1)| 00:00:01 |
|*  2 |   HASH JOIN		       |	|   100 |  2600 |   885   (1)| 00:00:01 |
|   3 |    NESTED LOOPS 	       |	|   100 |  1600 |   303   (0)| 00:00:01 |
|   4 |     NESTED LOOPS	       |	|   100 |  1600 |   303   (0)| 00:00:01 |
|   5 |      TABLE ACCESS FULL	       | T2	|   100 |   600 |     3   (0)| 00:00:01 |
|*  6 |      INDEX RANGE SCAN	       | IDX_T1 |     1 |	|     2   (0)| 00:00:01 |
|   7 |     TABLE ACCESS BY INDEX ROWID| T1	|     1 |    10 |     3   (0)| 00:00:01 |
|   8 |    TABLE ACCESS FULL	       | T3	|  1000K|  9765K|   579   (1)| 00:00:01 |
|   9 |   TABLE ACCESS FULL	       | T4	|   100 |   600 |     3   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("T3"."N1"="T4"."N1")
   2 - access("T1"."N2"="T3"."N2")
   6 - access("T1"."N1"="T2"."N1")

We can see a full scan of table T3 in this plan, and T3 can be significantly large.

Oracle 12.2 has introduced a new BUSHY_JOIN hint and a bunch of hidden "_optimizer_bushy" parameters:
_optimizer_bushy_join
_optimizer_bushy_fact_min_size
_optimizer_bushy_fact_dim_ratio
_optimizer_bushy_cost_factor.

The _optimizer_bushy_join parameter is 'off' by default, so you have to either set it to 'on' or use the BUSHY_JOIN hint.
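For completeness, a minimal sketch of the parameter route (hidden parameters should only be touched for testing or on Oracle Support's advice):

alter session set "_optimizer_bushy_join" = 'on';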

Let’s try with a hint:

select   /*+ qb_name(main) BUSHY_JOIN(@"MAIN" ( "T1"@"MAIN"   "T2"@"MAIN" )) */
 * from t1, t2, t3, t4 where t1.n1 = t2.n1 and t3.n1 = t4.n1 and t1.n2=t3.n2;

100 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 1929967733

----------------------------------------------------------------------------------------------------
| Id  | Operation		       | Name		   | Rows  | Bytes | Cost (%CPU)| Time	   |
----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT	       |		   |   100 |  6800 |   606   (0)| 00:00:01 |
|*  1 |  HASH JOIN		       |		   |   100 |  6800 |   606   (0)| 00:00:01 |
|   2 |   NESTED LOOPS		       |		   |   100 |  1600 |   303   (0)| 00:00:01 |
|   3 |    NESTED LOOPS 	       |		   |   100 |  1600 |   303   (0)| 00:00:01 |
|   4 |     TABLE ACCESS FULL	       | T4		   |   100 |   600 |	 3   (0)| 00:00:01 |
|*  5 |     INDEX RANGE SCAN	       | IDX_T3 	   |	 1 |	   |	 2   (0)| 00:00:01 |
|   6 |    TABLE ACCESS BY INDEX ROWID | T3		   |	 1 |	10 |	 3   (0)| 00:00:01 |
|   7 |   VIEW			       | VW_BUSHY_D96D1B60 |   100 |  5200 |   303   (0)| 00:00:01 |
|   8 |    NESTED LOOPS 	       |		   |   100 |  1600 |   303   (0)| 00:00:01 |
|   9 |     NESTED LOOPS	       |		   |   100 |  1600 |   303   (0)| 00:00:01 |
|  10 |      TABLE ACCESS FULL	       | T2		   |   100 |   600 |	 3   (0)| 00:00:01 |
|* 11 |      INDEX RANGE SCAN	       | IDX_T1 	   |	 1 |	   |	 2   (0)| 00:00:01 |
|  12 |     TABLE ACCESS BY INDEX ROWID| T1		   |	 1 |	10 |	 3   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("ITEM_1"="T3"."N2")
   5 - access("T3"."N1"="T4"."N1")
  11 - access("T1"."N1"="T2"."N1")

We can see the VW_BUSHY_D96D1B60 internal view at step 7, and this is definitely a "bushy" plan. The feature is still not enabled by default, but you no longer need to rewrite the query to get a proper plan.

Categories: DBA Blogs

Oracle Analytics Cloud: Product Overview

Rittman Mead Consulting - Tue, 2017-04-04 10:00

We at Rittman Mead are always helping our customers solve their problems, and many times we have heard them:

  • being unsure about the sizing of their server
  • being worried about the upfront cost of the licensing
  • having recurring nightmares about patching
  • wanting to try the cloud but being unable to find the right option to replace their on-premises system

This is their lucky day: Oracle officially launched Oracle Analytics Cloud (OAC), a new PaaS (Platform as a Service) providing a complete and elastic Business Intelligence platform in the cloud, customizable and managed by you but all on the Oracle Cloud!


Unless you have been on a remote island, you will have noticed that in recent years Oracle's main focus has been on the cloud. Several products have been launched covering a vast spectrum of functionality: Data Management, Application Development, Business Analytics and Security are only some of the areas covered by the Software/Platform/Infrastructure as a Service offering.


In the Business Analytics area, we at Rittman Mead started thinking a long time ago about how to host Oracle's on-premises BI (OBIEE) in the cloud and have worked closely with Oracle since the beta phase of their first PaaS product: BI Cloud Service (BICS). We have effectively got our hands on all the cloud products in the BA family, like Big Data Discovery (both on-premises and cloud), Data Visualization Cloud Service and Big Data Preparation Service.

Business Intelligence Cloud Products

Until a few weeks ago, Oracle's main Business Analytics cloud products were BI Cloud Service (BICS) and Data Visualization Cloud Service (DVCS). As mentioned in our blog, both tools initially aimed at departmental use-cases: the simplicity of the data model interface and the lack of admin configuration options stopped them from being a compelling story for hosting a full enterprise Business Intelligence solution.


New features like BICS Data Sync, Remote Data Connector and RPD lift-and-shift addressed almost all the limitations, but the lack of detailed admin/maintenance capabilities remained a stopper for moving complex environments to the cloud. Still, BICS and DVCS are perfect for their aim: business users analysing sets of data without needing to wait for IT to provision a server or to worry about upfront licensing costs.

Oracle Analytics Cloud

Oracle Analytics Cloud raises the high-water mark in every direction by providing a product that is:

  • Functionally complete: most of the tools, procedures and options provided on-premises are now available in OAC.
  • Combining all the offerings of BICS, DV, OBIEE and Essbase: OAC includes the features of Oracle's top BI products.
  • Tailored in licensing: the many options available (discussed in a later post) can be chosen depending on analytical needs, timeframe of service and required performance.
  • Easily scalable: do you want to expand your BI solution to double the number of users without losing performance? Just buy some more horsepower!
  • Fully accessible: SSH connections to the server make it easy to change settings as needed, and REST APIs and clients are provided for all lifecycle operations.
  • Customizable: settings, images, networking and VPN are all configurable.
  • Scriptable: operations like scaling, instance creation and deletion, start and stop can easily be scripted via the REST APIs.
  • Fully customer managed: Oracle provides the automation to back up and patch, but the customer decides when to run them.
What's The Difference?

So what's the difference between Oracle Analytics Cloud and the "old" DVCS and BICS? How is OAC going to change Oracle's BI offering in the cloud?

The big deal with OAC is control: the BICS/DVCS limiting factors around admin options and development are solved, providing a tool capable of hosting a full enterprise BI solution. Even though the platform is managed by Oracle, SSH access is provided, meaning that instance configurations can be changed. No more upfront server sizing decisions: the size of the instance is decided at creation time and can be changed later if demand changes.

The REST APIs enable scripting of the full lifecycle of the instance, providing a way to automate the enterprise BI workflow even in complex environments where concurrent development is needed. Patching and backups are no longer a problem with the automated processes provided.

Direct online RPD editing is available with the Admin Tool. The old BICS Data Modeler is still there for simple models, but the Admin Tool can be used for complex RPDs.


The front end is like the BICS and OBIEE 12c one; some new visualizations have been added to Visual Analyzer in line with the new additions to Data Visualization Desktop: parallel coordinates, chord, network and Sankey diagrams are now available.


A new console is now available in Visual Analyzer, allowing settings like Mail or Deliveries that previously were only accessible via the WebLogic Console, Enterprise Manager or config files.


Finally, Essbase is now available in the cloud too, with a new web interface!
To summarize: if you wanted to go to the cloud but were worried about missing options, Oracle Analytics Cloud now provides all you need to host a full enterprise BI solution.

In the next few days I'll be analysing various aspects of Oracle Analytics Cloud Suite, so keep in touch!

If you need assistance in checking whether Oracle Analytics Cloud suits your needs or in planning your migration to the cloud, don't hesitate to contact us.

Categories: BI & Warehousing

Webcast: Introducing Oracle Content and Experience Cloud

WebCenter Team - Tue, 2017-04-04 08:47
Oracle Live Webcast Introducing Oracle Content and Experience Cloud:

Drive Engagement by Unleashing the Power of Content
93% of marketers have siloed teams and technologies by channel. Join this webcast to learn about Oracle Content and Experience Cloud, a Digital Experience platform that drives omni-channel content management and delivers engaging experiences to your customers, partners, and employees. Hear how Oracle Content and Experience Cloud can help you:
  • Easily find, access, use, reuse and collaborate on content anytime, anywhere and on any device
  • Drive consistent, compliant and contextual experiences across multiple channels
  • Centralize content management across digital channels and enterprise applications
Empower your marketing teams to unleash the power of content.

Register today.


Sincerely,
Content and Experience Cloud Team
Webcast: Introducing Oracle Content and Experience Cloud
May 2, 2017
10:00 AM PDT / 01:00 PM EDT

Featured Speaker: David Le Strat
Senior Director, Product Management, Oracle
David Le Strat is Senior Director of Product Management for the Digital Experience and Content and Experience Cloud product portfolios at Oracle. In his role, David is responsible for product strategy, definition and delivery, and go-to-market.

Inline Queries vs. Stored Procedures

Tom Kyte - Tue, 2017-04-04 07:46
What are the disadvantages of using inline queries instead of stored procedures? What is the performance impact of inline queries vs. stored procedures? Thanks
Categories: DBA Blogs

Identity vs. Sequence

Tom Kyte - Tue, 2017-04-04 07:46
Hi, good day. We can have an alternative to a sequence as below. This is quite similar to the identity column in SQL Server. CREATE TABLE identity_test_tab ( id NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY, description VARCHAR2...
Categories: DBA Blogs

How I learned to appreciate MongoDB

Amis Blog - Tue, 2017-04-04 05:11

Last week at our company we organized a session about NoSQL in general and MongoDB in particular, as you can read here. The MongoDB part was presented by me, and I would like to take you with me on my journey to actually appreciating the NoSQL database world (and MongoDB in particular).

Coming from RDBMS I'm used to ACID. Business models reside within the database, data never gets lost, transactions are isolated, you've got your read consistency, strictly defined tables, foreign keys et cetera. MongoDB, on the other hand, I saw as an unstructured pile of data: no such thing as transactions, only 'eventually consistent' (that sounds like a leap of faith), no joins and hence no foreign keys… You get the picture.

I am a DBA. My major concern is to have all data available or at least recoverable, no matter what. But the world is changing. Developers more and more look at the database as a storage engine; business logic is programmed in the application. We DBAs try to teach them to use the database better, but is that really necessary? There's no law…

The world is changing fast and so are businesses. It’s not unusual to deploy a new release every night. If developers need to redesign the database every few days, then maybe the structure of data is not that important. If we collect the number of hits on our website, is it a disaster if out of 10,000 hits we occasionally miss 1?

It takes a lot of discussion and 'yes, but…' for this other view of data to finally settle in. At least, for me it did. What finally won me over was an online course at MongoDB University that sort of mitigated the pain. Because, once you let go of the ACID model, you gain a lot of flexibility in terms of database design and infrastructure design. Scaling out becomes a very easy operation, for instance. Resilience against hardware failure is a piece of cake. And due to the lack of RDBMS legacy, the engine can focus almost entirely on reading and writing data, which leads to lightning-fast performance.

In the next paragraphs I will show some examples of the resilience and general behaviour of MongoDB, loosely compared to Oracle. It is hands-on, so I will also get you started, as minimally as possible, with MongoDB in general.

I will not go into the way you read and write data. Only some actions will be shown that are needed for the examples. But in general:

db = database and can be compared to a schema in Oracle.

A db contains collections, which can be compared to tables.

A collection contains documents, which can be compared to rows.

Joins are not possible, so all data you need should be in the same collection.

Documents consist of key:value pairs, as many as you like within one document.

So, you can have the database ‘trades’ with collection ‘customers’ with documents like

{“name”:”Larry”,”company”:”Oracle”,”hobby”:”sailing”}

Getting started.

Go to the MongoDB site here and download the appropriate version. The entire hands-on can be run on your typical Windows or Linux laptop or a VirtualBox VM. The installation is very easy; no instructions needed from my side.

The software will be installed in C:\Program Files\MongoDB\Server\3.4\bin. The version can change over time, of course.

Add that to your path in your environment variables.

In linux, add the path where you unpacked the tar ball, followed by bin, to your $PATH.

Starting an instance is also very easy. Open a command box, create a directory \data\db1 and start the instance with

mongod --dbpath \data\db1

On Windows you should leave this box open; when you close it, the instance shuts down. It would be better to create a service, but for this demonstration this will do.

Stop the database by pressing ^C.

On Linux you can fork the process so you don't have to open a terminal for every instance:

mongod --dbpath /data/db1 --fork --logpath a.log

End it by killing the process.

From now on I will continue on Windows; Linux users can follow the instructions with minor adjustments, like / instead of \.

Also make sure to use a different logpath for each instance.

Resilience against hardware failure.

In Oracle we have 2 kinds of resilience against hardware failure: instance failure can be mitigated by RAC, storage failure by Data Guard. Besides, if a single instance crashes you can recover all data, provided you have a decent backup strategy.

MongoDB uses a different approach called replica sets. Each instance (or member as it’s called) has its own storage and can be replicated to another instance (or many) with its own storage too. Only one instance can read and write, that is the primary instance. The others only allow you to read data.

In production a replica set is a no-brainer: should a single, unreplicated instance fail, you can't recover all data like in Oracle, no matter how often you make backups.

All instances vote on who will be the primary. This can be influenced by setting the priority parameter. I will not go into that here but just demonstrate a simple replica set.

Open a command box and type:

mkdir \data\db1\r1
mkdir \data\db1\r2
mkdir \data\db1\r3
mongod --smallfiles --oplogSize 50 --port 27001 --dbpath \data\db1\r1 --replSet r

Leave it open and open a second Command box and type:

mongod --smallfiles --oplogSize 50 --port 27002 --dbpath \data\db1\r2 --replSet r

Leave it open and open a third Command box and type:

mongod --smallfiles --oplogSize 50 --port 27003 --dbpath \data\db1\r3 --replSet r

Open fourth command box This will be used to actually talk to the database using the mongo shell. We will then initiate the replica set.

mongo --port 27003
rs.initiate(
           { _id:'r',
             members:[
                     { _id:1, host:'localhost:27001' },
                     { _id:2, host:'localhost:27002', "arbiterOnly" : true },
                     { _id:3, host:'localhost:27003' }
                     ]
           }
)
rs.status()

I introduced a special member, the Arbiter. This is a member without data; it only helps to have an uneven number of members, which is necessary to always get a majority of votes when it comes to choosing the Primary member.

In the output you can see that we have 3 members: a Secondary on port 27001, an Arbiter on port 27002 and a Primary on port 27003. You can also see by the prompt that we are connected to the Primary.

We will now create a collection called ‘simple’ and insert some data. Also, the writeConcern phrase makes sure data is written to at least 2 members. If there are more members they will be ‘eventually consistent’, meaning that they will synchronize but not immediately.

db.simple.insert( { _id : 1 }, { writeConcern : { w : 2 } } )
db.simple.insert( { _id : 2 }, { writeConcern : { w : 2 } } )
db.simple.insert( { _id : 3 }, { writeConcern : { w : 2 } } )

Go to your secondary member and try to read the data. This involves giving yourself permission to read from the secondary, as I'll show:

exit
mongo --port 27001
r:SECONDARY> db.simple.find()
Error: error: { "$err" : "not master and slaveOk=false", "code" : 13435 }
r:SECONDARY> rs.slaveOk()
r:SECONDARY> db.simple.find()
{ "_id" : 1 }
{ "_id" : 2 }
{ "_id" : 3 }

This looks okay. Not featured here: if I stop the Secondary, add data on the Primary and restart the Secondary, it synchronizes as expected. Just one thing: the writeConcern for 2 members cannot be used while only 1 data-bearing member is up.

Now it becomes interesting. I’ll stop the Secondary, write some data on the Primary, stop the Primary and start the Secondary. Would the data written whilst the Secondary was down still be visible? If not, would it be recoverable?

r:SECONDARY> exit
bye

Go to your first box and stop the Secondary with ^C.

Go to the mongoshell box and connect to port 27003, the Primary and add some more data:

mongo --port 27003
MongoDB shell version: 3.0.14
connecting to: 127.0.0.1:27003/test
r:PRIMARY> db.simple.insert( { _id : 4 } )
WriteResult({ "nInserted" : 1 })
r:PRIMARY> db.simple.insert( { _id : 5 } )
WriteResult({ "nInserted" : 1 })
r:PRIMARY> db.simple.insert( { _id : 6 } )
WriteResult({ "nInserted" : 1 })
r:PRIMARY> db.simple.find()
{ "_id" : 1 }
{ "_id" : 2 }
{ "_id" : 3 }
{ "_id" : 4 }
{ "_id" : 5 }
{ "_id" : 6 }

r:PRIMARY> exit
bye

Now stop the primary in your 3rd box with ^C and restart the Secondary in your 1st box. Then go to the mongoshell box and connect to port 27001

mongo --port 27001
MongoDB shell version: 3.0.14
connecting to: 127.0.0.1:27001/test
r:PRIMARY> rs.status()
{
"set" : "r",
"date" : ISODate("2017-03-20T19:12:43.425Z"),
"myState" : 1,
"members" : [
{
"_id" : 1,
"name" : "localhost:27001",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 25,
"optime" : Timestamp(1490035617, 1),
"optimeDate" : ISODate("2017-03-20T18:46:57Z"),
"electionTime" : Timestamp(1490037141, 1),
"electionDate" : ISODate("2017-03-20T19:12:21Z"),
"configVersion" : 1,
"self" : true
},
{
"_id" : 2,
"name" : "localhost:27002",
"health" : 1,
"state" : 7,
"stateStr" : "ARBITER",
"uptime" : 24,
"lastHeartbeat" : ISODate("2017-03-20T19:12:43.354Z"),
"lastHeartbeatRecv" : ISODate("2017-03-20T19:12:43.167Z"),
"pingMs" : 0,
"configVersion" : 1
},
{
"_id" : 3,
"name" : "localhost:27003",
"health" : 0,
"state" : 8,
"stateStr" : "(not reachable/healthy)",
"uptime" : 0,
"optime" : Timestamp(0, 0),
"optimeDate" : ISODate("1970-01-01T00:00:00Z"),
"lastHeartbeat" : ISODate("2017-03-20T19:12:43.354Z"),
"lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"),
"configVersion" : -1
}
],
"ok" : 1
}
r:PRIMARY> db.simple.find()
{ "_id" : 1 }
{ "_id" : 2 }
{ "_id" : 3 }
r:PRIMARY>db.simple.insert( { _id : 7 } )
WriteResult({ "nInserted" : 1 })
r:PRIMARY> db.simple.find()
{ "_id" : 1 }
{ "_id" : 2 }
{ "_id" : 3 }
{ "_id" : 7 }
r:PRIMARY>

So, member 1 now has become the Primary, but we hit data loss: it never had a chance to synchronize and they do not share any storage to read from.

What would happen if we restart the 3rd member? After all, that one does have the lost data stored, somewhere.

Start up the 3rd member (in the 3rd box)

In the output you will see it transitions to Secondary and performs a rollback: the lost data is actually rolled back. And the good news: it is stored. Under its data directory \data\db1\r3 it created a directory called rollback, which contains a .bson file. This file can be examined and/or imported into the database, as I'll show.

Go to the 4th box and exit mongoshell. Then:

cd \data\db1\r3\rollback
C:\data\db1\r3\rollback>bsondump test.simple.2017-03-20T19-32-31.0.bson
{"_id":4.0}
{"_id":5.0}
{"_id":6.0}
2017-03-20T20:45:06.412+0100 3 objects found
C:\data\db1\r3\rollback>mongorestore --port 27001 --db test --collection simple test.simple.2017-03-20T19-32-31.0.bson
2017-03-20T20:47:59.880+0100 checking for collection data in test.simple.2017-03-20T19-32-31.0.bson
2017-03-20T20:47:59.886+0100 restoring test.simple from file test.simple.2017-03-20T19-32-31.0.bson
2017-03-20T20:48:00.463+0100 no indexes to restore
2017-03-20T20:48:00.463+0100 finished restoring test.simple (3 documents)
2017-03-20T20:48:00.463+0100 done

C:\data\db1\r3\rollback>mongo --port 27001
MongoDB shell version: 3.0.14
connecting to: 127.0.0.1:27001/test
r:PRIMARY> db.simple.find()
{ "_id" : 1 }
{ "_id" : 2 }
{ "_id" : 3 }
{ "_id" : 7 }
{ "_id" : 4 }
{ "_id" : 5 }
{ "_id" : 6 }
r:PRIMARY>

Okay, Oracle would have done everything by itself. But at what cost? It needs to maintain redo logs and archived redo logs. It only has 1 member to query, the Primary database. Yes, since 11g you can have a read-only standby database with Active Data Guard, but that's a licensed option. It's robust, nevertheless. I only want to say that the alternative is different but not all bad. Not at all.


Scaling out, also known as sharding

In the previous paragraph we covered HA and recoverability. Now let's have a look at scaling out, best compared to RAC.

RAC enables you to add CPU power and memory to a database. It also enables you to distribute different kinds of workloads over different machines, for instance reporting on one node and OLTP on another node. That distribution can be compared to smart use of the replica sets explained above.

Adding CPU power and memory is something else. MongoDB heavily relies on memory to perform. And they made it very easy for you to add more nodes to your cluster. This is done by sharding.

Sharding can best be described as range-based partitioning. Sharding is done on a per-collection basis. A shard cluster consists of 1 (!) or more nodes that automatically partition a collection and distribute it evenly over all cluster members.

Let’s have a closer look.

First of all, each mongod needs to know it is part of a shard cluster. That is accomplished with the --shardsvr startup parameter. It is also very wise to explicitly declare the port number with the --port parameter. Finally, it needs its own storage, the --dbpath parameter. Example:

mongod  --shardsvr --port 27020 --dbpath \data\db1\s1
mongod  --shardsvr --port 27021 --dbpath \data\db1\s2

Next we need a config server. This is also a mongod, but instead of user data it has a special database that contains all the information there is to know about the cluster: especially which members are known and the relation between the members and the partitions, called shards in Mongo language.

As of Mongo 3.4, config servers need to be in a replica set instead of standalone. For demonstration or development, it is allowed to have a set of only 1 member.

In a production environment you typically create 3 config servers; for now we'll create just one:

mongod --configsvr --replSet c --dbpath \data\db1\conf --port 27019

Start the mongo shell in another command box, connected to the config server (mongo --port 27019), so we can configure the replica set "c":

rs.initiate(
{
_id: "c",
configsvr: true,
members: [
{ _id : 0, host : "localhost:27019" }
]
}
)

Finally, we need at least one mongos, which is a routing service and serves as the front end to which the users connect. The mongos has no persistent data; it reads the config server and distributes client requests over the shards.

It needs to know where to find the config server, so we tell it with the --configdb parameter in the form replicaSetName/hostname:port:

mongos --configdb c/localhost:27019

We can now open a mongo shell. It will by default connect to port 27017 and, lo and behold, the mongos automatically listens on port 27017. Since we are all running on the same host, connecting is very easy.

In the shell we will add shard servers to the cluster. Next we will enable sharding for a specific database.

mongo
mongos> sh.addShard( "localhost:27020")
mongos> sh.addShard( "localhost:27021")
mongos> sh.enableSharding("test")

The only thing we have done is enable sharding for a database, but nothing is sharded yet. For that to happen we need to decide which collection(s) will be sharded and on what key, and the shard key needs an index. Once the collection is sharded (see the sketch below), nothing more needs to be done: MongoDB distributes the data automatically.
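A hedged sketch of that last step, not part of the original walkthrough (the collection name is illustrative): shard a collection in the test database on _id, which already carries the mandatory index by default, and check the result.

mongos> sh.shardCollection("test.simple", { _id : 1 })
mongos> sh.status()

sh.status() then shows the chunks of the collection being distributed over both shards.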

So what did I learn?
Sure, you lack the robustness of an RDBMS. Sure, you cannot join, and therefore you store way more bytes than usual. But it's fast, it's easy and it serves many purposes. And last but not least, it takes some serious out-of-the-box thinking for a DBA to actually appreciate this new world: you have to let go of some fundamental principles on which your world was based for the last ten, twenty or more years.

And finally a disclaimer: these examples have been oversimplified. In the real world you'd use many hosts. You'd use 3 config servers, many mongos instances and, of course, a replicated shard cluster.
Apart from that, there are many ways to make the behaviour more sophisticated and robust. Check out the official documentation; it's quite good in my opinion and invites you to many experiments.

The post How I learned to appreciate MongoDB appeared first on AMIS Oracle and Java Blog.

OUD – Oracle Unified Directory 11.1.2.3 Tuning, It is not always the servers fault

Yann Neuhaus - Tue, 2017-04-04 03:41

The default configuration which is shipped with OUD is not meant to be ready for enterprise usage. The default settings of OUD are targeted at evaluators and developers who run equipment with limited resources, so it is quite likely that you will run into performance issues if you don't change anything before going into production. OUD performance depends on a lot of things, like:

  • Network configuration/routing/firewalls/bonding
  • OUD version and configuration (Replication, TLS)
  • Java version and Java runtime memory configuration
  • DNS Lookup times
  • Name Service Cache Daemon
  • And many more …

However, it is not always the server's fault. Sometimes the client is causing the issue. But how do I know if it is the client or the server? In the following example it takes about 10 seconds to resolve the connect string DBIT122_LDAP. That is enormous, far from acceptable. Where is tnsping spending so much time?

oracle@dbidg02:/u01/app/oracle/network/admin/ [DBIT122] time tnsping DBIT122_LDAP

TNS Ping Utility for Linux: Version 12.2.0.1.0 - Production on 04-APR-2017 08:43:06

Copyright (c) 1997, 2016, Oracle.  All rights reserved.

Used parameter files:
/u01/app/oracle/network/admin/sqlnet.ora

Used LDAP adapter to resolve the alias
Attempting to contact (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg01)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=DBIT122)))
OK (10 msec)

real    0m10.177s
user    0m0.017s
sys     0m0.018s

To rule out that it is the server's fault, just check the OUD access log, where you can see every LDAP request against the OUD.

[dbafmw@dbidg01 logs]$ tail -50f /u01/app/oracle/product/Middleware/11.1.2.3/asinst_1/OUD/logs/access
...
...
[04/Apr/2017:08:43:39 +0200] CONNECT conn=5 from=192.168.56.202:30826 to=192.168.56.201:1389 protocol=LDAP
[04/Apr/2017:08:43:39 +0200] BIND REQ conn=5 op=0 msgID=1 type=SIMPLE dn="" version=3
[04/Apr/2017:08:43:39 +0200] BIND RES conn=5 op=0 msgID=1 result=0 authDN="" etime=0
[04/Apr/2017:08:43:39 +0200] SEARCH REQ conn=5 op=1 msgID=2 base="cn=DBIT122_LDAP,cn=OracleContext,dc=dbi,dc=com" scope=base filter="(objectclass=*)" attrs="objectclass,orclNetDescString,orclNetDescName,orclVersion"
[04/Apr/2017:08:43:39 +0200] SEARCH RES conn=5 op=1 msgID=2 result=0 nentries=1 etime=2
[04/Apr/2017:08:43:39 +0200] UNBIND REQ conn=5 op=2 msgID=3
[04/Apr/2017:08:43:39 +0200] DISCONNECT conn=5 reason="Client Disconnect"
...
...

The important entry to look for is the etime after the search request. The etime field is the elapsed time in milliseconds which the server spent processing the request. In the above case it is 2 milliseconds, so quite fast. If you saw large elapsed times here, that would be a good indicator of issues on the server side.

Now that we know the server is OK, let's move to the client side. The first thing I try is to see how fast ldapsearch is. I am using the ldapsearch which comes with 12cR2, and I will use the same search criteria which tnsping uses to look up the connect string. The ldapsearch syntax from the OUD binaries differs a little bit from the ldapsearch syntax shipped with 12cR2. Why should Oracle make them the same? It would be too easy. ;-) OK, let's check the ldapsearch.

oracle@dbidg02:/u01/app/oracle/network/admin/ [DBIT122] time ldapsearch -v -h dbidg01 -p 1389 -b "cn=DBIT122_LDAP,cn=OracleContext,dc=dbi,dc=com" \
-s base "(objectclass=*)" "objectclass,orclNetDescString,orclNetDescName,orclVersion"

ldap_open( dbidg01, 1389 )
filter pattern: (objectclass=*)
returning: objectclass,orclNetDescString,orclNetDescName,orclVersion
filter is: ((objectclass=*))
cn=dbit122_ldap,cn=OracleContext,dc=dbi,dc=com
1 matches

real    0m0.020s
user    0m0.005s
sys     0m0.004s

I don't see any issues here. My ldapsearch came back in the blink of an eye. So, where are the other 10 seconds? We need more information. We can either use strace or activate tracing on the client side. Something less well known in the Oracle world is tnsping tracing, which can be activated too. My tnsping is slow, so I want only tnsping to be traced and nothing else. To do so, we need to specify two parameters in the sqlnet.ora file: TNSPING.TRACE_DIRECTORY and TNSPING.TRACE_LEVEL. The tnsping trace level can have 4 different values, like sqlnet tracing:

  • 0 or OFF – No Trace output
  • 4 or USER – User trace information
  • 10 or ADMIN – Administration trace information
  • 16 or SUPPORT – Worldwide Customer Support trace information

Because I want to have the full trace output, I go for level 16, which is the support tracing.

oracle@dbidg02:/u01/app/oracle/network/admin/ [DBIT122] cat sqlnet.ora | grep TNSPING

TNSPING.TRACE_DIRECTORY = /u01/app/oracle/network/trc
TNSPING.TRACE_LEVEL = SUPPORT

Ok. Let’s do it again and see the outcome.

oracle@dbidg02:/u01/app/oracle/network/admin/ [DBIT122] time tnsping DBIT122_LDAP

TNS Ping Utility for Linux: Version 12.2.0.1.0 - Production on 04-APR-2017 09:44:44

Copyright (c) 1997, 2016, Oracle.  All rights reserved.

Used parameter files:
/u01/app/oracle/network/admin/sqlnet.ora

Used LDAP adapter to resolve the alias
Attempting to contact (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg01)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=DBIT122)))
OK (10 msec)

real    0m10.191s
user    0m0.013s
sys     0m0.016s
oracle@dbidg02:/u01/app/oracle/network/admin/ [DBIT122]

If we look at the trace file, we see that Oracle found 3 directory path entries in the following order: TNSNAMES, EZCONNECT and LDAP.

[04-APR-2017 09:44:44:569] nnfgsrsp: Obtaining path parameter from names.directory_path or native_names.directory_path
[04-APR-2017 09:44:44:569] nnfgsrdp: entry
[04-APR-2017 09:44:44:569] nnfgsrdp: Setting path:
[04-APR-2017 09:44:44:569] nnfgsrdp: checking element TNSNAMES
[04-APR-2017 09:44:44:569] nnfgsrdp: checking element EZCONNECT
[04-APR-2017 09:44:44:569] nnfgsrdp: checking element LDAP

Switching to the TNSNAMES adapter is very fast, and Oracle sees immediately that the query is unsuccessful, so it switches to the next adapter.

[04-APR-2017 09:44:44:569] nnfgrne: Switching to TNSNAMES adapter
[04-APR-2017 09:44:44:569] nnftboot: entry
[04-APR-2017 09:44:44:569] nlpaxini: entry
[04-APR-2017 09:44:44:569] nlpaxini: exit
[04-APR-2017 09:44:44:569] nnftmlf_make_local_addrfile: entry
[04-APR-2017 09:44:44:569] nnftmlf_make_local_addrfile: construction of local names file failed
[04-APR-2017 09:44:44:569] nnftmlf_make_local_addrfile: exit
[04-APR-2017 09:44:44:569] nlpaxini: entry
[04-APR-2017 09:44:44:569] nlpaxini: exit
[04-APR-2017 09:44:44:569] nnftmlf_make_system_addrfile: entry
[04-APR-2017 09:44:44:569] nnftmlf_make_system_addrfile: system names file is /u01/app/oracle/network/admin/tnsnames.ora
[04-APR-2017 09:44:44:569] nnftmlf_make_system_addrfile: exit
[04-APR-2017 09:44:44:569] nnftboot: exit
[04-APR-2017 09:44:44:569] nnftrne: entry
[04-APR-2017 09:44:44:569] nnftrne: Original name: DBIT122_LDAP
[04-APR-2017 09:44:44:569] nnfttran: entry
[04-APR-2017 09:44:44:569] nnfttran: Error querying DBIT122_LDAP of attribute A.SMD errcode 408
[04-APR-2017 09:44:44:569] nnfgrne: Query unsuccessful, skipping to next adapter

Now, Oracle is switching to the EZCONNECT adapter.

[04-APR-2017 09:44:44:569] nnfgrne: Switching to EZCONNECT adapter
[04-APR-2017 09:44:44:569] nnfhboot: entry
[04-APR-2017 09:44:44:569] nnfhboot: exit
[04-APR-2017 09:44:44:569] snlinGetAddrInfo: entry
[04-APR-2017 09:44:54:664] snlinGetAddrInfo: getaddrinfo() failed with error -2
[04-APR-2017 09:44:54:664] snlinGetAddrInfo: exit
[04-APR-2017 09:44:54:665] snlinGetAddrInfo: entry
[04-APR-2017 09:44:54:727] snlinGetAddrInfo: getaddrinfo() failed with error -2
[04-APR-2017 09:44:54:727] snlinGetAddrInfo: exit
[04-APR-2017 09:44:54:727] nnfhrne: Error forming address for DBIT122_LDAP, errcode 406
[04-APR-2017 09:44:54:727] nnfgrne: Query unsuccessful, skipping to next adapter

Ok. Here we go. Between "snlinGetAddrInfo: entry" and "snlinGetAddrInfo: getaddrinfo() failed with error -2", 10 seconds have gone by. Oracle thinks that DBIT122_LDAP is an easy connect string and tries to resolve it as a host name, which fails.

So I need to reorder the entries in the directory path in the sqlnet.ora file to NAMES.DIRECTORY_PATH = (TNSNAMES, LDAP, EZCONNECT), as in the snippet below. After I have done that, tnsping comes back successfully and very fast.
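For reference, a minimal sketch of the relevant sqlnet.ora entry after the change (the rest of the file stays untouched):

NAMES.DIRECTORY_PATH = (TNSNAMES, LDAP, EZCONNECT)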

oracle@dbidg02:/u01/app/oracle/network/admin/ [DBIT122] time tnsping DBIT122_LDAP

TNS Ping Utility for Linux: Version 12.2.0.1.0 - Production on 04-APR-2017 10:25:39

Copyright (c) 1997, 2016, Oracle.  All rights reserved.

Used parameter files:
/u01/app/oracle/network/admin/sqlnet.ora

Used LDAP adapter to resolve the alias
Attempting to contact (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg01)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=DBIT122)))
OK (0 msec)

real    0m0.018s
user    0m0.007s
sys     0m0.006s
Conclusion

It is not always the OUD server's fault when you hit performance issues. It might be the client as well, and it can have a severe impact.

 

Cet article OUD – Oracle Unified Directory 11.1.2.3 Tuning, It is not always the servers fault est apparu en premier sur Blog dbi services.

Can I do it with PostgreSQL? – 14 – optimizer hints

Yann Neuhaus - Tue, 2017-04-04 01:23

This is a question that comes up quite often: how can I use optimizer hints in PostgreSQL as I can do in Oracle? Well, you can't, and the reasons are these:

  • Poor application code maintainability: hints in queries require massive refactoring.
  • Interference with upgrades: today’s helpful hints become anti-performance after an upgrade.
  • Encouraging bad DBA habits: slap a hint on instead of figuring out the real issue.
  • Does not scale with data size: the hint that’s right when a table is small is likely to be wrong when it gets larger.
  • Failure to actually improve query performance: most of the time, the optimizer is actually right.
  • Interfering with improving the query planner: people who use hints seldom report the query problem to the project.


But this does not mean that you can't influence the optimizer (or "planner" in PostgreSQL wording); it just doesn't work in the same way. Let's have a look.

One of the reasons that the planner does not choose an index over a sequential scan is that the parameter effective_cache_size is not set properly. To understand what it does you have to know that PostgreSQL works together with the operating system file cache/disk cache very well. It is not required, as it is in Oracle, to give most of the available memory of the server to the database. Usually you start with 25% of the total available memory and give that to PostgreSQL by setting the parameter shared_buffers to that value. When pages fall out of that region, it is still likely that they are available in the disk cache and can be retrieved from there without going down to disk. And this is what effective_cache_size is about: setting this parameter does not consume more memory but tells PostgreSQL how big the total cache of the system really is, so shared_buffers plus disk cache. This gets taken into consideration by the planner. A good starting point is 50 to 75% of the available memory. Let's do a quick test to show how this behaves. Let's generate some data:

postgres=# \! cat a.sql
drop table if exists t1;
create table t1 ( a int );
with generator as 
 ( select a.*
     from generate_series ( 1, 5000000 ) a
    order by random()
 )
insert into t1 ( a ) 
     select a
       from generator;
create index i1 on t1(a);
analyze verbose t1;
select * from pg_size_pretty ( pg_relation_size ('t1' ));
select * from pg_size_pretty ( pg_total_relation_size('t1'));
postgres=# \i a.sql
DROP TABLE
CREATE TABLE
INSERT 0 5000000
CREATE INDEX
psql:a.sql:12: INFO:  analyzing "public.t1"
psql:a.sql:12: INFO:  "t1": scanned 22124 of 22124 pages, containing 5000000 live rows and 0 dead rows; 30000 rows in sample, 5000000 estimated total rows
ANALYZE
 pg_size_pretty 
----------------
 173 MB
(1 row)
 pg_size_pretty 
----------------
 280 MB
(1 row)
postgres=# show shared_buffers ;
 shared_buffers 
----------------
 128MB
(1 row)

The table without the index is big enough not to fit into shared_buffers (173 MB) and is even bigger, of course, including the index (280 MB). When we set effective_cache_size to a very low value, we get costs of 40.55 for the statement below (almost no disk cache assumed):

postgres=# SET effective_cache_size TO '1 MB';
SET
postgres=# explain SELECT * FROM t1 ORDER BY  a limit 10;
                                     QUERY PLAN                                      
-------------------------------------------------------------------------------------
 Limit  (cost=0.43..40.55 rows=10 width=4)
   ->  Index Only Scan using i1 on t1  (cost=0.43..20057243.41 rows=5000000 width=4)
(2 rows)

Setting this to a more realistic value decreases the costs because it is expected to find the index in the disk cache:

postgres=# SET effective_cache_size TO '5 GB';
SET
postgres=# explain SELECT * FROM t1 ORDER BY  a limit 10;
                                    QUERY PLAN                                     
-----------------------------------------------------------------------------------
 Limit  (cost=0.43..0.87 rows=10 width=4)
   ->  Index Only Scan using i1 on t1  (cost=0.43..218347.46 rows=5000000 width=4)
(2 rows)

This is the first “hint” you can set to influence the optimizer/planner. But there are many others. What PostgreSQL allows you to do is to enable or disable features of the planner:

postgres=# select name from pg_settings where name like 'enable%';
         name         
----------------------
 enable_bitmapscan
 enable_hashagg
 enable_hashjoin
 enable_indexonlyscan
 enable_indexscan
 enable_material
 enable_mergejoin
 enable_nestloop
 enable_seqscan
 enable_sort
 enable_tidscan

Using the same data from above we could disable the index only scan:

postgres=# set enable_indexonlyscan=false;
SET
postgres=# explain (analyze,buffers) SELECT * FROM t1 ORDER BY  a limit 10;
                                                       QUERY PLAN                                                        
-------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.43..0.87 rows=10 width=4) (actual time=0.019..0.058 rows=10 loops=1)
   Buffers: shared hit=13
   ->  Index Scan using i1 on t1  (cost=0.43..218347.46 rows=5000000 width=4) (actual time=0.017..0.036 rows=10 loops=1)
         Buffers: shared hit=13
 Planning time: 0.057 ms
 Execution time: 0.084 ms
(6 rows)

postgres=# set enable_indexonlyscan=true;
SET
postgres=# explain (analyze,buffers) SELECT * FROM t1 ORDER BY  a limit 10;
                                                          QUERY PLAN                                                          
------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.43..0.87 rows=10 width=4) (actual time=0.025..0.072 rows=10 loops=1)
   Buffers: shared hit=13
   ->  Index Only Scan using i1 on t1  (cost=0.43..218347.46 rows=5000000 width=4) (actual time=0.023..0.048 rows=10 loops=1)
         Heap Fetches: 10
         Buffers: shared hit=13
 Planning time: 0.068 ms
 Execution time: 0.105 ms
(7 rows)

But the documentation clearly states: “If the default plan chosen by the optimizer for a particular query is not optimal, a temporary solution is to use one of these configuration parameters to force the optimizer to choose a different plan”. For testing and troubleshooting this can be handy.

Another way to influence the optimizer/planner is to set the planner cost constants:

 postgres=# select name from pg_settings where name like '%cost%' and name not like '%vacuum%';
         name         
----------------------
 cpu_index_tuple_cost
 cpu_operator_cost
 cpu_tuple_cost
 parallel_setup_cost
 parallel_tuple_cost
 random_page_cost
 seq_page_cost"
(7 rows)

What they mean is pretty well documented and how you need to set them (if you need to change them at all) depends on your hardware and application. There are others as well, such as the *collapse_limit* parameters and the parameters for the Genetic Query Optimizer.
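As a hedged illustration (the values are examples, not recommendations): on fast storage where random reads cost almost the same as sequential reads, lowering random_page_cost makes index access look cheaper to the planner.

set random_page_cost = 1.1;
explain SELECT * FROM t1 ORDER BY a limit 10;
-- or persist it for the whole cluster and reload the configuration:
alter system set random_page_cost = 1.1;
select pg_reload_conf();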

Conclusion: there are several ways you can influence the optimizer/planner in PostgreSQL; it is just not by using hints.

 

Cet article Can I do it with PostgreSQL? – 14 – optimizer hints est apparu en premier sur Blog dbi services.

Pivotal Cloud Foundry Cloud Service Brokers for AWS, Azure and GCP

Pas Apicella - Tue, 2017-04-04 00:10
Pivotal Cloud Foundry (PCF) has various cloud service brokers for all the public clouds we support, which include AWS, Azure and GCP. You can download and install those service brokers on premises or off premises, giving you the capability to use cloud services where it makes sense for your on-premises or off-premises cloud-native applications.

https://network.pivotal.io/

The three cloud service brokers are as follows:





In the example below we have a PCF installation running on vSphere, and it has the AWS service broker tile installed, as shown by the Ops Manager UI.


Once installed, this PCF instance can then provision AWS services, and you can do that in one of two ways.

1. Using Apps Manager UI as shown below


2. Use the CF CLI tool, invoking "cf marketplace" to list the services and then "cf create-service" to actually create an instance of the service, as in the sketch below.
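A hedged sketch of that flow (the service name "rds-mysql", the plan "basic" and the instance/app names are illustrative only; the actual names depend on the broker release):

cf marketplace
cf create-service rds-mysql basic my-aws-db
cf bind-service my-app my-aws-db
cf restage my-app

The bind step is what populates VCAP_SERVICES for the application, as described next.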



Once a service is provisioned within a space in PCF, you can then bind it to applications and consume it as you normally would by reading the VCAP_SERVICES environment variable, essentially accessing AWS services from your on-premises installation of PCF in the example above.

More Information

GCP service broker:
https://network.pivotal.io/products/gcp-service-broker

AWS service broker:
https://network.pivotal.io/products/pcf-service-broker-for-aws

Azure service broker:
https://network.pivotal.io/products/microsoft-azure-service-broker


Categories: Fusion Middleware
