Feed aggregator

Literals with two single quotes unexpectedly working

Tom Kyte - Wed, 2017-04-05 02:06
Hi Tom, after many years using Oracle, I fortuitously found an amazing statement, with strange literals accepted in SQL and PL/SQL, although their syntax is incorrect. The test case is in the LiveSQL link. I tested it on the 10.2.0.4 and 12.1.0.2 versions...
Categories: DBA Blogs

Specify values that are allowed in comma delimited string using regex

Tom Kyte - Wed, 2017-04-05 02:06
Hello, I want to know if it is possible to specify what kind of values are allowed when I insert a comma-separated string, e.g. the values that are allowed: subcat1,subcat2,subcat3,subcat4. I have the table project with the column subcategories and ...
Categories: DBA Blogs

oracle interval partitioning 11.2.0.4

Tom Kyte - Wed, 2017-04-05 02:06
Hi, I have 4 tables in 4 schemas with monthly interval partitioning: Schemas: C, D, F, G Tables: T1, T2, T3, T4 So, I have these 4 tables in each schema (C, D, F, G). On April 1st, Oracle failed to create partitions (April) for the same table (tab...
Categories: DBA Blogs

How to Perform a Security Audit of an Oracle Database Training in Athens, Greece

Pete Finnigan - Wed, 2017-04-05 02:06
I will be teaching my two days class How to Perform a Security Audit of an Oracle Database in Athens, Greece on May 16th and 17th 2017 organised by Oracle University. This is a great class that helps you understand....[Read More]

Posted by Pete On 04/04/17 At 02:45 PM

Categories: Security Blogs

Google Big Query and Oracle Smart Scan

Pakistan's First Oracle Blog - Tue, 2017-04-04 23:26
Marveling at technology is my pastime, and lately there are two technologies which truly have made me say 'Simply wow.' One is Google's Big Query and the other is Oracle's Exadata Smart Scan.

I have been managing data in different databases for long enough to appreciate how critical it is for the client to get results out of their data as fast as possible. In the end, it's all about returning results after issuing a query or clicking a button.

End users and developers don't really care how many terabytes of data are there. DBAs and data architects might love to boast about the humongous volumes of data they store and support, but there is nothing to write home about if that data cannot be retrieved as quickly as possible.

When I first migrated a 9TB database to Oracle Exadata a few years back and ran a heavy report for the first time, it returned results in a jiffy and my jaw dropped. This report used to take at least 70 minutes without Smart Scan. I had to bring in the developer to double-check whether the results were correct. Oracle's Exadata Smart Scan is phenomenal.

I had a similar jaw-dropping experience yesterday when I saw Google Cloud Platform's Big Query in action during an Onboard session in Sydney. A SQL query with a regex was run over a multi-terabyte dataset with lots of sorting, and it returned the results from the cloud in a few seconds. The best thing about Big Query is that all-familiar SQL is used and nothing fancy is needed. You get your petabytes of data warehouse in Google's cloud and then use your SQL to analyze that dataset. The sweet part is the agility and transparency with which that data is returned.
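
To give an idea of what such a query looks like, here is a minimal sketch of a regex-plus-aggregation query against one of Google's public sample datasets; the dataset and column names are illustrative assumptions, not the query from the session:

SELECT repo_name, COUNT(*) AS java_files
FROM `bigquery-public-data.github_repos.sample_files`   -- assumed public sample dataset
WHERE REGEXP_CONTAINS(path, r'\.java$')                  -- standard SQL regex filter
GROUP BY repo_name
ORDER BY java_files DESC
LIMIT 10;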

Simply beautiful.
Categories: DBA Blogs

Bushy Join Trees in Oracle 12.2

Pythian Group - Tue, 2017-04-04 10:05

There are multiple optimizer features introduced in the Oracle Database 12.2 Release. Bushy Join Trees is the one that I’ll be demonstrating in this post.

First, I’ll create four tables with two indexes:

create table t1 as select rownum n1, rownum n2 from dual connect by  level <= 1000000;
create table t2 as select rownum n1, rownum n2 from dual connect by  level <= 100;
create table t3 as select rownum n1, rownum n2 from dual connect by  level <= 1000000;
create table t4 as select rownum n1, rownum n2 from dual connect by  level <= 100;

create index idx_t1 on t1(n1);
create index idx_t3 on t3(n1);

Now, I’ll run the next query:

select * from t1, t2, t3, t4 where t1.n1 = t2.n1 and t3.n1 = t4.n1 and t1.n2=t3.n2;

The most efficient "bushy" execution plan for this query looks like this:

    -----HJ----- 
    |           |
    |           |
---NL---    ---NL---
|      |    |       |
T1     T2   T3      T4

This plan joins the T1 and T2 tables by Nested Loop using the idx_t1 index, joins the T3 and T4 tables by Nested Loop using the idx_t3 index, and finally joins the results of the previous joins by a Hash Join.
But Oracle has never been able to generate such an execution plan automatically. You had to rewrite the query with subqueries and a bunch of hints in order to force this kind of execution plan.
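For illustration, a manual rewrite of that kind might have looked roughly like the following sketch; the NO_MERGE/USE_NL/USE_HASH hints and the inline-view structure are only indicative, not a tested recipe:

select /*+ use_hash(v12 v34) */ *
  from (select /*+ no_merge use_nl(t2 t1) */ t1.n1 t1_n1, t1.n2 t1_n2, t2.n1 t2_n1
          from t1, t2 where t1.n1 = t2.n1) v12,
       (select /*+ no_merge use_nl(t4 t3) */ t3.n1 t3_n1, t3.n2 t3_n2, t4.n1 t4_n1
          from t3, t4 where t3.n1 = t4.n1) v34
 where v12.t1_n2 = v34.t3_n2;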

The following example shows the typical execution plan that Oracle can generate:

Execution Plan
----------------------------------------------------------
Plan hash value: 1007837908

-----------------------------------------------------------------------------------------
| Id  | Operation		       | Name	| Rows	| Bytes | Cost (%CPU)| Time	|
-----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT	       |	|     1 |    32 |   888   (1)| 00:00:01 |
|*  1 |  HASH JOIN		       |	|     1 |    32 |   888   (1)| 00:00:01 |
|*  2 |   HASH JOIN		       |	|   100 |  2600 |   885   (1)| 00:00:01 |
|   3 |    NESTED LOOPS 	       |	|   100 |  1600 |   303   (0)| 00:00:01 |
|   4 |     NESTED LOOPS	       |	|   100 |  1600 |   303   (0)| 00:00:01 |
|   5 |      TABLE ACCESS FULL	       | T2	|   100 |   600 |     3   (0)| 00:00:01 |
|*  6 |      INDEX RANGE SCAN	       | IDX_T1 |     1 |	|     2   (0)| 00:00:01 |
|   7 |     TABLE ACCESS BY INDEX ROWID| T1	|     1 |    10 |     3   (0)| 00:00:01 |
|   8 |    TABLE ACCESS FULL	       | T3	|  1000K|  9765K|   579   (1)| 00:00:01 |
|   9 |   TABLE ACCESS FULL	       | T4	|   100 |   600 |     3   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("T3"."N1"="T4"."N1")
   2 - access("T1"."N2"="T3"."N2")
   6 - access("T1"."N1"="T2"."N1")

We can see a full scan of the T3 table, and T3 can be significantly large.

Oracle 12.2 has introduced a new BUSHY_JOIN hint and a bunch of hidden “_optimizer_bushy” parameters:
_optimizer_bushy_join
_optimizer_bushy_fact_min_size
_optimizer_bushy_fact_dim_ratio
_optimizer_bushy_cost_factor.

The _optimizer_bushy_join parameter is ‘off’ by default, so you have to either set it to ‘on’ or use the BUSHY_JOIN hint.
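
For completeness, enabling it at session level would look roughly like this; as with any hidden parameter, this is only a sketch for testing and should not be set without guidance from Oracle Support:

alter session set "_optimizer_bushy_join" = 'on';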

Let’s try with a hint:

select   /*+ qb_name(main) BUSHY_JOIN(@"MAIN" ( "T1"@"MAIN"   "T2"@"MAIN" )) */
 * from t1, t2, t3, t4 where t1.n1 = t2.n1 and t3.n1 = t4.n1 and t1.n2=t3.n2;

100 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 1929967733

----------------------------------------------------------------------------------------------------
| Id  | Operation		       | Name		   | Rows  | Bytes | Cost (%CPU)| Time	   |
----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT	       |		   |   100 |  6800 |   606   (0)| 00:00:01 |
|*  1 |  HASH JOIN		       |		   |   100 |  6800 |   606   (0)| 00:00:01 |
|   2 |   NESTED LOOPS		       |		   |   100 |  1600 |   303   (0)| 00:00:01 |
|   3 |    NESTED LOOPS 	       |		   |   100 |  1600 |   303   (0)| 00:00:01 |
|   4 |     TABLE ACCESS FULL	       | T4		   |   100 |   600 |	 3   (0)| 00:00:01 |
|*  5 |     INDEX RANGE SCAN	       | IDX_T3 	   |	 1 |	   |	 2   (0)| 00:00:01 |
|   6 |    TABLE ACCESS BY INDEX ROWID | T3		   |	 1 |	10 |	 3   (0)| 00:00:01 |
|   7 |   VIEW			       | VW_BUSHY_D96D1B60 |   100 |  5200 |   303   (0)| 00:00:01 |
|   8 |    NESTED LOOPS 	       |		   |   100 |  1600 |   303   (0)| 00:00:01 |
|   9 |     NESTED LOOPS	       |		   |   100 |  1600 |   303   (0)| 00:00:01 |
|  10 |      TABLE ACCESS FULL	       | T2		   |   100 |   600 |	 3   (0)| 00:00:01 |
|* 11 |      INDEX RANGE SCAN	       | IDX_T1 	   |	 1 |	   |	 2   (0)| 00:00:01 |
|  12 |     TABLE ACCESS BY INDEX ROWID| T1		   |	 1 |	10 |	 3   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("ITEM_1"="T3"."N2")
   5 - access("T3"."N1"="T4"."N1")
  11 - access("T1"."N1"="T2"."N1")

We can see the internal view VW_BUSHY_D96D1B60 at step 7, and this is definitely a “bushy” plan. The feature is still not enabled by default, but you no longer need to rewrite the query to get a proper plan.

Categories: DBA Blogs

Oracle Analytics Cloud: Product Overview

Rittman Mead Consulting - Tue, 2017-04-04 10:00

We at Rittman Mead are always helping our customers solve their problems, and many times we have heard them:

  • being unsure about the sizing of their server
  • being worried about the upfront cost of the licensing
  • having recurring nightmares about patching
  • wanting to try the cloud but unable to find the right option to replace their on-premises system

This is their lucky day: Oracle officially launched Oracle Analytics Cloud (OAC), a new PaaS (Platform as a Service) providing a complete and elastic Business Intelligence platform in the cloud, customizable and managed by you but all on the Oracle Cloud!

Unless you have been living on a remote island, you will have noticed that in recent years Oracle's main focus has been the Cloud. Several products have been launched covering a vast spectrum of functionalities: Data Management, Application Development, Business Analytics and Security are only some of the areas covered by the Software/Platform/Infrastructure as a Service offering.

In the Business Analytics area, we at Rittman Mead started thinking a long time ago about how to host Oracle's on-premises BI (OBIEE) in the Cloud, and we have worked closely with Oracle since the beta phase of their first PaaS product: BI Cloud Service (BICS). We have effectively put our hands on all the cloud products in the BA family, like Big Data Discovery (both on-premises and cloud), Data Visualization Cloud Service and Big Data Preparation Service.

Business Intelligence Cloud Products

Until a few weeks ago Oracle's main Business Analytics cloud products were BI Cloud Service (BICS) and Data Visualization Cloud Service (DVCS). As mentioned in our blog, both tools initially aimed at departmental use-cases: the simplicity of the data model interface and the lack of admin configuration options stopped them from being a compelling story for hosting a full enterprise Business Intelligence solution.

New features like BICS Data Sync, Remote Data Connector and RPD lift-and-shift addressed almost all the limitations, but the lack of detailed admin/maintenance capabilities remained a stopper for moving complex environments to the cloud. Still, BICS and DVCS are perfect for their aim: business users analysing sets of data without needing to wait for IT to provision a server or to worry about upfront licensing costs.

Oracle Analytics Cloud

Oracle Analytics Cloud raises the bar in every direction by providing a product that is:

  • Functionally complete: most of the tools, procedures and options provided on-premises are now available in OAC.
  • A combination of BICS, DV, OBIEE and Essbase: OAC includes the features of Oracle's top BI products.
  • Tailored in licensing: the many options available (discussed in a later post) can be chosen depending on analytical needs, timeframe of service and required performance.
  • Easily scalable: do you want to expand your BI solution to double the number of users without losing performance? Just buy some more horsepower!
  • Fully accessible: an SSH connection to the server makes it easy to change settings as needed, and REST APIs and clients are provided for all lifecycle operations.
  • Customizable: settings, images, networking and VPN are all configurable.
  • Scriptable: operations like scaling, instance creation and deletion, and start and stop can easily be scripted via the REST APIs.
  • Fully customer managed: Oracle provides the automation to back up and patch, but the customer decides when to run it.

What's The Difference?

So what's the difference between Oracle Analytics Cloud and the "old" DVCS and BICS? How is OACS going to change Oracle's BI offering in the cloud?

The big deal with OACS is control: the BICS/DVCS limiting factors around admin options and development are solved, providing a tool capable of hosting a full enterprise BI solution. Even though the platform is managed by Oracle, SSH access is provided, meaning that instance configurations can be changed. No more upfront server sizing decisions: the size of the instance is decided at creation time and can be changed later if demand changes.

The REST APIs enable scripting of the full lifecycle of the instance, providing a way to automate the enterprise BI workflow even in complex environments where concurrent development is needed. Patching and backups are no longer a problem with the automated processes provided.

Direct online RPD editing is available with the Admin Tool. The old BICS Data Modeler is still there for simple models, but the Admin Tool can be used for complex RPDs.

The front-end is like the BICS and OBIEE 12c one; some new visualizations have been added to Visual Analyzer in line with the new additions to Data Visualization Desktop: Parallel Coordinates, Chord, Network and Sankey diagrams are now available.

A new console is now available in Visual Analyzer, exposing settings like Mail or Deliveries that were previously only accessible via the WebLogic Console, Enterprise Manager or config files.

Finally, Essbase is now available in the cloud too, with a new web interface!
To summarize: if you wanted to go to the Cloud but were worried about missing options, Oracle Analytics Cloud now provides all you need to host a full enterprise BI solution.

In the next few days I'll be analysing various aspects of Oracle Analytics Cloud Suite, so keep in touch!

If you need assistance in checking whether Oracle Analytics Cloud suits your needs, or in planning your migration to the cloud, don't hesitate to contact us.

Categories: BI & Warehousing

Webcast: Introducing Oracle Content and Experience Cloud

WebCenter Team - Tue, 2017-04-04 08:47
Oracle Live Webcast Introducing Oracle Content and Experience Cloud:

Drive Engagement by Unleashing the Power of Content
93% of marketers have siloed teams and technologies by channel. Join this webcast to learn about Oracle Content and Experience Cloud -- a Digital Experience platform that drives omni-channel content management and delivers engaging experiences to your customers, partners, and employees. Hear how Oracle Content and Experience Cloud can help you:
  • Easily find, access, use, reuse and collaborate on content anytime, anywhere and on any device
  • Drive consistent, compliant and contextual experiences across multiple channels
  • Centralize content management across digital channels and enterprise applications
Empower your marketing teams to unleash the power of content.

Register today.


Sincerely,
Content and Experience Cloud Team
Webcast: Introducing Oracle Content and Experience Cloud
May 2, 2017
10:00 AM PDT / 01:00 PM EDT
Featured Speaker: David Le Strat
Senior Director, Product Management
Oracle
David Le Strat is Senior Director of Product Management for the Digital Experience and Content and Experience Cloud product portfolios at Oracle. In his role, David is responsible for product strategy, definition and delivery and go-to-market.

Inline Queries vs. Stored Procedures

Tom Kyte - Tue, 2017-04-04 07:46
What are the disadvantages of using inline queries instead of stored procedures? What is the performance impact of inline queries vs. stored procedures? Thanks
Categories: DBA Blogs

Identity Vs Sequence

Tom Kyte - Tue, 2017-04-04 07:46
Hi, good day. We can have an alternative to a sequence as below. This is quite similar to the Identity column in SQL Server. CREATE TABLE identity_test_tab ( id NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY, description VARCHAR2...
Categories: DBA Blogs

How I learned to appreciate MongoDB

Amis Blog - Tue, 2017-04-04 05:11

Last week at our company we organized a session about NoSQL in general and MongoDB in particular, as you can read here. The MongoDB focus was presented by me and I would like to take you with me on my trip to actually appreciating the NoSQL database world (and MongoDB in particular).

Coming from an RDBMS background I'm used to ACID. Business models reside within the database, data never gets lost, transactions are isolated, you've got your read consistency, strictly defined tables, foreign keys et cetera. MongoDB, on the other hand, I saw as an unstructured pile of data: no such thing as transactions, 'eventually consistent' (that sounds like a leap of faith), no joins hence no foreign keys… You get the picture.

I am a DBA. My major concern is to have all data available or at least recoverable, no matter what. But the world is changing. Developers increasingly look at the database as a storage engine; business logic is programmed in the application. We DBAs try to teach them to use the database better, but is that really necessary? There's no law...

The world is changing fast and so are businesses. It’s not unusual to deploy a new release every night. If developers need to redesign the database every few days, then maybe the structure of data is not that important. If we collect the number of hits on our website, is it a disaster if out of 10,000 hits we occasionally miss 1?

It takes a lot of discussion and 'yes, but...' for this other look at data to finally settle in. At least, it did for me. What finally won me over was an online course at MongoDB University that sort of mitigated the pain. Because, once you let go of the ACID model, you gain a lot of flexibility in terms of database design and infrastructure design. Scaling out becomes a very easy operation, for instance. Resilience against hardware failure is a piece of cake. And due to the lack of RDBMS legacy, the engine can focus almost entirely on reading and writing data, which leads to lightning-fast performance.

In the next paragraphs I will show some examples of the resilience and general behaviour of MongoDB, loosely compared to Oracle. It is hands-on, so I will also get you started, as minimally as possible, with MongoDB in general.

I will not go into the way you read and write data. Only some actions will be shown that are needed for the examples. But in general:

db = database and can be compared to a schema in Oracle.

A db contains collections, which can be compared to tables.

A collection contains documents, which can be compared to rows.

Joins are not possible, so all data you need should be in the same collection.

Documents consist of key:value pairs, as many as you like within one document.

So, you can have the database ‘trades’ with collection ‘customers’ with documents like

{“name”:”Larry”,”company”:”Oracle”,”hobby”:”sailing”}
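
As a quick illustration (a minimal sketch, using just the example names above), inserting and reading back such a document from the mongo shell looks like this:

use trades
db.customers.insert( { "name" : "Larry", "company" : "Oracle", "hobby" : "sailing" } )
db.customers.find( { "company" : "Oracle" } )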

Getting started.

Go to the MongoDB site here and download the appropriate version. The entire hands-on can be run on a typical Windows or Linux laptop or in VirtualBox. The installation is very easy, no instructions needed on my part.

The software will be installed in C:\Program Files\MongoDB\Server\3.4\bin. The version can change over time, of course.

Add that to your path in your environment variables.

In Linux, add the path where you unpacked the tarball, followed by bin, to your $PATH.

Starting an instance is also very easy. Open a command box, create a directory \data\db1 and start the instance with

mongod --dbpath \data\db1

On Windows you should leave this box open; when you close it, the instance shuts down. It would be better to create a service, but for this demonstration this will do.

Stop the database by pressing ^C.

On Linux you can fork the process so you don't have to open a terminal for every instance:

mongod --dbpath /data/db1 --fork --logpath a.log

End it by killing the process.

From now on I will continue on Windows; Linux users can follow the instructions with minor adjustments, like / instead of \.

Also make sure to use a different logpath for each instance.

Resilience against hardware failure.

In Oracle we have two kinds of resilience against hardware failure: instance failure can be mitigated by RAC, storage failure by Data Guard. Besides, if a single instance crashes you can recover all data, provided you have a decent backup strategy.

MongoDB uses a different approach called replica sets. Each instance (or member, as it's called) has its own storage and can be replicated to another instance (or many) with its own storage too. Only one instance accepts writes: the primary. The others only allow you to read data.

In production this is a no-brainer: should a standalone instance fail, you can't recover all data the way you can in Oracle, no matter how often you make backups.

All instances vote on which member will be the primary. This can be influenced by setting the priority parameter. I will not go into that here but just demonstrate a simple replica set.

Open a command box and type:

mkdir \data\db1\r1
mkdir \data\db1\r2
mkdir \data\db1\r3
mongod --smallfiles --oplogSize 50 --port 27001 --dbpath \data\db1\r1 --replSet r

Leave it open and open a second Command box and type:

mongod --smallfiles --oplogSize 50 --port 27002 --dbpath \data\db1\r2 --replSet r

Leave it open and open a third Command box and type:

mongod --smallfiles --oplogSize 50 --port 27003 --dbpath \data\db1\r3 --replSet r

Open fourth command box This will be used to actually talk to the database using the mongo shell. We will then initiate the replica set.

mongo --port 27003
rs.initiate(
           { _id:'r',
             members:[
                     { _id:1, host:'localhost:27001' },
                     { _id:2, host:'localhost:27002', "arbiterOnly" : true },
                     { _id:3, host:'localhost:27003' }
                     ]
           }
)
rs.status()

I introduced a special member, the Arbiter. This is a member without data; it only helps to have an uneven number of members, which is necessary to always get a majority of votes when it comes to choosing the Primary member.

In the output you can see that we have 3 members: a Secondary on port 27001, an Arbiter on port 27002 and a Primary on port 27003. You can also see by the prompt that we are connected to the Primary.

We will now create a collection called ‘simple’ and insert some data. The writeConcern clause makes sure data is written to at least 2 members. If there are more members, they will be ‘eventually consistent’, meaning that they will synchronize, but not immediately.

db.simple.insert( { _id : 1 }, { writeConcern : { w : 2 } } )
db.simple.insert( { _id : 2 }, { writeConcern : { w : 2 } } )
db.simple.insert( { _id : 3 }, { writeConcern : { w : 2 } } )

Go to your secondary member and try to read the data. This involves giving yourself permission to read from the secondary, as I'll show:

exit
mongo --port 27001
r:SECONDARY> db.simple.find()
Error: error: { "$err" : "not master and slaveOk=false", "code" : 13435 }
r:SECONDARY> rs.slaveOk()
r:SECONDARY> db.simple.find()
{ "_id" : 1 }
{ "_id" : 2 }
{ "_id" : 3 }

This looks okay. Not featured here: if I stop the Secondary, add data on the Primary and restart the Secondary, it synchronizes as expected. Just one thing: the writeConcern for 2 members cannot be used while the Secondary is down, since only one data-bearing member is available.

Now it becomes interesting. I’ll stop the Secondary, write some data on the Primary, stop the Primary and start the Secondary. Would the data written whilst the Secondary was down still be visible? If not, would it be recoverable?

r:SECONDARY> exit
bye

Go to your first box and stop the Secondary with ^C.

Go to the mongoshell box and connect to port 27003, the Primary and add some more data:

mongo --port 27003
MongoDB shell version: 3.0.14
connecting to: 127.0.0.1:27003/test
r:PRIMARY> db.simple.insert( { _id : 4 } )
WriteResult({ "nInserted" : 1 })
r:PRIMARY> db.simple.insert( { _id : 5 } )
WriteResult({ "nInserted" : 1 })
r:PRIMARY> db.simple.insert( { _id : 6 } )
WriteResult({ "nInserted" : 1 })
r:PRIMARY> db.simple.find()
{ "_id" : 1 }
{ "_id" : 2 }
{ "_id" : 3 }
{ "_id" : 4 }
{ "_id" : 5 }
{ "_id" : 6 }

r:PRIMARY> exit
bye

Now stop the primary in your 3rd box with ^C and restart the Secondary in your 1st box. Then go to the mongoshell box and connect to port 27001

mongo --port 27001
MongoDB shell version: 3.0.14
connecting to: 127.0.0.1:27001/test
r:PRIMARY> rs.status()
{
"set" : "r",
"date" : ISODate("2017-03-20T19:12:43.425Z"),
"myState" : 1,
"members" : [
{
"_id" : 1,
"name" : "localhost:27001",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 25,
"optime" : Timestamp(1490035617, 1),
"optimeDate" : ISODate("2017-03-20T18:46:57Z"),
"electionTime" : Timestamp(1490037141, 1),
"electionDate" : ISODate("2017-03-20T19:12:21Z"),
"configVersion" : 1,
"self" : true
},
{
"_id" : 2,
"name" : "localhost:27002",
"health" : 1,
"state" : 7,
"stateStr" : "ARBITER",
"uptime" : 24,
"lastHeartbeat" : ISODate("2017-03-20T19:12:43.354Z"),
"lastHeartbeatRecv" : ISODate("2017-03-20T19:12:43.167Z"),
"pingMs" : 0,
"configVersion" : 1
},
{
"_id" : 3,
"name" : "localhost:27003",
"health" : 0,
"state" : 8,
"stateStr" : "(not reachable/healthy)",
"uptime" : 0,
"optime" : Timestamp(0, 0),
"optimeDate" : ISODate("1970-01-01T00:00:00Z"),
"lastHeartbeat" : ISODate("2017-03-20T19:12:43.354Z"),
"lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"),
"configVersion" : -1
}
],
"ok" : 1
}
r:PRIMARY> db.simple.find()
{ "_id" : 1 }
{ "_id" : 2 }
{ "_id" : 3 }
r:PRIMARY>db.simple.insert( { _id : 7 } )
WriteResult({ "nInserted" : 1 })
r:PRIMARY> db.simple.find()
{ "_id" : 1 }
{ "_id" : 2 }
{ "_id" : 3 }
{ "_id" : 7 }
r:PRIMARY>

So, member 1 has now become the Primary, but we hit data loss: it never had a chance to synchronize, and the members do not share any storage to read from.

What would happen if we restart the 3rd member? After all, that one does have the lost data stored, somewhere.

Start up the 3rd member (in the 3rd box)

In the output you will see it transitions to Secondary and it performs a rollback: the lost data is actually rolled back. And the good news: it is stored. Under its data directory \data\db1\r3 it created a directory called rollback which contains a .bson file. This file can be examined and/or imported into the database, as I'll show.

Go to the 4th box and exit mongoshell. Then:

cd \data\db1\r3\rollback
C:\data\db1\r3\rollback>bsondump test.simple.2017-03-20T19-32-31.0.bson
{"_id":4.0}
{"_id":5.0}
{"_id":6.0}
2017-03-20T20:45:06.412+0100 3 objects found
C:\data\db1\r3\rollback>mongorestore --port 27001 --db test --collection simple test.simple.2017-03-20T19-32-31.0.bson
2017-03-20T20:47:59.880+0100 checking for collection data in test.simple.2017-03-20T19-32-31.0.bson
2017-03-20T20:47:59.886+0100 restoring test.simple from file test.simple.2017-03-20T19-32-31.0.bson
2017-03-20T20:48:00.463+0100 no indexes to restore
2017-03-20T20:48:00.463+0100 finished restoring test.simple (3 documents)
2017-03-20T20:48:00.463+0100 done

C:\data\db1\r3\rollback>mongo --port 27001
MongoDB shell version: 3.0.14
connecting to: 127.0.0.1:27001/test
r:PRIMARY> db.simple.find()
{ "_id" : 1 }
{ "_id" : 2 }
{ "_id" : 3 }
{ "_id" : 7 }
{ "_id" : 4 }
{ "_id" : 5 }
{ "_id" : 6 }
r:PRIMARY>

Okay, Oracle would have done everything by itself. But at what cost? It needs to maintain redo logs and archived redo logs. It only has one member to query, the primary database. Yes, since 11g you can have a read-only standby database with Active Data Guard, but that's a licensed option. It's robust, nevertheless. I only want to say that the alternative is different, but not all bad. Not at all.

Scaling out, also known as sharding

In the previous paragraph we covered HA and recoverability. Now let's have a look at scaling out, best compared to RAC.

RAC enables you to add CPU power and memory to a database. It also enables you to distribute different kinds of workloads over different machines, for instance reporting on one node and OLTP on another node. That distribution can be compared to smart use of the replica sets explained above.

Adding CPU power and memory is something else. MongoDB heavily relies on memory to perform. And they made it very easy for you to add more nodes to your cluster. This is done by sharding.

Sharding can best be described as range-based partitioning. Sharding is done on a per-collection basis. A shard cluster consists of 1 (!) or more nodes that automatically partition a collection and distribute it evenly over all cluster members.

Let’s have a closer look.

First of all, each mongod needs to know it is part of a shard cluster. That is accomplished with the --shardsvr startup parameter. It is also very wise to explicitly declare the port number with the --port parameter. Finally, it needs its own storage, given by the --dbpath parameter. Example:

mongod  --shardsvr --port 27020 --dbpath \data\db1\s1
mongod  --shardsvr --port 27021 --dbpath \data\db1\s2

Next we need a config server. This is also a mongod, but instead of user data it holds a special database that contains everything there is to know about the cluster: in particular, which members are known and the relation between the members and the partitions, called shards in Mongo language.

As of Mongo 3.4 config servers need to be in a replica set instead of standalone. For demonstration or development, it is allowed to have a set of only one member.

In a production environment you typically create 3 config servers, for now we’ll create just one:

mongod --configsvr --replSet c --dbpath \data\db1\conf --port 27019

Start the mongo shell in another command box so we can configure the replica set "c":

rs.initiate(
{
_id: "c",
configsvr: true,
members: [
{ _id : 0, host : "localhost:27019" }
]
}
)

Finally, we need at least one mongos, which is a routing service and serves as the front end to which users connect. The mongos has no persistent data; it reads the config server and distributes client requests over the shards.

It needs to know where to find the config server, so we tell it via a parameter of the form configReplSetName/hostname:port:

mongos --configdb c/localhost:27019

We can now open a mongo shell. It will by default connect to port 27017 and, lo and behold, the mongos by default runs on port 27017. Since everything is running on the same host, connecting is very easy.

In the shell we will add shard servers to the cluster. Next we will enable sharding for a specific database.

mongo
mongos> sh.addShard( "localhost:27020")
mongos> sh.addShard( "localhost:27021")
mongos> sh.enableSharding("test")

The only thing we have done is enable sharding for a database; nothing is sharded yet. For that to happen we need to decide which collection(s) will be sharded and on what key, and the collection needs an index on that shard key. Once that is done, MongoDB distributes the data automatically and nothing more needs to be done, as sketched below.
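
A minimal sketch of that last step, assuming we shard the test.simple collection from the earlier examples on its _id field (which already carries the mandatory index):

mongos> sh.shardCollection( "test.simple", { _id : 1 } )   // range-based sharding on _id
mongos> sh.status()                                        // shows how the chunks are spread over the shards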

So what did I learn?
Sure, you lack the robustness of an RDBMS. Sure, you cannot join, and therefore you store way more bytes than usual. But it's fast, it's easy and it serves many purposes. And last but not least, it takes some serious out-of-the-box thinking for a DBA to actually appreciate this new world: you have to let go of some fundamental principles on which your world was based for the last ten, twenty or more years.

And finally a disclaimer: these examples have been oversimplified. In the real world you'd use many hosts: three config servers, many mongos instances and of course a replicated shard cluster.
Apart from that, there are many ways to make the behaviour more sophisticated and robust. Check out the official documentation; it's quite good in my opinion and invites you to many experiments.

The post How I learned to appreciate MongoDB appeared first on AMIS Oracle and Java Blog.

OUD – Oracle Unified Directory 11.1.2.3 Tuning, It is not always the server's fault

Yann Neuhaus - Tue, 2017-04-04 03:41

The default configuration shipped with OUD is not meant to be ready for enterprise usage. The default settings of OUD are targeted at evaluators and developers who run equipment with limited resources, so it is quite likely that you will run into performance issues if you don't change anything before going into production. OUD performance depends on a lot of things, like

  • Network configuration/routing/firewalls/bonding
  • OUD version and configuration (Replication, TLS)
  • Java version and Java runtime memory configuration
  • DNS Lookup times
  • Name Service Cache Daemon
  • And many more …

However, it is not always the server's fault. Sometimes the client is causing the issue. But how do I know if it is the client or the server? In the following example it takes about 10 seconds to resolve the connect string DBIT122_LDAP. That is enormous, far from acceptable. Where is tnsping spending so much time?

oracle@dbidg02:/u01/app/oracle/network/admin/ [DBIT122] time tnsping DBIT122_LDAP

TNS Ping Utility for Linux: Version 12.2.0.1.0 - Production on 04-APR-2017 08:43:06

Copyright (c) 1997, 2016, Oracle.  All rights reserved.

Used parameter files:
/u01/app/oracle/network/admin/sqlnet.ora

Used LDAP adapter to resolve the alias
Attempting to contact (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg01)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=DBIT122)))
OK (10 msec)

real    0m10.177s
user    0m0.017s
sys     0m0.018s

To rule out the server, just check the OUD access log, where you can see every LDAP request against the OUD.

[dbafmw@dbidg01 logs]$ tail -50f /u01/app/oracle/product/Middleware/11.1.2.3/asinst_1/OUD/logs/access
...
...
[04/Apr/2017:08:43:39 +0200] CONNECT conn=5 from=192.168.56.202:30826 to=192.168.56.201:1389 protocol=LDAP
[04/Apr/2017:08:43:39 +0200] BIND REQ conn=5 op=0 msgID=1 type=SIMPLE dn="" version=3
[04/Apr/2017:08:43:39 +0200] BIND RES conn=5 op=0 msgID=1 result=0 authDN="" etime=0
[04/Apr/2017:08:43:39 +0200] SEARCH REQ conn=5 op=1 msgID=2 base="cn=DBIT122_LDAP,cn=OracleContext,dc=dbi,dc=com" scope=base filter="(objectclass=*)" attrs="objectclass,orclNetDescString,orclNetDescName,orclVersion"
[04/Apr/2017:08:43:39 +0200] SEARCH RES conn=5 op=1 msgID=2 result=0 nentries=1 etime=2
[04/Apr/2017:08:43:39 +0200] UNBIND REQ conn=5 op=2 msgID=3
[04/Apr/2017:08:43:39 +0200] DISCONNECT conn=5 reason="Client Disconnect"
...
...

The important entry to look for is the etime after the search request. The etime field is the elapsed time in milliseconds which the server spent processing the request. In the above case it is 2 milliseconds, so quite fast. If you saw large elapsed times here, that would be a good indicator of issues on the server side.

Now that we know that the server is OK, let's move to the client side. The first thing I try is to see how fast ldapsearch is. I am using the ldapsearch which comes with 12cR2, and I will use the same search criteria which tnsping uses to look up the connect string. The ldapsearch syntax from the OUD binaries differs a little from the ldapsearch syntax shipped with 12cR2. Why should Oracle make them the same, it would be too easy. ;-) OK, let's check the ldapsearch.

oracle@dbidg02:/u01/app/oracle/network/admin/ [DBIT122] time ldapsearch -v -h dbidg01 -p 1389 -b "cn=DBIT122_LDAP,cn=OracleContext,dc=dbi,dc=com" \
-s base "(objectclass=*)" "objectclass,orclNetDescString,orclNetDescName,orclVersion"

ldap_open( dbidg01, 1389 )
filter pattern: (objectclass=*)
returning: objectclass,orclNetDescString,orclNetDescName,orclVersion
filter is: ((objectclass=*))
cn=dbit122_ldap,cn=OracleContext,dc=dbi,dc=com
1 matches

real    0m0.020s
user    0m0.005s
sys     0m0.004s

I don’t see any issues here. My ldapsearch came back in a blink of an eye. So .. where are the other 10 seconds? We need more information. We can either use strace or we can activate tracing on the client side. Something less known in the Oracle world is the tnsping tracing, which can be activated too. My tnsping is slow, and so I want only the tnsping to be traced and nothing else. To do so, we need to specify two parameters in the sqlnet.ora file. The TNSPING.TRACE_DIRECTORY and the TNSPING.TRACE_LEVEL. The tnsping trace level can have 4 different values like the sqlnet tracing.

  • 0 or OFF – No Trace output
  • 4 or USER – User trace information
  • 10 or ADMIN – Administration trace information
  • 16 or SUPPORT – Worldwide Customer Support trace information

Because I want to have the full trace output, I go for level 16 which is the support tracing.

oracle@dbidg02:/u01/app/oracle/network/admin/ [DBIT122] cat sqlnet.ora | grep TNSPING

TNSPING.TRACE_DIRECTORY = /u01/app/oracle/network/trc
TNSPING.TRACE_LEVEL = SUPPORT

Ok. Let’s do it again and see the outcome.

oracle@dbidg02:/u01/app/oracle/network/admin/ [DBIT122] time tnsping DBIT122_LDAP

TNS Ping Utility for Linux: Version 12.2.0.1.0 - Production on 04-APR-2017 09:44:44

Copyright (c) 1997, 2016, Oracle.  All rights reserved.

Used parameter files:
/u01/app/oracle/network/admin/sqlnet.ora

Used LDAP adapter to resolve the alias
Attempting to contact (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg01)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=DBIT122)))
OK (10 msec)

real    0m10.191s
user    0m0.013s
sys     0m0.016s
oracle@dbidg02:/u01/app/oracle/network/admin/ [DBIT122]

If we look at the trace file, we see that Oracle found three directory path entries in the following order: TNSNAMES, EZCONNECT and LDAP.

[04-APR-2017 09:44:44:569] nnfgsrsp: Obtaining path parameter from names.directory_path or native_names.directory_path
[04-APR-2017 09:44:44:569] nnfgsrdp: entry
[04-APR-2017 09:44:44:569] nnfgsrdp: Setting path:
[04-APR-2017 09:44:44:569] nnfgsrdp: checking element TNSNAMES
[04-APR-2017 09:44:44:569] nnfgsrdp: checking element EZCONNECT
[04-APR-2017 09:44:44:569] nnfgsrdp: checking element LDAP

Switching to the TNSNAMES adapter is very fast, and Oracle sees immediately that the query is unsuccessful, so it switches to the next adapter.

[04-APR-2017 09:44:44:569] nnfgrne: Switching to TNSNAMES adapter
[04-APR-2017 09:44:44:569] nnftboot: entry
[04-APR-2017 09:44:44:569] nlpaxini: entry
[04-APR-2017 09:44:44:569] nlpaxini: exit
[04-APR-2017 09:44:44:569] nnftmlf_make_local_addrfile: entry
[04-APR-2017 09:44:44:569] nnftmlf_make_local_addrfile: construction of local names file failed
[04-APR-2017 09:44:44:569] nnftmlf_make_local_addrfile: exit
[04-APR-2017 09:44:44:569] nlpaxini: entry
[04-APR-2017 09:44:44:569] nlpaxini: exit
[04-APR-2017 09:44:44:569] nnftmlf_make_system_addrfile: entry
[04-APR-2017 09:44:44:569] nnftmlf_make_system_addrfile: system names file is /u01/app/oracle/network/admin/tnsnames.ora
[04-APR-2017 09:44:44:569] nnftmlf_make_system_addrfile: exit
[04-APR-2017 09:44:44:569] nnftboot: exit
[04-APR-2017 09:44:44:569] nnftrne: entry
[04-APR-2017 09:44:44:569] nnftrne: Original name: DBIT122_LDAP
[04-APR-2017 09:44:44:569] nnfttran: entry
[04-APR-2017 09:44:44:569] nnfttran: Error querying DBIT122_LDAP of attribute A.SMD errcode 408
[04-APR-2017 09:44:44:569] nnfgrne: Query unsuccessful, skipping to next adapter

Now, Oracle is switching to the EZCONNECT adapter.

[04-APR-2017 09:44:44:569] nnfgrne: Switching to EZCONNECT adapter
[04-APR-2017 09:44:44:569] nnfhboot: entry
[04-APR-2017 09:44:44:569] nnfhboot: exit
[04-APR-2017 09:44:44:569] snlinGetAddrInfo: entry
[04-APR-2017 09:44:54:664] snlinGetAddrInfo: getaddrinfo() failed with error -2
[04-APR-2017 09:44:54:664] snlinGetAddrInfo: exit
[04-APR-2017 09:44:54:665] snlinGetAddrInfo: entry
[04-APR-2017 09:44:54:727] snlinGetAddrInfo: getaddrinfo() failed with error -2
[04-APR-2017 09:44:54:727] snlinGetAddrInfo: exit
[04-APR-2017 09:44:54:727] nnfhrne: Error forming address for DBIT122_LDAP, errcode 406
[04-APR-2017 09:44:54:727] nnfgrne: Query unsuccessful, skipping to next adapter

OK, here we go. Between "snlinGetAddrInfo: entry" and "snlinGetAddrInfo: getaddrinfo() failed with error -2", 10 seconds are gone. Oracle thinks that DBIT122_LDAP is an easy connect host name and tries to resolve it, which fails.

So I need to reorder the entries in the directory path in the sqlnet.ora file to NAMES.DIRECTORY_PATH= (TNSNAMES,LDAP,EZCONNECT), as shown below. After I have done that, the tnsping comes back successfully and very fast.
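
For reference, the relevant part of my sqlnet.ora now looks roughly like this (a sketch; the tnsping trace parameters from above are kept only for illustration):

NAMES.DIRECTORY_PATH = (TNSNAMES, LDAP, EZCONNECT)

TNSPING.TRACE_DIRECTORY = /u01/app/oracle/network/trc
TNSPING.TRACE_LEVEL = SUPPORT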

oracle@dbidg02:/u01/app/oracle/network/admin/ [DBIT122] time tnsping DBIT122_LDAP

TNS Ping Utility for Linux: Version 12.2.0.1.0 - Production on 04-APR-2017 10:25:39

Copyright (c) 1997, 2016, Oracle.  All rights reserved.

Used parameter files:
/u01/app/oracle/network/admin/sqlnet.ora

Used LDAP adapter to resolve the alias
Attempting to contact (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg01)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=DBIT122)))
OK (0 msec)

real    0m0.018s
user    0m0.007s
sys     0m0.006s
Conclusion

It is not always the OUD server's fault when you hit performance issues. It might be on the client side as well, and that can have a severe impact.

 

The post OUD – Oracle Unified Directory 11.1.2.3 Tuning, It is not always the server's fault appeared first on Blog dbi services.

Can I do it with PostgreSQL? – 14 – optimizer hints

Yann Neuhaus - Tue, 2017-04-04 01:23

This is a question that comes up quite often: how can I use optimizer hints in PostgreSQL as I can in Oracle? Well, you can't, and the reasons are these:

  • Poor application code maintainability: hints in queries require massive refactoring.
  • Interference with upgrades: today’s helpful hints become anti-performance after an upgrade.
  • Encouraging bad DBA habits: slap a hint on instead of figuring out the real issue.
  • Does not scale with data size: the hint that’s right when a table is small is likely to be wrong when it gets larger.
  • Failure to actually improve query performance: most of the time, the optimizer is actually right.
  • Interfering with improving the query planner: people who use hints seldom report the query problem to the project.


But this does not mean that you can't influence the optimizer (or "planner" in PostgreSQL wording); it just doesn't work in the same way. Let's have a look.

One of the reasons the planner does not choose an index over a sequential scan is that the parameter effective_cache_size is not set properly. To understand what it does, you have to know that PostgreSQL works together with the operating system file/disk cache very well. It is not required, as it is in Oracle, to give most of the available memory of the server to the database. Usually you start with 25% of the total available memory and give that to PostgreSQL by setting the parameter shared_buffers to that value. When pages fall out of that region, it is still likely that they are available in the disk cache and can be retrieved from there without going down to disk. And this is what effective_cache_size is about: setting this parameter does not consume more memory, but tells PostgreSQL how big the total cache of the system really is, i.e. shared_buffers plus disk cache. This gets taken into consideration by the planner. A good starting point is 50 to 75% of the available memory. Let's do a quick test to show how this behaves. Let's generate some data:

postgres=# \! cat a.sql
drop table if exists t1;
create table t1 ( a int );
with generator as 
 ( select a.*
     from generate_series ( 1, 5000000 ) a
    order by random()
 )
insert into t1 ( a ) 
     select a
       from generator;
create index i1 on t1(a);
analyze verbose t1;
select * from pg_size_pretty ( pg_relation_size ('t1' ));
select * from pg_size_pretty ( pg_total_relation_size('t1'));
postgres=# \i a.sql
DROP TABLE
CREATE TABLE
INSERT 0 5000000
CREATE INDEX
psql:a.sql:12: INFO:  analyzing "public.t1"
psql:a.sql:12: INFO:  "t1": scanned 22124 of 22124 pages, containing 5000000 live rows and 0 dead rows; 30000 rows in sample, 5000000 estimated total rows
ANALYZE
 pg_size_pretty 
----------------
 173 MB
(1 row)
 pg_size_pretty 
----------------
 280 MB
(1 row)
postgres=# show shared_buffers ;
 shared_buffers 
----------------
 128MB
(1 row)

The table without the index is big enough not to fit into shared_buffers (173MB), and it is even bigger including the index (280MB). When we set effective_cache_size to a very low value we get costs of 40.55 for the statement below (almost no disk cache):

postgres=# SET effective_cache_size TO '1 MB';
SET
postgres=# explain SELECT * FROM t1 ORDER BY  a limit 10;
                                     QUERY PLAN                                      
-------------------------------------------------------------------------------------
 Limit  (cost=0.43..40.55 rows=10 width=4)
   ->  Index Only Scan using i1 on t1  (cost=0.43..20057243.41 rows=5000000 width=4)
(2 rows)

Setting this to a more realistic value decreases the costs because it is expected to find the index in the disk cache:

postgres=# SET effective_cache_size TO '5 GB';
SET
postgres=# explain SELECT * FROM t1 ORDER BY  a limit 10;
                                    QUERY PLAN                                     
-----------------------------------------------------------------------------------
 Limit  (cost=0.43..0.87 rows=10 width=4)
   ->  Index Only Scan using i1 on t1  (cost=0.43..218347.46 rows=5000000 width=4)
(2 rows)

This is the first “hint” you can set to influence the optimizer/planner. But there are many others. What PostgreSQL allows you to do is to enable or disable features of the planner:

postgres=# select name from pg_settings where name like 'enable%';
         name         
----------------------
 enable_bitmapscan
 enable_hashagg
 enable_hashjoin
 enable_indexonlyscan
 enable_indexscan
 enable_material
 enable_mergejoin
 enable_nestloop
 enable_seqscan
 enable_sort
 enable_tidscan

Using the same data from above we could disable the index only scan:

postgres=# set enable_indexonlyscan=false;
SET
postgres=# explain (analyze,buffers) SELECT * FROM t1 ORDER BY  a limit 10;
                                                       QUERY PLAN                                                        
-------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.43..0.87 rows=10 width=4) (actual time=0.019..0.058 rows=10 loops=1)
   Buffers: shared hit=13
   ->  Index Scan using i1 on t1  (cost=0.43..218347.46 rows=5000000 width=4) (actual time=0.017..0.036 rows=10 loops=1)
         Buffers: shared hit=13
 Planning time: 0.057 ms
 Execution time: 0.084 ms
(6 rows)

postgres=# set enable_indexonlyscan=true;
SET
postgres=# explain (analyze,buffers) SELECT * FROM t1 ORDER BY  a limit 10;
                                                          QUERY PLAN                                                          
------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.43..0.87 rows=10 width=4) (actual time=0.025..0.072 rows=10 loops=1)
   Buffers: shared hit=13
   ->  Index Only Scan using i1 on t1  (cost=0.43..218347.46 rows=5000000 width=4) (actual time=0.023..0.048 rows=10 loops=1)
         Heap Fetches: 10
         Buffers: shared hit=13
 Planning time: 0.068 ms
 Execution time: 0.105 ms
(7 rows)

But the documentation clearly states: “If the default plan chosen by the optimizer for a particular query is not optimal, a temporary solution is to use one of these configuration parameters to force the optimizer to choose a different plan”. For testing and troubleshooting this can be handy.

Another way to influence the optimizer/planner is to set the planner cost constants:

 postgres=# select name from pg_settings where name like '%cost%' and name not like '%vacuum%';
         name         
----------------------
 cpu_index_tuple_cost
 cpu_operator_cost
 cpu_tuple_cost
 parallel_setup_cost
 parallel_tuple_cost
 random_page_cost
 seq_page_cost
(7 rows)

What they mean is pretty well documented, and how you should set them (if you need to change them at all) depends on your hardware and application. There are others as well, such as the *collapse_limit* parameters and the parameters for the Genetic Query Optimizer.
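
As an example of the kind of adjustment that is sometimes made (a sketch only; whether it helps depends entirely on your storage): on fast SSDs random reads are almost as cheap as sequential reads, so random_page_cost is often lowered from its default of 4:

postgres=# SET random_page_cost = 1.1;
postgres=# explain SELECT * FROM t1 WHERE a = 42;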

Conclusion: there are several ways you can influence the optimizer/planner in PostgreSQL; it is just not by using hints.

 

The post Can I do it with PostgreSQL? – 14 – optimizer hints appeared first on Blog dbi services.

Pivotal Cloud Foundry Cloud Service Brokers for AWS, Azure and GCP

Pas Apicella - Tue, 2017-04-04 00:10
Pivotal Cloud Foundry (PCF) has cloud service brokers for all the public clouds we support, which include AWS, Azure and GCP. You can download and install those service brokers on premise or off premise, giving you the capability to use cloud services where it makes sense for your on-premise or off-premise cloud-native applications.

https://network.pivotal.io/

The three cloud service brokers are as follows:





In the example below we have a PCF installation running on vSphere with the AWS service broker tile installed, as shown in the Ops Manager UI.


Once the tile is installed, this PCF instance can provision AWS services, and you can do that in one of two ways.

1. Using the Apps Manager UI, as shown below


2. Using the CF CLI tool, invoking "cf marketplace" to list the services and then "cf create-service" to actually create an instance of a service.



Once a service is provisioned within a PCF space, you can bind it to applications and consume it as you normally would, by reading the VCAP_SERVICES environment variable, and so essentially access AWS services from your on-premise installation of PCF, as in the example above. A rough CLI walk-through is sketched below.
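
A minimal sketch of the CLI flow (the service name, plan and application name below are illustrative placeholders, not taken from the screenshots above):

cf marketplace                                    # list the services exposed by the installed brokers
cf create-service <aws-service> <plan> my-aws-db  # create a service instance in the current space
cf bind-service my-app my-aws-db                  # bind the instance to an application
cf restage my-app                                 # restage so VCAP_SERVICES picks up the credentials
cf env my-app                                     # inspect VCAP_SERVICES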

More Information

GCP service broker:
https://network.pivotal.io/products/gcp-service-broker

AWS service broker:
https://network.pivotal.io/products/pcf-service-broker-for-aws

Azure service broker:
https://network.pivotal.io/products/microsoft-azure-service-broker


Categories: Fusion Middleware

Fujitsu and Oracle Launch Fujitsu SPARC M12 Servers with World’s Fastest Per-Core Performance

Oracle Press Releases - Mon, 2017-04-03 21:00
Press Release
Fujitsu and Oracle Launch Fujitsu SPARC M12 Servers with World’s Fastest Per-Core Performance Providing optimized infrastructure for both on-premises and cloud environments

Tokyo, Japan, and Redwood City, Calif., U.S.—Apr 4, 2017

Fujitsu Limited and Oracle today announced the launch of Fujitsu SPARC M12, a new lineup of enterprise servers now available worldwide.
 
Featuring Fujitsu’s new SPARC64 XII processor, Fujitsu SPARC M12 servers achieve the world’s highest per CPU core performance (*1) in arithmetic processing, offering dramatic improvements for a wide range of database workloads, from mission-critical systems on premises to big data processing in the cloud.
 
The new Fujitsu SPARC M12 servers join the vibrant SPARC technology ecosystem, and together with Oracle’s SPARC systems they leverage the unique security and scalability features of the Oracle Solaris operating system, as well as the efficiency and versatility of the Oracle VM Server for SPARC virtualization software.
 
Fujitsu SPARC M12 servers are offered in two models: the two-processor Fujitsu SPARC M12-2 server, and the Fujitsu SPARC M12-2S server, which can be expanded incrementally to a maximum of 32 processors.
 
The new servers can help customers optimize investments in ICT infrastructure and create new value based on cutting-edge digital technology.
 
The forerunner of the Fujitsu SPARC M12 servers, the Fujitsu M10 server line, has been adopted by many customers as the core of their ICT infrastructure. Fujitsu M10 servers have contributed to optimized ICT investments by providing stable processing power for mission-critical databases at the foundation of corporate business, and reduced search time for information databases used in corporate marketing to 1/300th that of previous technology.
 
In recent years, as the Internet of Things (IoT), big data and artificial intelligence (AI) have become more prevalent in companies and society, there has been an ever-increasing volume of data to be processed, leading to demands for even greater performance, combined with a reduction in operational and management costs.
 
Features of the Fujitsu SPARC M12
1. Supporting customers’ digital transformation by providing high-speed processing
Performance per CPU core is a significant measure to consider when increasing data processing efficiency, and Fujitsu SPARC M12 servers offer up to 2.5 times better core performance compared to the previous Fujitsu M10 models. The Fujitsu-developed SPARC64 XII processor, with the world’s highest per core performance, maximizes processing capability in mission-critical systems and databases without increasing server complexity, contributing to the optimization of customer IT investments.
 
Software on Chip (*2), a feature carried over from the Fujitsu M10, accelerates database processing. Because in-memory databases have proven very effective for processing large volumes of data in the cloud, this matters: with Oracle Database In-Memory, the number of simultaneous searches is twice that of the Fujitsu M10. The Fujitsu SPARC M12, which can dramatically accelerate a wide range of data processing, supports the digital transformation of customers by improving the efficiency of mission-critical systems, speeding up decision-making using big data, and improving customer services through multilateral information analyses.
 
2. Stable operations with industry-first, new cooling technology
Fujitsu's cutting-edge IT expertise has been used to develop Vapor and Liquid Loop Cooling (VLLC) technology (*3). This industry-first cooling technology reliably cools the interior of the server through the phase change of liquid to vapor, roughly doubling cooling efficiency (*4) and enabling safe and stable operation when building a private cloud environment with multiple Fujitsu SPARC M12 servers installed in a data center.
 
3. Optimized operations and management costs suited for business growth
Fujitsu SPARC M12 servers provide CPU Core Activation functionality, for configuring CPU resources one core at a time from a minimum of just two cores per server. CPU Core Activation allows servers to be configured to meet business needs, with the dynamic flexibility to grow as needed.
 
The high-end Fujitsu SPARC M12-2S model offers additional scalability in the form of building block architecture to connect additional Fujitsu SPARC M12-2S server chassis to form flexible scale-up and scale-out servers. By connecting a maximum of 16 building blocks, a single system with over 3,000 CPU threads can be created. Fujitsu SPARC M12-2S servers provide an ideal foundation for private cloud implementations by mitigating initial investment risks while allowing for gradual expansion.
 
4. Supporting business continuity with stable system operation
Building on the Fujitsu mainframe heritage and the trusted RAS features (*5) found in Fujitsu M10 servers, Fujitsu SPARC M12 servers provide error detection and recovery mechanisms for error protection across all processor circuits, supporting stable operation of mission-critical processing.
 
Comment from Toshiyuki Awai, Managing Executive Officer, COO, IT Services Group, ITOCHU Techno-Solutions Corporation
ITOCHU Techno-Solutions Corporation (CTC) welcomes the announcement of a new product in the SPARC platform. Leveraging our broad implementation experience and SPARC server know-how, CTC has been offering safety and confidence to customers by providing various services, such as POCs, system integration, and operations and maintenance support services. I am confident that the combination of SPARC servers, equipped with the world's highest performance processor, the SPARC64 XII, and CTC's solutions and services will contribute to optimizing customers' IT investments.
 
Comment from Edward Screven, Chief Corporate Architect, Oracle Corporation
Oracle and Fujitsu have worked together for more than three decades to produce SPARC systems satisfying the demanding requirements of mission critical infrastructure. Fujitsu SPARC M12 servers featuring the new SPARC64 XII processor for extreme core performance is an exciting addition to the SPARC family, allowing customers to address their most difficult computing challenges with systems that offer both high performance and enterprise reliability.
 
Comment from Akira Kabemoto, SEVP, Fujitsu
The revolutionary technology of the Fujitsu SPARC M12 enables real-time connections between people, businesses and a variety of things, and represents a step toward achieving Fujitsu's aim of a Human Centric Intelligent Society. In addition to the Fujitsu SPARC M12, which is a result of close collaboration between Fujitsu and Oracle, Fujitsu will continue to develop cutting-edge technology that contributes to the creation of new value and supports customers in expanding their businesses.
 
 
Glossary and Notes
1. World’s highest per CPU core performance
Comparison based on registered results per core in the SPECint_rate2006 and SPECfp_rate2006 benchmark tests.
 
SPECint_rate2006 performance results and measurement environment:
Fujitsu SPARC M12-2S
Performance result (peak): 102 per CPU core
Measurement environment: SPARC64 XII (4.25GHz) x1 core, Oracle Solaris 11.3, Version 12.6 of Oracle Developer Studio
 
SPECfp_rate2006 performance results and measurement environment:
Fujitsu SPARC M12-2S
Performance result (peak): 102 per CPU core
Measurement environment: SPARC64 XII (4.25GHz) x1 core, Oracle Solaris 11.3, Version 12.6 of Oracle Developer Studio
 
These performance results were submitted to SPEC (The Standard Performance Evaluation Corporation) on April 3, 2017.
 
2. Software on Chip
Functionality of Fujitsu SPARC M12 and Fujitsu M10 in which hardware (a processor) is used to process some of the tasks that are otherwise performed with software, such as decimal arithmetic processing, cryptographic processing, and copying.
 
3. Developed Vapor and Liquid Loop Cooling (VLLC)
A depressurization vaporization cooling method that circulates cooling fluid with a pump. Patent pending in Japan and the US.
 
4. Roughly doubling cooling efficiency
It can limit temperature increases to half that of the cooling system on the previous products (Fujitsu M10-4, Fujitsu M10-4S), when cooling a CPU with the same heat output.
 
5. RAS features
Acronym for Reliability, Availability, and Serviceability. Functions that support the stable operation of the system.
 
About Fujitsu

Fujitsu is the leading Japanese information and communication technology (ICT) company, offering a full range of technology products, solutions, and services. Approximately 156,000 Fujitsu people support customers in more than 100 countries. We use our experience and the power of ICT to shape the future of society with our customers. Fujitsu Limited (TSE: 6702) reported consolidated revenues of 4.7 trillion yen (US$41 billion) for the fiscal year ended March 31, 2016. For more information, please see http://www.fujitsu.com.

About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks
Oracle and Java are registered trademarks of Oracle Corporation, its subsidiaries, and its affiliates in the US and other countries.
All SPARC trademarks, which are used under license, are trademarks of SPARC International, Inc. in the US and other countries. The SPARC64 trademark is a trademark of SPARC International, Inc. in the US and other countries, and is being used under license.
UNIX is a registered trademark of The Open Group in the US and other countries.
SPEC and the SPECint benchmark name are trademarks or registered trademarks of Standard Performance Evaluation Corporation (SPEC) in the US and other countries.
All other trademarks referenced herein are the property of their respective owners.
 
Safe Harbor

The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 


Manually running a BOSH errand for Pivotal Cloud Foundry on GCP

Pas Apicella - Mon, 2017-04-03 19:33
Pivotal Ops Manager runs various errands for the different deployments within a PCF instance. These errands can be switched off manually when installing new tiles or upgrading the platform; in fact, in PCF 1.10 the errands themselves will only run if they need to, which makes deployments a lot faster.

Below I am going to show you how you would manually run an Errand if you needed to on a PCF instance running on GCP. These instructions would work for PCF running on AWS, Azure or even vSphere so there not specific to PCF on GCP.

1. First login to your Ops Manager VM itself

pasapicella@pas-macbook:~/pivotal/GCP/install/10/opsmanager$ ./ssh-opsman.sh
Welcome to Ubuntu 14.04.5 LTS (GNU/Linux 4.4.0-66-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

  System information as of Mon Apr  3 23:38:57 UTC 2017

  System load:  0.0                Processes:           141
  Usage of /:   14.7% of 78.71GB   Users logged in:     0
  Memory usage: 68%                IP address for eth0: 0.0.0.0
  Swap usage:   0%

  Graph this data and manage this system at:
    https://landscape.canonical.com/

  Get cloud support with Ubuntu Advantage Cloud Guest:
    http://www.ubuntu.com/business/services/cloud

5 packages can be updated.
0 updates are security updates.

Your Hardware Enablement Stack (HWE) is supported until April 2019.

*** System restart required ***
Last login: Mon Apr  3 23:38:59 2017 from 110.175.56.52
ubuntu@om-pcf-110:~$

2. Target the Bosh director which would look like this

ubuntu@om-pcf-110:~$ bosh --ca-cert /var/tempest/workspaces/default/root_ca_certificate target 10.0.0.10
Target set to 'p-bosh'

Note: You may be asked to log in if you have not yet logged in to the bosh director. You can determine the login details from the Ops Manager UI as follows

- Log into Ops Manager UI
- Click on the tile for the "Ops Manager Director", which is specific to your IaaS provider; in the example below that is GCP


- Click on the credentials tab


3. Target the correct deployment. In the example below I am targeting the Elastic Runtime deployment.

ubuntu@om-pcf-110:~$ bosh deployment /var/tempest/workspaces/default/deployments/cf-c099637fab39369d6ba0.yml
Deployment set to '/var/tempest/workspaces/default/deployments/cf-c099637fab39369d6ba0.yml'

Note: You can list out the deployment names using "bosh deployments"
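
A quick aside: the generated manifests also live on disk under the tempest workspace, so a small sketch like the following (assuming a single cf-* manifest is present) saves copying the long GUID by hand:

ls -1 /var/tempest/workspaces/default/deployments/*.yml
bosh deployment /var/tempest/workspaces/default/deployments/cf-*.yml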

4. List out the errands as shown below using "bosh errands"

ubuntu@om-pcf-110:~$ bosh errands
RSA 1024 bit CA certificates are loaded due to old openssl compatibility

+-----------------------------+
| Name                        |
+-----------------------------+
| smoke-tests                 |
| push-apps-manager           |
| notifications               |
| notifications-ui            |
| push-pivotal-account        |
| autoscaling                 |
| autoscaling-register-broker |
| nfsbrokerpush               |
| bootstrap                   |
| mysql-rejoin-unsafe         |
+-----------------------------+

5. Now in this example we are going to run the errand "push-apps-manager" and we do it as shown below

$ bosh run errand push-apps-manager

** Output **

ubuntu@om-pcf-110:~$ bosh run errand push-apps-manager
Acting as user 'director' on deployment 'cf-c099637fab39369d6ba0' on 'p-bosh'
RSA 1024 bit CA certificates are loaded due to old openssl compatibility

Director task 621
  Started preparing deployment > Preparing deployment

  Started preparing package compilation > Finding packages to compile. Done (00:00:01)

     Done preparing deployment > Preparing deployment (00:00:05)

  Started creating missing vms > push-apps-manager/32218933-7511-4c0d-b512-731ca69c4254 (0)

...

+ '[' '!' -z 'Invitations deploy log: ' ']'
+ printf '** Invitations deploy log:  \n'
+ printf '*************************************************************************************************\n'
+ cat /var/vcap/packages/invitations/invitations.log

Errand 'push-apps-manager' completed successfully (exit code 0)
ubuntu@om-pcf-110:~$
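
If you ever need to re-run several errands back to back, a small loop over the errand list keeps it simple. This is only a rough sketch, assuming the same bosh v1 CLI and deployment targeted above; the -n flag makes bosh non-interactive:

# parse errand names out of the "bosh errands" table and run them one by one
for e in $(bosh errands | awk -F'|' '/\|/ {gsub(/ /,"",$2); if ($2 != "" && $2 != "Name") print $2}'); do
  echo "### running errand: $e"
  bosh -n run errand "$e"
done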


Categories: Fusion Middleware

12cR2 DBCA, Automatic Memory Management, and -databaseType

Yann Neuhaus - Mon, 2017-04-03 15:52

This post explains the following error encountered when creating a 12.2 database with DBCA:
[DBT-11211] The Automatic Memory Management option is not allowed when the total physical memory is greater than 4GB.
or when creating the database directly with the installer:
[INS-35178]The Automatic Memory Management option is not allowed when the total physical memory is greater than 4GB.
If you used Automatic Memory Management (AMM) you will have to think differently and size the SGA and PGA separately.

ASMM

Automatic Shared Memory Management, or ASMM, is what you get when you set SGA_TARGET and do not set MEMORY_TARGET. Basically, you define the size of the SGA you want to allocate at startup, and that will be available for the instance, most of it being buffer cache and shared pool. I won’t go into the details of SGA_TARGET and SGA_MAX_SIZE because on the most common platforms all of it is allocated at instance startup. Then, in addition to this shared area used by all instance processes, each process can allocate private memory, and you control this with PGA_AGGREGATE_TARGET.
The total size of SGA and PGA for all instances on a system must reside in physical memory, for the simple reason that they are mostly used to avoid I/O (a large buffer cache avoids physical reads and optimizes physical writes, a large PGA avoids reads and writes to tempfiles).
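
For reference, here is a minimal sketch of setting an instance up this way; the 6g/4g values are placeholders to adjust to your workload, and a restart is needed for the spfile changes to take effect:

sqlplus -s / as sysdba <<'SQL'
-- disable AMM and size SGA and PGA separately (placeholder values)
alter system set memory_target=0 scope=spfile;
alter system reset memory_max_target scope=spfile sid='*';
alter system set sga_target=6g scope=spfile;
alter system set pga_aggregate_target=4g scope=spfile;
SQL
# then restart the instance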

AMM

Because you don’t always know how much to allocate to each (SGA and PGA), Oracle came up with a feature where you define the whole MEMORY_TARGET, and parts of it are dynamically allocated to SGA or PGA. This is called Automatic Memory Management (AMM). It’s a good idea on paper: it is automatic, which means that you don’t have to think about it, and it is dynamic, which means that you don’t waste physical memory because of bad sizing.

But it is actually a bad idea when it comes to implementation, at least on the most common platforms.
SGA and PGA are different beasts that should not be put in the same cage:

  • SGA is big, static, shared, and allocated once at startup
  • PGA is made of small chunks that are constantly allocated and deallocated, private to each process

First, it is not so easy because you have to size the /dev/shm correctly or you will get the following at startup:
ORA-00845: MEMORY_TARGET not supported on this system
In addition to that, because the whole memory is prepared to contain the whole SGA, you see misleading numbers in ‘show sga’.
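
If you do stay on AMM for a small system, the tmpfs sizing itself is simple enough. A quick sketch (the 12g value is only an example; it must be at least as large as MEMORY_MAX_TARGET):

df -h /dev/shm                            # check the current tmpfs size
sudo mount -o remount,size=12g /dev/shm   # temporary resize, lost at reboot
# make it persistent with an entry in /etc/fstab, for example:
# tmpfs   /dev/shm   tmpfs   defaults,size=12g   0 0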

Second, there are lots of bugs, resizing overhead, etc.

And finally, you cannot use large pages when you are in AMM, and in modern systems (lots of RAM, lots of processes) having all processes map the SGA with small 4k pages is a big overhead.

So, as long as you have more than a few GB on a system, you should avoid AMM and set SGA_TARGET and PGA_AGGREGATE_TARGET independently. Forget MEMORY_TARGET. Forget /dev/shm. Also forget the following documentation at http://docs.oracle.com/database/122/ADMIN/managing-memory.htm#ADMIN00207 which says that Oracle recommends that you enable the method known as automatic memory management.
Actually, AMM is not recommended for systems with more than a few GB of physical memory, and most systems have more than a few GB of physical memory. If you try to use AMM on a system with more than 4GB, you get a warning in 12cR1 and an error in 12cR2:
I got this when trying to create a database with AMM on a system with more than 4GB of physical memory.

This does not depend on the size of MEMORY_TARGET you choose, or on the size of /dev/shm, but only on the amount of available physical memory:
[oracle@VM104 ~]$ df -h /dev/shm
Filesystem Size Used Avail Use% Mounted on
tmpfs 3.9G 0 3.9G 0% /dev/shm
 
[oracle@VM104 ~]$ free -h
total used free shared buff/cache available
Mem: 7.8G 755M 5.0G 776M 2.1G 6.2G
Swap: 411M 0B 411M

No choice: it is a hard stop

If you are not convinced, then please have a look at MOS Doc ID 2244817.1 which explains this decision:

  • It is not something new: DBCA used to give a similar warning message, but in 12.2.0.1 it is an error message
  • Reason behind: Because database creation fails some times and in some cases database wont be functional after some times

So, do you want to create a database which may not be functional after some times?

So, what size for SGA and PGA?

Then, if you were thinking that AMM was cool, your next question now is: what size to allocate to SGA and PGA?

Don’t panic.

You are in this situation because you have several GB of RAM. Current servers have lots of memory. You don’t have to size it to the nearest 100MB. Start with some values and run with them. Look at the performance and at the memory advisors. Are you doing too much physical I/O on tables where you expect data to be in cache? Then increase the SGA, and maybe set a minimum for the buffer cache. Do you see lots of hard parses because your application runs lots of statements and procedures? Then increase the SGA and maybe set a minimum for the shared pool. Do you run lots of analytic queries that full scan tables and have to hash and sort huge amounts of data? Then decrease the SGA and increase PGA_AGGREGATE_TARGET.
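
The advisor views give a quick sanity check on the current split. A minimal sketch, run as SYSDBA (columns come from V$SGA_TARGET_ADVICE and V$PGA_TARGET_ADVICE):

sqlplus -s / as sysdba <<'SQL'
set pagesize 50 linesize 150
-- estimated DB time and physical reads for alternative SGA_TARGET sizes
select sga_size, sga_size_factor, estd_db_time, estd_physical_reads
from v$sga_target_advice order by sga_size_factor;
-- estimated PGA cache hit ratio for alternative PGA_AGGREGATE_TARGET sizes
select pga_target_for_estimate/1024/1024 target_mb, pga_target_factor,
       estd_pga_cache_hit_percentage, estd_overalloc_count
from v$pga_target_advice order by pga_target_factor;
SQL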

Where to start?

If you don’t know where to start, look at the DBCA database types:

#-----------------------------------------------------------------------------
# Name : databaseType
# Datatype : String
# Description : used for memory distribution when memoryPercentage specified
# Valid values : MULTIPURPOSE|DATA_WAREHOUSING|OLTP
# Default value : MULTIPURPOSE
# Mandatory : NO
#-----------------------------------------------------------------------------

Those types define the ratio between SGA and PGA. Then why not start with what is recommended by Oracle?

I’ve created the 3 types of instances with the following:
dbca -silent -totalMemory 10000 -databaseType MULTIPURPOSE -generateScripts -scriptDest /tmp/MULT ...
dbca -silent -totalMemory 10000 -databaseType DATA_WAREHOUSING -generateScripts -scriptDest /tmp/DWHG ...
dbca -silent -totalMemory 10000 -databaseType OLTP -generateScripts -scriptDest /tmp/OLTP ...

And here are the settings generated by DBCA
$ grep target /tmp/*/init.ora
DWHG/init.ora:sga_target=6000m
DWHG/init.ora:pga_aggregate_target=4000m
MULT/init.ora:sga_target=7500m
MULT/init.ora:pga_aggregate_target=2500m
OLTP/init.ora:sga_target=8000m
OLTP/init.ora:pga_aggregate_target=2000m

Here is the summary:

                    SGA    PGA
OLTP                80%    20%
Multi-Purpose       75%    25%
Data Warehousing    60%    40%

(percentages are relative to each other here. Don’t use 100% of physical memory for the Oracle instances, because the system needs some memory as well)
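
As a rough illustration only, the ratios can be turned into starting values from the physical RAM; the 80% of RAM given to Oracle and the OLTP 80/20 split below are assumptions to adapt:

TOTAL_MB=$(awk '/^MemTotal/ {printf "%d", $2/1024}' /proc/meminfo)
ORACLE_MB=$(( TOTAL_MB * 80 / 100 ))                      # keep ~20% of RAM for the OS
echo "sga_target=$(( ORACLE_MB * 80 / 100 ))m"            # OLTP ratio from the table above
echo "pga_aggregate_target=$(( ORACLE_MB * 20 / 100 ))m"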

This gives an idea of where to start. Servers have lots of memory, but you don’t have to use all of it. If you have a doubt, leave some free memory available for the filesystem cache. Usually we recommend using direct I/O (filesystemio_options=setall) to avoid the filesystem overhead. But when you start and want to lower the risk of under-sizing the SGA or PGA, you may prefer to keep that second level of cache (filesystemio_options=async), which uses all the physical memory available. This may improve reads from tempfiles in case your PGA is too small. This is just an idea, not a recommendation.

So what?

If you have a server with more than a few GB, then set SGA and PGA separately. Start with the ratios above, and then monitor performance and the advisors. Physical servers today have at least 32GB. Even with a small 1GB VM for my labs, I prefer to set them separately, because in that case I want to be sure to have a minimum size for the buffer cache and shared pool. You may have lots of small VMs with 3GB and think about setting MEMORY_TARGET. But using large pages is a recommendation here, because the hypervisor will have lots of memory to map, so ASMM is still the recommendation.

Once you know the total size of all the SGAs, look at Hugepagesize in /proc/meminfo, set the number of hugepages in /etc/sysctl.conf, run sysctl -p, and your instances will use the available large pages for the SGA.
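
A minimal sketch of that setup, assuming a single instance with an 8GB SGA and the usual 2MB hugepage size (the small extra margin is a rule of thumb, not an exact formula):

grep Hugepagesize /proc/meminfo                  # typically 2048 kB on x86_64
SGA_MB=8192                                      # sum of all SGA sizes on this host
echo "vm.nr_hugepages = $(( SGA_MB / 2 + 50 ))" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
grep -i hugepages /proc/meminfo                  # HugePages_Total should now cover the SGA
# optionally set use_large_pages=only so the instance fails fast if pages are missing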

 

The post 12cR2 DBCA, Automatic Memory Management, and -databaseType appeared first on Blog dbi services.

tkprof output questions

Tom Kyte - Mon, 2017-04-03 13:26
Recently we ran a 10046 trace on some long-running scripts and generated the tkprof output from the trace file. The top two SQLs are shown below and I am looking for some clarifications. 1. Dynamic sampling is shown for this query but I am not ...
Categories: DBA Blogs

Bitmap index and OLTP

Tom Kyte - Mon, 2017-04-03 13:26
Hi Tom, In one of my interviews, the interviewer asked me about a scenario where a bitmap index is useful in an OLTP system, and I was totally clueless..... I wasn't able to think of any such scenario. Later I googled every possible combination to ge...
Categories: DBA Blogs

Pages

Subscribe to Oracle FAQ aggregator