DBA Blogs

Cool Picture of Instance/Database from 12c Concepts

Bobby Durrett's DBA Blog - Tue, 2014-01-07 18:00

I just think this is a cool picture of an Oracle 12c instance and database from Oracle’s 12c Concepts manual (not my own work):

Instance and Database from 12c Concepts

This is from the concepts manual found here: url

- Bobby

Categories: DBA Blogs

Finished reading Oracle Core

Bobby Durrett's DBA Blog - Tue, 2014-01-07 12:37

Just finished reading the book by Jonathan Lewis titled “Oracle Core: Essential Internals for DBAs and Developers”.  I think I picked it up at the Collaborate 13 conference in Denver last April but hadn’t had time (or taken the time) to read it until now.

Reading a book like Oracle Core can be a challenge because it is pretty dense with example scripts and outputs, including dumps in hex.  So, I decided to take the strategy of pushing myself to crash through the book without carefully following every example.  I may only have absorbed about 10% of the material, but if I hadn’t jammed through it I would have gotten 0%!

I picked up Oracle Core because I had read another book by the same author titled “Cost-Based Oracle Fundamentals”, which has paid for itself 100 times over in terms of helping me tune queries.  I recommend Cost-Based Oracle Fundamentals without reservation.  But, like Oracle Core, it can be a challenge to just sit down and read it while following every SQL example and output.  It would probably be worth making a first pass focusing on just the English-language text and skimming the examples, perhaps delving into the ones of most interest.

In the case of Oracle Core I haven’t yet put it to practical use but I’m glad to have at least skimmed through it.  Now I know what’s in it and can refer back to it when needed.

Next I hope to start reading up on Oracle 12c since I plan to get my certification this year.  But, I wanted to finish Oracle Core before I moved on, even if I only read it at a high level.

- Bobby

Categories: DBA Blogs

Oracle GoldenGate (Streams) process disabled – Why?

DBASolved - Tue, 2014-01-07 12:24

It is amazing what Oracle Enterprise Manager 12c will report once it is configured for a product.  One such product is Oracle GoldenGate.  I have stepped into a project where I’m running Oracle GoldenGate between many different environments for production purposes.  Just getting a handle on what is going on has been a task.  In talking with the customer, I learned they were starting to implement Oracle Enterprise Manager 12c.  Once OEM was set up, we added the Oracle GoldenGate plug-in and started to monitor the replication environments.

While monitoring the Oracle GoldenGate environments, I noticed a warning in the Incident Manager: “Status for Streams process OGG$_CGGMONX9AC55691 is DISABLED”.  I got to thinking, what is this message about?  More to the point, how do I resolve this warning?  (I like OEM to be quiet.)  I started to look around MOS for answers; to my surprise, not much is written about this message.


Oracle GoldenGate classic capture doesn’t report these types of messages within Oracle Enterprise Manager 12c; Classic Capture mostly reports the up and down status of Oracle GoldenGate processes.  This message had to be coming from some integrated version of the extract (the first clue was the word Streams).  Keeping in mind that Streams may be used in some way, the DBA_CAPTURE view should be able to shine a bit of light on this warning.

From a SQL*Plus prompt or a SQL IDE (I prefer SQL Developer), the DBA_CAPTURE view can be queried.
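A minimal sketch of such a query (the column selection here is my assumption; DBA_CAPTURE exposes many more columns):

SELECT capture_name,
       queue_name,
       status,
       purpose
FROM   dba_capture;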


From looking at the STATUS column, I verified that I had found the correct record.  The PURPOSE column shows that this extract (capture) is being used for Streams.  What!?  Wait a minute, I’m using Oracle GoldenGate.

Yes, Oracle GoldenGate is being used.  If there is information in the DBA_CAPTURE view, it is because the Extract has been registered (integrated) with the database somehow.  The status is DISABLED, an indicator that this extract was registered for logretention:

GGSCI> stop extract cggmonx
GGSCI> dblogin userid ggate password ggate
GGSCI> register extract cggmonx logretention
GGSCI> start extract cggmonx

Now that it is understood that the extract has been registered for log retention, what does this actually mean?

According to the Oracle GoldenGate 11g Reference Guide, an extract can be registered in one of two modes.

1. Database – Enables integrated capture for the Extract group. In this mode, Extract integrates with the database logmining server to receive change data in the form of logical change records (LCR). Extract does not read the redo logs. Extract performs capture processing, filtering, transformation, and other requirements.

2. Logretention – Enables an Extract group, in classic capture mode, to work with Oracle Recovery Manager (RMAN) to retain the logs that Extract needs for recovery.

As indicated a few lines up, this extract has been registered with logretention.  This means that the extract creates an underlying Streams capture process and prevents RMAN from removing any archivelogs that may be needed for replication of data.  As part of creating the underlying Streams structure, Oracle creates a queue under the Oracle GoldenGate owner (the queue name can also be found in the DBA_CAPTURE view).


Now that the root cause of the DISABLED message in Oracle Enterprise Manager 12c has been identified, how can this message be resolved?

The simplest way is to unregister the extract from the database/logretention, knowing that the Oracle GoldenGate configuration is using Classic Capture.  Keep in mind that once the extract is unregistered, retention of archivelogs will not be enforced when RMAN backs them up and possibly removes them.  Make sure your RMAN retention policies are what you expect them to be.

To unregister an extract that is using logretention, use the steps below:

GGSCI> stop extract cggmonx
GGSCI> dblogin userid ggate password ggate
GGSCI> unregister extract cggmonx logretention
GGSCI> start extract cggmonx

Enjoy!

twitter: @dbasolved

blog: http://dbasolved.com


Filed under: Golden Gate, OEM
Categories: DBA Blogs

schema validation scripts and alter session set current_schema ... make me not so grumpy

Grumpy old DBA - Tue, 2014-01-07 07:14
Believe it or not, many DBAs/developers are unaware of (or have forgotten) how to "switch into" a different schema.

Oracle has had this option for a long time:

alter session set current_schema = SOME_SCHEMA_NAME;

This does not give you full schema-owner capabilities (well, it depends on what privileges your login session has) but it can be very useful.  For instance, in a script that validates that all the expected objects exist and are at the correct version, you could use it like this.

set echo off
set feedback on
set heading off
set linesize 168
set serveroutput on size unlimited
set term on

alter session set current_schema = FIRST_SCHEMA_BEING_CHECKED;

BEGIN
  validate_objects.bv_show_valid_messages := TRUE;
  validate_objects.bv_stop_on_error := FALSE;

  dbms_output.put_line(chr(13));
  dbms_output.put_line('===============================================================================');
  dbms_output.put_line(chr(13));
 
  -- a bunch of calls against a validation package ... check that tables exist / views exist / foreign keys exist / indexes exist / packages, procedures, and functions exist / data exists

 
  -- at the end check that all objects are valid in the schema ...  
 
END;
/
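The validate_objects package above is site-specific, but as a minimal sketch of that final check, invalid objects in the schema being validated can be listed straight from ALL_OBJECTS using the schema set by alter session (assuming the login user can see that schema's objects):

select object_name, object_type, status
from all_objects
where owner = sys_context('USERENV', 'CURRENT_SCHEMA')
and status != 'VALID';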
Categories: DBA Blogs

12c Online Partitioned Table Reorganisation Part I (Prelude)

Richard Foote - Mon, 2014-01-06 22:07
First post for 2014 !! Although it’s generally not an overly common activity with Oracle databases, reorganising a table can be somewhat painful, primarily because of the associated locking implications and the impact it has on indexes. If we look at the following example: So we have a table with a couple of indexes. We […]
Categories: DBA Blogs

The Twelve Days of NoSQL: Day Twelve: Concluding Remarks

Iggy Fernandez - Mon, 2014-01-06 21:10
Day One: Disruptive Innovation Day Two: Requirements and Assumptions Day Three: Functional Segmentation Day Four: Sharding Day Five: Replication and Eventual Consistency Day Six: The False Premise of NoSQL Day Seven: Schemaless Design Day Eight: Oracle NoSQL Database Day Nine: NoSQL Taxonomy Day Ten: Big Data Day Eleven: Mistakes of the relational camp Day Twelve: […]
Categories: DBA Blogs

The Twelve Days of NoSQL: Day Eleven: Mistakes of the relational camp

Iggy Fernandez - Sun, 2014-01-05 21:35
On the eleventh day of Christmas, my true love gave to me Eleven pipers piping. (Yesterday: Big Data in a Nutshell)(Tomorrow: Concluding Remarks) Over a lifespan of four and a half decades, the relational camp made a series of strategic mistakes that made NoSQL possible. The mistakes started very early. The biggest mistake is enshrined […]
Categories: DBA Blogs

Partner Webcast – Oracle Engineered Systems & Partner Service Opportunities

The old way to buy servers, storage and networking equipment in different cycles has reached its end. Today we see a trend towards buying integrated systems that are tested, certified, sold and...

We share our skills to maximize your revenue!
Categories: DBA Blogs

The Twelve Days of NoSQL: Day Ten: Big Data in a Nutshell

Iggy Fernandez - Sun, 2014-01-05 01:11
On the tenth day of Christmas, my true love gave to me Ten lords a-leaping. (Yesterday: NoSQL Taxonomy)(Tomorrow: Mistakes of the relational camp) The topic of Big Data is often brought up in NoSQL discussions so let’s give it a nod. In 1998, Sergey Brin and Larry Page invented the PageRank algorithm for ranking web […]
Categories: DBA Blogs

Programming Elastic MapReduce Using AWS Services to Build an End-to-End Application

Surachart Opun - Fri, 2014-01-03 22:59
Amazon Elastic MapReduce (Amazon EMR) is a web service that makes it easy to quickly and cost-effectively process vast amounts of data. Amazon EMR uses Hadoop, an open source framework, to distribute your data and processing across a resizable cluster of Amazon EC2 instances.
If you are looking for a book about programming Elastic MapReduce, I can mention one: Programming Elastic MapReduce: Using AWS Services to Build an End-to-End Application by Kevin Schmidt and Christopher Phillips.
This book gives readers best practices for using Amazon EMR and various AWS and Apache technologies. Readers will learn how to:
  • Get an overview of the AWS and Apache software tools used in large-scale data analysis
  • Go through the process of executing a Job Flow with a simple log analyzer
  • Discover useful MapReduce patterns for filtering and analyzing data sets
  • Use Apache Hive and Pig instead of Java to build a MapReduce Job Flow
  • Learn the basics for using Amazon EMR to run machine learning algorithms
  • Develop a project cost model for using Amazon EMR and other AWS tools
 The book shows readers how to use the AWS Management Console and helps them learn more about it. Readers will find good examples throughout. It helps if readers create an AWS account and use it to follow along with the examples. The illustrations and examples are very helpful and make the book easy to read and follow.



Written By: Surachart Opun http://surachartopun.com
Categories: DBA Blogs

The Twelve Days of NoSQL: Day Nine: NoSQL Taxonomy

Iggy Fernandez - Fri, 2014-01-03 04:23
On the ninth day of Christmas, my true love gave to me Nine ladies dancing. (Yesterday: Oracle NoSQL Database)(Tomorrow: Big Data in a Nutshell) NoSQL databases can be classified into the following categories: Key-value stores: The archetype is Amazon Dynamo of which DynamoDB is the commercial successor. Key-value stores basically allow applications to “put” and […]
Categories: DBA Blogs

Demo Links To All Older Posts Now Accessible (Chains)

Richard Foote - Thu, 2014-01-02 22:05
OK, for quite some time (too long probably !!!) people have been sending me emails and leaving comments that they have been unable to access a number of the demos in my older posts and those listed in my Presentations and Demos page. I previously would write an article but include a demo that […]
Categories: DBA Blogs

Top Ten Posts So Far

Bobby Durrett's DBA Blog - Thu, 2014-01-02 15:04

Just for fun, I’ve pasted in a table listing the top 10 most-viewed posts on this blog as links, including the total number of views since this blog began in March 2012.  I based this on WordPress’s statistics, so I’m not sure exactly how the blog software collects the numbers, but it is fun to get some positive feedback.  Hopefully it means people are getting something out of the blog.  I’m certainly enjoying putting it together.  Here are the links, ordered by views, with the view counts on the right:

cell single block physical read – 3,738
REGEXP_LIKE Example – 2,822
Finding query with high temp space usage using ASH views – 2,232
DBA_HIST_ACTIVE_SESS_HISTORY – 2,097
CPU queuing and library cache: mutex X waits – 1,801
DBMS_SPACE.SPACE_USAGE – 1,748
Resource Manager wait events – 1,566
Fast way to copy data into a table – 1,166
Delphix First Month – 1,074
use_nl and use_hash hints for inner tables of joins – 1,047

Anyway, I thought I would list these in case you want to read the posts that have the most views and are possibly the most useful.

- Bobby

Categories: DBA Blogs

Statistics gathering and SQL Tuning Advisor

Pythian Group - Thu, 2014-01-02 07:50

Our monitoring software found a long-running job on one of our client’s databases. The job was one of Oracle’s auto tasks, running statistics gathering for more than 3 hours. I was curious why it took so long, so I used a query against ASH to find the most common SQL during the job run, based on the module name. The results were surprising to me: the top SQL was coming in with a SQL Tuning Advisor comment.

Here is the SQL I used:

SQL> select s.sql_id, t.sql_text, s.cnt
  2  from
  3    (select *
  4     from
  5      (
  6        select sql_id, count(*) cnt
  7        from v$active_session_history
  8        where action like 'ORA$AT_OS_OPT_SY%'
  9        group by sql_id
 10        order by count(*) desc
 11      )
 12     where rownum <= 5
 13    ) s,
 14    dba_hist_sqltext t
 15  where s.sql_id = t.sql_id;

SQL_ID        SQL_TEXT                                                                                CNT
------------- -------------------------------------------------------------------------------- ----------
020t65s3ah2pq select substrb(dump(val,16,0,32),1,120) ep, cnt from (select /*+ no_expand_table        781
byug0cc5vn416 /* SQL Analyze(1) */ select /*+  full(t)    no_parallel(t) no_parallel_index(t)          43
bkvvr4azs1n6z /* SQL Analyze(1) */ select /*+  full(t)    no_parallel(t) no_parallel_index(t)          21
46sy4dfg3xbfn /* SQL Analyze(1) */ select /*+  full(t)    no_parallel(t) no_parallel_index(t)        1559

So most of the queries come with a “SQL Analyze” comment right at the beginning of the SQL, even though they are run from a DBMS_STATS call, which is confusing. After some bug searching I found MOS Doc ID 1480132.1, which includes a PL/SQL stack trace from a DBMS_STATS procedure call going up to DBMS_SQLTUNE_INTERNAL. This means DBMS_STATS has a call to the SQL Tuning package; very odd:

SQL> select * from dba_dependencies where name = 'DBMS_STATS_INTERNAL' and referenced_name = 'DBMS_SQLTUNE_INTERNAL';

OWNER                          NAME                           TYPE               REFERENCED_OWNER       REFERENCED_NAME
------------------------------ ------------------------------ ------------------ ------------------------------ ----------------------------------
REFERENCED_TYPE    REFERENCED_LINK_NAME                                                                                                     DEPE
------------------ -------------------------------------------------------------------------------------------------------------------------------
SYS                            DBMS_STATS_INTERNAL            PACKAGE BODY       SYS                    DBMS_SQLTUNE_INTERNAL
PACKAGE                                                                                                                                     HARD

It turns out this call has nothing to do with SQL Tuning. It is just a call to a procedure in that package, which happens to look like SQL from the SQL Tuning Advisor. I traced a GATHER_TABLE_STATS call in a test database, first with SQL trace and then with DBMS_HPROF, and here is what the call tree looks like:

SELECT RPAD(' ', (level-1)*2, ' ') || fi.owner || '.' || fi.module AS name,
       fi.function,
       pci.subtree_elapsed_time,
       pci.function_elapsed_time,
       pci.calls
FROM   dbmshp_parent_child_info pci
       JOIN dbmshp_function_info fi ON pci.runid = fi.runid AND pci.childsymid = fi.symbolid
WHERE  pci.runid = 1
CONNECT BY PRIOR childsymid = parentsymid
  START WITH pci.parentsymid = 27;
NAME                                     FUNCTION                       SUBTREE_ELAPSED_TIME FUNCTION_ELAPSED_TIME                CALLS
---------------------------------------- ------------------------------ -------------------- --------------------- --------------------
...
SYS.DBMS_STATS_INTERNAL                  GATHER_SQL_STATS                           21131962                 13023                    1
  SYS.DBMS_ADVISOR                       __pkg_init                                       88                    88                    1
  SYS.DBMS_SQLTUNE_INTERNAL              GATHER_SQL_STATS                           21118776                  9440                    1
    SYS.DBMS_SQLTUNE_INTERNAL            I_PROCESS_SQL                              21107094              21104225                    1
      SYS.DBMS_LOB                       GETLENGTH                                        37                    37                    1
      SYS.DBMS_LOB                       INSTR                                            42                    42                    1
      SYS.DBMS_LOB                       __pkg_init                                       15                    15                    1
      SYS.DBMS_SQLTUNE_INTERNAL          I_VALIDATE_PROCESS_ACTION                        74                    39                    1
        SYS.DBMS_UTILITY                 COMMA_TO_TABLE                                   35                    35                    1
      SYS.DBMS_SQLTUNE_UTIL0             SQLTEXT_TO_SIGNATURE                            532                   532                    1
      SYS.DBMS_SQLTUNE_UTIL0             SQLTEXT_TO_SQLID                                351                   351                    1
      SYS.XMLTYPE                        XMLTYPE                                        1818                  1818                    1
    SYS.DBMS_SQLTUNE_UTIL0               SQLTEXT_TO_SQLID                                528                   528                    1
    SYS.DBMS_UTILITY                     COMMA_TO_TABLE                                   88                    88                    1
    SYS.DBMS_UTILITY                     __pkg_init                                       10                    10                    1
    SYS.SQLSET_ROW                       SQLSET_ROW                                       33                    33                    1
    SYS.XMLTYPE                          XMLTYPE                                        1583                  1583                    1
  SYS.DBMS_STATS_INTERNAL                DUMP_PQ_SESSTAT                                  73                    73                    1
  SYS.DBMS_STATS_INTERNAL                DUMP_QUERY                                        2                     2                    1
...

So there is a procedure DBMS_SQLTUNE_INTERNAL.GATHER_SQL_STATS which is being called by DBMS_STATS_INTERNAL, and this procedure actually runs a SQL like this:

/* SQL Analyze(0) */ select /*+  full(t)    no_parallel(t) no_parallel_index(t) dbms_stats cursor_sharing_exact use_weak_name_resl dynamic_sampling(0) no_monitoring no_substrb_pad  */to_char(count("ID")),to_char(substrb(dump(min("ID"),16,0,32),1,120)),to_char(substrb(dump(max("ID"),16,0,32),1,120)),to_char(count("X")),to_char(substrb(dump(min("X"),16,0,32),1,120)),to_char(substrb(dump(max("X"),16,0,32),1,120)),to_char(count("Y")),to_char(substrb(dump(min("Y"),16,0,32),1,120)),to_char(substrb(dump(max("Y"),16,0,32),1,120)),to_char(count("PAD")),to_char(substrb(dump(min("PAD"),16,0,32),1,120)),to_char(substrb(dump(max("PAD"),16,0,32),1,120)) from "TIM"."T1" t  /* NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL*/

This is basically the approximate NDV calculation. So, nothing to be afraid of; it’s just the way the code is organized: DBMS_STATS uses the API of the SQL Tuning framework when you use DBMS_STATS.AUTO_SAMPLE_SIZE as the ESTIMATE_PERCENT (which is the default and recommended value in 11g+).
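For reference, a minimal sketch of a gather call that takes this code path, using the TIM.T1 table from the captured SQL above (since AUTO_SAMPLE_SIZE is the default, omitting ESTIMATE_PERCENT behaves the same way):

BEGIN
  -- AUTO_SAMPLE_SIZE is what triggers the approximate NDV path shown above
  dbms_stats.gather_table_stats(
    ownname          => 'TIM',
    tabname          => 'T1',
    estimate_percent => dbms_stats.auto_sample_size);
END;
/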

Categories: DBA Blogs

SQL Server 2014 CTP2 – Memory Optimization Advisor

Pythian Group - Thu, 2014-01-02 07:49

Today I am going to discuss one of the amazing new features of SQL Server 2014 CTP2, the Memory Optimization Advisor. I discussed the architectural details of the SQL Server 2014 In-Memory Optimizer in my last blog post here.

We will walk through the Memory Optimization Advisor with a demo in this blog post. This tool helps customers migrate disk-based tables to memory-optimized tables with ease.

Memory Optimization Advisor – In Action

I will use the AdventureWorks2008 database. For the demo, I have created a copy of the Employee table named EmpTempTbl, with the OrganizationNode, SalariedFlag, and CurrentFlag columns removed.
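As a hedged sketch of how such a copy could be created (the exact column list is an assumption about the Employee table’s definition):

-- Copy Employee into EmpTempTbl, omitting OrganizationNode,
-- SalariedFlag, and CurrentFlag (column names are assumptions)
SELECT BusinessEntityID, NationalIDNumber, LoginID, JobTitle,
       BirthDate, MaritalStatus, Gender, HireDate,
       VacationHours, SickLeaveHours, ModifiedDate
INTO   dbo.EmpTempTbl
FROM   HumanResources.Employee;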

Let us explore the wizard step by step.

Step #1: Open Management Studio, connect to the instance and the AdventureWorks2008 database, and right-click the table EmpTempTbl. Now choose Memory Optimization Advisor.

Step #2: The Advisor tool will be launched with a page describing the features of the tool. Click Next.

Step #3: On this screen the tool will check whether the selected table is fit for migration to a Memory Optimized Table. The tool will report immediately if anything about the table is preventing it from migrating to a Memory Optimized Table.

We have the option of generating a report using the Generate Report button.

If you click the Generate Report button, the wizard provides an option to save the report (an HTML file) anywhere on the local disk. In my case, all is green, so click Next.

Step #4: This screen shows migration warnings about the limitations of memory-optimized objects, and a link that explains the limitations in detail. Click Next.

Step #5: The wizard will take us to the screen below, which lets us select the options for memory optimization.

  • Memory-Optimized Filegroup: you can have just one per database and must create one before migrating a disk-based table to a memory-optimized table, or you will get an error (a DDL sketch follows this list).
  • Logical File Name: here you can change the logical file name.
  • Path: Points to the location where you will save your logical file.
  • Estimated Current Memory Cost (MB): The estimated current memory cost for this table.
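Since the memory-optimized filegroup must exist up front, here is a hedged sketch of creating one (the filegroup, file name, and path are assumptions):

-- A database can have only one MEMORY_OPTIMIZED_DATA filegroup;
-- note that FILENAME points to a folder, not a file
ALTER DATABASE AdventureWorks2008
    ADD FILEGROUP AW_mod CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE AdventureWorks2008
    ADD FILE (NAME = 'AW_mod_file', FILENAME = 'C:\Data\AW_mod_dir')
    TO FILEGROUP AW_mod;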

We have two checkboxes on this page, described below:

  • Option to copy data from the disk-based table to the new memory-optimized table during the migration process.
  • Option to change the durability of the table to schema only (SCHEMA_ONLY); in this case data will be lost after each SQL Server service restart. By default, schema and data (SCHEMA_AND_DATA) is applied.

Step #6: The next screen in the wizard allows you to decide the name of the primary key, its member columns, and its type. You can choose between a nonclustered hash index and a nonclustered index here.

I have selected one integer column and one character column for the primary key.

I selected a char data type to show that we have no option other than a BIN2 collation for memory-optimized tables. Click Next.
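To make the index and collation choices concrete, here is a hedged sketch of the kind of DDL the wizard scripts out (names, types, and bucket count are assumptions, and the real script carries the full column list):

CREATE TABLE dbo.EmpTempTbl_mo
(
    BusinessEntityID INT NOT NULL,
    -- character columns in a memory-optimized index must use a BIN2 collation
    NationalIDNumber CHAR(15) COLLATE Latin1_General_100_BIN2 NOT NULL,
    CONSTRAINT PK_EmpTempTbl_mo PRIMARY KEY NONCLUSTERED HASH
        (BusinessEntityID, NationalIDNumber) WITH (BUCKET_COUNT = 16384)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);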

Step #7: This screen of the wizard lists a summary of the migration actions. You have the option to script those operations by clicking the Script button; I scripted it out. Click Migrate.

Step #8: The migration process can take a while, depending on the number of objects. In this case, it succeeded. Click OK.

Now our table is in-memory. Let’s check the properties of the table to verify it. In SSMS you can see the old table, now renamed EmpTempTbl_old, and the new table created under the Tables folder of the database.

Right-click the newly created table and go to Properties. You can see that the Memory Optimized option is set to True and the Durability is set to SchemaAndData.

This is a very user-friendly tool with explanations and warnings, which will help users iron out issues well before implementing In-Memory technology. As this blog post was written with a CTP version of SQL Server 2014, things might change in future releases of SQL Server 2014.

Categories: DBA Blogs

Log Buffer #352, A Carnival of the Vanities for DBAs

Pythian Group - Thu, 2014-01-02 07:44

Jingle bells ring all the way into the happy new year. Whether the database bloggers are basking on white, sandy beaches or warming up indoors on white, snowy mornings, they are gearing up with their blog posts to start the new year with a bang. Log Buffer shares that enthusiasm with them.
Oracle:

When diagnosing ASM issues, it helps to know a bit about the setup – disk group names and types, the state of disks, ASM instance initialisation parameters and if any rebalance operations are in progress.

The ADF framework provides parameters for bounded task flows; developers can give them default values using JDeveloper.

If you’d like to increase space in a Linux volume group, you can do it while the machine is up.

How to Find Your Oracle Voice in the Oracle Sales Cloud?

ORA-14696: MAX_STRING_SIZE migration is incomplete for pluggable database

SQL Server:

How to Compare Rows within Partitioned Sets to Find Overlapping Dates.

Free eBook: SQL Server Transaction Log Management

A Greg Larsen classic – he discusses how the SQL Server optimizer tries to parameterize a query if it can, as well as how you can build your own parameterized query.

Learn how to create a Windows\SQL Server 2008 virtual cluster.

This technical note provides guidance for Reporting Services. The focus of this technical note is to optimize your Reporting Services architecture for better performance and higher report execution throughput and user loads.

MySQL:

Busy database-backed websites often hit scalability limits in the database first. In tuning MySQL, one of the first things to look at is the max_connections parameter, which is often too low.

One of the routine tasks for a DBA is renaming database schemas, and as such MySQL added a command to carry out that purpose called “RENAME DATABASE <database_name>”.

MySQL Connector/Java supports multi-master replication topographies as of version 5.1.27, allowing you to scale read load to slaves while directing write traffic to multi-master (or replication ring) servers.

This post shows you how to perform a schema upgrade using the Rolling Schema Upgrade (RSU) method.

Several users have reported that certain queries with IN predicates can’t use index scans even though all the columns in the query are indexed.

Categories: DBA Blogs

The Twelve Days of NoSQL: Day Eight: Oracle NoSQL Database

Iggy Fernandez - Thu, 2014-01-02 01:56
On the eighth day of Christmas, my true love gave to me Eight maids a-milking. (Yesterday: Schemaless Design)(Tomorrow: NoSQL Taxonomy) In May 2011, Oracle Corporation published a scathing indictment of NoSQL, the last words being “Go for the tried and true path. Don’t be risking your data on NoSQL databases.” Just a few months later however, […]
Categories: DBA Blogs

12c buffer cache flushing in a CDB / PDB environment

Grumpy old DBA - Wed, 2014-01-01 20:42
Here is an interesting post from Thomas Savior’s blog, aka My Oracle Life.  It shows some complicated things going on when flushing the database buffer cache from various CON_IDs within a CDB.

12c buffer cache flushing
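For context, a minimal sketch of the commands involved (the PDB name is an assumption; exactly what gets flushed from which container is the complication the post explores):

-- from the root container:
ALTER SYSTEM FLUSH BUFFER_CACHE;

-- or from within a specific PDB:
ALTER SESSION SET CONTAINER = PDB1;
ALTER SYSTEM FLUSH BUFFER_CACHE;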

Obviously only the people doing design work at Oracle for 12c (12.1 and 12.2, heck maybe 13) know what the overall plan is for the next set of features/changes for a container database.  To me it seems very dangerous to not have a way of isolating one PDB’s impact on other PDBs for the memory areas in the SGA (shared pool / buffer cache, etc.).  Maybe this will be changing soon?

There are some interesting CDB presentations coming up at Hotsos 2014 (best practices, etc.) that I will be attending; they may give me a better idea of what other people are worrying about and planning to use in this new CDB/PDB universe.

Thanks to Thomas for pointing out his post!

Categories: DBA Blogs

The Twelve Days of NoSQL: Day Seven: Schemaless Design

Iggy Fernandez - Wed, 2014-01-01 01:55
On the seventh day of Christmas, my true love gave to me Seven swans a-swimming. (Yesterday: The False Premise of NoSQL)(Tomorrow: Oracle NoSQL Database) As we discussed on Day One, NoSQL consists of “disruptive innovations” that are gaining steam and moving upmarket. So far, we have discussed functional segmentation (the pivotal innovation), sharding, asynchronous replication, […]
Categories: DBA Blogs

Happy New Year 2014 to all of you!

The Oracle Instructor - Tue, 2013-12-31 02:53

My best wishes go out to you and your families – may the new year be a great one for you!

Special thanks to all the visitors of The Oracle Instructor – WordPress has crafted this Annual Report for 2013, if you’re interested :-)


Categories: DBA Blogs