
DBA Blogs

Oracle Development Tools for Windows & .NET

Interoperability between Oracle and Microsoft technologies, covering Oracle database support on Windows and .NET application development, has been substantial for years. The latest...

Categories: DBA Blogs

Online Move of Datafiles for Pluggable Databases

The Oracle Instructor - Wed, 2014-05-07 09:22

From my current Oracle Database 12c New Features course in Zürich: we covered the handy new 12c functionality that lets you move datafiles online. At first glance, this appears to have an issue with pluggable databases:

 

SQL> @whoami
USER: SYS
SESSION ID: 253
CURRENT_SCHEMA: SYS
INSTANCE NAME: cdb1
CDB NAME: cdb1
CONTAINER NAME: CDB$ROOT
DATABASE ROLE: PRIMARY
OS USER: oracle
SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/cdb1/system01.dbf
/u01/app/oracle/oradata/cdb1/sysaux01.dbf
/u01/app/oracle/oradata/cdb1/undotbs01.dbf
/u01/app/oracle/oradata/cdb1/pdbseed/system01.dbf
/u01/app/oracle/oradata/cdb1/users01.dbf
/u01/app/oracle/oradata/cdb1/pdbseed/sysaux01.dbf
/u01/app/oracle/oradata/pdb1_1/system01.dbf
/u01/app/oracle/oradata/pdb1_1/sysaux01.dbf
/u01/app/oracle/oradata/pdb1_1/SAMPLE_SCHEMA_users01.dbf
/u01/app/oracle/oradata/pdb1_1/example01.dbf

10 rows selected.

SQL> alter database move datafile '/u01/app/oracle/oradata/pdb1_1/example01.dbf' to '/home/oracle/example01.dbf';
alter database move datafile '/u01/app/oracle/oradata/pdb1_1/example01.dbf' to '/home/oracle/example01.dbf'
*
ERROR at line 1:
ORA-01516: nonexistent log file, data file, or temporary file "11"

The error message is quite useless in this case. The move does work, but you need to be in the container of the PDB to which the datafile belongs:
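
If you are not sure which container a datafile belongs to, a quick check from the root (just a sketch; the file name pattern is whatever you are looking for) is to join V$DATAFILE to V$PDBS on CON_ID:

SQL> select d.con_id, p.name as pdb_name, d.name as file_name
     from v$datafile d left outer join v$pdbs p on p.con_id = d.con_id
     where d.name like '%example01.dbf';

Files belonging to the root itself (CON_ID 1) simply show a NULL pdb_name here.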

SQL> alter session set container=pdb1_1;

Session altered.

SQL> alter database move datafile '/u01/app/oracle/oradata/pdb1_1/example01.dbf' to '/home/oracle/example01.dbf';

Database altered.

SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/cdb1/undotbs01.dbf
/u01/app/oracle/oradata/pdb1_1/system01.dbf
/u01/app/oracle/oradata/pdb1_1/sysaux01.dbf
/u01/app/oracle/oradata/pdb1_1/SAMPLE_SCHEMA_users01.dbf
/home/oracle/example01.dbf

Don’t believe it (even error messages may lie!), test it :-)


Tagged: 12c New Features
Categories: DBA Blogs

12c Multi-tenant Environment and DBRM for Container Database

Pythian Group - Wed, 2014-05-07 09:15

In a multi-tenant environment, Database Resource Manager (DBRM) at the Container Database (CDB) level enables us to manage resources such as CPU usage and the number of parallel execution servers among the pluggable databases (PDBs). Within each PDB, DBRM then enables us to manage CPU, parallelism, and runaway queries that exceed certain thresholds of CPU, physical I/O, logical I/O, or elapsed time.

In the 12c version of Oracle Database, resource allocation at the CDB level is implemented with a CDB plan, which allocates resources to the PDBs. A CDB plan is made up of directives, each aimed at a single PDB, and each directive controls the allocation of CPU and parallel execution servers. The default CDB plan is DEFAULT_CDB_PLAN. A CDB plan deals in share values and utilization limits.

Shares are like counters: the more shares a PDB has, the more resources it gets. The utilization limit caps the resource allocation of a specific PDB. By default, the share value for each PDB is 1 and the utilization limit is 100. PARALLEL_SERVER_LIMIT, together with the PARALLEL_SERVERS_TARGET parameter, is used to limit the parallel servers a PDB can use.

Example of a CDB Plan:

The following example creates a CDB plan 'cdb_plan' with directives for two PDBs, PROD and DEV. PROD gets 2 shares and a 100% utilization limit, whereas DEV gets half of that: 1 share and a 50% utilization limit. The pending area is just a staging area in which plans are created, edited, and validated.

exec DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();

BEGIN
DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN(
plan    => 'cdb_plan',
comment => 'CDB plan for CDB');
END;
/

BEGIN
DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
plan                  => 'cdb_plan',
pluggable_database    => 'Prod',
shares                => 2,
utilization_limit     => 100,
parallel_server_limit => 100);
END;
/

BEGIN
DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
plan                  => 'cdb_plan',
pluggable_database    => 'Dev',
shares                => 1,
utilization_limit     => 50,
parallel_server_limit => 50);
END;
/

exec DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();

exec DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();

You may use the DBA_CDB_RSRC_PLANS view to display all of the CDB resource plans defined in the CDB.
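
For example, a quick sanity check after submitting the plan above (a minimal sketch; adjust the plan name to your own) could query both the plans view and the companion DBA_CDB_RSRC_PLAN_DIRECTIVES view:

select plan, status, comments
from   dba_cdb_rsrc_plans
where  upper(plan) = 'CDB_PLAN';

select plan, pluggable_database, shares, utilization_limit, parallel_server_limit
from   dba_cdb_rsrc_plan_directives
where  upper(plan) = 'CDB_PLAN';

Note that creating and submitting the plan does not activate it; it still has to be set as the resource manager plan at the CDB root, for example with ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'cdb_plan';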

Categories: DBA Blogs

Simple tuning example but uncertain test results

Bobby Durrett's DBA Blog - Tue, 2014-05-06 16:58

Last week I came across what seemed like a simple query tuning problem.  A PeopleSoft batch job ran for many hours and when I did an AWR report I found that the top query was doing a full scan when an index should help.

Here is the query and its bad plan:

SQL_ID 1jx5w9ybgb51g
--------------------
UPDATE PS_JGEN_ACCT_ENTRY 
SET 
JOURNAL_ID = :1, 
JOURNAL_DATE = TO_DATE(:2,'YYYY-MM-DD'), 
FISCAL_YEAR = :3,
ACCOUNTING_PERIOD = :4,
GL_DISTRIB_STATUS = 'D', 
JOURNAL_LINE = :5 
WHERE 
PROCESS_INSTANCE = 6692638 AND
GL_DISTRIB_STATUS = 'J'  AND 
ACCOUNT=:6 AND 
DEPTID=:7 AND 
CURRENCY_CD=:8 AND
FOREIGN_CURRENCY=:9

Plan hash value: 1919611120

-----------------------------------------------------------------------------------------
| Id  | Operation          | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT   |                    |       |       | 21649 (100)|          |
|   1 |  UPDATE            | PS_JGEN_ACCT_ENTRY |       |       |            |          |
|   2 |   TABLE ACCESS FULL| PS_JGEN_ACCT_ENTRY |     1 |    58 | 21649   (5)| 00:01:27 |
-----------------------------------------------------------------------------------------

The problematic batch job ran variations of this query with different literal values for PROCESS_INSTANCE corresponding to each flat file being loaded.  Three updates of this type were in the AWR report for the 16-hour period that covered the run of the batch job:

  Elapsed      CPU                  Elap per  % Total
  Time (s)   Time (s)  Executions   Exec (s)  DB Time    SQL Id
---------- ---------- ------------ ---------- ------- -------------
    16,899      5,836        3,811        4.4     3.5 4h54qqmbkynaj

UPDATE PS_JGEN_ACCT_ENTRY SET JOURNAL_ID = :1, JOURNAL_DATE = TO_DATE(:2,'YYYY-M
M-DD'), FISCAL_YEAR = :3, ACCOUNTING_PERIOD = :4, GL_DISTRIB_STATUS = 'D', JOURN
AL_LINE = :5 WHERE PROCESS_INSTANCE = 6692549 AND GL_DISTRIB_STATUS = 'J' AND A
CCOUNT=:6 AND DEPTID=:7 AND CURRENCY_CD=:8 AND FOREIGN_CURRENCY=:9

     6,170      2,190        1,480        4.2     1.3 a5rd6vx6sm8p3

UPDATE PS_JGEN_ACCT_ENTRY SET JOURNAL_ID = :1, JOURNAL_DATE = TO_DATE(:2,'YYYY-M
M-DD'), FISCAL_YEAR = :3, ACCOUNTING_PERIOD = :4, GL_DISTRIB_STATUS = 'D', JOURN
AL_LINE = :5 WHERE PROCESS_INSTANCE = 6692572 AND GL_DISTRIB_STATUS = 'J' AND A
CCOUNT=:6 AND DEPTID=:7 AND CURRENCY_CD=:8 AND FOREIGN_CURRENCY=:9

     6,141      1,983        1,288        4.8     1.3 1jx5w9ybgb51g

UPDATE PS_JGEN_ACCT_ENTRY SET JOURNAL_ID = :1, JOURNAL_DATE = TO_DATE(:2,'YYYY-M
M-DD'), FISCAL_YEAR = :3, ACCOUNTING_PERIOD = :4, GL_DISTRIB_STATUS = 'D', JOURN
AL_LINE = :5 WHERE PROCESS_INSTANCE = 6692638 AND GL_DISTRIB_STATUS = 'J' AND A
CCOUNT=:6 AND DEPTID=:7 AND CURRENCY_CD=:8 AND FOREIGN_CURRENCY=:9

The batch job ran for about 15 and a half hours, so these three, plus others like them, surely combined to make up the bulk of the run time.

It made sense to me to just add an index on all the columns in the where clause: PROCESS_INSTANCE, GL_DISTRIB_STATUS, ACCOUNT, DEPTID, CURRENCY_CD, FOREIGN_CURRENCY.
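
For reference, the index I had in mind was along these lines (just a sketch; the name here simply mimics the PSA<record> convention that shows up in the plans below, and you may or may not want the ONLINE clause or a specific tablespace in your environment):

create index sysadm.psajgen_acct_entry on sysadm.ps_jgen_acct_entry
  (process_instance, gl_distrib_status, account, deptid,
   currency_cd, foreign_currency)
  online;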

Just to check how selective this combination of columns might be I did a count on each grouping of these columns and came up with about 50 rows per combination:

>select max(cnt),avg(cnt),min(cnt)
  2  from
  3  (select
  4  PROCESS_INSTANCE,
  5  GL_DISTRIB_STATUS,
  6  ACCOUNT,
  7  DEPTID,
  8  CURRENCY_CD,
  9  FOREIGN_CURRENCY,
 10  count(*) cnt
 11  from sysadm.PS_JGEN_ACCT_ENTRY
 12  group by
 13  PROCESS_INSTANCE,
 14  GL_DISTRIB_STATUS,
 15  ACCOUNT,
 16  DEPTID,
 17  CURRENCY_CD,
 18  FOREIGN_CURRENCY);

  MAX(CNT)   AVG(CNT)   MIN(CNT)
---------- ---------- ----------
      9404  50.167041          1

The table itself has 3 million rows so this is pretty selective:

OWNER                TABLE_NAME                       NUM_ROWS     BLOCKS AVG_ROW_LEN SAMPLE_SIZE LAST_ANALYZED       DEGREE     INSTANCES
-------------------- ------------------------------ ---------- ---------- ----------- ----------- ------------------- ---------- ----------
SYSADM               PS_JGEN_ACCT_ENTRY                3145253      82857         204     3145253 2014-04-21 21:07:02          1          1

But the strange thing was that when we added the index on our test system, we didn't see any performance improvement!  We ran the largest production file on test and it ran in ten minutes with or without the index.  Yack!

So, I tried my own quick-and-dirty test in SQL*Plus with the SELECT equivalent of the update and hard-coded values instead of bind variables.  I thought I had extracted some valid values, although I later realized they weren't.  Here is what I ran; notice the full scan ran just as fast as the indexed access:

>select * from
  2  sysadm.PS_JGEN_ACCT_ENTRY
  3  WHERE PROCESS_INSTANCE = 6138803 AND
  4  GL_DISTRIB_STATUS = 'J'  AND ACCOUNT=1234567 AND DEPTID=567 AND CURRENCY_CD='USD' AND
  5  FOREIGN_CURRENCY = NULL;

no rows selected

Elapsed: 00:00:00.30

Execution Plan
----------------------------------------------------------
Plan hash value: 1762298626

---------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                    |     1 |   203 |     0   (0)|          |
|*  1 |  FILTER                      |                    |       |       |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID| PS_JGEN_ACCT_ENTRY |     1 |   203 |     5   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN          | PSAJGEN_ACCT_ENTRY |     1 |       |     4   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(NULL IS NOT NULL)
   3 - access("PROCESS_INSTANCE"=6138803 AND "GL_DISTRIB_STATUS"='J' AND
              "CURRENCY_CD"='USD')
       filter(TO_NUMBER("ACCOUNT")=1234567 AND TO_NUMBER("DEPTID")=567 AND
              "CURRENCY_CD"='USD')


Statistics
----------------------------------------------------------
       1761  recursive calls
          0  db block gets
        557  consistent gets
         14  physical reads
          0  redo size
       1866  bytes sent via SQL*Net to client
        239  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          5  sorts (memory)
          0  sorts (disk)
          0  rows processed

>
>select /*+full(PS_JGEN_ACCT_ENTRY) */ * from
  2  sysadm.PS_JGEN_ACCT_ENTRY
  3  WHERE PROCESS_INSTANCE = 6138803 AND
  4  GL_DISTRIB_STATUS = 'J'  AND ACCOUNT=1234567 AND DEPTID=567 AND CURRENCY_CD='USD' AND
  5  FOREIGN_CURRENCY = NULL;

no rows selected

Elapsed: 00:00:00.17

Execution Plan
----------------------------------------------------------
Plan hash value: 3728573827

-----------------------------------------------------------------------------------------
| Id  | Operation          | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                    |     1 |   203 |     0   (0)|          |
|*  1 |  FILTER            |                    |       |       |            |          |
|*  2 |   TABLE ACCESS FULL| PS_JGEN_ACCT_ENTRY |     1 |   203 | 12185   (2)| 00:02:27 |
-----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(NULL IS NOT NULL)
   2 - filter("PROCESS_INSTANCE"=6138803 AND "GL_DISTRIB_STATUS"='J' AND
              TO_NUMBER("ACCOUNT")=1234567 AND TO_NUMBER("DEPTID")=567 AND "CURRENCY_CD"='USD')


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
          0  consistent gets
          0  physical reads
          0  redo size
       1873  bytes sent via SQL*Net to client
        239  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          0  rows processed

It looks like I passed in a NULL for a column with a NOT NULL constraint, and that is what made the full-scan version just as fast as the indexed one.  The FILTER condition must have realized that no rows could meet both the NULL and NOT NULL conditions.  With both plans the database realized immediately that there were no rows matching this bogus collection of constants.  So then I replaced the NULL with a zero, and finally we had proof of the performance improvement from the index:
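
One quick way to convince yourself of the NULL behaviour, against DUAL rather than anything schema-specific, is:

select count(*) from dual where dummy = null;       -- returns 0: an "= NULL" comparison is never true
select count(*) from dual where dummy is not null;  -- returns 1: NULL tests need IS [NOT] NULL

Because "= NULL" can never be true, the FILTER (NULL IS NOT NULL) line in the plan is a constant-false filter, which is why both runs above finished without doing any real work.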

>select * from
  2  sysadm.PS_JGEN_ACCT_ENTRY
  3  WHERE PROCESS_INSTANCE = 6138803 AND
  4  GL_DISTRIB_STATUS = 'J'  AND ACCOUNT=1234567 AND DEPTID=567 AND CURRENCY_CD='USD' AND
  5  FOREIGN_CURRENCY = 0;

no rows selected

Elapsed: 00:00:00.02

Execution Plan
----------------------------------------------------------
Plan hash value: 2047014499

--------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                    |     1 |   203 |     5   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| PS_JGEN_ACCT_ENTRY |     1 |   203 |     5   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | PSAJGEN_ACCT_ENTRY |     1 |       |     4   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("PROCESS_INSTANCE"=6138803 AND "GL_DISTRIB_STATUS"='J' AND
              "CURRENCY_CD"='USD')
       filter(TO_NUMBER("ACCOUNT")=1234567 AND TO_NUMBER("DEPTID")=567 AND
              TO_NUMBER("FOREIGN_CURRENCY")=0 AND "CURRENCY_CD"='USD')


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
          3  consistent gets
          0  physical reads
          0  redo size
       1866  bytes sent via SQL*Net to client
        239  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          0  rows processed

>
>select /*+full(PS_JGEN_ACCT_ENTRY) */ * from
  2  sysadm.PS_JGEN_ACCT_ENTRY
  3  WHERE PROCESS_INSTANCE = 6138803 AND
  4  GL_DISTRIB_STATUS = 'J'  AND ACCOUNT=1234567 AND DEPTID=567 AND CURRENCY_CD='USD' AND
  5  FOREIGN_CURRENCY = 0;

no rows selected

Elapsed: 00:00:37.11

Execution Plan
----------------------------------------------------------
Plan hash value: 1758291200

----------------------------------------------------------------------------------------
| Id  | Operation         | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |                    |     1 |   203 | 12185   (2)| 00:02:27 |
|*  1 |  TABLE ACCESS FULL| PS_JGEN_ACCT_ENTRY |     1 |   203 | 12185   (2)| 00:02:27 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("PROCESS_INSTANCE"=6138803 AND "GL_DISTRIB_STATUS"='J' AND
              TO_NUMBER("ACCOUNT")=1234567 AND TO_NUMBER("DEPTID")=567 AND
              TO_NUMBER("FOREIGN_CURRENCY")=0 AND "CURRENCY_CD"='USD')


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
      56110  consistent gets
      55409  physical reads
          0  redo size
       1873  bytes sent via SQL*Net to client
        239  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          0  rows processed

So, I can’t tell you exactly what happened in test, but I suspect we were passing a null into one of the bind variables, because of data that was missing when running a production file on an out-of-date test system, and got a similarly efficient filter.  But once I forced the equivalent of the production full scan by supplying non-null values for all the constants, the value of the index became clear.  It went into production last week, and this weekend’s run took a few minutes instead of 15 hours.

- Bobby

 

 

Categories: DBA Blogs

Partner Webcast – Oracle Internet of Things Platform: Solutions for a Connected World

Machine-to-Machine (M2M) systems, better known as the Internet of Things (IoT), present a completely new paradigm in distributed computing. Every one of us is wearing at least one small...

Categories: DBA Blogs

Is your database healthy? Check now with ORACHK

Springtime is always a good time for some house cleaning and, of course, some health checks. ORAchk replaces the popular RACcheck tool, extending its coverage based on prioritization of top issues...

Categories: DBA Blogs

Dream DBA

Pythian Group - Mon, 2014-05-05 08:01

When I was 6 I dreamed about dinosaurs – the bigger ones, like the T-Rex. I loved them. As I reached 16, I dreamed about the girl next door, and everything else was simply invisible. When I was 26, I was already a DBA of a few years, going through the final years of being a rookie. I dreamed about becoming a true DBA. That was a night dream and a day dream, a dream which I am still living day in and day out. Now that I have crossed 36, the journey of dreams goes on.

I believe that if you want to be a champion DBA, you need to be a dreamer. What databases are today, and what they will be tomorrow, will only be shaped by the database dreamers. Without these dreamers, these databases would be just a bland labor of bits and bytes. Dreamers live in an ideal world where they strive to create and nurture the things around them. So a dreamer’s database is the stainless vision of its lofty ideals.

A dreamer DBA cherishes his visions. A dreamer DBA only hears the music which stirs the chords of the heart. A dreamer DBA only adopts what’s most beautiful for his database. He drapes the configuration of his darling database only with the purest of parameters, for out of them grow all delightful conditions and a heavenly optimized environment, and when he remains true to it, an ideal database system gets created.

So dream big. Dream ideally, because a database dream is the prophecy which will dawn one day, and dawn it will.

Everything initially is a dream. A rose is a dream in a seed, initially. A Rainbow Lorikeet sits like a dream in an egg, for some time. At the start, every database is a bunch of files and processes. Let your dreams flourish and evolve positively, and channel them to become great implementations. From dreams originate realities.

If you are a DBA mired in crisis-hit databases, you only need to worry if you don’t dream of improvement and resolution. If, in difficult circumstances, you are dreaming about refinements, solutions, and elegance, only then will you be able to achieve them. To aspire is to achieve. To dream is to get. Dreams are a kind of thought. Keep them positive and hopeful, your acts will align accordingly, and you will be the victor; a true champion DBA.

If you are looking at today’s big names in databases and wondering in awe, that’s alright. But don’t attribute that to luck, fortune, or chance. Don’t say how lucky those cool DBA kids are, because they are not lucky. They dreamed big, then followed their dreams, and the dreams enabled them to do great things. They had their share of struggles and failures, but they persisted. They put in the effort and got the results. They were able to do that because they didn’t want their dreams to crash.

You will become what your dreams are.

Categories: DBA Blogs

My MySQL database impressions

Kubilay Çilkara - Mon, 2014-05-05 06:30
I have been in the data and database world for over a decade now, working mainly with Oracle and with data integration projects for Salesforce.

I have also spent time working with MySQL, a relational database with open source roots that is now part of Oracle. In this post I want to share my impressions of MySQL and some tips and tricks I learned while working with it.

First of all, you will have to be ready to get your hands dirty. I couldn't find a package structure for database administration in MySQL, like the DBMS libraries of packages and procedures in Oracle. That means you will have to do most things on your own. Nevertheless, the good news is that the database shows an Oracle banner when you log in from version 5.0 onwards, and features like online schema changes, more cost-based optimisation and partitioning were added in version 5.6 - a sign of good things to come.

Some key points

  • Data Import/Export - You can use the native mysqldump utility to dump schemas and data, controlled by parameters, but it is slow. I couldn't get it to dump data fast (in parallel), which is why I strongly recommend mydumper, an open source utility written by people from Oracle and MySQL that dumps data using parallel threads and is very fast. Import can be done in parallel as well, and it can give you a real boost provided your hardware permits it. Don't try to disable constraints or drop indexes before imports, as you will read in posts and suggestions on the net; mysqldump already does that for you.
  • Hot Backup - mylvmbackup seems to be the de facto script for taking hot backups while the database is online. There are tools like XtraBackup from Percona too. It takes a snapshot of the disk where your datafiles and logfiles are; at restore time it does a crash recovery using the logs and rolls transactions forward to the point of the snapshot. Then, if you have the logs from after that, you can replay them and bring the database to a point in time after the backup.
  • Parallel processing - Nada, there is none! I couldn't get it to do anything in parallel. The only thing I managed to do in parallel was export and import data with mydumper; that works! So if you have many CPUs you will be watching them sit idle most of the time, as only one thread will be chugging away, unless you use mydumper for your import/export operations, where you can make those CPUs sweat.
  • DBMS packages - Do you fancy automating, do you need scripts for repetitive tasks? Well, there is no DBMS package library to help you administer the database in MySQL. Instead, you can use the Percona Toolkit scripts from Percona, a consultancy specialising in helping MySQL DBAs do great work with MySQL databases. They have a variety of scripts, from comparing (diff) and syncing databases and tables to extracting metadata and GRANT structures.
  • Hints, Explain Plan, Performance Tuning - I couldn't see much cost-based optimisation in MySQL. The data dictionary (INFORMATION_SCHEMA) has metadata names but doesn't hold any dynamic statistics about objects, and the estimates of row counts in tables and indexes it holds can be up to 50% wrong. The whole thing is based on heuristics, I suppose. EXPLAIN output is just a row saying what the optimiser will do; there is no cost analysis or logical tree structure of execution plans yet (see the small sketch after this list). I couldn't see much on join orders either; no nested loops, hash or merge joins yet.
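
To give a feel for how thin that feedback is, here is a minimal sketch (the table, column and literal are made up purely for illustration):

EXPLAIN
SELECT o.order_id, o.total
FROM   orders o
WHERE  o.customer_id = 42;

The output is essentially one row per table in the query (id, select_type, table, type, possible_keys, key, key_len, ref, rows, Extra), with the rows column being a dictionary estimate rather than a measured figure, and nothing like the cost breakdown or plan tree you get from Oracle's DBMS_XPLAN.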

MySQL is a popular relational database. The free version is probably what a small website or a start-up needs. Having said that, many sites outgrow MySQL and still stay with it.

Oracle will probably turn it into a serious database too. Adding partitioning and multi-threading in recent releases is a step towards becoming an enterprise-scale database. I don't know much about the MySQL Cluster version, and MySQL Replication, I know, takes load off the reads. I want to see it doing more performance tuning science.

Top tools with MySQL that I used

MySQL Workbench - SQL IDE.
mydumper - Fast logical backup and restore.
mylvmbackup - Hot backup script.
Pentaho Kettle (PDI) - An all-round data integration and middleware tool.



Categories: DBA Blogs

Richard Foote’s Art Work (Art Decade)

Richard Foote - Mon, 2014-05-05 00:30
Just a quick note to say I have a new tab on the blog called “Richard Foote’s Art” where I’ll periodically upload a new piece of my art work. As the page says, they highlight both what’s in my mind (so beware) and how I like to relax in the evenings. All pieces are for […]
Categories: DBA Blogs

start python learning with "Introduction to Python"

Surachart Opun - Sat, 2014-05-03 23:53
Python is a programming language that supports object-oriented, imperative and functional programming. Its key strengths are simplicity, being an easy language to learn, and letting you move code from development to production more quickly. It's a powerful tool to use with Big Data. So, I believe it's a good time to learn the Python programming language.
You can find many resources about it on the Internet. I started learning Python by watching "Introduction to Python" by Jessica McKellar. It's a good learning video to help you get started with Python.
It gives a lot of Python examples and is easy to learn from. If you would like to start Python programming on your own, start with this and learn to:
- Set up a development environment with Python and a text editor
- Explore basic data types such as integers, strings, lists, and dictionaries
- Learn how looping lets you do lots of work with a little bit of code
- Gain access to more functionality in Python with modules
- Practice reading, writing, and running your first Python programs
- Navigate the command line for writing larger programs
- Write your own functions for encapsulating useful work
- Use classes to group, name, and reuse functions and variables
- Practice what you’ve learned with the state capitals quizzer and Scrabble cheater projects
This video course should give you what you need to get started with Python.
While studying, you can follow along with the practice material at the link. Written By: Surachart Opun http://surachartopun.com
Categories: DBA Blogs

Log Buffer #369, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-05-02 08:23

The Log Buffer carries on with the sparkling blog posts gathered from all across Oracle, SQL Server, and MySQL.
Oracle:

The VISIBILITY column in USER_INDEXES is used to check that each change has worked.

Oracle Utilities Application Framework V4.2.0.2.0 has been released with Oracle Utilities Customer Care and Billing V2.4.0.2.0.

There has been a good deal of activity in the Enhancement Request (ER) Community which is being piloted in Procurement.

A personalized customer experience is already beginning to overtake price and product as the key brand differentiator.

A common request is to produce query zones which are aligned with the base product zones.

SQL Server:

How to centralize your SQL Server Event Logs.

Use Extended Events to Get More Information About failed SQL Server Login Attempts.

DAX: Topn is not a Filter

Sometimes, you just want to do a search in a SQL Server database as if you were using a search engine like Google.

Stairway to SQL PowerShell Level 6: PowerShell Modules.

MySQL:

Name Change: MySQL Connect Becomes “MySQL Central @ OpenWorld”.

How to identify and cure MySQL replication slave lag.

MySQL, the original brand, the one developed by the MySQL team at Oracle, is steadily evolving.

Labor Day: Let’s upgrade a Galera Cluster.

Upgrading temporal columns from MySQL 5.5 to MySQL 5.6 format.

Categories: DBA Blogs

What to Consider When Selecting a Data Service Provider

Pythian Group - Fri, 2014-05-02 07:41

Today’s blog post completes our series dedicated to data infrastructure outsourcing, with excerpts from our latest white paper. In the first two posts, we discussed the growing trend toward data outsourcing, and conducted a high level analysis of the strengths and weaknesses of the three sourcing options.

To protect the wellness of an organization’s data infrastructure, there are several criteria that should be evaluated. One critical element is the quality of talent.

The more knowledge and skills team members can bring to the table, the more they can add value to any project they’re involved with. Both breadth and depth of experience are critical to ensure they have a solid understanding of the client’s environment and know how to get the most value out of it right from the start.

Top DBAs and infrastructure experts can command high fees. However, the higher short-term cost associated with investing in quality is more than made up for by the productivity and efficiency gains they deliver. For example, the more experience team members have with systems similar to a new client’s, the faster they can identify processes that can be easily automated, resulting in immediate cost savings and freeing them up to focus on more strategic activities.

To access the full list of criteria, download the rest of our white paper, Data Infrastructure Outsourcing.

Read the first blog post, The Growing Trend Toward Data Infrastructure Outsourcing.

Read the second blog post, Developing a Data Infrastructure Sourcing Strategy.

Categories: DBA Blogs

Epic fail: Flunked my 12c OCP upgrade test

Bobby Durrett's DBA Blog - Thu, 2014-05-01 16:16

Well, I took the Oracle 12c OCP upgrade exam this morning and didn’t pass it.  I’ve spent many hours over the past weeks and months reading up on new 12c features and it wasn’t enough.  I also installed 12c databases and tested out a variety of features to no avail.

Ultimately my downfall was that I tried to prepare for the test from the manuals and the descriptions of what the test covered.  Also, I purchased the Kaplan TestPrep software which only covered the first part of the 12c upgrade exam and only included 50 questions.

I should have waited until an upgrade exam book came out and I should have gotten the more comprehensive Transcender software so I would have a more complete set of questions and a better idea what would be covered.  I can’t study everything.

If you don’t know, the 12c OCP upgrade exam has a new “Key DBA Skills” section that wasn’t present in earlier exams.  You have to pass both sections.  The first section covers the new 12c features and corresponds to the previous exams.  I passed this section, although barely, even though I felt confident going in that I would get a high percentage.  The breadth of the topics covered by the second section worried me because I can’t study everything.  It definitely covered things I didn’t study, including some features I’ve never used.

Both parts of the exam were challenging.  It seems like a pretty tough test.  I’ve got my Oracle 7, 9, 10, and 11 certifications, and I passed all of those tests on the first try, so this is my first failure.  Now I’m trying to regroup and think about where to go from here.

Ideally, I’d like to get Sam Alapati’s book after it comes out on Amazon and get the Transcender software as well, but that costs some money.  Also, I’m thinking I need to take some time and write up some study sheets for myself instead of trying to commit everything to memory and hoping I remember it during the test.

Anyway, I thought I would share my failure with the community and hope it helps someone else prepare.  The truth is that even though it is embarrassing to fail the test I learned things in the process that I can use at my job.  It would be great to get the piece of paper and I hope to do so by the end of the year, but I’ve already learned a ton through what I’ve done so far.

- Bobby

Categories: DBA Blogs

Oracle Virtual Developer Day: Java 2014 - May 14th, 2014

Oracle recently launched Java 8 to the world (March 25, 2014), announcing the revolutionary release of the world's #1 development platform. It is the single largest upgrade ever to the programming...

Categories: DBA Blogs

Watch: Advanced Hadoop Security Features

Pythian Group - Wed, 2014-04-30 07:41

As a follow up to a previous video, Pythian CTO Alex Gorbachev gives an overview of the advanced security features within Hadoop.

Categories: DBA Blogs

going through Tom Kyte's book chapter by chapter

Grumpy old DBA - Tue, 2014-04-29 18:02
One of the (many varied) things going on at my workplace involves a lunch work/study group.  It probably would not surprise too many people to find out it was my idea to get this going.

We have a core group of about 8 of us, mostly DBA types but also a couple of developers.  Every two weeks we get together to discuss/review a chapter from the latest version of Tom Kyte's Expert Oracle Database Architecture book.

The idea is that each person is responsible for reading the upcoming chapter themselves (which, for a book as good as Tom Kyte's, really means reading it multiple times perhaps).  Someone volunteers or is drafted to be the "point person" who guides the discussion of the chapter ... and you rotate through those assignments.  At a chapter every two weeks, well, geez, it does take a long time to get through a book, so that's a downside.

But think about it ... should you do something similar?

After we get through Tom's book the next one up will be Cary Millsap's "Optimizing Oracle Performance" ... ( I think ) ...
Categories: DBA Blogs

Diagnostic and Tuning Pack Components

DBA Scripts and Articles - Tue, 2014-04-29 14:32

I was looking today for the list of the Diagnostic and Tuning pack components, to be sure not to use licensed views in databases that don’t have those packs. It was not an easy search, so I decided to share it with you; if you get lucky you’ll find this page and it will [...]

The post Diagnostic and Tuning Pack Components appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs

Watch: Hadoop in the Cloud

Pythian Group - Tue, 2014-04-29 08:01

The Pythian team has received many questions about big data in the cloud, and specifically about Hadoop. Pythian CTO Alex Gorbachev shares some of his recommendations in our latest video:

Categories: DBA Blogs

12c : PDB cannot share CDB’s temporary tablespace

Oracle in Action - Tue, 2014-04-29 00:19


As per the Oracle 12c documentation, a PDB can

- have its own local temporary tablespace, or

- if it does not have its own temporary tablespace, it can share the temporary tablespace with the CDB.

To demonstrate a PDB sharing the temporary tablespace of the CDB, the first step is to create a PDB without a temporary tablespace. By default, when a PDB is created from the seed, it is created with its own local temporary tablespace TEMP, which cannot be dropped because it is the default temporary tablespace of the PDB.

So, the only options I could think of were to

Method – I

  • Create a non-CDB without a temporary tablespace and then plug it into a CDB

Method – II

  • Create a non-CDB / PDB with a temporary tablespace
  • Generate its XML file using DBMS_PDB
  • Edit the XML file to remove the entry for the temporary tablespace
  • Plug the non-CDB into a CDB

I will demonstrate the second method. For this purpose, I created a non-CDB orcl2 using DBCA so that it contained the default temporary tablespace TEMP.

– Open the non-CDB in read only mode

ORCL2> shu immediate;
             startup mount;
             alter database open read only;

– Generate an XML metadata file for the non-CDB

ORCL2>exec dbms_pdb.describe ('/u01/app/oracle/oradata/orcl2/orcl2.xml');

– Edit the xml file to remove the entry for temp tablespace

[oracle@em12 ~]$ vi /u01/app/oracle/oradata/orcl2/orcl2.xml

– Use the xml file to plug in the non-CDB into  CDB2 as PDB_ORCL2

CDB$ROOT@CDB2>create pluggable database pdb_orcl2 using '/u01/app/oracle/oradata/orcl2/orcl2.xml'  nocopy;

– Connect to PDB_ORCL2 and run the “$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql” script to clean up the new PDB, removing any items that should not be present in a PDB.

CDB$ROOT@CDB2>connect sys/oracle@pdb_orcl2 as sysdba

PDB_ORCL2@CDB2> @?/rdbms/admin/noncdb_to_pdb

PDB_ORCL2@CDB2> alter pluggable database open;

– Check that data from non-CDB is available in the new PDB

PDB_ORCL2@CDB2>select count(empno) from scott.emp;

COUNT(EMPNO)
------------
14

– Verify that temporary tablespace has not been created in PDB_ORCL2

PDB_ORCL2@CDB2> select tablespace_name from dba_tablespaces;

TABLESPACE_NAME
------------------------------
SYSTEM
SYSAUX
USERS
EXAMPLE

– Verify that tempfile has not been created in PDB_ORCL2

PDB_ORCL2@CDB2> select name from v$tempfile;

no rows selected

So, I was able to create a PDB without a temporary tablespace. Now I wanted to check whether PDB_ORCL2 uses the temp tablespace of the CDB.

– First check that default temporary tablespace TEMP exists for the CDB

CDB$ROOT@CDB2> select tablespace_name from dba_tablespaces;

TABLESPACE_NAME
------------------------------
SYSTEM
SYSAUX
UNDOTBS1
TEMP
USERS

CDB$ROOT@CDB2> select PROPERTY_NAME, PROPERTY_VALUE
               from database_properties
               where upper(PROPERTY_NAME) like '%TEMP%';

PROPERTY_NAME                  PROPERTY_VALUE
------------------------------ ---------------
DEFAULT_TEMP_TABLESPACE        TEMP

– Set pga_aggregate_target to its lowest allowed value of 10m to force a sort to spill to the temporary tablespace.

 CDB$ROOT@CDB2> alter system set pga_aggregate_target=10m;

– Issue a query in PDB_ORCL2 which will spill to temporary tablespace

PDB_ORCL2@CDB2> select * from dba_objects order by 1,2,3,4,5,6,7;
select * from dba_objects order by 1,2,3,4,5,6,7
*
ERROR at line 1:
ORA-00959: tablespace 'TEMP' does not exist

I expected the query to use the temporary tablespace TEMP of the CDB, but it failed: the PDB expected a TEMP tablespace to exist locally, because its default temporary tablespace property is still set to TEMP (as it was in the non-CDB).

PDB_ORCL2@CDB2> col property_name for a30
PDB_ORCL2@CDB2> col property_value for a15
PDB_ORCL2@CDB2> l
1  select PROPERTY_NAME, PROPERTY_VALUE from database_properties
2*  where upper(PROPERTY_NAME) like '%TEMP%'
PDB_ORCL2@CDB2> /

PROPERTY_NAME                  PROPERTY_VALUE
------------------------------ ---------------
DEFAULT_TEMP_TABLESPACE        TEMP

 – I tried to modify the default temporary tablespace to cdb$root:temp but that attempt also failed.

PDB_ORCL2@CDB2> alter database default temporary tablespace cdb$root:temp;
alter database default temporary tablespace cdb$root:temp
*
ERROR at line 1:
ORA-00933: SQL command not properly ended

Hence, I was able to create a PDB without a temporary tablespace, but I could not make it use the temporary tablespace of the CDB.
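
For completeness, the practical way out (not what I set out to prove, just what makes the PDB usable again) is to give PDB_ORCL2 its own temporary tablespace, along these lines, with the tempfile path and size being placeholders:

PDB_ORCL2@CDB2> create temporary tablespace temp
                tempfile '/u01/app/oracle/oradata/orcl2/pdb_orcl2_temp01.dbf'
                size 100m autoextend on;

Since the PDB's DEFAULT_TEMP_TABLESPACE property already points to TEMP, the failing sort should then find its temporary space locally.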

References:

Oracle documentation

———————————————————————————————————————–

Related Links:

Home

Database 12c Index


The post 12c : PDB cannot share CDB’s temporary tablespace appeared first on ORACLE IN ACTION.

Categories: DBA Blogs

Watch: In-Memory Option for Oracle 12c

Pythian Group - Mon, 2014-04-28 08:04

Back in September at Oracle OpenWorld 2013, Larry Ellison announced the Oracle Database In-Memory Option for Oracle 12c. Today, one of Pythian’s Advanced Technology Consultants, Christo Kutrovsky, shares his thoughts on the new feature in our latest video. Stay tuned for updates while it’s being tested out in the field.

Categories: DBA Blogs