Skip navigation.

DBA Blogs

Everyone Should Have Firefighter Mentality

Pythian Group - Wed, 2014-05-21 07:43

Australia, being one of the most arid countries in the world, is known for its bushfires. The part of Australia where I live doesn’t have many bushfires, but we do have our share of building fires, like any other part of the world.

Firefighting is one occupation many kids dream of when they grow up. Kids are inspired by the shiny, clean, smart appearance of the firefighter crews passing majestically, sirens wailing, in their immaculately clean trucks.

While volunteering at one of the fire stations in my suburb on a quiet day, I found the firefighters doing nothing but polishing their trucks, cleaning their gear, making sure everything was in optimal order, and waiting on their toes for the next emergency.

No matter what field you are in or what profession you follow, no matter which step of the corporate ladder you are on, and whether you are a full-time employee, a contractor, or a business owner, it is a fact that there are ample quiet times: small chunks of hushed, still periods during each workday.

Those still moments are the “firefighter” time. Don’t let that time go to waste. Clean your desk and your tools. Polish your skills, and think about yourself and your career. Identify your areas for improvement, and grow your network. Read, write, or help out others.

In other words, get ready for that fire.

Categories: DBA Blogs

New OTN Tour

Hans Forbrich - Mon, 2014-05-19 11:51
For those of you who watch the OTN Tours, here is a list of the big ones this year:

Feb 2014 - OTN Yathra by AIOUG (India) ... http://otnyathra.com/
May 2014 - OTN MENA by ARAB OUG ... http://www.otnmenatour.org/
August 2014 - OTN LAD North
August 2014 - OTN LAD South
November 2014 - OTN APAC

I was part of OTN Yathra (fantastic time again, thanks Murali), and have my name in the hat for LAD and APAC.  Unfortunately MENA conflicts with other scheduled events.
Categories: DBA Blogs

HA of Database Control for RAC made easy

The Oracle Instructor - Mon, 2014-05-19 09:11

When you install an 11g RAC database without Grid Control or Cloud Control present, this is what the DBCA will give you:

There is only one Database Control OC4J container, running on host01. Should host01 go down, Enterprise Manager is no longer available. We could make it a resource known to the clusterware and let it fail over in that case. But, even easier, we can start a second OC4J container to run on host02 simultaneously, like this:

Let’s see how to implement that:

 

[oracle@host01 ~]$ emca -reconfig dbcontrol -cluster -EM_NODE host02 -EM_NODE_LIST host02

STARTED EMCA at May 14, 2014 5:16:14 PM
EM Configuration Assistant, Version 11.2.0.3.0 Production
Copyright (c) 2003, 2011, Oracle.  All rights reserved.

Enter the following information:
Database unique name: orcl
Service name: orcl
Do you wish to continue? [yes(Y)/no(N)]: yes
May 14, 2014 5:16:26 PM oracle.sysman.emcp.EMConfig perform
INFO: This operation is being logged at /u01/app/oracle/cfgtoollogs/emca/orcl/emca_2014_05_14_17_16_14.log.
May 14, 2014 5:16:29 PM oracle.sysman.emcp.util.DBControlUtil stopOMS
INFO: Stopping Database Control (this may take a while) ...
May 14, 2014 5:16:34 PM oracle.sysman.emcp.EMAgentConfig performDbcReconfiguration
INFO: Propagating /u01/app/oracle/product/11.2.0/dbhome_1/host02_orcl/sysman/config/emd.properties to remote nodes ...
May 14, 2014 5:16:34 PM oracle.sysman.emcp.util.DBControlUtil startOMS
INFO: Starting Database Control (this may take a while) ...
May 14, 2014 5:17:33 PM oracle.sysman.emcp.EMDBPostConfig performDbcReconfiguration
INFO: Database Control started successfully
May 14, 2014 5:17:34 PM oracle.sysman.emcp.EMDBPostConfig showClusterDBCAgentMessage
INFO:
****************  Current Configuration  ****************
 INSTANCE            NODE           DBCONTROL_UPLOAD_HOST
----------        ----------        ---------------------

orcl              host01              host01.example.com
orcl              host02              host02.example.com

Enterprise Manager configuration completed successfully
FINISHED EMCA at May 14, 2014 5:17:34 PM
[oracle@host01 ~]$ emctl status dbconsole
Oracle Enterprise Manager 11g Database Control Release 11.2.0.3.0
Copyright (c) 1996, 2011 Oracle Corporation.  All rights reserved.

https://host01.example.com:1158/em/console/aboutApplication

Oracle Enterprise Manager 11g is running.
------------------------------------------------------------------
Logs are generated in directory /u01/app/oracle/product/11.2.0/dbhome_1/host01_orcl/sysman/log

Not only can I access Database Control at host01 as usual, I can also get it at host02 now:

[oracle@host01 ~]$ ssh host02
Last login: Wed May 14 10:50:32 2014 from host01.example.com
[oracle@host02 ~]$ emctl status dbconsole
Oracle Enterprise Manager 11g Database Control Release 11.2.0.3.0
Copyright (c) 1996, 2011 Oracle Corporation.  All rights reserved.
https://host02.example.com:1158/em/console/aboutApplication
Oracle Enterprise Manager 11g is running.
------------------------------------------------------------------
Logs are generated in directory /u01/app/oracle/product/11.2.0/dbhome_1/host02_orcl/sysman/log
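
A quick way to convince yourself that this buys you availability is to stop the console on one node and check the other; a minimal sketch reusing the hostnames from above (output abbreviated):

[oracle@host01 ~]$ emctl stop dbconsole
...
[oracle@host02 ~]$ emctl status dbconsole
...
Oracle Enterprise Manager 11g is running.

The browser can then simply be pointed at https://host02.example.com:1158/em instead.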

All this is of course not new, but you won’t find it easily in the docs. It is something from my accelerated RAC course last week in Reading, by the way. Even seasoned RAC DBAs are sometimes not aware of this option, so I thought it might be helpful to publish it here briefly :-)


Tagged: Database Control, High Availability, Oracle Enterprise Manager, RAC
Categories: DBA Blogs

Partitions and Segments and Data Objects

Hemant K Chitale - Mon, 2014-05-19 07:47
Have you ever wondered how Partitions are mapped to Segments?  How does Oracle identify the "data object" (as distinct from the logical object) that maps to the Segment for an Object?

[Why does Oracle differentiate between "object_id" and "data_object_id"?  An object may be created without a Segment.  An existing Segment for an object (e.g. a Table) may be recreated (e.g. by an ALTER TABLE tablename MOVE command), thus changing its "data_object_id" without changing its "object_id".]

For a Partitioned Table, every Partition is an Object.  But (beginning with 11.2.0.2 and "deferred_segment_creation" behaviour), the Segment is created only when the Partition is populated with one or more rows.  What happens when a Partition is SPLIT ?

Here is a simple demonstration with some notes :


SQL> drop table test_partitioned_table purge;

Table dropped.

SQL>
SQL> -- create test table with 5+1 partitions
SQL> create table test_partitioned_table
2 (
3 id_column number,
4 data_column varchar2(15)
5 )
6 partition by range (id_column)
7 (
8 partition p_100 values less than (101),
9 partition p_200 values less than (201),
10 partition p_300 values less than (301),
11 partition p_400 values less than (401),
12 partition p_500 values less than (501),
13 partition p_max values less than (MAXVALUE)
14 )
15 /

Table created.

SQL>
SQL> -- populate the first 4 partitions
SQL> insert into test_partitioned_table
2 select rownum, to_char(rownum)
3 from dual
4 connect by level < 379
5 /

378 rows created.

SQL>
SQL> -- identify the segments that did get created
SQL> -- note : Use DBA_SEGMENTS as HEADER_% information is not available in USER_SEGMENTS
SQL> select partition_name, header_file, header_block
2 from dba_segments
3 where owner = 'HEMANT'
4 and segment_name = 'TEST_PARTITIONED_TABLE'
5 and segment_type = 'TABLE PARTITION'
6 order by partition_name
7 /

PARTITION_NAME HEADER_FILE HEADER_BLOCK
------------------------------ ----------- ------------
P_100 11 34449
P_200 11 35473
P_300 11 36497
P_400 11 38417

SQL>
SQL> -- identify the objects
SQL> -- use the DBA_OBJECTS view for consistency with previous query on DBA_SEGMENTS
SQL> select subobject_name, object_id, data_object_id
2 from dba_objects
3 where owner = 'HEMANT'
4 and object_name = 'TEST_PARTITIONED_TABLE'
5 and object_type = 'TABLE PARTITION'
6 order by subobject_name
7 /

SUBOBJECT_NAME OBJECT_ID DATA_OBJECT_ID
------------------------------ ---------- --------------
P_100 114541 114541
P_200 114542 114542
P_300 114543 114543
P_400 114544 114544
P_500 114545 114545
P_MAX 114546 114546

6 rows selected.

SQL>


Notice how, although there are 6 Partitions and 6 Objects, there are only 4 Segments. Only the first 4 Partitions, which have rows in them, have Segments associated with them.  Pay attention to the (HEADER_FILE, HEADER_BLOCK) and DATA_OBJECT_ID values as I proceed to manipulate the Partitions.


SQL> -- split the first partition
SQL> alter table test_partitioned_table
2 split partition p_100
3 at (51)
4 into (partition p_50, partition p_100)
5 /

Table altered.

SQL>
SQL> -- identify the segments
SQL> select partition_name, header_file, header_block
2 from dba_segments
3 where owner = 'HEMANT'
4 and segment_name = 'TEST_PARTITIONED_TABLE'
5 and segment_type = 'TABLE PARTITION'
6 and partition_name in ('P_50','P_100')
7 order by partition_name
8 /

PARTITION_NAME HEADER_FILE HEADER_BLOCK
------------------------------ ----------- ------------
P_100 11 40465
P_50 11 39441

SQL>
SQL> -- identify the objects
SQL> select subobject_name, object_id, data_object_id
2 from dba_objects
3 where owner = 'HEMANT'
4 and object_name = 'TEST_PARTITIONED_TABLE'
5 and object_type = 'TABLE PARTITION'
6 and subobject_name in ('P_50','P_100')
7 order by subobject_name
8 /

SUBOBJECT_NAME OBJECT_ID DATA_OBJECT_ID
------------------------------ ---------- --------------
P_100 114541 114548
P_50 114547 114547

SQL>


Notice how Partition P_50 gets a new OBJECT_ID and DATA_OBJECT_ID (above the highest then-existing values). But did you notice that the (HEADER_FILE, HEADER_BLOCK) pair and the DATA_OBJECT_ID for Partition P_100 are completely new values? Oracle has created a *new* Segment for Partition P_100 and discarded the old one. The SPLIT operation has created two *new* Segments and removed the old Segment for Partition P_100. What does this also mean?  Oracle had to actually rewrite all 100 rows in that Partition in the process of creating two new Segments.  Let me say that again: Oracle had to rewrite all 100 rows.
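
If you want to see that rewrite for yourself, the session statistics give a hint. This check is a sketch and not part of the original demonstration (values will vary with block size and row volume): run it before and after the SPLIT and compare the 'redo size'.

select sn.name, st.value
from v$mystat st, v$statname sn
where st.statistic# = sn.statistic#
and sn.name = 'redo size'
/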

Let me proceed with another test.


SQL> -- insert one row into the 5th partition
SQL> insert into test_partitioned_table
2 select 450, to_char(450) from dual
3 /

1 row created.

SQL>
SQL> -- identify the segment
SQL> select partition_name, header_file, header_block
2 from dba_segments
3 where owner = 'HEMANT'
4 and segment_name = 'TEST_PARTITIONED_TABLE'
5 and segment_type = 'TABLE PARTITION'
6 and partition_name = 'P_500'
7 order by partition_name
8 /

PARTITION_NAME HEADER_FILE HEADER_BLOCK
------------------------------ ----------- ------------
P_500 11 34449

SQL>
SQL> -- identify the object
SQL> select subobject_name, object_id, data_object_id
2 from dba_objects
3 where owner = 'HEMANT'
4 and object_name = 'TEST_PARTITIONED_TABLE'
5 and object_type = 'TABLE PARTITION'
6 and subobject_name = 'P_500'
7 order by subobject_name
8 /

SUBOBJECT_NAME OBJECT_ID DATA_OBJECT_ID
------------------------------ ---------- --------------
P_500 114545 114545

SQL>


Inserting a row into Partition P_500 has created a Segment (which did not exist earlier).  [Coincidentally, it has actually "reused" the one extent that earlier belonged to Partition P_100 -- look at the (HEADER_FILE, HEADER_BLOCK) pair -- and had become a free extent for a while.  I say "coincidentally" because if there had been some other extent allocation for this Table or any other Table or Index in the same tablespace, that free extent could have been reused by another Partition / Table / Index.]  The DATA_OBJECT_ID had already been allocated to the Partition when the Table was created, so this does not change.
I have deliberately inserted a "boundary" value of 450 into this Partition.  This will be the highest value allowed in Partition P_450 after the split.  I will now proceed to split the Partition along this boundary.


SQL> -- split the 5th partition
SQL> -- now p_450 will have the 1 row and p_500 no rows
SQL> alter table test_partitioned_table
2 split partition p_500
3 at (451)
4 into (partition p_450, partition p_500)
5 /

Table altered.

SQL>
SQL> -- identify the segments
SQL> select partition_name, header_file, header_block
2 from dba_segments
3 where owner = 'HEMANT'
4 and segment_name = 'TEST_PARTITIONED_TABLE'
5 and segment_type = 'TABLE PARTITION'
6 and partition_name in ('P_450','P_500')
7 order by partition_name
8 /

PARTITION_NAME HEADER_FILE HEADER_BLOCK
------------------------------ ----------- ------------
P_450 11 34449
P_500 11 41489

SQL>
SQL> -- identify the objects
SQL> select subobject_name, object_id, data_object_id
2 from dba_objects
3 where owner = 'HEMANT'
4 and object_name = 'TEST_PARTITIONED_TABLE'
5 and object_type = 'TABLE PARTITION'
6 and subobject_name in ('P_450','P_500')
7 order by subobject_name
8 /

SUBOBJECT_NAME OBJECT_ID DATA_OBJECT_ID
------------------------------ ---------- --------------
P_450 114549 114545
P_500 114545 114550

SQL>


Now, isn’t that interesting? Partition P_450 has "inherited" the (HEADER_FILE, HEADER_BLOCK) and DATA_OBJECT_ID of what was Partition P_500 earlier. What has happened is that the Segment for Partition P_500 has become the Segment for Partition P_450, while a *new* Segment (and DATA_OBJECT_ID) has been created for Partition P_500. Effectively, the physical entity (Segment and Data Object) for Partition P_500 has been "reused" for Partition P_450, while Partition P_500 has been "born again" in a new incarnation. This SPLIT (unlike the earlier one) resulted in only 1 new Segment (and Data Object).  The existing row remained in the existing Segment without being rewritten.  The new Segment was created for the now-"empty" Partition P_500.

For further reading, I suggest that you read about "fast split" operations under "Optimizing SPLIT PARTITION and SPLIT SUBPARTITION Operations" in the documentation.

SQL> select * from test_partitioned_table partition (p_450);

ID_COLUMN DATA_COLUMN
---------- ---------------
450 450

SQL> select * from test_partitioned_table partition (p_500);

no rows selected

SQL>

There you can see that the row is in Partition P_450, whose physical extent is the same as before.


Note : In my demonstration, each Segment is only 1 Extent.

SQL> l
1 select partition_name, extents
2 from dba_segments
3 where owner = 'HEMANT'
4 and segment_name = 'TEST_PARTITIONED_TABLE'
5 and segment_type = 'TABLE PARTITION'
6* order by partition_name
SQL> /

PARTITION_NAME EXTENTS
------------------------------ ----------
P_100 1
P_200 1
P_300 1
P_400 1
P_450 1
P_50 1
P_500 1

7 rows selected.

SQL>

In real life, you may well be dealing with Segments that have multiple Extents.

Another takeaway from the query on DBA_SEGMENTS is that (OWNER, SEGMENT_NAME) is not the identifying Key for a Segment.  In fact, for a Partitioned table there is *NO* Segment for the Table itself; Segments exist only for the Table Partitions.  The query on DBA_SEGMENTS must be on (OWNER, SEGMENT_NAME, PARTITION_NAME) with SEGMENT_TYPE = 'TABLE PARTITION'.
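
To tie the two dictionary views together, you can outer-join them on exactly that key, so that Partitions without Segments still show up; a sketch using the same owner and table as in the demonstration:

select o.subobject_name, o.object_id, o.data_object_id,
       s.header_file, s.header_block
from dba_objects o, dba_segments s
where s.owner (+) = o.owner
and s.segment_name (+) = o.object_name
and s.partition_name (+) = o.subobject_name
and o.owner = 'HEMANT'
and o.object_name = 'TEST_PARTITIONED_TABLE'
and o.object_type = 'TABLE PARTITION'
order by o.subobject_name
/

A Partition that has no Segment yet (P_MAX here) shows NULLs in the segment columns.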


Categories: DBA Blogs

Interesting info-graphics on Data-center / DB-Manageability

Pankaj Chandiramani - Mon, 2014-05-19 04:21


 Interesting info-graphics on Data-center / DB-Manageability



Categories: DBA Blogs

last post on GLOC 2014 conference ... new technical stuff coming soon

Grumpy old DBA - Sun, 2014-05-18 18:25
One of the presentations I am going to work on next will be in the area where SQL Plan Management interacts and intersects with AWR information.  At GLOC 2014 Craig Martin had a very nice one that is kick-starting my interest in coming up with something relevant / worthwhile.

For the conference itself, it was an epic "best ever" kind of event, one that made you feel both valued and valuable despite all the hard work, and there was plenty of that ... it was above and beyond what we were hoping for.  All that was achieved through a combination of top speakers and great workshops.

Somehow the conference even did well when, at the last moment, our keynote speaker for the final day of the event scratched on us.  Through a run of bad luck and problems in Chicago, Steven Feuerstein was unable to fly into Cleveland.  After 15 years of doing events, this was our first time a keynote speaker did not make it.  Yeah I know, we shoulda had a contingency plan in place.

Ric Van Dyke from Hotsos stepped up to the plate and delivered an exceptional keynote in Steven's place.  Thank you Ric!

A special thanks to all of our NEOOUG officers and our conference planning committee which also included several people outside of NEOOUG ... you know who you are and we could not have done this without you!

Finally, my appreciation for our conference chair Rumpi Gravenstein!  He is the guy, the visionary, who started this whole journey for us.  His energy, enthusiasm, and dedication are inspiring to see!
Categories: DBA Blogs

SQL Activity for the last hour

DBA Scripts and Articles - Sat, 2014-05-17 12:59

This script can be used to show the top 10 SQL activity for the last hour. It uses the v$active_session_history view to search for the top SQL by resource consumption. (The script itself, the result, and the active session history graph for the same period are shown in the original post.)
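
As a rough sketch of the idea (not necessarily the author's exact script), you can simply count ASH samples per SQL_ID over the past hour; note that querying v$active_session_history requires the Diagnostics Pack license:

select *
from (select sql_id, count(*) as ash_samples
      from v$active_session_history
      where sample_time > sysdate - 1/24
      and sql_id is not null
      group by sql_id
      order by count(*) desc)
where rownum <= 10;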


Categories: DBA Blogs

12c Online Table Redefinition Examples

Bobby Durrett's DBA Blog - Fri, 2014-05-16 16:22

I’ve been learning about online table redefinition in Oracle 12c.  Here is a zip of some test scripts I built for my own understanding: zip

I spent some time trying out DBMS_REDEFINITION on an 11.2 Exadata test system and then started messing with it on 12c in a generic Linux VM.  The 12c version of DBMS_REDEFINITION includes a new procedure called REDEF_TABLE, which lets you do in one step certain things that took multiple calls to the package in 11.2.  This is an online table compress on 12c:

BEGIN
  DBMS_REDEFINITION.REDEF_TABLE(
    uname                      => user,
    tname                      => 'TEST',
    table_compression_type     => 'COMPRESS');
END;
/

Things that can’t be done in one step like this require calls to procedures such as CAN_REDEF_TABLE, START_REDEF_TABLE, REGISTER_DEPENDENT_OBJECT, COPY_TABLE_DEPENDENTS, and FINISH_REDEF_TABLE.  Example online12c5.sql uses all of these; a skeleton of that flow appears after the file list below.  Here is a summary of each included file’s test:

online12c1.sql – compress table in one step

online12c2.sql – compress table in multiple steps and show that it creates a hidden column without a primary key

online12c3.sql – same as previous test but with primary key

online12c4.sql – copy constraints and indexes

online12c5.sql – change the columns for the non-unique index

online12c6.sql – change order and type of columns
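
For reference, the multi-step flow follows a fixed skeleton. The sketch below uses made-up names (a table TEST redefined through a pre-created interim table TEST_INT) and is not a copy of the scripts in the zip:

DECLARE
  v_errors PLS_INTEGER;
BEGIN
  -- check that the table qualifies for online redefinition (by primary key)
  DBMS_REDEFINITION.CAN_REDEF_TABLE(user, 'TEST',
    DBMS_REDEFINITION.CONS_USE_PK);
  -- start the redefinition: rows are copied into the interim table
  DBMS_REDEFINITION.START_REDEF_TABLE(user, 'TEST', 'TEST_INT',
    options_flag => DBMS_REDEFINITION.CONS_USE_PK);
  -- clone indexes, triggers, constraints and grants onto the interim table
  DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(user, 'TEST', 'TEST_INT',
    num_errors => v_errors);
  -- swap the two tables; TEST now has the interim table's structure
  DBMS_REDEFINITION.FINISH_REDEF_TABLE(user, 'TEST', 'TEST_INT');
END;
/

REGISTER_DEPENDENT_OBJECT comes in when you create a dependent object (say, an index) on the interim table yourself and want it mapped to the original instead of cloned by COPY_TABLE_DEPENDENTS.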

- Bobby

Categories: DBA Blogs

Log Buffer #371, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-05-16 07:56

Fueled by the massive growth of data and propelled by the mammoth future potential of its applications, bloggers across the database technologies are finding new things to explore in the light of previous knowledge. This Log Buffer Edition covers it all.

Oracle:

To configure an instrumentation watch, you first need to know what instrumentation is, and how to instrument applications or servers.

Why Choose to Deploy Agile PLM in the Cloud?

One of the things that makes JSF different from other approaches to server-side Java web development is that it offers a much higher level of abstraction and transparently maintains UI state.

Step by step instructions for setting up a development environment for doing development with Hippo CMS.

Oracle Solaris 11.2 at the Atlanta OpenStack Summit

SQL Server:

RAID and Its Impact on your SQL Performance.

Microsoft Azure Diagnostics Part 1: Introduction

Using Encrypted Backups in SQL Server 2014

A new plug in for Management Studio from Red Gate is free. It will give you access to all the scripts at SQLServerCentral, including your own briefcase.

Validate File Properties with PowerShell Prior to Import into SQL Server

MySQL:

Benchmark: SimpleHTTPServer vs pyclustercheck (twisted implementation)

Archival and Analytics – Importing MySQL data into Hadoop Cluster using Sqoop

Cross your Fingers for Tech14, see you at OSCON

New Tungsten Replicator 2.2.1 now available

MySQL May Newsletter is Available!

Categories: DBA Blogs

Consider speaking at #ukoug_tech14

The Oracle Instructor - Thu, 2014-05-15 10:11

The call for papers is still open for UKOUG Tech 14 – a great event to speak at for sure!

The board explicitly encourages first-time speakers and women to submit an abstract.

Neither applies to me, but I have submitted abstracts nonetheless :-)

I can say only the best things about the past annual conferences of the UK Oracle User Group: great speakers, very good conditions, and an excellent opportunity to get in touch with other Oracle enthusiasts.

So if you – yes, YOU! – are an expert in Oracle Core Technology, but hesitated so far to speak at public events about your topics, this might be the best time to get over it :-)


Tagged: #ukoug_tech14
Categories: DBA Blogs

(Slightly Off Topic) Spurious Correlations

Hemant K Chitale - Wed, 2014-05-14 09:33
In the course of our jobs, we find, discover, and analyze "data" and come up with "information".  Sometimes we find correlations and "discover" causes.  We say "Event 'A' caused Result 'X'".   However, it can so happen that some "discovered" correlations are not "causal correlations" --- i.e. "Event 'A' has NOT really caused Result 'X'".  The mathematical correlation (the "coefficient of correlation") may be high, but there really is no logical or physical association between the two.

Here are some examples of Spurious Correlations.

The next time you say "I find a high correlation between the two", stop and think.  For a study of common biases and fallacies, I recommend "The Art of Thinking Clearly" by Rolf Dobelli.




Categories: DBA Blogs

Oracle Passwords: How to Use Punctuation Symbols

Pythian Group - Wed, 2014-05-14 07:04

You can’t make a password strong enough, but at least you can try. With at least one upper case character, one lower case character, one number, one punctuation mark or symbol, and more than 8 characters, you have a password that can be considered decently safe. Probably.

In Oracle, if you embed punctuation marks within your passwords, you have to use single quotes around that password if you are using the orapwd utility from the command line. If you are altering the password from the sqlplus utility, you need to use double quotes around the password.

Examples of both are given below, changing the sys password:

From orapwd:

$ORACLE_HOME/dbs $ orapwd file=orapwtest password="My1s!pass" entries=40 force=y
-bash: !pass": event not found

$ORACLE_HOME/dbs $ orapwd file=orapwtest password='My1s!pass' entries=40 force=y

Changed successfully.

From sqlplus:

SQL> alter user sys identified by 'My1s!pass';
alter user sys identified by 'My1s!pass
*
ERROR at line 1:
ORA-00988: missing or invalid password(s)

SQL> alter user sys identified by "My1s!pass";

User altered.
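
As an aside, the "event not found" error above is bash history expansion firing on the ! inside double quotes. If you really want double quotes on the shell command line, one workaround (a sketch; single quotes remain the simpler option) is to disable history expansion first:

$ set +H
$ orapwd file=orapwtest password="My1s!pass" entries=40 force=y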

Enjoy!!!

Categories: DBA Blogs

Tackling the challenge of Provisioning Databases in an agile datacenter

Pankaj Chandiramani - Wed, 2014-05-14 01:03

One of the key tasks that a DBA performs repeatedly is provisioning of databases, which also happens to be one of the top 10 database challenges as per the IOUG Survey.

Most of the challenge comes in the form of either a lack of standardization or a long and error-prone process. This is where Enterprise Manager 12c can help, by making this a standardized process using profiles and lock-downs, plus role and access separation, where the lead DBA can lock certain properties of a database (like character set, Oracle Home location, SGA, etc.) and junior DBAs can't change those during provisioning. The image below describes the solution:



In short:



  • It's fast

  • It's easy

  • And you have complete control over the lifecycle of your dev and production resources.


I actually wanted to show step-by-step details on how to provision an 11204 RAC using the Provisioning feature of DBLM, but today I saw a great post by MaaZ Anjum that does the same, so I am going to refer you to his blog here:


Patch and Provision in EM12c: #5 Provision a Real Application Cluster Database


Other Resources:


Official Doc : http://docs.oracle.com/cd/E24628_01/em.121/e27046/prov_db_overview.htm#CJAJCIDA


Screen Watch : https://apex.oracle.com/pls/apex/f?p=44785:24:112210352584821::NO:24:P24_CONTENT_ID%2CP24_PREV_PAGE:5776%2C1


Others : http://www.oracle.com/technetwork/oem/lifecycle-mgmt-495331.html?ssSourceSiteId=ocomen



Categories: DBA Blogs

Create new cdb with two pdbs

Bobby Durrett's DBA Blog - Tue, 2014-05-13 17:38

Quick post.  I’m messing with 12c again.  I finished recreating my CDB, but this time found the option to create multiple PDBs during the initial creation.  I’m using DBCA.

PDF with screenshots of me creating a cdb with two pdbs
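
For the command-line inclined, DBCA silent mode exposes the same option. A sketch with placeholder names and passwords; the flags are as I understand them in 12.1, so verify against dbca -help:

dbca -silent -createDatabase \
  -templateName General_Purpose.dbc \
  -gdbname cdb1 -sid cdb1 \
  -createAsContainerDatabase true \
  -numberOfPDBs 2 -pdbName pdb \
  -pdbAdminPassword SomePassword1 \
  -sysPassword SomePassword1 -systemPassword SomePassword1

With -numberOfPDBs greater than 1, the -pdbName value acts as a prefix, so this should come up as pdb1 and pdb2.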

- Bobby

Categories: DBA Blogs

The Hadoop and the Hare

Pythian Group - Mon, 2014-05-12 07:38

I speak to a lot of people who are terribly concerned with “real-time”. Can data get into the warehouse in real-time? Can we record everything the user does on the site in real-time? Real-time is a magic phrase, because it really means “so fast that nobody cares”. In a business sense, it usually means that nobody has really thought about the latency requirements, so they’re going to push for the lowest latency possible. This is a dangerous habit, and you need to fight against it at every turn. Some of the best, cheapest, most scalable solutions will only be available when you accept that you can wait for an hour (or even hours!) to see some results.

Case in point, a client asking for a NoSQL store to hold user event data. It would have to handle millions of concurrent users, all generating dozens of events within minutes. They were (rightly) concerned about write performance and scaling this NoSQL store horizontally to handle the volume of writes. When asked why they needed the events, they didn’t know: they wanted to do exploratory analytics and see if there were interesting metrics to be mined. I proposed dumping the events on a queue – Kafka, Kinesis – or just writing logs for Flume to pick up. Rather than asking a data store to make all of these messages available for querying immediately, do things asynchronously, run some batch jobs to process data in Hadoop, and see if interesting data comes out. The Hadoop cluster and the ingestion pipeline is an order of magnitude less expensive and risky than the cool, real-time solution which confers no immediate benefits.

Maybe this client will decide that batch analytics are good enough, and they’ll stick with using Hive or Impala for querying. Maybe they discover a useful metric, or a machine learning technique they want to feed back into their user-facing application. Many specific metrics can be stored as simple counters per user, in a store like Redis, once they’ve been identified as valuable. Machine learning techniques are interesting because (besides cold start situations) they should be stable: the output shouldn’t change dramatically for small amounts of new data, so new information only needs to be considered periodically. In both cases, the requirements should be narrowed down as far as possible before deciding to invest in “real-time”.

In another case, a client presented the classic data warehousing problem: store large volumes of events with fine grain, and slice and dice them in every imaginable way. Once again, the concept of reducing latency came up. How quickly can we get events into the warehouse so people can start querying them? The answer is that we can make anything happen, but it would be needlessly expensive and would perform poorly. The main goal of the project was to provide historical reporting: there was no indication that clients would want to do the same kind of pivots and filtering on data only from the past minute. The low-latency application would be cool and interesting to develop, but it would be much simpler to find which reports users want to be low-latency, then precompute them and store them in a more appropriate way.

A more appropriate way is Dynamo, Cassandra, or your preferred NoSQL key-value store. Precompute the aggregates you know the user wants with very low latency, and store them keyed on time with a very fine grain: you get the benefit of high write throughput here, at the cost of very limited query complexity. Once the data is no longer interesting – it’s visible in the warehouse with much more grain along the various dimensions – then drop it from NoSQL.

Starting with a relatively slow, batch platform gives very high flexibility at a low cost, and with little development effort. Once your users – internal or clients – have explored the data set and started complaining about latency, then is the time to build specialized pipelines for those specific use cases. Since Kafka, Kinesis and Flume all support streaming computations as well as batch, these will serve as good connection points for your “real-time” pipeline.

Categories: DBA Blogs

Very proud of my peeps ... GLOC 2014 is rocking

Grumpy old DBA - Fri, 2014-05-09 17:59
At this time ... Friday night ... 2 1/2 days before the Great Lakes Oracle Conference 2014 (GLOC 2014) kicks off, things are rocking and rolling.

We now have a grand total of attendees and registrations sitting at 321 at the moment.  This is big for us ... top quality speakers / great workshops / most affordable conference ever.

So proud of everyone involved in bringing this conference together.  Lots of people to thank, but the number one person is Rumpi Gravenstein, our conference chair and the driving force behind this crazy idea that we should do this and could do this.  It takes a visionary to create a goal!

Still time to get here and take advantage of all that we have to offer.  Special thanks to all our workshop leaders and our great selection of speakers!



Categories: DBA Blogs

12c : PDB cannot share CDB’s temporary tablespace

Oracle in Action - Tue, 2014-04-29 00:19


As per the Oracle 12c documentation, a PDB can

- have its own local temporary tablespace, or

- if it does not have its own  temporary tablespace, it can share the temporary tablespace with the CDB.

To demonstrate a PDB sharing the temporary tablespace of the CDB, the first step is to create a PDB without a temporary tablespace. By default, when a PDB is created from the seed, it is created with its local temporary tablespace TEMP, which cannot be dropped as it is the default temporary tablespace of the PDB.

So, the only options I could think of were to

Method – I

  • Create a non-CDB without a temporary tablespace and then plug it into a CDB

Method – II

  • Create a non-CDB / PDB with a temporary tablespace
  • Generate its xml file using dbms_pdb
  • Edit the xml file to remove the entry for the temporary tablespace
  • Plug the non-CDB into a CDB

I will demonstrate the second method. For this purpose, I created a non-CDB orcl2 using DBCA, so that it contained the default temporary tablespace TEMP.

– Open the non-CDB in read only mode

ORCL2> shu immediate;
ORCL2> startup mount;
ORCL2> alter database open read only;

– Generate an XML metadata file for the non-CDB

ORCL2>exec dbms_pdb.describe ('/u01/app/oracle/oradata/orcl2/orcl2.xml');

– Edit the xml file to remove the entry for temp tablespace

[oracle@em12 ~]$ vi /u01/app/oracle/oradata/orcl2/orcl2.xml

– Use the xml file to plug in the non-CDB into  CDB2 as PDB_ORCL2

CDB$ROOT@CDB2>create pluggable database pdb_orcl2 using '/u01/app/oracle/oradata/orcl2/orcl2.xml'  nocopy;

– Connect to PDB_ORCL2 and run the “$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql” script to clean up the new PDB, removing any items that should not be present in a PDB.

CDB$ROOT@CDB2>connect sys/oracle@pdb_orcl2 as sysdba

PDB_ORCL2@CDB2> @?/rdbms/admin/noncdb_to_pdb

PDB_ORCL2@CDB2> alter pluggable database open;

– Check that data from non-CDB is available in the new PDB

PDB_ORCL2@CDB2>select count(empno) from scott.emp;

COUNT(EMPNO)
------------
14

– Verify that temporary tablespace has not been created in PDB_ORCL2

PDB_ORCL2@CDB2> select tablespace_name from dba_tablespaces;

TABLESPACE_NAME
------------------------------
SYSTEM
SYSAUX
USERS
EXAMPLE

– Verify that tempfile has not been created in PDB_ORCL2

PDB_ORCL2@CDB2> select name from v$tempfile;

no rows selected

So, I was able to create a PDB without a temporary tablespace. Now I wanted to check whether PDB_ORCL2 uses the temp tablespace of the CDB.

– First check that default temporary tablespace TEMP exists for the CDB

CDB$ROOT@CDB2> select tablespace_name from dba_tablespaces;

TABLESPACE_NAME
------------------------------
SYSTEM
SYSAUX
UNDOTBS1
TEMP
USERS

CDB$ROOT@CDB2> select PROPERTY_NAME, PROPERTY_VALUE
  2  from database_properties
  3  where upper(PROPERTY_NAME) like '%TEMP%';

PROPERTY_NAME                  PROPERTY_VALUE
------------------------------ ---------------
DEFAULT_TEMP_TABLESPACE        TEMP

– Set pga_aggregate_target to its lowest allowed value of 10m to force a sort to spill to the temporary tablespace.

 CDB$ROOT@CDB2> alter system set pga_aggregate_target=10m;

– Issue a query in PDB_ORCL2 which will spill to temporary tablespace

PDB_ORCL2@CDB2> select * from dba_objects order by 1,2,3,4,5,6,7;
select * from dba_objects order by 1,2,3,4,5,6,7
*
ERROR at line 1:
ORA-00959: tablespace 'TEMP' does not exist

I expected the query to use the temporary tablespace TEMP of the CDB, but it failed: the PDB expects a TEMP tablespace to exist locally, since its default temporary tablespace property is still set to TEMP (as it was in the non-CDB).

PDB_ORCL2@CDB2> col property_name for a30
PDB_ORCL2@CDB2> col property_value for a15
PDB_ORCL2@CDB2> l
1  select PROPERTY_NAME, PROPERTY_VALUE from database_properties
2*  where upper(PROPERTY_NAME) like '%TEMP%'
PDB_ORCL2@CDB2> /

PROPERTY_NAME                  PROPERTY_VALUE
------------------------------ ---------------
DEFAULT_TEMP_TABLESPACE        TEMP

– I tried to modify the default temporary tablespace to cdb$root:temp, but that attempt also failed.

PDB_ORCL2@CDB2> alter database default temporary tablespace cdb$root:temp;
alter database default temporary tablespace cdb$root:temp
*
ERROR at line 1:
ORA-00933: SQL command not properly ended

Hence, I was able to create a PDB without a temporary tablespace, but I could not make it use the temporary tablespace of the CDB.
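
The practical fix, if a PDB ends up without a temporary tablespace like this, is simply to give it a local one; a sketch (file name and size are placeholders):

PDB_ORCL2@CDB2> create temporary tablespace pdb_temp
  2  tempfile '/u01/app/oracle/oradata/orcl2/pdb_temp01.dbf' size 100m;

PDB_ORCL2@CDB2> alter pluggable database default temporary tablespace pdb_temp;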

References:

Oracle documentation


Categories: DBA Blogs

Nationwide Deploys Database Applications 600% Faster

Pankaj Chandiramani - Mon, 2014-04-28 03:37

Nationwide Deploys Database Applications 600% Faster





Heath Carfrey of Nationwide, a leading global insurance and financial services organization, discusses how Nationwide saves time and effort in database provisioning with Oracle Enterprise Manager.


Key points:



  1. Provisioning Databases using Profiles (aka Gold Images)
  2. Automated Patching
  3. Config/Compliance tracking




Categories: DBA Blogs

EMCLI setup

Pankaj Chandiramani - Mon, 2014-04-28 02:15

A quick note on how to install EMCLI, which is used for various CLI operations against EM. I was looking to test some database provisioning automation via EMCLI and thus needed to set it up.


EMCLI Setup

To set up EMCLI on the host, follow these steps:

1. Download emcliadvancedkit.jar from the OMS using the URL https://<omshost>:<omsport>/em/public_lib_download/emcli/kit/emcliadvancedkit.jar
2. Set your JAVA_HOME environment variable and ensure that it is part of your PATH. You must be running Java 1.6.0_43 or greater. For example:
   setenv JAVA_HOME /usr/local/packages/j2sdk
   setenv PATH $JAVA_HOME/bin:$PATH
3. Install EMCLI with the scripting option in any directory, either on the machine where the OMS is running or on any machine on your network (download emcliadvancedkit.jar to that machine):
   java -jar emcliadvancedkit.jar client -install_dir=<emcli client dir>
4. Run "emcli help sync" from the EMCLI home (the directory where you have installed emcli) for instructions on how to use the "sync" verb to configure the client for a particular OMS.
5. Navigate to the Setup menu, then Command Line Interface, and see the Enterprise Manager Command Line Tools Download page for details on setting up EMCLI.
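
Once the client is installed, it still has to be pointed at an OMS before any verbs will work. A sketch of that last step with placeholder URL and directories (emcli prompts for the password):

$ export PATH=<emcli client dir>:$PATH
$ emcli setup -url="https://<omshost>:<omsport>/em" -username=sysman -trustall -dir=<emcli client dir>
$ emcli sync
$ emcli get_targets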



Categories: DBA Blogs

12c: Connecting to CDB/PDB – Set Container Vs Connect

Oracle in Action - Thu, 2014-04-24 03:29


In Oracle 12c, you can connect to a PDB using two methods :

- Switch the container using Alter session set container …

- Use connect command to connect to PDB using network alias

Let’s compare the two methods :

The use of SET CONTAINER avoids the need to create a new connection from scratch.

If there is an existing connection to a PDB / CDB$root, the same connection can be used to connect to the desired PDB / CDB$root.

– Connect to CDB

[oracle@em12 ~]$ sqlplus system/oracle@cdb1

CDB$ROOT@CDB1> sho con_name

CON_NAME
------------------------------
CDB$ROOT

– Check the PID for the process created on the operating system

[oracle@em12 ~]$ ps -ef |grep LOCAL |grep -v grep

oracle 23271 1 0 10:23 ? 00:00:00 oraclecdb1 (LOCAL=NO)

– Change the container to PDB1 using Set container

CDB$ROOT@CDB1> alter session set container=pdb1;

sho con_name

CON_NAME
------------------------------
PDB1

– Check that the operating system PID remains the same, as the earlier connection is reused and a new connection has not been created

[oracle@em12 ~]$ ps -ef |grep LOCAL |grep -v grep

oracle 23271 1 0 10:23 ? 00:00:00 oraclecdb1 (LOCAL=NO)

– Switch the container back to cdb$root using connect

CDB$ROOT@CDB1> conn system/oracle@cdb1
sho con_name

CON_NAME
------------------------------
CDB$ROOT

– Check that a new operating system PID has been created, as a new connection has been created

[oracle@em12 ~]$ ps -ef |grep LOCAL |grep -v grep

oracle 23409 1 0 10:29 ? 00:00:00 oraclecdb1 (LOCAL=NO)
glogin.sql is not executed when Alter session set container is used

To demonstrate this, I have added the following lines to my glogin.sql to display the CDB/PDB name in the SQL prompt:

define gname=idle
column global_name new_value gname
set heading off
set termout off
col global_name noprint
select upper(sys_context ('userenv', 'con_name') || '@' || sys_context('userenv', 'db_name')) global_name from dual;
set sqlprompt '&gname> '
set heading on
set termout on

- Let’s connect to PDB1 using “Connect” and verify that glogin.sql is executed and the prompt displays the CDB/PDB name

 

SQL> conn sys/oracle@pdb1 as sysdba
PDB1@CDB1> 

- Verify that the prompt displays current container (PDB1) and container database (CDB1)

PDB1@CDB1> sho con_name
PDB1

PDB1@CDB1> sho parameter db_name
db_name                              string      cdb1

– Now let’s connect to PDB2 using Alter session set container and verify that glogin.sql is not executed and the same prompt as earlier is displayed

PDB1@CDB1> alter session set container=pdb2;

Session altered.

PDB1@CDB1> sho con_name

CON_NAME
------------------------------
PDB2

– Let’s connect to PDB2 using connect and verify that glogin.sql is executed, as the prompt displays the PDB name PDB2

PDB1@CDB1> connect sys/oracle@pdb2 as sysdba
PDB2@CDB1>
Pending transactions are not committed when Alter session set container is used

– Let’s start a transaction in PDB1

PDB1@CDB1> create table pdb1_tab(x number);
Table created.
PDB1@CDB1> insert into pdb1_tab values (1);
1 row created.

– Switch the container to PDB2

PDB1@CDB1> alter session set container=pdb2;

– Try to start another transaction in PDB2 – this is not allowed, as an active transaction exists in the parent container PDB1

PDB1@CDB1> create table pdb2_tab (x number);
create table pdb2_tab (x number)
*
ERROR at line 1:
ORA-65023: active transaction exists in container PDB1

– In another session check that the transaction was not committed and no rows are visible in table pdb1_tab

CDB$ROOT@CDB1> conn system/oracle@pdb1

PDB1@CDB1> select * from pdb1_tab;

no rows selected

Alter session set container cannot be used by local users

 

– Try to give the set container privilege to local user HR in PDB2 – this fails, as a common privilege cannot be granted to a local user, and hence a local user cannot use alter session set container to connect to another PDB

PDB2@CDB1> connect system/oracle@pdb2

PDB2@CDB1> grant set container to hr container=all;
grant set container to hr container=all
*
ERROR at line 1:
ORA-65030: one may not grant a Common Privilege to a Local User or Role
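
For contrast, the grant does work for a common user, because the privilege can then be granted commonly; a sketch (user name and password invented for illustration):

CDB$ROOT@CDB1> create user c##pdb_admin identified by oracle container=all;
CDB$ROOT@CDB1> grant create session, set container to c##pdb_admin container=all;
CDB$ROOT@CDB1> conn c##pdb_admin/oracle@cdb1
CDB$ROOT@CDB1> alter session set container=pdb1;

Session altered.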

I hope this post was useful.

Your comments and suggestions are always welcome.

References :

http://docs.oracle.com/cd/E16655_01/server.121/e17636/cdb_admin.htm#ADMIN13970



Categories: DBA Blogs