Feed aggregator

Follow @ORCL_Linux on Twitter

Sergio's Blog - Thu, 2010-05-27 02:06
We've created the following Twitter handles for those of you who like your Oracle Linux and virtualization news in micro chunks:

* [@ORCL_Linux](http://twitter.com/ORCL_linux)
* [@ORCL_virtualize](http://twitter.com/ORCL_virtualize)
Categories: DBA Blogs

Composite Interval Partitioning isn't as advertised.

Claudia Zeiler - Wed, 2010-05-26 18:37

Oracle® Database VLDB and Partitioning Guide 11g Release 1 (11.1) Part Number B32024-01 says:

Interval Partitioning

Interval partitioning is an extension of range partitioning which instructs the database to automatically create partitions of a specified interval when data inserted into the table exceeds all of the existing range partitions. You must specify at least one range partition.


You can create single-level interval partitioned tables as well as the following composite partitioned tables:

* Interval-range

* Interval-hash

* Interval-list

Sure, I can create these composite partitions, but the results aren't particularly useful. When I tried, Oracle spread my rows nicely between the two hash subpartitions of the manually defined partition, but put everything in the same subpartition for the interval-generated partition. Notice that these are identical sets of rows; the only difference is the partitioning key value that sends them to the manually specified partition or to the generated one. I assume that there is a Metalink note on this somewhere.

I got equivalent results for interval-list composite partitioning. I won't bore the reader with the step-by-step for that test, since the result is the same: all rows in the generated partitions are forced into one subpartition.
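
For reference, the interval-list table was shaped roughly like this (subpartition names and list values are illustrative, not my exact script):

SQL> create table interval_list (
       N  number,
       N2 number
     )
     partition by range (N) interval (2)
     SUBPARTITION BY LIST (N2)
     (partition p1 values less than (2)
        (SUBPARTITION s_1 VALUES (1, 3, 5, 7, 9, 11, 13, 15),
         SUBPARTITION s_2 VALUES (2, 4, 6, 8, 10, 12, 14)));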

Here are my results for the interval hash test:


SQL> create table interval_hash (
       N  number,
       N2 number
     )
     partition by range (N) interval (2)
     SUBPARTITION BY HASH (N2)
     (partition p1 values less than (2)
        (SUBPARTITION p_1,
         SUBPARTITION p_2));

Table created.

SQL> BEGIN
       FOR i IN 1 .. 15 LOOP
         INSERT INTO interval_hash VALUES (5, i);
         INSERT INTO interval_hash VALUES (0, i);
       END LOOP;
       COMMIT;
     END;
     /

PL/SQL procedure successfully completed.


SQL> EXEC DBMS_STATS.gather_table_stats(USER, 'INTERVAL_HASH', granularity=>'ALL');

PL/SQL procedure successfully completed.


SQL> SELECT table_name, partition_name, subpartition_name, num_rows
FROM user_tab_subpartitions
ORDER by table_name, partition_name, subpartition_name;

TABLE_NAME           PARTITION_NAME       SUBPARTITION_NAME      NUM_ROWS
-------------------- -------------------- -------------------- ----------
INTERVAL_HASH        P1                   P_1                           6
INTERVAL_HASH        P1                   P_2                           9
INTERVAL_HASH        SYS_P138             SYS_SUBP137                  15


SQL> select * from interval_hash subpartition(p_2) order by n2;

N N2
---------- ----------
0 1
0 3
0 4
0 7
0 9
0 10
0 12
0 14
0 15

9 rows selected.

SQL> select * from interval_hash subpartition(p_1) order by n2;

N N2
---------- ----------
0 2
0 5
0 6
0 8
0 11
0 13

6 rows selected.


SQL> select * from interval_hash subpartition(SYS_SUBP137) ORDER BY N2;

N N2
---------- ----------
5 1
5 2
5 3
5 4
5 5
5 6
5 7
5 8
5 9
5 10
5 11
5 12
5 13
5 14
5 15

15 rows selected.
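
For what it's worth, the single generated subpartition may be down to the lack of a subpartition template: p_1 and p_2 are defined only for partition p1, so Oracle seems to fall back to one default subpartition for each interval-generated partition. A variant worth trying (I haven't re-run the test this way) declares a SUBPARTITION TEMPLATE, which is what Oracle uses when it creates partitions automatically:

SQL> create table interval_hash_t (
       N  number,
       N2 number
     )
     partition by range (N) interval (2)
     SUBPARTITION BY HASH (N2)
     SUBPARTITION TEMPLATE (
       SUBPARTITION t_1,
       SUBPARTITION t_2
     )
     (partition p1 values less than (2));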

How to upgrade your Dell’s BIOS directly from Ubuntu

Java 2 Go! - Mon, 2010-05-24 23:27
I know this post is totally off topic but I faced this same issue last week and I’m pretty sure this will be very handy for a lot of people out there. So why not share it, right?! Many people...

This is a summary only. Please, visit the blog for full content and more.
Categories: Development

Micromanaging Memory Consumption

Java 2 Go! - Mon, 2010-05-24 22:34
by Eduardo Rodrigues As we all know, especially since Java 5.0, the JVM guys have been doing a good job and have significantly improved a lot of key aspects, especially performance and memory management,...

This is a summary only. Please, visit the blog for full content and more.
Categories: Development

That's a whole lot of partitions!

Claudia Zeiler - Mon, 2010-05-24 19:26
Playing with interval partitioning...
I create the simplest table possible and insert 3 rows - generating 3 partitions.

SQL> create table d1 (dt date)
2 partition by range (dt) interval (numtoyminterval(1,'MONTH'))
3 (PARTITION P1 VALUES LESS THAN (TO_DATE('08/01/1776', 'MM/DD/YYYY')));

Table created.


SQL> insert into d1 values (to_date('07/04/1776', 'MM/DD/YYYY'));

1 row created.

SQL> insert into d1 values (to_date('09/22/1862', 'MM/DD/YYYY'));

1 row created.

SQL> insert into d1 values (to_date('08/18/1920', 'MM/DD/YYYY'));

1 row created.


SQL> select * from d1;

DT
---------
04-JUL-76
22-SEP-62
18-AUG-20


SQL> select table_name, partition_name from user_tab_partitions where table_name = 'D1';

TABLE_NAME PARTITION_NAME
------------------------------ ------------------------------
D1 P1
D1 SYS_P62
D1 SYS_P63

But when I look at the partition_count in user_part_tables...

SQL> select table_name, partition_count from user_PART_TABLES where table_name = 'D1';


TABLE_NAME PARTITION_COUNT
------------------------------ ---------------
D1 1048575

That's a whole lot of partitions! 1,048,575 is 2^20 - 1, the maximum number of partitions a table can have, so clearly Oracle is storing the maximum possible rather than the actual count of partitions created. It's odd that the developers at Oracle chose to store that value there rather than the real number; they obviously have it available. Ah, the mysteries of the Oracle.
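
The real count is easy enough to get yourself, of course; counting rows in the same view queried above should do it:

SQL> select count(*) from user_tab_partitions where table_name = 'D1';

  COUNT(*)
----------
         3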

Traditional vs OLAP

Michael Armstrong-Smith - Fri, 2010-05-21 23:04
I have been following a very interesting thread on LinkedIn in the group called Data Warehouse & Business Intelligence Architects. The thread is discussing the pros and cons of OLAP as compared to more traditional methods of modeling. Personally I love these discussions. Here's what I recently said:

For me, probably an oldie in terms of these discussions, I have been working with modeling and data warehouses for coming up on 25 years. I find it very, very strange that for some reason the term OLAP gets pushed around as if it were the answer to everything. This is probably unfair to the technique, because it's actually been around in one form or another a lot longer than most people realise.

Long before the term was invented or, more to the point shall we say, the technique was discovered, documented and given a formal name, we have been able to model enormous data warehouses with enormous amounts of data. Databases with terabytes of data are not new.

If I'm following the thread correctly I see two schools of thought, one pushing OLAP as the bees' knees and one pushing relational modeling. As someone who entered this field not too many years after Dr. Edgar Codd was first touting his ideas to IBM, I can tell you that if a relational model is done correctly, with the right partitions, indexes and joins, I can design a data warehouse using traditional methods for far less money than most folks would have you believe it should cost.

I'm somewhat of a historian and I actually have in my possession a set of Dr. Codd's early drafts. It makes for fascinating reading. So to anyone who is not sold on the idea yet, I would urge you to read one of the many good books on the subject. You could do worse than to start with one of Ralph Kimball's books, but you might also want to look at Bill Inmon.

Personally, I don't adhere strictly to any of the fathers of data warehousing. I have read them all and I mix and match as the situation arises, replete with a little tangential leap from time to time, sometimes of faith but mostly based on experience. Oh yes, and occasionally I mix them all, you know, just for fun, because, after all, this is a beautiful world and we are in a beautiful profession and we have beautiful problems to solve.

So, what do you think? Are you a purist, a traditionalist or a modernist, somewhere in between, or an amalgam of all three?

Should we ban anonymity on the Internet?

Peter O'Brien - Fri, 2010-05-21 09:46
In an Information Security article a few months back, Bruce Schneier (author of Schneier on Security) and Marcus Ranum put some points forward for and against internet anonymity. I have to admit that I agree with Schneier and find Ranum's argument quite weak. He appears to suggest that the main reason to enforce identity is to avoid spam. The tools aren't great, but there are already mechanisms in place to address this. Criminals are always getting better at finding ways to exploit weaknesses in the internet technologies that are increasingly at the heart of the way we shop, interact, work, entertain and inform ourselves. We just have to keep up with the pace in this cat-and-mouse game. Sacrificing anonymity, and the right to privacy, is too great a cost for just avoiding emails about Viagra (tm) and Nigerian generals with a stash of cash to move out of the country.

What is the great danger of not being anonymous? Well, it's all the inferring that goes on from the facts gathered about the things you search for, shop for, chat about, view and listen to. These are then used to categorise you for advertising, or for inclusion in or exclusion from groups or activities. Netflix provided a great example of this last year: just weeks after the Netflix Prize contest began, two University of Texas researchers showed that with the Netflix data one could identify users and, in some cases, their political leanings and sexual orientation.

Getting back to Schneier's point, trying to implement a robust identification system, one which criminals cannot outwit or take advantage of, is simply not possible...
Mandating universal identity and attribution is the wrong goal. Accept that there will always be anonymous speech on the Internet. Accept that you'll never truly know where a packet came from. Work on the problems you can solve: software that's secure in the face of whatever packet it receives, identification systems that are secure enough in the face of the risks. We can do far better at these things than we're doing, and they'll do more to improve security than trying to fix insoluble problems.

Tech M&A deals of 2010

Vikas Jain - Wed, 2010-05-19 18:51
Here's some notable tech M&A activity through May 2010.

In Security space,
  • Oracle IdM adding identity analytics (OIA) to its portfolio through the broader Sun acquisition
  • Symantec enhancing encryption portfolio with PGP, GuardianEdge, and vulnerability assessment offering through Gideon Technologies
  • EMC's RSA Security Division acquired Archer Technologies for GRC across physical+virtual infrastructures
  • Trustwave acquired Intellitactics for SIEM to enhance PCI compliance offering, and BitArmor to enhance endpoint security offering
In Cloud computing space,
  • VMware seems to be building up a cloud PaaS platform, acquiring SpringSource (in 2009), and now Zimbra and Rabbit Technologies
  • CA acquired Nimsoft and 3Tera to manage cloud environments
  • Cisco acquired Rohati Systems for cloud security in Cisco's Nexus switch line
In Mobile space,
  • SAP planning to buy Sybase for its mobile middleware
  • Apple getting Siri, HP getting Palm, RIM getting Viigo
References:
Network World slideshow on Tech acquisitions of 2010
PWC report on Tech M&A insights for 2010

How to Calculate TCP Socket Buffer Sizes for Data Guard Environments

Alejandro Vargas - Wed, 2010-05-19 05:31

The MAA best practices contain an example of how to calculate the optimal TCP socket buffer sizes, which is quite important for very busy Data Guard environments. This document, Formula to Calculate TCP Socket Buffer Sizes.pdf, contains a worked example using the instructions provided in the best practices document.

In order to execute the calculation you need to know the bandwidth of your network interface, usually 1Gb (in my example it is a 10Gb network), and the round trip time (RTT), which is the time it takes for a packet to travel to the other end of the network and come back. In my example that was provided by the network administrator and was 3 ms.
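
For the impatient, the arithmetic is just a bandwidth-delay product, with the socket buffers sized at three times the BDP (treat that factor of three as the MAA paper's recommendation, not mine). A back-of-the-envelope version for my figures, a 10Gb link and 3 ms RTT:

SQL> select 3 * (10000000000 / 8) * 0.003 as socket_buffer_bytes from dual;

SOCKET_BUFFER_BYTES
-------------------
           11250000

That is, roughly an 11 MB buffer.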

Categories: DBA Blogs

Impact of Truncate or Drop Table When Flashback Database is Enabled

Alejandro Vargas - Wed, 2010-05-19 04:51

Recently I was working on a VLDB, on the implementation of a disaster recovery environment configured with Data Guard physical standby and fast-start failover.

One of the questions that came up was about the overhead of truncating and dropping tables. There are daily jobs on the database that truncate extremely large partitions, and as note 565535.1 explains, we knew there is an overhead for these operations.

But the information in the note was not clear enough. With the additional information I got from senior Oracle colleagues I compiled this document, "Impact of Truncate or Drop Table When Flashback Database is Enabled", which explains the case further.
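
If you want to watch the overhead for yourself, a rough way to do it (assuming the standard dynamic views; the thing to look for is a jump in FLASHBACK_DATA right after a large truncate or drop):

SQL> select flashback_on from v$database;

SQL> select begin_time, end_time, flashback_data, db_data, redo_data
       from v$flashback_database_stat
      order by begin_time;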

Categories: DBA Blogs

Oracle Database 11g Underground Advice for Database Administrators, by April C. Sims

Alejandro Vargas - Wed, 2010-05-19 04:34

Recently I received a request to review the book "Oracle Database 11g Underground Advice for Database Administrators" by April C. Sims.

I was happy to have the opportunity to learn some details about the author; she is an active contributor to the Oracle DBA community through her blog "Oracle High Availability".

The book is a serious and interesting work. I think it provides a good study and reference guide for DBAs who want to understand and implement highly available environments.

She starts by walking through the more general aspects and skills required of a DBA, and then goes on to explain the steps required to implement Data Guard, use RMAN, upgrade to 11g, etc.

Categories: DBA Blogs

The Oracle Enterprise Linux Software and Hardware Ecosystem

Sergio's Blog - Wed, 2010-05-19 00:43

It's been nearly four years since we launched the Unbreakable Linux support program and with it the free Oracle Enterprise Linux software. Since then, we've built up an extensive ecosystem of hardware and software partners. Oracle works directly with these vendors to ensure joint customers can run Oracle Enterprise Linux.

As Oracle Enterprise Linux is fully--both source and binary--compatible with Red Hat Enterprise Linux (RHEL), there is minimal work involved for software and hardware vendors to test their products with it. We develop our software on Oracle Enterprise Linux and perform full certification testing on Oracle Enterprise Linux as well. Due to the compatibility between Oracle Enterprise Linux and RHEL, Oracle also certifies its software for use on RHEL, without any additional testing.

Oracle Enterprise Linux tracks RHEL by publishing freely downloadable installation media on edelivery.oracle.com/linux and updates, bug fixes and security errata on Unbreakable Linux Network (ULN). At the same time, Oracle's Linux kernel team is shaping the future of enterprise Linux distributions by developing technologies and features that matter to customers who deploy Linux in the data center, including file systems, memory management, high performance computing, data integrity and virtualization. All this work is contributed to the Linux and Xen communities.

The list below is a sample of the partners who have certified their products with Oracle Enterprise Linux. If you're interested in certifying your software or hardware with Oracle Enterprise Linux, please contact us via oelhelp_ww@oracle.com.

  • Chip Manufacturers
  • Server vendors
  • Storage Systems, Volume Management and File Systems
  • Networking: Switches, Host Bus Adapters (HBAs), Converged Network Adapters (CNAs), InfiniBand
  • SOA and Middleware
  • Backup, Recovery & Replication
  • Data Center Automation
  • Clustering & High Availability
  • Virtualization Platforms and Cloud Providers
  • Security Management
Categories: DBA Blogs

New Discoverer Books

Michael Armstrong-Smith - Tue, 2010-05-18 20:08
I thought I would let you know that McGraw-Hill may well be interested in doing two extra versions of our best-selling Discoverer book. As you know, the current book is on version 10g and incorporates both end user and administration. We are going to separate these out into a brand new Oracle Discoverer 11g Administration Handbook and a smaller one for end users, as a sort of tutorial for getting to know the tool. There is still demand for material on Discoverer, and now, following the release of 11g, would, I believe, be a good time to bring our current book up to date.


The end user book will basically take our end user training, extend it and convert it into book format. The bulk of this material already exists, so it is almost written.

The main book that I will be working on is the 11g Administration Handbook, and I wanted to get your thoughts.

As a launch point I will be taking the original book and stripping out everything to do with end users leaving just the administration chapters. Then I am going to add brand new material. The topics I definitely want to include are:
  • Managing PL/SQL functions – nothing on this in original book
  • Java command line – again nothing on this in the original book
  • Interfacing Discoverer with BI Publisher
  • Application Server Management using Weblogic – one, maybe two chapters on this
  • Interfacing with Oracle E-Business Suite
I’m also thinking about adding a chapter on what’s next for Discoverer with a discussion about upgrading to OBI EE and perhaps even covering the Discoverer to OBI EE migration mechanism in some detail.

I'd like to get your input. From the administrators point of view, what would you like to see covered in such a book? Do you have any thoughts as to new material that should be covered?

If so, please contact me via email

OCM - MOS - Metalink

Herod T - Tue, 2010-05-18 19:46
Well,

Having been removed from using Oracle support for a period of time, I was spared the implementation pains that others experienced. Too bad those pains haven't subsided. Come on... Flash?!?!?

If you put the fact that they chose flash (come on flash?!?!?) aside, the system is actually very good.  I can see the immediate benefit of using a collector and sending the data to them.  I was able to raise an SR with all of the particulars in about 2 minutes, would have been less if the !@$#%! backspace button worked, but it's 2010, why do we still have backspace buttons.

I don't have any of the searching issues that others have had; the power search is actually pretty powerful once you figure it out, and having a 3rd party list of missing patches has already proven to be a great asset in getting things up to date. I generally feel that, given enough time, MOS will be a good system, assuming they go to something other than Flash.

Come on Flash!?!?!?!?




Pleasing line

Rob Baillie - Mon, 2010-05-17 02:47
Gotta admit, I'm quite pleased with this line from my new ORM-style, object-based database connection library...



$oFilter = Filter::attribute('player_id')->isEqualTo('1')->andAttribute('fixture_id')->isEqualTo('2');
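
Presumably (I'm guessing at the table name here; the library maps objects to tables) the chained calls compose into a WHERE clause along these lines:

select *
  from fixture_player
 where player_id = '1'
   and fixture_id = '2';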

