Feed aggregator

How to upgrade your Dell’s BIOS directly from Ubuntu

Java 2 Go! - Mon, 2010-05-24 23:27
I know this post is totally off topic but I faced this same issue last week and I’m pretty sure this will be very handy for a lot of people out there. So why not share it, right?! Many people...

This is a summary only. Please, visit the blog for full content and more.
Categories: Development

Micromanaging Memory Consumption

Java 2 Go! - Mon, 2010-05-24 22:34
by Eduardo Rodrigues. As we all know, especially since Java 5.0, the JVM guys have been doing a good job and have significantly improved a lot of key aspects, especially performance and memory management,...

This is a summary only. Please, visit the blog for full content and more.
Categories: Development

That's a whole lot of partitions!

Claudia Zeiler - Mon, 2010-05-24 19:26
Playing with interval partitioning...
I create the simplest table possible and insert 3 rows - generating 3 partitions.

SQL> create table d1 (dt date)
2 partition by range (dt) interval (numtoyminterval(1,'MONTH'))
3 (PARTITION P1 VALUES LESS THAN (TO_DATE('08/01/1776', 'MM/DD/YYYY')));

Table created.


SQL> insert into d1 values (to_date('07/04/1776', 'MM/DD/YYYY'));

1 row created.

SQL> insert into d1 values (to_date('09/22/1862', 'MM/DD/YYYY'));

1 row created.

SQL> insert into d1 values (to_date('08/18/1920', 'MM/DD/YYYY'));

1 row created.


SQL> select * from d1;

DT
---------
04-JUL-76
22-SEP-62
18-AUG-20


SQL> select table_name, partition_name from user_tab_partitions where table_name = 'D1';

TABLE_NAME PARTITION_NAME
------------------------------ ------------------------------
D1 P1
D1 SYS_P62
D1 SYS_P63

But when I look at the partition_count in user_part_tables...

SQL> select table_name, partition_count from user_PART_TABLES where table_name = 'D1';


TABLE_NAME PARTITION_COUNT
------------------------------ ---------------
D1 1048575

That's a whole lot of partitions! Clearly that is the maximum possible number of partitions (1024K - 1), not a count of the partitions actually created. It's odd that the developers at Oracle chose to store that value there rather than the actual count of partitions created. They obviously have it available. Ah, the mysteries of the Oracle.
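
If you do want the real number, you can always count the rows in USER_TAB_PARTITIONS yourself. A minimal check, using the D1 table from above, would look something like this:

-- count the partitions that actually exist, instead of trusting PARTITION_COUNT
SQL> select count(*) from user_tab_partitions where table_name = 'D1';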

Traditional vs OLAP

Michael Armstrong-Smith - Fri, 2010-05-21 23:04
I have been following a very interesting thread on LinkedIn in the group called Data Warehouse & Business Intelligence Architects. The thread is discussing the pros and cons of OLAP as compared to more traditional methods of modeling. Personally I love these discussions. Here's what I recently said:

For me, probably an oldie in terms of these discussions, I have been working with modeling and data warehouses for coming up on 25 years. I find it very, very strange that for some reason the term OLAP gets pushed around as if it is the answer to everything. This is probably being unfair to the technique because it's actually been around in one form or another a lot longer than most people realise.

Long before the term was invented or, more to the point shall we say, the technique was discovered, documented and given a formal name, we have been able to model enormous data warehouses with enormous amounts of data. Databases with terabytes of data are not new.

If I'm following the thread correctly I see two schools of thought, one pushing OLAP as the bees' knees and one pushing relational modeling. As someone who entered this field not too many years after Dr. Edgar Codd was first touting his ideas to IBM, I can tell you that if a relational model is done correctly with the right partitions, indexes and joins, I can design a data warehouse using traditional methods for far less money than most folks would have you believe it should cost.

I'm somewhat of a historian and I actually have in my possession a set of Dr. Codd's early drafts. It makes for fascinating reading. So to anyone who is not sold on the idea yet, I would urge you to read one of the many good books on the subject. You could do a lot worse than to start with one of Ralph Kimball's books, but you might also want to look at Bill Inmon.

Personally, I don't adhere strictly to any of the fathers of data warehousing. I have read them all and I mix and match as the situation arises, replete with a little tangential leap from time to time, sometimes of faith but mostly based on experience. Oh yes, and occasionally I mix them all, you know, just for fun because, after all, this is a beautiful world and we are in a beautiful profession and we have beautiful problems to solve.

So, what do you think? Are you a purist, a traditionalist or a modernist, somewhere in between or an amalgam of all three?

Should we ban anonymity on the Internet?

Peter O'Brien - Fri, 2010-05-21 09:46
In an Information Security article a few months back, Bruce Schneier (author of Schneier on Security) and Marcus Ranum put some points forward for and against internet anonymity. I have to admit that I agree with Schneier and find Ranum's argument quite weak. He appears to suggest that the main reason to enforce identity is to avoid spam. The tools aren't great, but there are already mechanisms in place to address this. Criminals are always getting better at finding ways to exploit weaknesses in the internet technologies that are increasingly at the heart of the way we shop, interact, work, entertain and inform ourselves. We just have to keep up with the pace in the cat-and-mouse game. Sacrificing anonymity, and the right to privacy, is too great a cost for merely avoiding emails about Viagra (tm) and Nigerian generals with a stash of cash to move out of the country.

What is the great danger of not being anonymous? Well, it's all the inferring that goes on about the facts that get gathered around the things you search for, shop for, chat about, view and listen to. These are then used to categorise you for advertising, or for inclusion in or exclusion from groups or activities. Netflix provided a great example of this last year: just weeks after the Netflix Prize contest began, two University of Texas researchers showed that with the Netflix data one could identify users and, in some cases, their political leanings and sexual orientation.

Getting back to Schneier's point, trying to implement a robust identification system, one which criminals cannot outwit or take advantage of, is not possible...
Mandating universal identity and attribution is the wrong goal. Accept that there will always be anonymous speech on the Internet. Accept that you'll never truly know where a packet came from. Work on the problems you can solve: software that's secure in the face of whatever packet it receives, identification systems that are secure enough in the face of the risks. We can do far better at these things than we're doing, and they'll do more to improve security than trying to fix insoluble problems.

Tech M&A deals of 2010

Vikas Jain - Wed, 2010-05-19 18:51
Here's some notable tech M&A activity through May 2010.

In the Security space:
  • Oracle IdM adding identity analytics (OIA) to its portfolio through the broader Sun acquisition
  • Symantec enhancing encryption portfolio with PGP, GuardianEdge, and vulnerability assessment offering through Gideon Technologies
  • EMC's RSA Security Division acquired Archer Technologies for GRC across physical+virtual infrastructures
  • Trustwave acquired Intellitactics for SIEM to enhance PCI compliance offering, and BitArmor to enhance endpoint security offering
In the Cloud computing space:
  • VMware seems to be building up a cloud PaaS platform, acquiring SpringSource (in 2009), and now Zimbra and Rabbit Technologies
  • CA acquired Nimsoft and 3Tera to manage cloud environments
  • Cisco acquired Rohati Systems for cloud security in Cisco's Nexus switch line
In the Mobile space:
  • SAP planning to buy Sybase for its mobile middleware
  • Apple getting Siri, HP getting Palm, RIM getting Viigo
References:
Network World slideshow on Tech acquisitions of 2010
PWC report on Tech M&A insights for 2010

How to Calculate TCP Socket Buffer Sizes for Data Guard Environments

Alejandro Vargas - Wed, 2010-05-19 05:31

The MAA best practices contain an example of how to calculate the optimal TCP socket buffer sizes, which is quite important for very busy Data Guard environments. This document, Formula to Calculate TCP Socket Buffer Sizes.pdf, contains an example of using the instructions provided in the best practices document.

In order to execute the calculation you need to know two things: the bandwidth of your network interface, which will usually be 1Gb (in my example it is a 10Gb network), and the round-trip time (RTT), that is, the time it takes for a packet to travel to the other end of the network and come back. In my example the RTT was provided by the network administrator and was 3 ms (milliseconds).
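
As a rough illustration of how those two numbers combine (assuming the common guideline of sizing the socket buffers at three times the bandwidth-delay product; the PDF above spells out the exact formula the MAA paper recommends), a 10Gb/s link with a 3 ms RTT works out to roughly 11 MB per socket buffer:

-- 3 x bandwidth-delay product: 10 Gb/s link, 3 ms RTT (illustrative figures only)
SQL> select round(3 * 10000000000/8 * 0.003) as socket_buffer_bytes from dual;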

Categories: DBA Blogs

Impact of Truncate or Drop Table When Flashback Database is Enabled

Alejandro Vargas - Wed, 2010-05-19 04:51

Recently I was working on a VLDB, implementing a disaster recovery environment configured with Data Guard physical standby and fast-start failover.

One of the questions that came up was about the overhead of truncating and dropping tables. There are daily jobs on the database that truncate extremely large partitions, and as note 565535.1 explains, we knew there is an overhead for these operations.

But the information in the note was not clear enough, so with the additional information I got from senior Oracle colleagues I compiled this document, "Impact of Truncate or Drop Table When Flashback Database is Enabled", which explains the case further.
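
As a quick aside, before worrying about this overhead it is worth confirming that Flashback Database is actually enabled on your database, and seeing how much flashback log it is generating; something along these lines should do:

SQL> select flashback_on from v$database;
SQL> select sum(bytes)/1024/1024 as flashback_log_mb from v$flashback_database_logfile;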

Categories: DBA Blogs

Oracle Database 11g Underground Advice for Database Administrators, by April C. Sims

Alejandro Vargas - Wed, 2010-05-19 04:34

Recently I have received a request to review the book "Oracle Database 11g Underground Advice for Database Administrators" by April C. Sims

I was happy to have the opportunity to learn some details about the author; she is an active contributor to the Oracle DBA community through her blog, "Oracle High Availability".

The book is a serious and interesting work. I think it provides a good study and reference guide for DBAs who want to understand and implement highly available environments.

She starts by walking through the more general aspects and skills required of a DBA and then goes on to explain the steps required to implement Data Guard, use RMAN, upgrade to 11g, etc.

Categories: DBA Blogs

The Oracle Enterprise Linux Software and Hardware Ecosystem

Sergio's Blog - Wed, 2010-05-19 00:43

It's been nearly four years since we launched the Unbreakable Linux support program and with it the free Oracle Enterprise Linux software. Since then, we've built up an extensive ecosystem of hardware and software partners. Oracle works directly with these vendors to ensure joint customers can run Oracle Enterprise Linux.

As Oracle Enterprise Linux is fully--both source and binary--compatible with Red Hat Enterprise Linux (RHEL), there is minimal work involved for software and hardware vendors to test their products with it. We develop our software on Oracle Enterprise Linux and perform full certification testing on Oracle Enterprise Linux as well. Due to the compatibility between Oracle Enterprise Linux and RHEL, Oracle also certifies its software for use on RHEL, without any additional testing.

Oracle Enterprise Linux tracks RHEL by publishing freely downloadable installation media on edelivery.oracle.com/linux and updates, bug fixes and security errata on Unbreakable Linux Network (ULN). At the same time, Oracle's Linux kernel team is shaping the future of enterprise Linux distributions by developing technologies and features that matter to customers who deploy Linux in the data center, including file systems, memory management, high performance computing, data integrity and virtualization. All this work is contributed to the Linux and Xen communities.

The list below is a sample of the partners who have certified their products with Oracle Enterprise Linux. If you're interested in certifying your software or hardware with Oracle Enterprise Linux, please contact us via oelhelp_ww@oracle.com

  • Chip Manufacturers
  • Server vendors
  • Storage Systems, Volume Management and File Systems
  • Networking: Switches, Host Bus Adapters (HBAs), Converged Network Adapters (CNAs), InfiniBand
  • SOA and Middleware
  • Backup, Recovery & Replication
  • Data Center Automation
  • Clustering & High Availability
  • Virtualization Platforms and Cloud Providers
  • Security Management
Categories: DBA Blogs

New Discoverer Books

Michael Armstrong-Smith - Tue, 2010-05-18 20:08
I thought I would let you know that McGraw-Hill may well be interested in doing two extra versions of our best-selling Discoverer book. As you know, the current book is on version 10g and incorporates both end user and administration. We are going to separate these out into a brand new Oracle Discoverer 11g Administration Handbook and a smaller one for end users as a sort of tutorial for getting to know the tool. There is still demand for material on Discoverer and, following the release of 11g, I believe now would be a good time to bring our current book up to date.


The end user book will basically take our end user training, extend it and convert it into book format. The bulk of this material already exists, so it is almost written.

The main book that I will be working on is the 11g Administration Handbook, and I wanted to get your thoughts.

As a launch point I will be taking the original book and stripping out everything to do with end users leaving just the administration chapters. Then I am going to add brand new material. The topics I definitely want to include are:
  • Managing PL/SQL functions – nothing on this in original book
  • Java command line – again nothing on this in the original book
  • Interfacing Discoverer with BI Publisher
  • Application Server Management using Weblogic – one, maybe two chapters on this
  • Interfacing with Oracle E-Business Suite
I’m also thinking about adding a chapter on what’s next for Discoverer with a discussion about upgrading to OBI EE and perhaps even covering the Discoverer to OBI EE migration mechanism in some detail.

I'd like to get your input. From the administrator's point of view, what would you like to see covered in such a book? Do you have any thoughts as to new material that should be covered?

If so, please contact me via email

OCM - MOS - Metalink

Herod T - Tue, 2010-05-18 19:46
Well,

Having been removed from using Oracle support for a period of time, I was spared the implementation pains that others experienced. Too bad those pains haven't subsided. Come on... Flash?!?!?

If you put aside the fact that they chose Flash (come on, Flash?!?!?), the system is actually very good. I can see the immediate benefit of using a collector and sending the data to them. I was able to raise an SR with all of the particulars in about 2 minutes; it would have been less if the !@$#%! backspace button worked, but it's 2010, why do we still have backspace buttons.

I don't have any of the searching issues that others have had; the power search is actually pretty powerful when you figure it out, and having a 3rd party list of missing patches has already proven to be a great asset in getting things up to date. I generally feel that, given enough time, MOS will be a good system, assuming they go to something other than Flash.

Come on Flash!?!?!?!?




Pleasing line

Rob Baillie - Mon, 2010-05-17 02:47
Gotta admit, I'm quite pleased with this line from my new ORM object based database connection library...



$oFilter = Filter::attribute('player_id')->isEqualTo('1')->andAttribute('fixture_id')->isEqualTo('2');
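
Presumably that chain ends up building a predicate along these lines (the table and column names here are purely illustrative):

-- the kind of WHERE clause the fluent filter above would generate
select * from fixture_player where player_id = 1 and fixture_id = 2;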


Converting a CVS repository to SVN

Susan Duncan - Mon, 2010-05-10 09:42
I've recently gone through this with an old CVS repository we have. We wanted to keep the history but not the CVS repository. I used cvs2svn from tigris.org with great success.

This converter actually converts from CVS to many different repositories including git. It offers many options for conversion - I created an SVN dumpfile but you can also convert directly into an existing or a new SVN repository.

I chose to use the options file to run my conversion - so I could play with the different options, do dry-runs, adjust what output I needed etc. Here are some of the settings I used from the cvs2svn-example.options file that comes with the converter

ctx.output_option = DumpfileOutputOption(
    dumpfile_path=r'/cvs2svn-susan-output/cvs2svn-dump1', # Name of dumpfile to create
    #author_transforms=author_transforms,
)
### to output the conversion to a dump file

ctx.dry_run = False ### always a good idea to do a dry run first!

ctx.trunk_only = True ### I decided not to bring over any branches and tags

run_options.add_project(
    r'/cvs-copy', ### to specify the repository project to be converted
    trunk_path='trunk',
    branches_path='branches',
    tags_path='tags',
    .........)

For my conversion I didn't need to use many of the other possible options such as mapping author names and symbol handling. I was confident that, as I had used JDeveloper to populate my CVS repository, this had been handled for me. For instance, I didn't need to do a lot of prepping of my CVS repository to ensure that my binary files had been correctly added to the repository. This can be a problem as CVS and SVN handle them differently. The documentation at Tigris and the options file are very detailed in how to handle these potential issues.

and that's really all there was - just run the converter and point to the options file
cvs2svn --options=MYOPTIONSFILE
I then created a new Remote Directory in my target SVN repository using JDeveloper's Versioning Navigator and ran the standard load utility on the command line to add my converted CVS repository to my existing SVN repository.
svnadmin load /svn/mySVNrepository/ --parent-dir mynewSVN_dir < /cvs2svn-dump1
A check that my workspaces checked out correctly and that I could see my image files and I was done (sorry to all my QA colleagues who want more testing mentioned than this!)

Tortoise and JDeveloper

Susan Duncan - Mon, 2010-05-10 09:24
Recently, from an OTN forum post, I discovered a new setting that could trip you up if you are using both TortoiseSVN and JDeveloper to access your SVN repository.

If you are finding that you can't see the overlays on your JDeveloper project files or that some of the menu items are not enabled - make sure that in your TortoiseSVN settings you have unchecked Use "_svn" instead of ".svn"

Anyone got any other tips for using both tools?

Auto complete functionality with latest Lucene Domain Index

Marcelo Ochoa - Sat, 2010-05-08 10:44
A few days ago I uploaded two new releases of Oracle Lucene Domain Index, one based on the Lucene 2.9.2 core (10g, 11g) and another based on the 3.0.1 release (10g/11g).
The question is: why does the 3.0.1 release have only one installation file?
This is because the code base of the Lucene 3.x branch is only compatible with JDK 1.5, so to get the Lucene 3.x release working on 10g databases, which are based on JDK 1.4, I included a retro-translator. This library takes code compiled in 1.5 format and converts it to 1.4 format; I'll explain this process in more detail in another post.
The important point is that I still want to support Lucene Domain Index for Oracle 10g, because the installed base of this release is big even with the end of official support next July.
On the other hand, this new release includes another great contribution from Pedro Pinheiro: an auto-complete pipelined table function. This reinforces the goal of Lucene Domain Index that, with a few new classes and some PL/SQL wrappers, you can extend LDI to your needs.
Here is a simple example:
I am creating and populating a simple table with an English-Spanish dictionary lookup:
create table dicc (
   term varchar(256),
   def  varchar2(4000))
/
-- Populate dictionary with 10K terms and definitions
@@dicc-values
Then I create a Lucene Domain Index for auto-complete functionality:
create index dicc_lidx on dicc(term) indextype is lucene.luceneindex
parameters('ExtraCols:def;LogLevel:INFO;FormatCols:TERM(ANALYZED_WITH_POSITIONS_OFFSETS);PerFieldAnalyzer:TERM(org.apache.lucene.analysis.WhitespaceAnalyzer),DEF(org.apache.lucene.analysis.StopAnalyzer)');
Note that the TERM column is analyzed, storing term positions and offsets.
With this index created, we can query using the auto-complete pipelined table function as follows:
SQL> select * from table(lautocomplete('DICC_LIDX','TERM','th',15)) t;

TERM       DOCFREQ
there            3
theory           2
thaw             2
then             2
therefore        2
thence           1
their            1
thanks           1
theft            1
theatrical       1
the              1
theme            1
that             1
thermal          1
thank            1
15 rows selected.
Elapsed: 00:00:00.01

The first argument of this function is your index name, the second is the column used for auto-complete, the third is the string used for lookup, and the last is how many terms are returned. By default rows are returned ordered by docFreq descending. Here is another example:
SQL> select * from table(lautocomplete('DICC_LIDX','TERM','spor',10)) t;
TERM       DOCFREQ
sport            3
sportsman        1
sporadic         1
Elapsed: 00:00:00.02

For the example table, which includes 10102 rows, the execution time of the above examples is around 21 ms; not bad for a notebook.
Another new feature of this release is parallel indexing on RAM, which is enabled by default. Indexing on RAM means that when you are working in OnLine mode, a batch of new rows to be added to the index is processed in parallel (ParallelDegree parameter); more information is in the Lucene Domain Index documentation on-line. If you have a server with a multi-core processor or a RAC installation with sufficient RAM, this feature speeds up your indexing time by 40% by eliminating the BLOB access during a partial index creation.
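As a sketch only (I have not confirmed whether LDI lets you change this particular parameter after index creation, so treat it as illustrative), ParallelDegree follows the same name:value syntax as the other index parameters shown above:

-- illustrative: adjust the parallel degree used for on-RAM indexing
alter index dicc_lidx parameters('ParallelDegree:2');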
Well, the next post will be about how to deal with libraries compiled using JDK 1.5 on Oracle 10g databases. Stay tuned...
