Feed aggregator

Oracle Database 11g Underground Advice for Database Administrators, by April C. Sims

Alejandro Vargas - Wed, 2010-05-19 04:34

Recently I received a request to review the book "Oracle Database 11g Underground Advice for Database Administrators" by April C. Sims.

I was happy to have the opportunity to learn some details about the author; she is an active contributor to the Oracle DBA community through her blog "Oracle High Availability".

The book is a serious and interesting work. I think it provides a good study and reference guide for DBAs who want to understand and implement highly available environments.

She starts by walking through the more general aspects and skills required of a DBA, and then goes on to explain the steps required to implement Data Guard, use RMAN, upgrade to 11g, and so on.

Categories: DBA Blogs

The Oracle Enterprise Linux Software and Hardware Ecosystem

Sergio's Blog - Wed, 2010-05-19 00:43

It's been nearly four years since we launched the Unbreakable Linux support program and with it the free Oracle Enterprise Linux software. Since then, we've built up an extensive ecosystem of hardware and software partners. Oracle works directly with these vendors to ensure joint customers can run Oracle Enterprise Linux.

As Oracle Enterprise Linux is fully--both source and binary--compatible with Red Hat Enterprise Linux (RHEL), there is minimal work involved for software and hardware vendors to test their products with it. We develop our software on Oracle Enterprise Linux and perform full certification testing on Oracle Enterprise Linux as well. Due to the compatibility between Oracle Enterprise Linux and RHEL, Oracle also certifies its software for use on RHEL, without any additional testing.

Oracle Enterprise Linux tracks RHEL by publishing freely downloadable installation media on edelivery.oracle.com/linux and updates, bug fixes and security errata on Unbreakable Linux Network (ULN). At the same time, Oracle's Linux kernel team is shaping the future of enterprise Linux distributions by developing technologies and features that matter to customers who deploy Linux in the data center, including file systems, memory management, high performance computing, data integrity and virtualization. All this work is contributed to the Linux and Xen communities.

The list below is a sample of the partners who have certified their products with Oracle Enterprise Linux. If you're interested in certifying your software or hardware with Oracle Enterprise Linux, please contact us via oelhelp_ww@oracle.com

  • Chip Manufacturers
  • Server Vendors
  • Storage Systems, Volume Management and File Systems
  • Networking: Switches, Host Bus Adapters (HBAs), Converged Network Adapters (CNAs), InfiniBand
  • SOA and Middleware
  • Backup, Recovery & Replication
  • Data Center Automation
  • Clustering & High Availability
  • Virtualization Platforms and Cloud Providers
  • Security Management
Categories: DBA Blogs

New Discoverer Books

Michael Armstrong-Smith - Tue, 2010-05-18 20:08
I thought I would let you know that McGraw-Hill may well be interested in doing two extra versions of our best-selling Discoverer book. As you know, the current book covers version 10g and incorporates both end-user and administration material. We are going to separate these out into a brand new Oracle Discoverer 11g Administration Handbook and a smaller book for end users, a sort of tutorial for getting to know the tool. There is still demand for material on Discoverer, and now, following the release of 11g, I believe it would be a good time to bring our current book up to date.


The end user book will basically take our end-user training, extend it and convert it into book format. The bulk of this material already exists, so it is almost written.

The main book that I will be working on is the 11g Administration Handbook, and I wanted to get your thoughts.

As a launch point, I will be taking the original book and stripping out everything to do with end users, leaving just the administration chapters. Then I am going to add brand new material. The topics I definitely want to include are:
  • Managing PL/SQL functions – nothing on this in the original book
  • Java command line – again nothing on this in the original book
  • Interfacing Discoverer with BI Publisher
  • Application Server Management using Weblogic – one, maybe two chapters on this
  • Interfacing with Oracle E-Business Suite
I’m also thinking about adding a chapter on what’s next for Discoverer with a discussion about upgrading to OBI EE and perhaps even covering the Discoverer to OBI EE migration mechanism in some detail.

I'd like to get your input. From the administrator's point of view, what would you like to see covered in such a book? Do you have any thoughts as to new material that should be covered?

If so, please contact me via email.

OCM - MOS - Metalink

Herod T - Tue, 2010-05-18 19:46
Well,

Having been removed from using Oracle support for a period of time, I was spared the implementation pains that others experienced.  Too bad those pains haven't subsided. Come on... Flash?!?!?

If you put aside the fact that they chose Flash (come on, Flash?!?!?), the system is actually very good.  I can see the immediate benefit of using a collector and sending the data to them.  I was able to raise an SR with all of the particulars in about 2 minutes; it would have been less if the !@$#%! backspace button worked, but it's 2010, why do we still have backspace buttons?

I don't have any of the searching issues that others have had; the power search is actually pretty powerful once you figure it out, and having a third-party list of missing patches has already proven to be a great asset in getting things up to date.  I generally feel that, given enough time, MOS will be a good system, assuming they move to something other than Flash.

Come on Flash!?!?!?!?




Pleasing line

Rob Baillie - Mon, 2010-05-17 02:47
Gotta admit, I'm quite pleased with this line from my new ORM object-based database connection library...



$oFilter = Filter::attribute('player_id')->isEqualTo('1')->andAttribute('fixture_id')->isEqualTo('2');


Converting a CVS repository to SVN

Susan Duncan - Mon, 2010-05-10 09:42
I've recently gone through this with an old CVS repository we have. We wanted to keep the history but not the CVS repository. I used cvs2svn from tigris.org with great success.

This converter actually converts from CVS to many different repository formats, including Git. It offers many options for conversion - I created an SVN dumpfile, but you can also convert directly into an existing or a new SVN repository.

I chose to use the options file to run my conversion - so I could play with the different options, do dry runs, adjust what output I needed, etc. Here are some of the settings I used from the cvs2svn-example.options file that comes with the converter:

ctx.output_option = DumpfileOutputOption(
    dumpfile_path=r'/cvs2svn-susan-output/cvs2svn-dump1', # Name of dumpfile to create
    #author_transforms=author_transforms,
)
### to output conversion to a dump file

ctx.dry_run = False ### always a good idea to do a dry run!

ctx.trunk_only = True #### I decided not to bring over any branches and tags

run_options.add_project(
    r'/cvs-copy', ##### to specify the repos project to be converted
    trunk_path='trunk',
    branches_path='branches',
    tags_path='tags',
    .........)

For my conversion I didn't need to use many of the other possible options such as mapping author names and symbol handling. I was confident that, as I had used JDeveloper to populate my CVS repository, this had been handled for me. For instance, I didn't need to do a lot of prepping of my CVS repository to ensure that my binary files had been correctly added to the repository. This can be a problem as CVS and SVN handle them differently. The documentation at Tigris and the options file are very detailed in how to handle these potential issues.

and that's really all there was to it - just run the converter and point it to the options file:
cvs2svn --options=MYOPTIONSFILE
I then created a new Remote Directory in my target SVN repository using JDeveloper's Versioning Navigator and ran the standard load utility on the command line to add my converted CVS repository to my existing SVN repository.
svnadmin load /svn/mySVNrepository/ --parent-dir mynewSVN_dir < /cvs2svn-dump1
A quick check that my workspaces checked out correctly and that I could see my image files, and I was done (sorry to all my QA colleagues who want more testing mentioned than this!).

Tortoise and JDeveloper

Susan Duncan - Mon, 2010-05-10 09:24
Recently, from an OTN forum post, I discovered a setting that could trip you up if you are using both TortoiseSVN and JDeveloper to access your SVN repository.

If you are finding that you can't see the overlays on your JDeveloper project files, or that some of the menu items are not enabled, make sure that in your TortoiseSVN settings you have unchecked the option Use "_svn" instead of ".svn".

Anyone got any other tips for using both tools?

Auto complete functionality with latest Lucene Domain Index

Marcelo Ochoa - Sat, 2010-05-08 10:44
A few days ago I uploaded two new releases of Oracle Lucene Domain Index, one based on the Lucene 2.9.2 core (10g, 11g) and another based on the 3.0.1 release (10g/11g).
The question is: why does the 3.0.1 release have only one installation file?
This is because the code base of the Lucene 3.x branch is only compatible with JDK 1.5, so to get the Lucene 3.x release working on 10g databases, which are based on JDK 1.4, I included a retro-translator. This library takes code compiled in 1.5 format and converts it to 1.4 format; I'll explain this process in more detail in another post.
The important point is that I still want to support Lucene Domain Index for Oracle 10g, because the installed base of this release is big even with official support ending next July.
On the other hand, this new release includes another great contribution from Pedro Pinheiro: an auto-complete pipelined table function. This reinforces the goal of Lucene Domain Index: with a few new classes and some PL/SQL wrappers you can extend LDI to your needs.
Here is a simple example:
I am creating and populating a simple table with an English-Spanish dictionary lookup:
create table dicc (
   term varchar(256),
   def  varchar2(4000))
/
-- Populate dictionary with 10K terms and definitions
@@dicc-values

Then create a Lucene Domain Index for auto-complete functionality:
create index dicc_lidx on dicc(term) indextype is lucene.luceneindex
parameters('ExtraCols:def;LogLevel:INFO;FormatCols:TERM(ANALYZED_WITH_POSITIONS_OFFSETS);PerFieldAnalyzer:TERM(org.apache.lucene.analysis.WhitespaceAnalyzer),DEF(org.apache.lucene.analysis.StopAnalyzer)');

Note that the TERM column is analyzed storing term position offsets.
With this index created, we can query using the auto-complete pipelined table function as follows:
SQL> select * from table(lautocomplete('DICC_LIDX','TERM','th',15)) t;

TERM        DOCFREQ
there             3
theory            2
thaw              2
then              2
therefore         2
thence            1
their             1
thanks            1
theft             1
theatrical        1
the               1
theme             1
that              1
thermal           1
thank             1

15 rows selected.

Elapsed: 00:00:00.01

The first argument of this function is your index name, the second argument is the column used for auto-complete, the third argument is the string used for lookup, and the last argument is how many terms are returned. By default, rows are returned ordered by docFreq descending. Here is another example:
SQL> select * from table(lautocomplete('DICC_LIDX','TERM','spor',10)) t;
TERM        DOCFREQ
sport             3
sportsman         1
sporadic          1

Elapsed: 00:00:00.02

For the example table, which includes 10,102 rows, the execution time of the above examples is around 21ms, not bad for a notebook.
Another new feature of this release is parallel indexing on RAM, which is enabled by default with this release. Indexing on RAM means that when you are working in OnLine mode, a batch of new rows to be added to the index is processed in parallel (controlled by the ParallelDegree parameter); more information is in the Lucene Domain Index on-line documentation. If you have a server with a multi-core processor, or a RAC installation with sufficient RAM, this feature speeds up your indexing time by 40% by eliminating the BLOB access during a partial index creation.
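
As a minimal sketch only, assuming ParallelDegree follows the same 'Name:Value' parameter style shown in the create index statement above (the exact syntax is not shown here, so check the on-line documentation), the degree could be changed on the existing index like this:

alter index dicc_lidx rebuild
parameters('ParallelDegree:2'); -- ParallelDegree:2 is a hypothetical value; parameter name taken from the post, exact string is an assumption
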
Well, the next post will be about how to deal with libraries compiled using JDK 1.5 on Oracle 10g databases. Stay tuned...

The easy-small-simple-quick-step-by-step-how-to article on AspectJ you’ve been looking for is right here

Eduardo Rodrigues - Fri, 2010-05-07 20:24
by Eduardo Rodrigues That’s right. Have you ever spent hours of your precious time googling the Web trying to find an easy, small, simple, quick and step-by-step tutorial, article or sample on how to...

This is a summary only. Please, visit the blog for full content and more.

The easy-small-simple-quick-step-by-step-how-to article on AspectJ you’ve been looking for is right here

Java 2 Go! - Fri, 2010-05-07 20:24
by Eduardo Rodrigues That’s right. Have you ever spent hours of your precious time googling the Web trying to find an easy, small, simple, quick and step-by-step tutorial, article or sample on how to...

This is a summary only. Please, visit the blog for full content and more.
Categories: Development

How to write a simple yet “bullet-proof” object cache

Eduardo Rodrigues - Fri, 2010-05-07 19:49
…continued from a previous post, by Eduardo Rodrigues As promised, in this post, I’ll explain how we solved the 2nd part of the heap memory exhaustion problem described in my previous post: the skin...

This is a summary only. Please, visit the blog for full content and more.

How to write a simple yet “bullet-proof” object cache

Java 2 Go! - Fri, 2010-05-07 19:49
…continued from a previous post, by Eduardo Rodrigues As promised, in this post, I’ll explain how we solved the 2nd part of the heap memory exhaustion problem described in my previous post: the skin...

This is a summary only. Please, visit the blog for full content and more.
Categories: Development

The X (Path) File

Eduardo Rodrigues - Fri, 2010-05-07 19:48
by Eduardo Rodrigues This week I came across one of those mysterious problems where I had some test cases that needed to verify the content of some DOM trees to guarantee that the test went fine. So,...

This is a summary only. Please, visit the blog for full content and more.

The X (Path) File

Java 2 Go! - Fri, 2010-05-07 19:48
by Eduardo Rodrigues This week I came across one of those mysterious problems where I had some test cases that needed to verify the content of some DOM trees to guarantee that the test went fine. So,...

This is a summary only. Please, visit the blog for full content and more.
Categories: Development

Time Dimensions with Hourly Time Periods

Keith Laker - Thu, 2010-05-06 08:26
I was working on an application last week that required time series analysis at Hour, Day, Month, Quarter and Year levels. Two interesting things came out of this application.

First, a little implementation detail. The data was supplied in the fact and dimension tables at the Hour level with a TIMESTAMP data type. As you might expect then, there were time periods at the hour level such as:

02-JAN-10 10.00.00.000000000 AM
02-JAN-10 11.00.00.000000000 AM
02-JAN-10 12.00.00.000000000 PM
02-JAN-10 01.00.00.000000000 PM
02-JAN-10 02.00.00.000000000 PM

In my first attempt at building the time dimension I loaded hours directly from the TIMESTAMP data type. In that case, the members at Hour level were loaded into the dimension stripped of the hour (e.g., 02-JAN-10). Since this isn't what I wanted, I converted the hours into a CHAR as follows:

CREATE VIEW time_dim_view AS
SELECT
TO_CHAR(hour_id, 'DD-MON-YYYY HH24') AS hour_id,
TO_CHAR(hour_id, 'DD-MON-YYYY HH24') AS hour_desc,
hour_time_span,
hour_id AS hour_end_date,
.. and so on.

This gave me dimension members at hour as follows:

01-JAN-2010 00
01-JAN-2010 01
01-JAN-2010 02
01-JAN-2010 03
01-JAN-2010 04

That worked just fine. I did the same for the descriptions (so that they would be more easily readable by end users) and added a corresponding column to a fact view so that the time view and fact view joined correctly on the TO_CHAR(...) columns.
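
As a minimal sketch of that corresponding fact view column (the fact table and measure names below are hypothetical, not from the original application), the fact view simply exposes the same TO_CHAR expression so that the two views join on matching hour keys:

CREATE VIEW fact_view AS
SELECT
TO_CHAR(event_time, 'DD-MON-YYYY HH24') AS hour_id, -- same format mask as the time dimension view
units_sold, -- hypothetical measure column
revenue -- hypothetical measure column
FROM hourly_sales_fact; -- hypothetical fact table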

For the TIME SPAN attribute, I used a fractional value of DAY (0.041667, which is 1/24th of a day). I read the DATETIME into the END DATE attribute as is (no conversion required). From there on, everything worked perfectly (cube builds, time series calculations, etc).

If you happen to look at the END DATE attribute from the OLAP DML side, be sure to wrap the END_DATE object in a TO_CHAR function so that you see the hours. Otherwise, you will see only the day in most cases (it depends on the NLS_DATE_FORMAT setting for the session). For example:

REPORT DOWN TIME TO_CHAR(TIME_END_DATE, 'DD-MON-YYYY HH24')

The other thing that was interesting has more to do with the application design. As so often happens, the customer was inclined to build one cube with all history at the hour level (two years of history). When examining the reporting requirements, however, it turned out that hour-level analysis very rarely occurs more than two months back. Almost all of the reporting looking back over the two years was at the day level or higher (that is, not hour-level reporting).

We could have built the one cube (two years, hour and higher), but most of the processing of hour level data would have been a waste because users don't look at the older data at that level. Instead, we built a very efficient application with two cubes. One cube contained only three months of data at the hour, day, month, quarter and year levels. Another cube contained two years of history starting at the day level.

Presentation of the data is mostly done using Oracle Business Intelligence Enterprise Edition (via SQL to the cube). Some reports examine hour-level data. Other reports examine more aggregate data over longer time periods. Time series calculations (e.g., period to date, moving average, etc.) were added to both cubes and made available in the OBIEE reports.

Occasionally, a user will want to drill from day to hour more than three months back. To support this, OBIEE was set up to drill from day (in the two year cube) to hour in the fact table. The only compromise was that the time series calculations of the cube were not available when drilling to hour in the fact table. That didn't matter to these users.

From the end user perspective, the fact that there were two cubes instead of one (as well as a fact table) was completely irrelevant since OBIEE presented all data in reports in a single dashboard. From a processing perspective, the system was much more efficient and manageable as compared to the single big cube approach.

It is very worthwhile to keep this lesson in mind when you design your applications. Pay careful attention to reporting requirements and build cubes that meet those requirements. You can tie multiple cubes together in a tool such as OBIEE. This approach is often much better than building a single cube with every level of detail.

In this case, the example is about what level of detail is in which cube. The same concept applies to dimensions. You might find it much more efficient to build Cube 1 with dimensions A, B, C and D and Cube 2 with dimensions A, B, E and F rather than one big cube with all dimensions.
Categories: BI & Warehousing

Berkeley DB Java Edition 4.0.103 Available

Charles Lamb - Mon, 2010-05-03 02:19

We'd like to let you know that JE 4.0.103 is now available at http://www.oracle.com/technology/software/products/berkeley-db/je/index.html. The patch release contains both small features and bug fixes, many of which were prompted by feedback on this forum. Some items to note:


  • New CacheMode values for more control over cache policies, and new statistics to enable better interpretation of caching behavior. These are just one initial part of our continuing work in progress to make JE caching more efficient.

  • Fixes for proper cache utilization calculations when using the -XX:+UseCompressedOops JVM option.

  • A variety of other bug fixes.

There are no file format or API changes. As always, we encourage users to move promptly to this new release.

Bicycle Diaries - I

Vattekkat Babu - Mon, 2010-05-03 00:41

Once I entered work life, physical activity was pretty much restricted to the keyboard and mouse. I hate running. I like only cricket, badminton and table tennis for sports, and all of them need others to be available. I love to swim, but in Bangalore, where I stay, it is not very convenient. I used to enjoy cycling when I was in school. Some six months ago, I bought one (Hercules ACT 104) and rode it on and off for 4-5 short trips. While it is enjoyable, I never stuck to a routine. Since I am on vacation now, I thought I would attack it as a two-week project and see if I can actually do it.

If you are a fitness freak, don't bother. I am talking about 5km as a goal - if you routinely do 15km+, you might find this quite boring.

Flooding in Tennessee

Michael Armstrong-Smith - Sun, 2010-05-02 22:54
As many of you will be aware, there has been unprecedented and extensive flooding throughout Western and Central Tennessee this weekend. My home town is Cookeville, which lies about 75 miles to the east of Nashville, which as you know is one of the worst-hit areas, with well over 14 inches of rain in the last 48 hours. To everyone who has asked after me and my family, I just want to say thank you and to let you know that we are safe. Even though there is water all around the area, with trees down and rivers over their banks, our property, because it is at a higher elevation than most, is safe.

Unfortunately, the same cannot be said for the rest of the state. Not so very far away there are lots of houses under water and I know that my home state is being devastated even as I write. For anyone who has ever been here you will know that this is one of the most beautiful parts of the United States which makes it even harder to take. While Tennessee may not be the richest state in the union the people here are hard working, God loving, gentle folk who didn't need this.

If you have the opportunity to donate anything to a relief effort, should one be organized, please do so. At the very least, please keep the people in this area in your thoughts and prayers as you go to sleep tonight.

More than Iron Man - Oracle and Marvel

Peter O'Brien - Fri, 2010-04-30 09:23
At the beginning of April 2010, Oracle, using the buzz around the release of Iron Man 2, kicked off a worldwide advertising campaign focused on introducing the powerful combination of Oracle and Sun. This includes old school billboards and commercials on a variety of old and new media platforms...

All this makes for some fantastic visuals, but how exactly is Marvel using Oracle? The list of Oracle products being used by Marvel is diverse:

  • Oracle E-Business Suite, including Financials, Human Resources, Self Service HR, Manufacturing and Incentive Compensation
  • Oracle Business Intelligence Suite
  • Oracle Configurator
  • Oracle Enterprise Content Management Suite (formerly Stellent)
  • Oracle Insight
Further information on how Marvel is able to keep track of inventory and manage the budget on epics like Iron Man is revealed in Support for Superheroes, Avengers, Assemble! and Marvel Entertainment Grows its Business with Oracle (video).
