The last couple of articles I have written focused on metadata, or DDL, extraction for Oracle. The search for a part III to those articles led me to Oracle's Data Pump utility, not necessarily for the data movement piece, but because it has an API for metadata. Even though I have been using 10g for quite some time, I have yet to use Data Pump, so I thought this would be a great way to introduce myself, and possibly you the reader, to this new utility. This article serves as a basic introduction to Data Pump; in subsequent articles we will walk through the new command-line options for Data Pump's export and import (expdp & impdp) and look at the PL/SQL packages DBMS_DATAPUMP and DBMS_METADATA.
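As a small taste of what those follow-up articles will cover, here is a minimal command-line sketch of a schema-level Data Pump export and re-import; the DATA_PUMP_DIR directory object, the scott schema, and the file names are assumptions chosen purely for illustration:

    # export the SCOTT schema (assumes the DATA_PUMP_DIR directory object
    # exists and the user can read/write it)
    expdp scott/tiger schemas=scott directory=DATA_PUMP_DIR dumpfile=scott.dmp logfile=scott_exp.log

    # re-import the same dump file, remapping SCOTT into a test schema
    impdp system/manager directory=DATA_PUMP_DIR dumpfile=scott.dmp remap_schema=scott:scott_test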
This article is the result of observations of the UNDO tablespace in Oracle 9i and Oracle 10g in various situations. We start with a simple query showing how to monitor the amount of undo generated in a session over a specific period. We then investigate the creation, expansion, and resizing of the UNDO tablespace, and the issues that govern the reuse of UNDO segments. The impact of parameters such as UNDO_RETENTION in Oracle 9i, and of UNDO_RETENTION together with the RETENTION GUARANTEE clause in Oracle 10g, is discussed using simple, reproducible examples.
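For readers who want to experiment before reading the full article, a minimal sketch of a session-level undo check is shown below; the undo tablespace name undotbs1 is an assumption, and the query is meant to be run from (or filtered to) the session of interest:

    -- undo blocks and undo records used by the current session's
    -- active transaction (returns no rows if no transaction is open)
    SELECT t.used_ublk, t.used_urec
      FROM v$transaction t, v$session s
     WHERE t.ses_addr = s.saddr
       AND s.audsid = USERENV('SESSIONID');

    -- 10g only: honour UNDO_RETENTION even under space pressure
    ALTER TABLESPACE undotbs1 RETENTION GUARANTEE;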
In this article James continues to explore Oracle's Metadata API and provides a powerful function to compare objects and schemas and print the DDL required to bring them into sync.
This article shows how Oracle's Heterogeneous Services can be configured to allow a database to connect to a Microsoft Access database using standard database links. The method described can be used to connect to MS Access from just about any platform, whether Unix/Linux or Windows.
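A rough sketch of the moving parts follows; all names here (the HS SID hsaccess, the ODBC DSN MyAccessDB, the link name, and the queried table) are invented for illustration, and the listener.ora and tnsnames.ora entries the article describes are assumed to be in place:

    # $ORACLE_HOME/hs/admin/inithsaccess.ora
    HS_FDS_CONNECT_INFO = MyAccessDB     # ODBC data source pointing at the .mdb file
    HS_FDS_TRACE_LEVEL  = OFF

    -- after adding the matching (HS=OK) listener entry and tnsnames alias,
    -- create and use the link from SQL*Plus:
    CREATE DATABASE LINK access_link
      CONNECT TO "admin" IDENTIFIED BY "password"
      USING 'hsaccess';

    SELECT * FROM customers@access_link;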
In this article James explores Oracle's Metadata API (DBMS_METADATA) and shows how database users can extract object definitions (DDL statements) from an Oracle database without having to go through a stack of dictionary views.
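If you just want a quick taste of the API before reading on, here is a minimal SQL*Plus example; the SCOTT.EMP table is only an example object:

    SET LONG 100000 PAGESIZE 0
    SELECT DBMS_METADATA.GET_DDL('TABLE', 'EMP', 'SCOTT') FROM dual;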
Jared explains how Oracle manages passwords and how "thinking like a hacker" can help you to better protect your databases from potential password theft.
There is a great debate about the rapidly falling cost of RAM and the performance benefits of fully caching Oracle databases. Let's take a closer look at the issues surrounding large RAM data buffers, tuning by adjusting system parameters, and using fast hardware to compensate for sub-optimal Oracle code.
Prior to Oracle9i, the only two cost-based optimizer modes were all_rows and first_rows optimization. One of the shortcomings of traditional first_rows SQL optimization was that the first_rows goal did not know the scope of the query and generally favored index access over full-table scans.
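Oracle9i addressed this by adding the first_rows_n optimizer modes and the matching FIRST_ROWS(n) hint, which optimize for returning the first n rows. A small sketch, with the table and predicate invented for illustration:

    -- optimize the whole session for fast delivery of the first 10 rows
    ALTER SESSION SET optimizer_mode = first_rows_10;

    -- or do it per statement with a hint
    SELECT /*+ first_rows(10) */ empno, ename
      FROM emp
     WHERE deptno = 20;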
Sometimes it is a rogue query, sometimes a simple data clean-up effort by the users; whatever the cause may be, inadvertent data loss is a very common phenomenon. Database management systems provide backup and recovery capabilities to ensure the safety and protection of valuable enterprise data in case of data loss; however, not every data-loss situation calls for a complete and tedious recovery exercise from backup. Oracle introduced flashback features in Oracle 9i and 10g to address simpler data recovery needs.
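A quick sketch of the two most common forms, using an invented emp table and a 15-minute window purely as an example (and assuming enough undo is retained to cover that window):

    -- Flashback Query: see the rows as they were 15 minutes ago
    SELECT * FROM emp
      AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '15' MINUTE);

    -- Oracle 10g Flashback Table: rewind the table itself
    -- (row movement must be enabled first)
    ALTER TABLE emp ENABLE ROW MOVEMENT;
    FLASHBACK TABLE emp TO TIMESTAMP (SYSTIMESTAMP - INTERVAL '15' MINUTE);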
Prior to Oracle10g, capturing wait event information was a cumbersome process involving the setting of special events (e.g. 10046) and the reading of complex trace dumps. Fortunately, Oracle10g has simplified the way that wait event information is captured, and there is a wealth of new v$ and wrh$ views relating to Oracle wait events.
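As a small example of how accessible this information has become, here is one way to query a 10g view (the wait_class column is new in 10g; filtering out the Idle class and ordering by time waited is just one way to slice it):

    -- top non-idle wait events for the instance since startup
    SELECT event, total_waits, time_waited
      FROM v$system_event
     WHERE wait_class <> 'Idle'
     ORDER BY time_waited DESC;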