Advisor Webcast Recordings
Did you know that Advisor Webcasts are recorded and available for download? Topics covered include many Oracle products as well as My Oracle Support. Note 740966.1 has the details on how the program works.
One author you've probably heard of: Charles Darwin
The other? Famous in his time, but in the 20th and 21st centuries largely forgotten: Alfred Russel Wallace.
Darwin was a Big Data scientist, spending 20 years after his trip to the Galapagos gathering data from his own experiments and from botanists around the world, to make his theory unassailable. Wallace was a field naturalist, studying species and variation, up close and very personal.
Both ended up in the same place at roughly the same time, driven by the inescapable conclusion from these three facts:
1. More organisms are born than can survive (for their full "normal" lifespan).
2. Like father like son: we inherit characteristics from our parents.
3. NOT like father like son: each offspring varies in some way from its parents.
So who/what survives to reproduce and pass on its genes? Or rather, who dies and why? You can die purely by accident. You are the biggest, strongest lion. Nothing can beat you. But a tree falls on you. Dead and gone.
Or you can survive because you have an advantage, however slight, that another in your species lacks. Your beak is slightly more narrow and lets you get at all the nuts on the tree. Your legs are slightly longer so you can avoid the tiger. And so on, everything sorting out how to eat, how to survive long enough to reproduce, from bacteria to coral to fish to mammals.
And with each passing generation, the mutations that help you survive get passed along, and so we (humans and everyone, everything) change - sometimes slowly, sometimes quickly. But change we do.
With this announcement on July 1, 1858, humans now had a way of understanding how the world works without having to fall back on some unknowable god or gods. And we have also been able to build on Wallace's and Darwin's insight to now understand, perhaps too well, how life works on our planet, and how similar we are to so many other species.
Which means - to my way of thinking - that we no longer have any excuses, we humans, for our ongoing devastation and depletion of our world and our co-inhabitants.
In a more rational world, in which humans shared their planet with everything around them, instead of consuming everything in sight, July 1 would be an international day of celebration.
Well, at least I posted a note on my blog! Plus I will go outside later and cut back invasives, to help native trees grow.
How will you celebrate International Evolution Day?
Here are some links to information about evolution, about the way these two men got to the point of announcing their discoveries, and more.
You will read in some of these articles about Wallace being "robbed" of his just fame and recognition; I must tell you that Wallace, in his own words and the way he lived his life, was gracious and generous in spirit. He always saw Darwin as the one who fully elaborated the theory, making its acceptance so instantly widespread across Europe. He did not seem the least bit jealous.
And Wallace was, in many ways, a far more interesting human being than Darwin. I encourage you to check out his autobiography, My Life, as a way of being introduced to one of my heroes.
The rise of the bring-your-own-device (BYOD) trend has improved employee productivity, but it has also raised new database security concerns.
Enterprises don't want to sacrifice worker productivity and happiness simply because server activity can't be properly managed. There's no reason to abandon BYOD. All it requires is assiduous surveillance, new usage protocols and network optimization.
The biggest concern comes from within
Every organization has at least one staff member who couldn't be more dissatisfied with his or her current work situation. The idea of the disgruntled employee may seem somewhat cartoonish, but it's important that businesses consider the situation as a serious threat to data confidentiality.
Chris DiMarco, a contributor to InsideCounsel, acknowledged that mobile devices can be useful assets to personnel harboring ill intentions. David Long-Daniels, co-chairman of Greenberg Traurig's Global Labor & Employment Practice, noted that malicious activity can be carried out through smartphones in a number of ways, and it all starts with willingly sharing information.
"What happens if an individual leaves and you don't have a policy that allows you to wipe their device?" Long-Daniels posited, as quoted by the source.
Set up a protocol
Thankfully, there's a way you can deter malevolent employees from stealing critical information. Bret Arsenault, CIO of Microsoft and contributor to Dark Reading, noted that the software developer has successfully deterred deviancy by implementing database active monitoring and segregating personal and corporate data. He acknowledged that any device accessing company email must:
- Encrypt the information stored on the device
- Be activated by a PIN
- Enable remote management and application updates to protect Microsoft's programs
Handling transactions off-premise has been a significant boon for Microsoft. The organization consistently deploys products that act as administrators between its own databases and the personal devices of employees. In addition, the solution allows Microsoft to remove any corporate intelligence from devices in the event the user leaves the enterprise.
Implement an access strategy
Depending on what hardware an employee is using and how trustworthy a worker is deemed to be, Microsoft defines how much database access a person will receive. Arsenault maintained that the business asks the following questions:
- What kind of email solution is an individual using? Is it personal or corporate?
- Is his or her device managed and authenticated by Microsoft or handled solely by the employee?
- Is the device being used from a known or an unidentified location?
With the aforementioned approaches in mind and sound remote database support at their backs, enterprises will be able to benefit from the flexible workflow BYOD offers without suffering from security woes.
The post Microsoft’s database administration strengthens BYOD security appeared first on Remote DBA Experts.
Yesterday was my first day at 18F!
What is 18F? We're a small, little-known government organization that works outside the usual channels to accomplish special projects. It involves black outfits and a lot of martial arts.
Kidding! Sort of. 18F is a new agency within the GSA that does citizen-focused work for other parts of the U.S. Government, working small, quick projects to make information more accessible. We're using all the tricks: small teams, agile development, rapid iteration, open-source software, test-first, continuous integration. We do our work in the open.
Sure, this is old hat to you, faithful blog readers. But bringing it into government IT work is what makes it exciting. We're hoping that the techniques we use will ripple out beyond the immediate projects we work on, popularizing them throughout government IT and helping efficiency and responsiveness throughout. This is a chance to put all the techniques I've learned from you to work for all of us. Who wouldn't love to get paid to work for the common good?
Obviously, this is still my personal blog, so nothing I say about 18F counts as official information. Just take it as my usual enthusiastic babbling.
Oracle Enterprise Manager 12c Release 4 has a lot of new features; however, I want to focus on a feature that has been in OEM12c for a while: Business Intelligence Publisher (BIP). BIP has been part of OEM12c since it was initially released, although at first it was a bit cumbersome to get installed. With the release of 12.1.0.4, the OEM team has done a great job of making the process a lot easier. This post is not directly about BIP installation; just understand that the process is easier and details can be found here.
What I want to focus on is how to configure BIP, once installed, to use shared storage. I don’t recall whether shared storage was required in earlier versions of OEM12c; however, if you want to share BIP reports between OMS nodes in a high-availability configuration, a shared location is required. The initial directions for reconfiguring BIP for shared storage can be found here.
In order to allow multiple OMS nodes to support multiple BIP servers, the following command needs to be run:
emctl config oms -bip_shared_storage -config_volume <directory location> -cluster_volume <directory location>
Note: The directory location supplied for the shared location has to be accessible by both OMS nodes.
Example: emctl config oms -bip_shared_storage -config_volume /oms/BIP/config -cluster_volume /oms/BIP/cluster
When the reconfiguring of BIP begins, you will be asked for the Admin User’s password (Weblogic User) and the SYSMAN password. Supply these and then wait for the completion of the script. Once completed the CONFIG and CLUSTER directories for BIP will be moved to the location specified.
The new directory locations can be verified from the BIP web page under Administration -> Server Configuration.
In the end, reconfiguring BIP to use shared storage is quite simple.
Filed under: OEM
I’ve already written about the 12cR3 to 12cR4 upgrade here. I did a few run-throughs at home to practice it and it all seemed good.
Setting The Scene
Just to set the scene, for our production environment we run Cloud Control in a VMware virtual machine, using Oracle Linux 6.5 as the guest OS. With that setup, we can use a simple installation (DB and OMS on the same VM) and use VMware to provide our failover, rather than having to worry about multiple OMS installations and any DB failover technology etc. If there’s one thing I’ve learned about Cloud Control, it’s Keep It Simple Stupid (KISS)! As far as our managed servers go, most of our databases and all our middle tier stuff runs on VMware and Oracle Linux too. We have a handful of things still hanging around on HP-UX and Solaris, which will hopefully be migrated soon…
Upgrade Attempt 1 : Non-Starter
Yesterday I started the upgrade of our production system. Pretty much straight out of the blocks I hit a problem. It didn’t like the agents running on our HP-UX servers. The upgrades of the HP-UX agents are so painful. Every time so far I’ve had to reinstall them. As a result, I didn’t bother to upgrade them last time and kept running with the previous version of the agents. The upgrade wouldn’t have anything to do with that, so I put the Cloud Control upgrade aside and spent yesterday upgrading the HP-UX agents to 12cR3, before I could attempt the 12cR4 Cloud Control upgrade.
As usual, the upgrade of the agents on HP-UX involved me uninstalling, removing all the targets, installing, discovering all the targets and setting up the backups etc. Not all of it is scripted yet, so it is an annoying and painful process. I’m not sure if other HP-UX users suffer this, but it seems pretty consistently bad for us. The sooner we get rid of these straggling HP-UX servers the better!
So this wasn’t so much a failure of the upgrade. It was really down to me being lazy and not bothering to upgrade some agents.
Fast forward to this morning and I was actually ready to start the upgrade.
Upgrade Attempt 2 : Success
With the 12cR3 agents in place on HP-UX, the upgrade ran past that step with no problems and on to the main body of the installation. The install and upgrade were textbook.
I’ve upgraded the agent on the cloud control server, but I’m not going to upgrade any of the other agents until I know things are working fine.
Tim…Enterprise Manager Cloud Control 12cR4 Production Upgrade was first posted on July 1, 2014 at 11:12 am.
Last week was a great party for the entire Google developer family, including Google Cloud Platform. And within the Cloud Platform, Big Data processing services. Which is where my focus has been in the almost two years I’ve been at Google.
It started with a bang, when our fearless leader Urs unveiled Cloud Dataflow in the keynote. Supported by a very timely demo (streaming analytics for a World Cup game) by my colleague Eric.
After the keynote, we had three live sessions:
In “Big Data, the Cloud Way“, I gave an overview of the main large-scale data processing services on Google Cloud:
- Cloud Pub/Sub, a newly-announced service which provides reliable, many-to-many, asynchronous messaging,
- the aforementioned Cloud Dataflow, to implement data processing pipelines which can run either in streaming or batch mode,
- BigQuery, an existing service for large-scale SQL-based data processing at interactive speed, and
- support for Hadoop and Spark, making it very easy to deploy and use them “the Cloud Way”, well integrated with other storage and processing services of Google Cloud Platform.
The next day, in “The Dawn of Fast Data“, Marwa and Reuven described Cloud Dataflow in a lot more detail, including code samples. They showed how to easily construct a streaming pipeline which keeps a constantly-updated lookup table of most popular Twitter hashtags for a given prefix. They also explained how Cloud Dataflow builds on over a decade of data processing innovation at Google to optimize processing pipelines and free users from the burden of deploying, configuring, tuning and managing the needed infrastructure. Just like Cloud Pub/Sub and BigQuery do for event handling and SQL analytics, respectively.
Later that afternoon, Felipe and Jordan showed how to build predictive models in “Predicting the future with the Google Cloud Platform“.
We had also prepared some recorded short presentations. To learn more about how easy and efficient it is to use Hadoop and Spark on Google Cloud Platform, you should listen to Dennis in “Open Source Data Analytics“. To learn more about block storage options (including SSD, both local and remote), listen to Jay in “Optimizing disk I/O in the cloud“.
It’s liberating to now be able to talk freely about recent progress on our quest to equip Google Cloud users with easy to use data processing tools. Everyone can benefit from Google’s experience making developers productive while efficiently processing data at large scale. With great power comes great productivity.
Packt Publishing are celebrating 10 glorious years of publishing books. To celebrate this huge milestone, from June 26th Packt is offering all of its eBooks and Videos at just $10 each for 10 days. This promotion covers every title, and customers can stock up on as many copies as they like until July 5th. Explore this offer here: http://bit.ly/1m1PPqj
Recently, some new versions of Java were made available. Most people think Java updates are boring and only security-oriented, but one of the latest updates (7u40) includes a feature which deserves attention: Java Mission Control 5.2.
If you know the Oracle JRockit JVM a little, you might have heard about JMC, which was called JRockit Mission Control in the past. It is a tool suite, bundled with the Hotspot JDK since this 7u40 release, which allows you to monitor and profile your JVM.
Previously it was only available for JRockit JVMs, but it has now been ported to Hotspot. JMC is far more accurate and complete than JConsole or the other bundled tools, and it affects JVM performance by no more than 1%.
JMC gathers low level information thanks to its Flight Recorder tool which listens and waits for internal events. It can then monitor, manage, profile, and eliminate memory leaks from the JVM.
The new version of JMC has a new browser with subnodes for the available server-side services and their states. It is supported by Eclipse 3.8.2/4.2.2 and later, and it allows deeper management of MBeans, especially setting values directly in the attribute tree. It also converges with JRockit event management: all information provided by JRockit is now available in Hotspot as well.
To enable JMC, you will have to add the following arguments to the JVM:
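On the standard Oracle Hotspot JDK (7u40 and later), the Flight Recorder engine behind JMC is gated behind the commercial-features switch, so the arguments are typically along these lines (a hedged sketch; `MyApp` is a placeholder for your own main class, and you should verify the flags against your JDK's documentation):

```shell
# -XX:+UnlockCommercialFeatures must come before -XX:+FlightRecorder,
# otherwise the JVM refuses to start with the FlightRecorder flag.
java -XX:+UnlockCommercialFeatures -XX:+FlightRecorder MyApp
```

With these flags in place, a running JVM shows up in the JMC browser and flight recordings can be started against it.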
Questions often arise on the PeopleTools versions for which Critical Patch Updates have been published, or if a particular PeopleTools version is supported.
The attached page shows the patch number for PeopleTools versions associated with a particular CPU publication. This information will help you decide which CPU to apply and when to consider upgrading to a more current release.
The link in "CPU Date" goes to the landing page for CPU advisories, the link in the individual date, e.g. Apr-10, goes to the advisory for that date.
The page also shows the CVEs addressed in the CPU, a synopsis of each issue and the Common Vulnerability Scoring System (CVSS) value.
To find more details on any CVE, simply replace the CVE number in the sample URL below.
Common Vulnerability Scoring System Version 2 Calculator
This page shows the components of the CVSS score
Example CVSS response policy http://www.first.org/_assets/cvss/cvss-based-patch-policy.pdf
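To make the score components above a little more concrete, here is a sketch of how the published CVSS version 2 base score equation combines them (the weights are the standard v2 metric values; for authoritative results, use the calculator linked above):

```python
# Sketch of the CVSS v2 base score equation.
# av/ac/au are the exploitability metric weights (Access Vector,
# Access Complexity, Authentication); c/i/a are the impact weights
# (Confidentiality, Integrity, Availability), per the CVSS v2 spec.

def cvss2_base_score(av, ac, au, c, i, a):
    impact = 10.41 * (1 - (1 - c) * (1 - i) * (1 - a))
    exploitability = 20 * av * ac * au
    f_impact = 0 if impact == 0 else 1.176
    score = ((0.6 * impact) + (0.4 * exploitability) - 1.5) * f_impact
    return round(score, 1)

# AV:N/AC:L/Au:N/C:C/I:C/A:C -- remotely exploitable, full compromise
print(cvss2_base_score(av=1.0, ac=0.71, au=0.704,
                       c=0.660, i=0.660, a=0.660))  # -> 10.0
```

A vector with no confidentiality, integrity or availability impact scores 0.0 regardless of how easy it is to exploit, which is why the impact column deserves the most attention when triaging a CPU.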
All the details in this page are available on My Oracle Support and public sites.
The RED column indicates the last patch for a PeopleTools version and effectively the last support date for that version.
Applications Unlimited support does NOT apply to PeopleTools versions.
The Back Story
Wait time is one part of an Oracle Time Based Analysis (OTBA): Oracle process CPU consumption is a big part of the analysis, and non-idle wait time is the other category.
You can see the two categories of time clearly "in action" with my Real-Time Session Sampler script, rss.sql. It's part of my OraPub System Monitor (OSM) toolkit, which can be downloaded for free. Here's an example of the output:
I need to explain a little more about CPU time and then Oracle wait time.
An Oracle process wants to burn CPU. Without consuming CPU, an Oracle process can, well... not process work! Oracle keeps track of the CPU consumption time. It's the actual CPU time consumed; for example, 500 ms or 3 seconds.
When an Oracle process cannot burn CPU then, in Oracle terms, it must wait. When an Oracle process waits, it's as if it yells out details about why it's waiting. We call this yell a wait event. Each wait event has a name, and the name provides clues about why the process can't burn CPU and is therefore waiting.
There Are Different Reasons Why An Oracle Process Waits
There are three broad categories of time Oracle must track when a process is not consuming CPU:
- When the wait time is predetermined and not interruptible. Perhaps a latch cannot be acquired through repeated attempts, so the process takes a break (i.e., sleeps) for a fixed period of time, say 10ms.
- When the wait time is predetermined but the process can be woken by another process. Perhaps a log writer is in the middle of its three second sleep and then a server process commits. The log writer will be signaled to wake and do some work.
- When Oracle has no idea how long the wait may last. Perhaps a process submitted a block to the IO subsystem for a synchronous read. Oracle has no idea how long this may take. This situation is what I'm focusing on in this posting and I show in the video below.
Let's say when I work it's like an Oracle process consuming CPU. And if I have to stop working to drive to a meeting, it's like an Oracle process waiting. And in this situation, I really don't know how long the wait will take. It's out of my control. There could be an accident along the way (think: table level lock)! An Oracle process can experience this same kind of thing.
If I'm an Oracle server process and I discover a block I need is not in the buffer cache, I'm going to need to make a call to the OS for that block. When I make the synchronous IO call, I really do not know how long it will take and I have to wait, that is, the Oracle process must wait.
From a DBA perspective, when I perform an Oracle Time Based Analysis (OTBA) I need to know how long the IO, that is, the wait, took. How does Oracle figure this out? It's pretty simple actually. It's basically like this:
- Get the current time, start time
- Make the synchronous IO call and wait until IO received
- Get the current time, end time
- Calculate the delta, end time - start time
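The four steps above are the whole instrumentation trick. A minimal sketch of the same pattern, in Python rather than Oracle kernel code (the function names here are made up for illustration):

```python
import time

def timed_wait(blocking_call):
    """Wrap a blocking call the way a wait event is timed:
    snapshot the clock before and after, record the delta."""
    start_time = time.monotonic()         # 1. get the current time, start time
    result = blocking_call()              # 2. make the call and wait for it
    end_time = time.monotonic()           # 3. get the current time, end time
    wait_seconds = end_time - start_time  # 4. delta = end time - start time
    return result, wait_seconds

# Simulate a synchronous IO call that blocks for roughly 50 ms
result, waited = timed_wait(lambda: time.sleep(0.05) or 'block 42')
print(f"waited {waited * 1000:.0f} ms for {result}")
```

The process genuinely has no idea how long the call will block; it only learns the wait time after the fact, which is exactly why the timestamps must bracket the call.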
Actually Watching An Oracle Process Figuring Out the Wait Time
Some say that seeing is believing. For sure it helps one to learn quickly. If you want to see with your own eyes an Oracle server process determining the wait time for a multi-block synchronous read (the event name is db file scattered read), watch the video below.
Pretty cool, eh? By the way, the practice of timing processes and events has a special name: instrumentation. As we can see, Oracle has instrumented its kernel code.
Enjoy your work and thanks for reading!
You can watch seminar introductions (like above) for free on YouTube!
If you enjoy my blog, I suspect you'll get a lot out of my online or in-class training. For details go to www.orapub.com. I also offer on-site training and consulting services.
P.S. If you want me to respond to a comment or you have a question, please feel free to email me directly at craig@orapub .com. Another option is to send an email to OraPub's general email address, which is currently info@ orapub. com.
So, the World Cup is in full swing.
Now that the lesser teams have fallen by the wayside (England), we can get on with enjoying a feast of footie.
As well as a glut of goals, the current tournament has given us a salutary reminder of the importance of diet for elite athletes.
After predictably (and brilliantly) destroying England single-handedly, Luis Suarez found himself a bit peckish and nipped out for an Italian. Now the whole world seems to be commenting on his eating habits.
Like Luis, you may find yourself thinking that you’ve bitten off more than you can chew when confronted by DBMS_DATAPUMP.
The documentation does offer some help…to an extent. However, the whole thing can seem a bit fiddly, especially if you’re used to the more traditional command-line interface for Datapump.
What follows is a tour through DBMS_DATAPUMP based on my own (sometimes painful) experience, broken down into bite-sized chunks.
Much of the functionality to filter object types and even data is common to both Exports and Imports.
So, the approach I’ve taken is to cover the Export first, with a view to finally producing a Full Database export.
I’ve then used the Import process against this to demonstrate some of the package’s filtering capabilities.
So, what’s on the menu today ?
- Privileges required to run a DBMS_DATAPUMP job from your current schema and for the whole database
- Running a consistent export
- Running datapump jobs in the background
- Monitoring running jobs
- Importing from one schema to another
- Specifying the types of objects to include in Exports and Imports
- Specifying subsets of data
- DDL only Jobs
- How to Kill a Datapump Job
The full code examples have all been written and tested on Oracle XE 11gR2.
I’ve tried to maximise the use of in-line hard-coded values and minimise the number of variables in an attempt to make the code easier to follow.
Also, in these examples I’ve made use of the default DATA_PUMP_DIR directory object, but you can use any directory object to which you have the appropriate privileges.
For dessert, there are a couple of other DBMS_DATAPUMP features that I have found useful which are specific to Enterprise Edition (in one case, with the Partitioning Option):
- Including specific table partitions
- Parallel processing
So, a fair bit to get through then. I hope you have an appetite…

Privileges required for using DBMS_DATAPUMP
Obviously, the first thing you need is execute privileges on DBMS_DATAPUMP. By default, this is granted to PUBLIC, although the more security conscious DBAs will have instituted more rigorous controls over just who gets access to this package.
The other thing you’ll need is READ and WRITE privileges on a database Directory Object.
The DATA_PUMP_DIR directory object is created by default (but any old Directory Object will do)…
select directory_path
from dba_directories
where directory_name = 'DATA_PUMP_DIR'
/

DIRECTORY_PATH
--------------------------------------------------------------------------------
/u01/app/oracle/admin/XE/dpdump/
The HR user, for example, would need to be granted…
grant read, write on directory data_pump_dir to hr
/
That’s pretty much it. With just these privileges (along with those for creating/altering the relevant object types), you can run DBMS_DATAPUMP to export/import objects and data in your current schema.
Once again, assuming we’re connected as HR :
set serveroutput on size unlimited
declare
    --
    -- Just export the current schema (HR)
    --
    l_dph number;            -- The datapump job handle
    l_status varchar2(4000); -- terminating status of the job
begin
    -- create the job...
    l_dph := dbms_datapump.open
    (
        operation => 'EXPORT',
        job_mode => 'SCHEMA', -- schema level export will use the current user by default
        job_name => 'HR_SCHEMA_EXPORT' -- appropriate job name
    );
    -- Specify the name and location of the export dump file we want to create...
    dbms_datapump.add_file
    (
        handle => l_dph,
        filename => 'hr_export.dmp',
        directory => 'DATA_PUMP_DIR', -- can use any database directory object
        filetype => dbms_datapump.ku$_file_type_dump_file,
        reusefile => 1 -- if this file already exists, overwrite it
    );
    -- ...and a log file to track the progress of the export
    dbms_datapump.add_file
    (
        handle => l_dph,
        filename => 'hr_export.log',
        directory => 'DATA_PUMP_DIR',
        filetype => dbms_datapump.ku$_file_type_log_file,
        reusefile => 1
    );
    -- Kick off the export
    dbms_datapump.start_job( handle => l_dph);
    -- ...and wait for the job to complete
    dbms_datapump.wait_for_job( handle => l_dph, job_state => l_status);
    dbms_output.put_line('Job done - status '||l_status);
end;
/
It’s probably worth noting that dbms_datapump jobs contain four main components as a rule.
First DBMS_DATAPUMP.OPEN creates a kind of container to hold the definition of the job.
Next, you specify any particular requirements for the job. This must always include reference to a dump file.
Then, use DBMS_DATAPUMP.START_JOB to kick-off the job you’ve defined.
Finally, you can either wait for the job to finish (as in this case), or leave it to run in the background. We’ll come onto background execution in a bit.
If you want the God-Like powers to export/import the entire database then you will need the roles :
These roles are granted to the DBA role and the SYS user by default.
Incidentally, the fact that roles play quite a significant part in DBMS_DATAPUMP privileges means that, if you do intend to wrap this functionality in a package, it would probably be wise to create it with Invoker Rights – e.g.
create or replace package my_datapump_package
    authid current_user
as
...
Now that’s all sorted…
set serveroutput on size unlimited
declare
    --
    -- Do a no-nonsense full export
    --
    l_dph number;
    l_status varchar2(4000);
begin
    l_dph := dbms_datapump.open
    (
        operation => 'EXPORT',
        job_mode => 'FULL',
        job_name => 'MIKE_FULL_EXPORT'
    );
    -- Specify the dump file we're going to create
    dbms_datapump.add_file
    (
        handle => l_dph,
        filename => 'my_full_export.dmp',
        directory => 'DATA_PUMP_DIR',
        filetype => dbms_datapump.ku$_file_type_dump_file,
        reusefile => 1
    );
    -- ... and a log file
    dbms_datapump.add_file
    (
        handle => l_dph,
        filename => 'my_full_export.log',
        directory => 'DATA_PUMP_DIR',
        filetype => dbms_datapump.ku$_file_type_log_file,
        reusefile => 1
    );
    -- Now press the big red shiny button...
    dbms_datapump.start_job( handle => l_dph);
    -- wait around for a bit...
    dbms_datapump.wait_for_job( handle => l_dph, job_state => l_status);
    dbms_output.put_line('Job done - status '||l_status);
end;
/
Pretty much the same as with the schema export we’ve already done. However, this time, we’ve specified the job_mode as FULL rather than SCHEMA.
With this script saved as full_exp_simple.sql…
SQL> @full_exp_simple.sql
Job done - status COMPLETED

PL/SQL procedure successfully completed.

SQL>
Depending on the size of your database ( and the capacity of the server you’re running on), this may take a while.
When it’s all finished, you can wander over to the DATA_PUMP_DIR on the OS and you’ll see the fruit of your labours…
cd /u01/app/oracle/admin/XE/dpdump
ls -l my_full_export.*
-rw-r----- 1 oracle dba 321515520 Jun 18 19:55 my_full_export.dmp
-rw-r--r-- 1 oracle dba     84957 Jun 18 19:55 my_full_export.log
If we look at the logfile, we can see it starts like this :
Starting "MIKE"."MIKE_FULL_EXPORT":
Estimate in progress using BLOCKS method...
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 349.9 MB
Processing object type DATABASE_EXPORT/TABLESPACE
...
… and ends with …
. . exported "SYSTEM"."SQLPLUS_PRODUCT_PROFILE"    0 KB    0 rows
Master table "MIKE"."MIKE_FULL_EXPORT" successfully loaded/unloaded
******************************************************************************
Dump file set for MIKE.MIKE_FULL_EXPORT is:
  /u01/app/oracle/admin/XE/dpdump/my_full_export.dmp
Job "MIKE"."MIKE_FULL_EXPORT" successfully completed at 19:55:33
We now have an export. If, like me, you’ve just run this from a database where you know that there are no other active sessions, then you should have a nice consistent export. Unfortunately, this isn’t always the case…

Making your Export Read Consistent…and other tweaks
To address this issue of consistency, we’ll need to change our script a bit. Whilst we’re at it, it would also be nice to have some record in the logfile of when the job started, so we can work out how long it actually ran for.
Finally, we’d like to be able to run the export in the background so we don’t have to hang around waiting for it.
Those of you old enough to remember the original export utility will recall that you could ensure that data in an export was referentially consistent by the simple expedient of specifying consistent=y in the exp command.
The equivalent in DBMS_DATAPUMP is to specify a value for FLASHBACK_SCN.
NOTE – in order for this to work, your database must be running in ARCHIVELOG mode. This is especially relevant if you’re playing along on XE, which runs in NOARCHIVELOG by default.
To check the current status of archiving on the database :
select log_mode from v$database
/

LOG_MODE
------------
ARCHIVELOG
If the query comes back NOARCHIVELOG then you need to enable archiving.
To do this, you need to connect as SYSDBA, shutdown and then mount the database, before starting archiving and then re-opening the database.
These steps can be achieved as follows once connected to the database as sysdba :
shutdown immediate;
startup mount;
alter database archivelog
/
alter database open
/
NOTE – these steps are intended if you are messing around in your own database ( e.g. XE on a laptop, like I am here). If you’re on a controlled environment, then you need to get your friendly neighbourhood DBA to do the honours.
Assuming the database is now in archivelog mode, the next thing we need to do is find an SCN that we can pass to DBMS_DATAPUMP for it to use as a reference point for what we mean by consistent.
The SCN – System Change Number – is incremented every time a transaction completes in the database.
A quick demonstration is probably in order….
SQL> select current_scn from v$database;

CURRENT_SCN
-----------
    2774525

SQL> create table snacks( snack_name varchar2(30), description varchar2(4000));

Table created.

SQL> select current_scn from v$database;

CURRENT_SCN
-----------
    2774562

SQL> insert into snacks( snack_name, description)
  2  values( 'CHIELLINI', 'Lite bite')
  3  /

1 row created.

SQL> commit;

Commit complete.

SQL> select current_scn from v$database;

CURRENT_SCN
-----------
    2774581

SQL> drop table snacks purge;

Table dropped.

SQL> select current_scn from v$database;

CURRENT_SCN
-----------
    2774608

SQL>
The fact that the SCN increments by more than one between statements in this session is explained by transactions being run by the Oracle background processes on my database.
Anyway, if we do want to find the current SCN, or even an SCN for a given date and time, we can simply use the SQL TIMESTAMP_TO_SCN function:
select timestamp_to_scn( systimestamp) from dual;

TIMESTAMP_TO_SCN(SYSTIMESTAMP)
------------------------------
                       2647783

SQL>
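As an aside, if all you want is the current SCN (rather than one derived from a timestamp), the DBMS_FLASHBACK package offers a more direct route. A quick sketch – note this assumes you have EXECUTE privilege on DBMS_FLASHBACK (SYSTEM has it by default; other users may need a grant):

```sql
-- Current SCN straight from the database, no timestamp conversion involved
select dbms_flashback.get_system_change_number from dual;
```

This avoids the slight imprecision of TIMESTAMP_TO_SCN, which maps a timestamp to an SCN with roughly three-second granularity.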
In order to pass this information to DBMS_DATAPUMP, we need to use the SET_PARAMETERS procedure.
This should do the trick :
...
dbms_datapump.set_parameter
(
    handle => the_datapump_job_handle,
    name => 'FLASHBACK_SCN',
    value => timestamp_to_scn( systimestamp)
);
...

Adding a message to the logfile
Next on our to-do list to improve our export is a message in the logfile recording when the export job started.
Oh look, there’s a package member called LOG_ENTRY. I wonder if that will help …
...
dbms_datapump.log_entry
(
    handle => the_datapump_job_handle,
    message => 'Job starting at '||to_char(sysdate, 'HH24:MI:SS')
);
...
Incidentally, we could include the SCN we’re specifying in the message as well (although it would need to be captured in a variable before use in both the SET_PARAMETER call above and the LOG_ENTRY call).
One point to note – any entry in the logfile that results from a call to this procedure always starts with “;;;”.
Just in case you don’t fancy the idea of hanging around waiting for the job to finish, you can replace the call to WAIT_FOR_JOB with this…
...
dbms_datapump.detach( handle => the_datapump_job_handle);
...

The New, Improved Datapump Export Script
If we now apply all of these changes to our original script, it should look something like ….
set serveroutput on size unlimited
declare
    l_dph number;
    l_scn number; -- The SCN from which the export will run
begin
    -- setup the job
    l_dph := dbms_datapump.open
    (
        operation => 'EXPORT',
        job_mode => 'FULL',
        job_name => 'MIKE_FULL_EXPORT'
    );
    -- define the files
    dbms_datapump.add_file
    (
        handle => l_dph,
        filename => 'my_full_export.dmp',
        directory => 'DATA_PUMP_DIR',
        filetype => dbms_datapump.ku$_file_type_dump_file,
        reusefile => 1
    );
    dbms_datapump.add_file
    (
        handle => l_dph,
        filename => 'my_full_export.log',
        directory => 'DATA_PUMP_DIR',
        filetype => dbms_datapump.ku$_file_type_log_file,
        reusefile => 1
    );
    --
    -- Specify the SCN number to make sure that the export
    -- is a consistent copy
    --
    l_scn := timestamp_to_scn(systimestamp);
    dbms_datapump.set_parameter
    (
        handle => l_dph,
        name => 'FLASHBACK_SCN',
        value => l_scn
    );
    -- log the start time of the job
    dbms_datapump.log_entry
    (
        handle => l_dph,
        message => 'Job starting at '||to_char(sysdate, 'HH24:MI:SS')||' for SCN '||l_scn
    );
    -- start the job
    dbms_datapump.start_job( handle => l_dph);
    -- ...and leave it to run
    dbms_datapump.detach( handle => l_dph);
end;
/

Monitoring the job
The USER_DATAPUMP_JOBS view (there is also a DBA_ version available) contains details of any currently defined datapump jobs.
SQL> select job_name, state, attached_sessions
  2  from user_datapump_jobs;

JOB_NAME                       STATE                          ATTACHED_SESSIONS
------------------------------ ------------------------------ -----------------
MIKE_FULL_EXPORT               EXECUTING                                      1

SQL>
Additionally, during the export, DATAPUMP will create a temporary table with the same name as the job.
In this table, you can see which object (if any) it’s currently working on…
select object_schema, object_name, work_item,
    to_char(last_update, 'hh24:mi:ss') last_updated
from mike_full_export
where state = 'EXECUTING'
and object_schema is not null
and object_name is not null
/
There is an example of how to get “real-time” updates in the Oracle Documentation.
Unfortunately, it relies on DBMS_OUTPUT so the messages don’t get echoed to the screen until after the job is completed.
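For completeness, here is a minimal sketch of that documented polling pattern, using DBMS_DATAPUMP.GET_STATUS. I'm assuming the MIKE_FULL_EXPORT job from earlier; and the DBMS_OUTPUT caveat just mentioned still applies when running this from SQL*Plus – in practice you could write the messages to a table or file instead:

```sql
set serveroutput on size unlimited
declare
    l_dph   number;
    l_state varchar2(30);
    l_status ku$_Status;
begin
    -- attach to the running job
    l_dph := dbms_datapump.attach( job_name => 'MIKE_FULL_EXPORT', job_owner => user);
    loop
        -- ask for work-in-progress messages and the overall job status,
        -- waiting up to 5 seconds for something to report
        dbms_datapump.get_status
        (
            handle    => l_dph,
            mask      => dbms_datapump.ku$_status_wip + dbms_datapump.ku$_status_job_status,
            timeout   => 5,
            job_state => l_state,
            status    => l_status
        );
        -- echo any work-in-progress lines
        if l_status.wip is not null then
            for i in 1..l_status.wip.count loop
                dbms_output.put_line( l_status.wip(i).logtext);
            end loop;
        end if;
        exit when l_state in ('COMPLETED', 'STOPPED');
    end loop;
    dbms_datapump.detach( handle => l_dph);
end;
/
```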
The simplest way to find out what’s happening right now is to check the logfile.
On Linux, for example, you could simply do this :
cd /u01/app/oracle/admin/XE/dpdump
tail -f my_full_export.log
If you haven’t got command line access to the database server, or simply prefer to keep everything in the database, then you could just create an external table based on the logfile. After all, you already have the required privileges on the directory …
create table datapump_log_xt
(
    line number,
    text varchar2(4000)
)
organization external
(
    type oracle_loader
    default directory data_pump_dir
    access parameters
    (
        records delimited by newline
        nologfile
        fields terminated by whitespace
        (
            line recnum,
            text position(1:4000)
        )
    )
    location('')
)
reject limit unlimited
/
If we want to check progress, we can simply “tail” the file like this :
alter table datapump_log_xt location ('my_full_export.log')
/

select text
from datapump_log_xt
where line > ( select max(line) - 10 from datapump_log_xt)
order by line;

TEXT
--------------------------------------------------------------------------------
Processing object type DATABASE_EXPORT/CONTEXT
Processing object type DATABASE_EXPORT/SCHEMA/PUBLIC_SYNONYM/SYNONYM
Processing object type DATABASE_EXPORT/SCHEMA/SYNONYM
Processing object type DATABASE_EXPORT/SCHEMA/TYPE/TYPE_SPEC
Processing object type DATABASE_EXPORT/SCHEMA/TYPE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/PRE_SYSTEM_ACTIONS/PROCACT_SYSTEM
Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/PROCOBJ
Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/POST_SYSTEM_ACTIONS/PROCACT_SYSTEM
Processing object type DATABASE_EXPORT/SCHEMA/PROCACT_SCHEMA
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE

10 rows selected.

SQL>
So far, we’ve looked exclusively at Exporting everything either from a given schema or an entire database.
Datapump also allows you to filter the data. Generally speaking, these filters apply whether you are importing or exporting.
One of the many reasons that you may want to take a DataPump export may be to refresh a development environment.
It’s possible that the schema you’re refreshing on the Development database has a different name from the one on your Production environment. Let’s say, for the sake of argument, that we have a HR_DEV user in our development environment…
create user hr_dev identified by pwd
/

grant connect, resource, unlimited tablespace,
    create view, create sequence, create session,
    alter session, create synonym, create table,
    create procedure
to hr_dev
/

alter user hr_dev
    default tablespace users
    temporary tablespace temp
/
Fortunately, importing the HR objects in the export file into the HR_DEV user is fairly simple :
...
dbms_datapump.metadata_remap
(
    handle => the_datapump_job_handle,
    name => 'REMAP_SCHEMA',
    old_value => 'HR',
    value => 'HR_DEV'
);
...

Specifying schemas and object types
At this point, let’s say that we only want to import the HR schema. The rest of the objects in the export file aren’t really relevant to us.
Furthermore, let’s assume we only want to import Tables and Sequences as we’ll re-create all of our PL/SQL stored program units, views etc from source code files.
Allow me to introduce the METADATA_FILTER procedure :
...
dbms_datapump.metadata_filter
(
    handle => the_datapump_job_handle,
    name => 'SCHEMA_EXPR',
    value => q'[= 'HR']'
);
...
Once again, not as painful as it might appear. However, it’s here where we begin to see the benefits of quote delimiters.

Filtering by object types
Here’s where things get a bit more interesting.
Remember the export logfile? At the beginning, there were a number of entries like:
Processing object type DATABASE_EXPORT/TABLESPACE
Processing object type DATABASE_EXPORT/PROFILE
Processing object type DATABASE_EXPORT/SYS_USER/USER
Processing object type DATABASE_EXPORT/SCHEMA/USER
...
These paths are the basis for how datapump determines what to export.
Fortunately, they are stored in some publicly available views. We’re doing a SCHEMA import, so we can check that the relevant object_path is available to us like this:
select object_path, comments
from schema_export_objects
where object_path in ('TABLE', 'SEQUENCE')
/

OBJECT_PATH          COMMENTS
-------------------- --------------------------------------------------------------------------------
SEQUENCE             Sequences in the selected schemas and their dependent grants and audits
TABLE                Tables in the selected schemas and their dependent objects
The Path Map looks to be a flattened hierarchy (possibly an XML representation). The point here is that, by specifying a node in this hierarchy, you can persuade DBMS_DATAPUMP to do your bidding.
As both TABLE and SEQUENCE are nodes in the object_path, we should be able to use that here…
...
dbms_datapump.metadata_filter
(
    handle => the_datapump_job_handle,
    name => 'INCLUDE_PATH_EXPR',
    value => q'[IN ('TABLE', 'SEQUENCE')]'
);
...
This will give us everything at and below the TABLE and SEQUENCE nodes. Therefore, you will also get indexes for the tables… as well as triggers:
select object_path, comments
from schema_export_objects
where object_path like '%TABLE%TRIGGER%'
/

OBJECT_PATH                    COMMENTS
------------------------------ ------------------------------
SCHEMA_EXPORT/TABLE/TRIGGER    Triggers
TABLE/TRIGGER                  Triggers
Remember, we don’t want any PL/SQL program units, so we need to filter these out. Fortunately, calls to the METADATA_FILTER procedure seem to be additive. As TRIGGER appears below TABLE in the hierarchy, we can filter triggers out with an additional call to the procedure:
...
dbms_datapump.metadata_filter
(
    handle => the_datapump_job_handle,
    name => 'EXCLUDE_PATH_EXPR',
    value => q'[= 'TRIGGER']'
);
...

The finished HR Import Script
Here it is then, a script to Import Tables and Sequences from HR to HR_DEV :
set serveroutput on size unlimited
declare
    --
    -- Import of HR objects to HR_DEV
    -- Only importing Tables and Sequences
    --
    l_dph number;
    l_predicate varchar2(4000);
begin
    -- Setup the job "context" as usual
    l_dph := dbms_datapump.open
    (
        operation => 'IMPORT',
        job_mode => 'SCHEMA',
        job_name => 'IMP_HR_DEV'
    );
    -- We only want objects in the HR schema
    dbms_datapump.metadata_filter
    (
        handle => l_dph,
        name => 'SCHEMA_EXPR',
        value => q'[= 'HR']'
    );
    -- and then only Sequences and Tables
    dbms_datapump.metadata_filter
    (
        handle => l_dph,
        name => 'INCLUDE_PATH_EXPR',
        value => q'[IN ('TABLE', 'SEQUENCE')]'
    );
    -- ...but no triggers...
    dbms_datapump.metadata_filter
    (
        handle => l_dph,
        name => 'EXCLUDE_PATH_EXPR',
        value => q'[= 'TRIGGER']'
    );
    -- and we want to import these objects into the HR_DEV schema
    dbms_datapump.metadata_remap
    (
        handle => l_dph,
        name => 'REMAP_SCHEMA',
        old_value => 'HR',
        value => 'HR_DEV'
    );
    --
    -- If we find a table that already exists in the target
    -- schema then overwrite it..
    --
    dbms_datapump.set_parameter
    (
        handle => l_dph,
        name => 'TABLE_EXISTS_ACTION',
        value => 'REPLACE'
    );
    -- Now point to the export dump file to take this from
    dbms_datapump.add_file
    (
        handle => l_dph,
        filename => 'my_full_export.dmp',
        directory => 'DATA_PUMP_DIR',
        filetype => dbms_datapump.ku$_file_type_dump_file
    );
    -- ...and make sure we log what's going on...
    dbms_datapump.add_file
    (
        handle => l_dph,
        filename => 'hr_dev_imp.log',
        directory => 'DATA_PUMP_DIR',
        filetype => dbms_datapump.ku$_file_type_log_file,
        reusefile => 1
    );
    -- log the start time...
    dbms_datapump.log_entry
    (
        handle => l_dph,
        message => 'Job starting at '||to_char(sysdate, 'HH24:MI:SS')
    );
    -- start the job...
    dbms_datapump.start_job( handle => l_dph);
    -- and detach...
    dbms_datapump.detach(handle => l_dph);
end;
/
Now to give it a whirl (saved as hr_dev_imp.sql)…
SQL> @hr_dev_imp.sql

PL/SQL procedure successfully completed.

SQL>
Looking at the logfile, all appears well…
;;; Job starting at 18:15:29
Master table "MIKE"."IMP_HR_DEV" successfully loaded/unloaded
Starting "MIKE"."IMP_HR_DEV":
Processing object type DATABASE_EXPORT/SCHEMA/SEQUENCE/SEQUENCE
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
. . imported "HR_DEV"."COUNTRIES"                        6.367 KB      25 rows
. . imported "HR_DEV"."DEPARTMENTS"                      7.007 KB      27 rows
. . imported "HR_DEV"."EMPLOYEES"                        16.80 KB     107 rows
. . imported "HR_DEV"."JOBS"                             6.992 KB      19 rows
. . imported "HR_DEV"."JOB_HISTORY"                      7.054 KB      10 rows
. . imported "HR_DEV"."LOCATIONS"                        8.273 KB      23 rows
. . imported "HR_DEV"."REGIONS"                          5.476 KB       4 rows
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/INDEX
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/CONSTRAINT/CONSTRAINT
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/COMMENT
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/STATISTICS/TABLE_STATISTICS
Job "MIKE"."IMP_HR_DEV" successfully completed at 18:16:15
If we now connect as HR_DEV and check the objects we have in our schema :
select object_name, object_type
from user_objects
order by object_type, object_name
/

OBJECT_NAME                    OBJECT_TYPE
------------------------------ ------------------------------
COUNTRY_C_ID_PK                INDEX
DEPT_ID_PK                     INDEX
DEPT_LOCATION_IX               INDEX
EMP_DEPARTMENT_IX              INDEX
EMP_EMAIL_UK                   INDEX
EMP_EMP_ID_PK                  INDEX
EMP_JOB_IX                     INDEX
EMP_MANAGER_IX                 INDEX
EMP_NAME_IX                    INDEX
JHIST_DEPARTMENT_IX            INDEX
JHIST_EMPLOYEE_IX              INDEX
JHIST_EMP_ID_ST_DATE_PK        INDEX
JHIST_JOB_IX                   INDEX
JOB_ID_PK                      INDEX
LOC_CITY_IX                    INDEX
LOC_COUNTRY_IX                 INDEX
LOC_ID_PK                      INDEX
LOC_STATE_PROVINCE_IX          INDEX
REG_ID_PK                      INDEX
DEPARTMENTS_SEQ                SEQUENCE
EMPLOYEES_SEQ                  SEQUENCE
LOCATIONS_SEQ                  SEQUENCE
COUNTRIES                      TABLE
DEPARTMENTS                    TABLE
EMPLOYEES                      TABLE
JOBS                           TABLE
JOB_HISTORY                    TABLE
LOCATIONS                      TABLE
REGIONS                        TABLE

29 rows selected.

SQL>

Sequences are special
There is one thing to be aware of with importing sequences.
Most database objects (procedures, packages, triggers, views etc.) can be overwritten using CREATE OR REPLACE. As with tables, you cannot do this with sequences.
Unlike tables, DBMS_DATAPUMP does not have an option to re-create existing sequences.
This means that, if we were to refresh the HR_DEV schema again using the same script, we’d be likely to run into a bit of a problem, as you can see from this log file entry :
...
Processing object type DATABASE_EXPORT/SCHEMA/SEQUENCE/SEQUENCE
ORA-31684: Object type SEQUENCE:"HR_DEV"."LOCATIONS_SEQ" already exists
ORA-31684: Object type SEQUENCE:"HR_DEV"."DEPARTMENTS_SEQ" already exists
ORA-31684: Object type SEQUENCE:"HR_DEV"."EMPLOYEES_SEQ" already exists
...
With this in mind, it’s probably a good idea to drop any sequences prior to refreshing with an import…
set serveroutput on size unlimited
begin
    for r_seq in
    (
        select sequence_name
        from dba_sequences
        where sequence_owner = 'HR_DEV'
    )
    loop
        --
        -- check the sequence name is "clean"
        --
        if regexp_instr( replace( r_seq.sequence_name, '_'), '[[:punct:]]|[[:space:]]') > 0 then
            raise_application_error( -20000, 'Sequence name contains dodgy characters.');
        end if;
        -- drop the sequence
        execute immediate 'drop sequence hr_dev.'||r_seq.sequence_name;
        dbms_output.put_line('Sequence '||r_seq.sequence_name||' dropped.');
    end loop;
end;
/
If we run this we’ll get…
SQL> @drop_sequences.sql
Sequence DEPARTMENTS_SEQ dropped.
Sequence EMPLOYEES_SEQ dropped.
Sequence LOCATIONS_SEQ dropped.

PL/SQL procedure successfully completed.

SQL>

Specifying a subset of data
Sometimes you don’t want to export/import everything. You might just want a few tables with a subset of data.
In our HR_DEV environment we want to focus on the EMPLOYEES table. We don’t want all of the rows – let’s just have one department…
Once again, METADATA_FILTER can be used here :
dbms_datapump.metadata_filter
(
    handle => the_datapump_job_handle,
    name => 'NAME_LIST',
    value => q'['EMPLOYEES']',
    object_path => 'TABLE'
);
NAME_LIST tells datapump to look for object names rather than object type paths. Specifying the object type path as TABLE means that datapump will only look for a table called EMPLOYEES and not any other type of object with the same name.
To restrict the rows, we need to use the DATA_FILTER procedure. Unlike its METADATA counterpart, you need to provide a syntactically correct predicate for it to work…
...
dbms_datapump.data_filter
(
    handle => the_datapump_job_handle,
    name => 'SUBQUERY',
    value => 'where department_id = 20',
    table_name => 'EMPLOYEES',
    schema_name => 'HR'
);
...

The Data Sub-set Import Script
Before running this, I’ve taken the precaution of dropping all of the objects from the HR_DEV schema as I don’t want to run into any pesky constraint errors…
set serveroutput on size unlimited
declare
    --
    -- import Subset of employees into HR_DEV ( having cleared down the schema first)
    --
    l_dph number;
begin
    l_dph := dbms_datapump.open
    (
        operation => 'IMPORT',
        job_mode => 'SCHEMA',
        job_name => 'IMP_EMPLOYEES'
    );
    -- We only want objects in the HR schema
    dbms_datapump.metadata_filter
    (
        handle => l_dph,
        name => 'SCHEMA_EXPR',
        value => q'[= 'HR']'
    );
    -- only TABLES...
    dbms_datapump.metadata_filter
    (
        handle => l_dph,
        name => 'INCLUDE_PATH_EXPR',
        value => q'[= 'TABLE']'
    );
    -- and then only the EMPLOYEES table
    dbms_datapump.metadata_filter
    (
        handle => l_dph,
        name => 'NAME_LIST',
        value => q'['EMPLOYEES']',
        object_path => 'TABLE'
    );
    -- without any triggers or ref constraints
    dbms_datapump.metadata_filter
    (
        handle => l_dph,
        name => 'EXCLUDE_PATH_EXPR',
        value => q'[IN ('TRIGGER', 'REF_CONSTRAINT')]'
    );
    -- subset of EMPLOYEES
    dbms_datapump.data_filter
    (
        handle => l_dph,
        name => 'SUBQUERY',
        value => 'where department_id = 20',
        table_name => 'EMPLOYEES',
        schema_name => 'HR'
    );
    dbms_datapump.metadata_remap
    (
        handle => l_dph,
        name => 'REMAP_SCHEMA',
        old_value => 'HR',
        value => 'HR_DEV'
    );
    --
    -- If we find a table that already exists in the target
    -- schema then overwrite it..
    --
    dbms_datapump.set_parameter
    (
        handle => l_dph,
        name => 'TABLE_EXISTS_ACTION',
        value => 'REPLACE'
    );
    --
    -- Now point to the export dump file to take this from
    --
    dbms_datapump.add_file
    (
        handle => l_dph,
        filename => 'my_full_export.dmp',
        directory => 'DATA_PUMP_DIR',
        filetype => dbms_datapump.ku$_file_type_dump_file
    );
    --
    -- ...and make sure we log what's going on...
    --
    dbms_datapump.add_file
    (
        handle => l_dph,
        filename => 'hr_dev_employees_imp.log',
        directory => 'DATA_PUMP_DIR',
        filetype => dbms_datapump.ku$_file_type_log_file,
        reusefile => 1
    );
    -- log the start time...
    dbms_datapump.log_entry
    (
        handle => l_dph,
        message => 'Job starting at '||to_char(sysdate, 'HH24:MI:SS')
    );
    -- start the job...
    dbms_datapump.start_job( handle => l_dph);
    -- and detach...
    dbms_datapump.detach(handle => l_dph);
end;
/
Notice that I’ve also specifically excluded the REF_CONSTRAINTS and TRIGGERS from the import.
Run this and we get the following output in the logfile :
;;; Job starting at 18:35:49
Master table "MIKE"."IMP_EMPLOYEES" successfully loaded/unloaded
Starting "MIKE"."IMP_EMPLOYEES":
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
. . imported "HR_DEV"."EMPLOYEES"                        16.80 KB  2 out of 107 rows
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/INDEX
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/CONSTRAINT/CONSTRAINT
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/COMMENT
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/STATISTICS/TABLE_STATISTICS
Job "MIKE"."IMP_EMPLOYEES" successfully completed at 18:36:33
We can see that only the EMPLOYEES table and its associated indexes have been imported:
select object_name, object_type
from user_objects
order by object_type, object_name
/

OBJECT_NAME                    OBJECT_TYPE
------------------------------ ------------------------------
EMP_DEPARTMENT_IX              INDEX
EMP_EMAIL_UK                   INDEX
EMP_EMP_ID_PK                  INDEX
EMP_JOB_IX                     INDEX
EMP_MANAGER_IX                 INDEX
EMP_NAME_IX                    INDEX
EMPLOYEES                      TABLE

7 rows selected.

SQL>
…and no Foreign Key constraints :
SQL> select table_name, constraint_name, constraint_type
  2  from user_constraints;

TABLE_NAME                     CONSTRAINT_NAME                C
------------------------------ ------------------------------ -
EMPLOYEES                      EMP_LAST_NAME_NN               C
EMPLOYEES                      EMP_EMAIL_NN                   C
EMPLOYEES                      EMP_HIRE_DATE_NN               C
EMPLOYEES                      EMP_JOB_NN                     C
EMPLOYEES                      EMP_EMP_ID_PK                  P
EMPLOYEES                      EMP_EMAIL_UK                   U
EMPLOYEES                      EMP_SALARY_MIN                 C

7 rows selected.

SQL>
and only those EMPLOYEES in DEPARTMENT_ID 20…
select last_name, department_id
from employees
/

LAST_NAME                 DEPARTMENT_ID
------------------------- -------------
Hartstein                            20
Fay                                  20

SQL>

DDL_ONLY Datapump operations
Sometimes you want the structure without any of the data. Once again, the DATA_FILTER procedure comes in here. However, this time, the call is a bit different:
...
dbms_datapump.data_filter
(
    handle => the_datapump_job_handle,
    name => 'INCLUDE_ROWS',
    value => 0
);
...
Here the value is effectively a boolean – 0 = false, i.e. don’t include rows. The default is 1 – do include rows.
Incidentally, this time I’ve dropped the user HR_DEV altogether before importing as it will be re-created by the Import if it does not already exist.
This next script will import all of the DDL from HR to HR_DEV, but will not import any data…
set serveroutput on size unlimited
declare
    --
    -- Import of HR objects to HR_DEV
    -- This time all DDL, but no data
    --
    l_dph number;
begin
    l_dph := dbms_datapump.open
    (
        operation => 'IMPORT',
        job_mode => 'SCHEMA',
        job_name => 'IMP_DDL_HR_DEV'
    );
    -- We only want objects in the HR schema
    dbms_datapump.metadata_filter
    (
        handle => l_dph,
        name => 'SCHEMA_EXPR',
        value => q'[= 'HR']'
    );
    -- but no data
    dbms_datapump.data_filter
    (
        handle => l_dph,
        name => 'INCLUDE_ROWS',
        value => 0
    );
    --
    -- and we want to import these objects into the HR_DEV schema
    --
    dbms_datapump.metadata_remap
    (
        handle => l_dph,
        name => 'REMAP_SCHEMA',
        old_value => 'HR',
        value => 'HR_DEV'
    );
    -- If we find a table that already exists in the target
    -- schema then overwrite it..
    dbms_datapump.set_parameter
    (
        handle => l_dph,
        name => 'TABLE_EXISTS_ACTION',
        value => 'REPLACE'
    );
    -- Now point to the export dump file to take this from
    dbms_datapump.add_file
    (
        handle => l_dph,
        filename => 'my_full_export.dmp',
        directory => 'DATA_PUMP_DIR',
        filetype => dbms_datapump.ku$_file_type_dump_file
    );
    -- ...and make sure we log what's going on...
    dbms_datapump.add_file
    (
        handle => l_dph,
        filename => 'hr_dev_ddl_imp.log',
        directory => 'DATA_PUMP_DIR',
        filetype => dbms_datapump.ku$_file_type_log_file,
        reusefile => 1
    );
    -- log the start time...
    dbms_datapump.log_entry
    (
        handle => l_dph,
        message => 'Job starting at '||to_char(sysdate, 'HH24:MI:SS')
    );
    -- start the job...
    dbms_datapump.start_job( handle => l_dph);
    -- and detach...
    dbms_datapump.detach(handle => l_dph);
end;
/
After running this we find that the HR_DEV user has been created. However, you will need to connect using the password of the HR user included in the export.
We can see now that all of the HR objects have been imported into HR_DEV :
select object_name, object_type
from user_objects
order by object_type, object_name
/

OBJECT_NAME                    OBJECT_TYPE
------------------------------ ------------------------------
ADD_JOB_HISTORY                PROCEDURE
COUNTRIES                      TABLE
COUNTRY_C_ID_PK                INDEX
DEPARTMENTS                    TABLE
DEPARTMENTS_SEQ                SEQUENCE
DEPT_ID_PK                     INDEX
DEPT_LOCATION_IX               INDEX
EMPLOYEES                      TABLE
EMPLOYEES_SEQ                  SEQUENCE
EMP_DEPARTMENT_IX              INDEX
EMP_DETAILS_VIEW               VIEW
EMP_EMAIL_UK                   INDEX
EMP_EMP_ID_PK                  INDEX
EMP_JOB_IX                     INDEX
EMP_MANAGER_IX                 INDEX
EMP_NAME_IX                    INDEX
JHIST_DEPARTMENT_IX            INDEX
JHIST_EMPLOYEE_IX              INDEX
JHIST_EMP_ID_ST_DATE_PK        INDEX
JHIST_JOB_IX                   INDEX
JOBS                           TABLE
JOB_HISTORY                    TABLE
JOB_ID_PK                      INDEX
LOCATIONS                      TABLE
LOCATIONS_SEQ                  SEQUENCE
LOC_CITY_IX                    INDEX
LOC_COUNTRY_IX                 INDEX
LOC_ID_PK                      INDEX
LOC_STATE_PROVINCE_IX          INDEX
REGIONS                        TABLE
REG_ID_PK                      INDEX
SECURE_DML                     PROCEDURE
SECURE_EMPLOYEES               TRIGGER
UPDATE_JOB_HISTORY             TRIGGER
USER_FK_TREE_VW                VIEW

35 rows selected.

SQL>
…and just to prove that we’ve imported no data at all…
SQL> select count(*) from regions;

  COUNT(*)
----------
         0

SQL> select count(*) from locations;

  COUNT(*)
----------
         0

SQL> select count(*) from departments;

  COUNT(*)
----------
         0

SQL> select count(*) from jobs;

  COUNT(*)
----------
         0

SQL> select count(*) from job_history;

  COUNT(*)
----------
         0

SQL> select count(*) from employees;

  COUNT(*)
----------
         0

SQL> select count(*) from countries;

  COUNT(*)
----------
         0

SQL>

Killing a runaway job
There will be times when things don’t quite work as expected. This is especially true if you’re playing around with Datapump parameter settings.
Just occasionally when testing the code in this post I’ve come across some variation on the theme of …
ERROR at line 1:
ORA-31634: job already exists
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_DATAPUMP", line 1137
ORA-06512: at "SYS.DBMS_DATAPUMP", line 5283
ORA-06512: at line 11
Yep, I’ve fixed my runtime error, re-executed the script and got this insistent little message.
The first thing to do when you get this (especially if you’re running in the same session in which you started the original job) is to log out and log back in again.
The second thing to do is to check to see if the job is still hanging about…
select job_name, state
from user_datapump_jobs
/
If so, then you can attach to the job and stop it.
If I’m connected as the job owner and I’m only running the one job, the following script should do the trick…
set serveroutput on size unlimited
declare
    l_dph number;
    l_job_name user_datapump_jobs.job_name%type;
begin
    select job_name
    into l_job_name
    from user_datapump_jobs;

    dbms_output.put_line('Killing '||l_job_name);
    l_dph := dbms_datapump.attach( job_name => l_job_name, job_owner => user);
    dbms_datapump.stop_job( l_dph);
    dbms_output.put_line('Job '||l_job_name||' terminated.');
exception
    when no_data_found then
        dbms_output.put_line('Job has already terminated.');
end;
/
This should work almost every time. On the off-chance it doesn’t, and the job does not appear in the USER_DATAPUMP_JOBS view anymore, then it’s possible that the temporary table created as part of the job has not been tidied up.
Remember, the temporary table is created in the job owner’s schema with the same name as the datapump job itself.
So, if I were to hit this problem whilst running the job IMP_HR_DEV, I could check :
select 1
from user_tables
where table_name = 'IMP_HR_DEV'
/
If the table does still exist and there is no entry in the USER_DATAPUMP_JOBS view, we can simply drop it:
drop table imp_hr_dev
/
This should finally persuade datapump that the job is not in fact running.

A couple of Enterprise Edition Features
For those of you on a richer diet, there are a couple of further morsels which may well be worthy of consideration.
If you want to speed up your Datapump job by making use of the CPU cores available you could try :
...
dbms_datapump.set_parallel
(
    handle => the_datapump_job_handle,
    degree => 32
);
...
If you’ve really pushed the boat out and have the partitioning option, you can tell datapump to take only specific partitions for tables.
For example, say you have a number of tables that are partitioned by date ranges.
The tables may be partitioned by quarter, with the partitions following a consistent naming convention (e.g. Q12014, Q22014 etc). If we only want to export/import the latest partition for each of these tables (say Q22014), we can do something like this:
...
for r_part_tabs in
(
    select table_name
    from user_tab_partitions
    where partition_name = 'Q22014'
)
loop
    dbms_datapump.data_filter
    (
        handle => the_datapump_job_handle,
        name => 'PARTITION_EXPR',
        value => q'[= 'Q22014']',
        table_name => r_part_tabs.table_name
    );
end loop;
...
Despite having gorged ourselves on this feast of DBMS_DATAPUMP delights, it’s fair to say that I’ve by no means covered everything.
Doubtless you will want to do things with this API that I’ve not covered here. Indeed, you’ll probably also find better ways of implementing some of the same functionality.
In the meantime, I’m off for a digestif.
WebLogic 12cR3 was released towards the end of last week, so this weekend I had an install-fest.
- Oracle WebLogic Server (WLS) 12c Release 3 (12.1.3) Installation on Oracle Linux 5 and 6
- WebLogic Server 12cR3 : ADF Application Development Runtime Installation on Oracle Linux
I also did some minor amendments to some existing articles.
- WebLogic Server 12cR2 and 12cR3 : ADF Application Development Runtime – Repository Configuration Utility (RCU)
- WebLogic Server 12cR2 and 12cR3 : Create, Extend and Remove Domains
- WebLogic Server 12cR2 and 12cR3 : Clustered Domains
From a newbie administrator perspective, like me, there is very little difference between WebLogic 12cR2 and 12cR3, so most of the time it is business as usual.
To coincide with this new release, Packt are doing a $10 promotion for WebLogic eBooks (WebLogic Partner Community EMEA).
Tim…

WebLogic 12cR3 articles was first posted on June 30, 2014 at 11:11 am.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.
My local lab consists of two iMacs, an Apple Airbook, several arrays of 1-TB external hard drives from OWC, and a serious effort in leveraging virtual machines (I'm personally a VMware fan). It wasn't too long ago that I could run most Oracle technology on my little lab platform. And I like it that way...I prefer to have total control over all my prototyping and development environments.
But as Oracle transitions to the cloud, the software footprint for the various Oracle technologies I work with has just become too big for my lab platform. Fusion Applications, the Business Process Management Suite, OBIEE, the SOA Suite...heck, even E-Business Suite 12.2.x is pushing the limit. Some of the footprint growth is due to new features, some to code bloat or technical debt, and some is just the nature of the beast with enterprise applications built for the cloud.
I'll also admit to being cheap here. There are no subscription fees for AWS or Oracle Cloud coming out of my personal wallet. Just not gonna happen. And investing in new hardware seems to be akin to stocking up on buggy whips or vacuum tubes.
So, what's my plan? My intent is to go back to basics for my home lab. I can still run an Oracle database quite nicely (anything ranging from Enterprise to APEX to Express). I can also run Oracle ADF (but, due to performance issues, I choose Glassfish over WebLogic). OBIEE sample apps also fit and run well. But I'll have to look to my company, my customer, or some other kind-hearted soul for bigger sandboxes - Fusion Apps or Middleware, OBIEE, and anything having to do with SOA integration or business process builds. I'm not crazy about this solution, but it is what it is...
What about the rest of you out there? How do you folks (especially the non-Oracle employees) handle your personal sandboxes these days? Any ideas or suggestions? Comments welcome.
As usual the latest version can be downloaded here.
This version in particular now supports the new 12c "Adaptive" plan feature - previous versions don't cope very well with those unless you add the "ADAPTIVE" formatting option manually.
Here are the notes from the change log:
- The GV$SQL_MONITOR and GV$SQL_PLAN_MONITOR table names can now be customized in the settings, in case you want to use your own monitoring repository that copies data from GV$SQL_MONITOR and GV$SQL_PLAN_MONITOR in order to keep/persist monitoring data. The tables need to have at least those columns of the original views that XPLAN_ASH uses
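A minimal sketch of such a repository, assuming you simply persist full copies of the views (the table names here are made up - check which columns your XPLAN_ASH version actually references):

```sql
-- Illustrative repository tables: empty structural copies of the views.
CREATE TABLE my_sql_monitor AS
SELECT * FROM gv$sql_monitor WHERE 1 = 0;

CREATE TABLE my_sql_plan_monitor AS
SELECT * FROM gv$sql_plan_monitor WHERE 1 = 0;

-- Periodically persist the current monitoring data into them:
INSERT INTO my_sql_monitor SELECT * FROM gv$sql_monitor;
INSERT INTO my_sql_plan_monitor SELECT * FROM gv$sql_plan_monitor;
COMMIT;
```

You would then enter MY_SQL_MONITOR and MY_SQL_PLAN_MONITOR as the table names in the XPLAN_ASH settings.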
- The "Activity Timeline based on ASH" for RAC Cross Instance Parallel Execution shows an additional line break for the GLOBAL summary
- Fixed various bugs related to some figures when analyzing Cross Instance RAC Parallel Execution
- The new "GLOBAL" aggregation level for Cross Instance RAC Parallel Execution (see version 4.0 change log below) is now also shown in the "Information on Parallel Degree based on ASH" section
- The "Parallel Distribution ASH" column on execution plan line level can now show process information taken from Real-Time SQL Monitoring for those processes that are not found in the ASH samples.
This effectively means that, with monitoring information available, every involved process will now be shown for every plan line, along with its ASH sample count and rows produced.
So some processes will now show up with a sample count of 0.
The idea behind this is to provide more information about row distribution even for those lines/processes that are not covered by the ASH samples.
Previously the rowcount produced was only shown for processes covered by ASH samples.
The new behaviour is the default - if you find the output too messy, you can return to the previous behaviour (show rowcounts only for processes found in ASH samples) by setting the new configuration switch "show_monitor_rowcount" to any value other than the default of "YES"
- The "Real-Time SQL Monitoring" information on execution plan line level now includes the read and write request information ("ReadReq", "WriteReq")
- The I/O figures based on ASH now include the new "DELTA_READ_MEM_BYTES" information that was added in 12c. This applies to the following sections:
- SQL Statement I/O Summary based on ASH
- Parallel Worker activity overview based on ASH
- Activity Timeline based on ASH
The "Read Mem Bytes" value seems to correspond to the "logical read bytes from cache" statistic, so direct path reads are not covered by it
- Added a more verbose description to the "Note" sections on how to handle long lines. XPLAN_ASH now does a SET TRIMSPOOL ON in case you want to spool the output to a file
- Wherever the output referred to DFOs, this was changed to "DFO TREE", which is the correct term
- The "Parallel Worker activity overview based on ASH" section now shows a blank line between its subsections, which should make it more readable
- Adaptive plans are now supported by XPLAN_ASH
Note that they don't work well with previous versions: the formatting of the inactive lines breaks, and the overall information can be misleading unless you manually add the "ADAPTIVE" formatting option.
If XPLAN_ASH detects an adaptive plan, it will always force the ADAPTIVE formatting option.
This also means that, for the time being, adaptive plans won't work with SASH, as SASH doesn't collect the OTHER_XML column from GV$SQL_PLAN.
You could manually add that column to SASH_SQLPLANS and to the "sash_pkg.get_sqlplans" procedure - it is a CLOB column, but INSERT / SELECT should work, I think.
The view SASH_PLAN_TABLE also needs to be modified to select the OTHER_XML column instead of a dummy NULL.
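A sketch of those SASH changes (the exact SASH object definitions vary by version, so treat the abbreviated column lists as placeholders):

```sql
-- Add the missing column to the repository table:
ALTER TABLE sash_sqlplans ADD (other_xml CLOB);

-- sash_pkg.get_sqlplans would then need to copy the column as well,
-- along the lines of (column lists deliberately abbreviated):
--   INSERT INTO sash_sqlplans (..., other_xml)
--   SELECT ..., other_xml FROM gv$sql_plan WHERE ...;

-- And SASH_PLAN_TABLE should select the real column instead of a dummy NULL:
--   CREATE OR REPLACE VIEW sash_plan_table AS
--   SELECT ..., other_xml, ... FROM sash_sqlplans;
```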
Although this output is less readable than the "faked" output that shows only the plan operations actually in use, it is the only simple way to relate ASH/MONITOR data to execution plan lines, as those sources hold the actual plan line information, not the one made up by DBMS_XPLAN.DISPLAY* based on the DISPLAY_MAP information in the OTHER_XML column.
Hence I decided, for the time being, to use the same approach as 12c Real-Time SQL Monitoring and always show the full/adaptive shape of the plan.
Another challenge for XPLAN_ASH with adaptive plans is that the PLAN_HASH_VALUE can change during execution.
XPLAN_ASH extracts the PLAN_HASH_VALUE from ASH/MONITOR when trying to get the plan from DBA_HIST_SQL_PLAN.
Hence XPLAN_ASH now needs to take care to extract the most recent PLAN_HASH_VALUE; previously this didn't matter, as the value wasn't supposed to change during execution. This seems to work based on my tests, but it's something to keep in mind.
- The new "gather stats on load" 12c feature implies for INSERT...SELECT statements that the cursor will immediately be invalidated/removed from the Library Cache after (successful) execution. Such INSERT...SELECT statements now behave like CTAS, which also gets removed immediately. This is a pity, as you won't be able to pick up the plan from the Library Cache after the execution completes using XPLAN_ASH (or any other tool using DBMS_XPLAN.DISPLAY*).
Although V$SQL_PLAN_MONITOR might keep the plan for some time after the execution, it can't be used as input to DBMS_XPLAN.DISPLAY*, so this isn't a viable workaround. In principle this isn't a good thing anyway, as the SQL and plan information might be missing from AWR / STATSPACK reports due to the immediate removal from the Library Cache.
For the time being, the only viable workaround known to me is to prevent the "gather stats on load" feature, either via the parameter "_optimizer_gather_stats_on_load", via the hint "no_gather_optimizer_statistics", or by using pre-12c optimizer feature settings, which implicitly disables it - of course not really a good workaround, as the feature itself might be rather desirable.
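For illustration, the statement-level and session-level variants of that workaround (the table names are made up; "_optimizer_gather_stats_on_load" is a hidden parameter, so change it only after checking with Oracle Support):

```sql
-- Per statement: suppress online statistics gathering via the hint
-- mentioned above, so the cursor stays in the Library Cache:
INSERT /*+ NO_GATHER_OPTIMIZER_STATISTICS */ INTO target_table
SELECT * FROM source_table;

-- Per session: disable the feature via the hidden parameter:
ALTER SESSION SET "_optimizer_gather_stats_on_load" = FALSE;
```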
Penetration Testing: A Hands-On Introduction to Hacking, by Georgia Weidman, is a book about the basics of penetration testing. It presents concepts, ideas, and techniques in five parts: The Basics, Assessments, Attacks, Exploit Development, and Mobile Hacking. Among other things, readers learn how to:
- Crack passwords and wireless network keys with brute-forcing and wordlists
- Test web applications for vulnerabilities
- Use the Metasploit Framework to launch exploits and write your own Metasploit modules
- Automate social-engineering attacks
- Bypass antivirus software
- Turn access to one machine into total control of the enterprise in the post-exploitation phase
Written By: Surachart Opun http://surachartopun.com
Every time you create a VO based on an EO, JDeveloper generates the VO in Declarative mode:
This is how the VO Query wizard looks now. The VO query is declarative and will be generated at runtime. However, this doesn't mean all attributes will be selected dynamically by default. On the contrary, by default all attributes will be included in the generated query:
The option 'Calculate Optimized Query at Runtime' is selected by default, which means Declarative mode is on. The listed attributes will always be included in the SQL query. If we want to exclude some of them, we need to set Selected in Query = false for each such attribute:
In the sample application, I'm going to set this property to false for all attributes except the key:
If we return to the VO Query wizard screen, only the key attribute now remains selected:
The UI is developed to display six attributes:
Only the displayed attributes are included in the SQL statement:
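The effect can be seen in the query the declarative VO generates at runtime; a hypothetical shape, with the table, alias, and column names made up for illustration:

```sql
-- With Declarative mode on, only the key attribute plus the six
-- attributes actually bound in the UI make it into the SELECT list;
-- the other entity attributes are left out of the generated query.
SELECT Employees.EMPLOYEE_ID,
       Employees.FIRST_NAME,
       Employees.LAST_NAME,
       Employees.EMAIL,
       Employees.PHONE_NUMBER,
       Employees.HIRE_DATE,
       Employees.SALARY
FROM EMPLOYEES Employees
```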
Download sample application - ADFBC12cApp.zip.