A lot of new features popped up in RMAN 12c, like the expansion of multisection support, simplified cross-platform migration, and many more. However, in this post I would like to highlight a small but very helpful new feature which I demonstrated during my session at the SOUG day.
The automatic datafile creation
In earlier versions of Oracle, you might run into a situation where you create a new tablespace, put some objects into it, and then, all of a sudden, the DB crashes or the datafile is lost without ever having been backed up.
This is where the following command comes into play:
alter database create datafile <NAME>;
However, with Oracle 12c this is done automatically for you if you use RMAN.
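For comparison, in earlier releases you would have had to recreate and recover the file yourself. A rough, hedged sketch of those manual steps (datafile number and path are illustrative only):

```sql
-- Illustrative only: recreate a lost, never-backed-up datafile by hand,
-- then recover it from the online/archived redo and bring it online.
ALTER DATABASE CREATE DATAFILE '/home/oracle/rman/dbi01.dbf';
RECOVER DATAFILE 11;
ALTER DATABASE DATAFILE 11 ONLINE;
```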
Let's create a tablespace and afterwards resize the datafile.
SQL> create tablespace dbi datafile '/home/oracle/rman/dbi01.dbf' size 16M;

Tablespace created.

SQL> alter database datafile '/home/oracle/rman/dbi01.dbf' resize 32M;

Database altered.
Now let's create a table in the new tablespace.
SQL> create table t_dbi tablespace dbi as select * from dba_objects;

Table created.

SQL> select count(*) from t_dbi;

  COUNT(*)
----------
    120130
Afterwards, we simulate an error and then check what Oracle says.
$ echo destroy > /home/oracle/rman/dbi01.dbf
SQL> alter system flush buffer_cache;

System altered.

SQL> select count(*) from t_dbi;
select count(*) from t_dbi
*
ERROR at line 1:
ORA-03135: connection lost contact
Process ID: 25538
Session ID: 387 Serial number: 55345
From the alert.log:
Errors in file /u00/app/oracle/diag/rdbms/ocm121/OCM121/trace/OCM121_ora_25904.trc:
ORA-01157: cannot identify/lock data file 11 - see DBWR trace file
ORA-01110: data file 11: '/home/oracle/rman/dbi01.dbf'
OK, the datafile is gone and we have no backup. Let's do a preview to see how Oracle would restore the datafile.
RMAN> restore ( datafile 11 ) preview;

Starting restore at 30-SEP-2016 15:46:27
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=355 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=126 device type=DISK

datafile 11 will be created automatically during restore operation
using channel ORA_DISK_1
using channel ORA_DISK_2

List of Archived Log Copies for database with db_unique_name OCM121
=====================================================================

Key     Thrd Seq     S Low Time
------- ---- ------- - --------------------
519     1    529     A 30-SEP-2016 12:23:00
        Name: +FRA/OCM121/ARCHIVELOG/2016_09_30/thread_1_seq_529.454.923931563

recovery will be done up to SCN 7346834
Media recovery start SCN is 7346834
Recovery must be done beyond SCN 7346834 to clear datafile fuzziness
Finished restore at 30-SEP-2016 15:46:28
As you can see in the preview output, Oracle will create the datafile automatically for us. Let's try it.
RMAN> restore ( datafile 11 );

Starting restore at 30-SEP-2016 15:48:29
using channel ORA_DISK_1
using channel ORA_DISK_2

creating datafile file number=11 name=/home/oracle/rman/dbi01.dbf
restore not done; all files read only, offline, or already restored
Finished restore at 30-SEP-2016 15:48:31

RMAN> recover datafile 11;

Starting recover at 30-SEP-2016 15:49:17
using channel ORA_DISK_1
using channel ORA_DISK_2

starting media recovery
media recovery complete, elapsed time: 00:00:02
Finished recover at 30-SEP-2016 15:49:20

RMAN> alter database datafile 11 online;

Statement processed
Ready. We can now access our table again without having run the "alter database create datafile" command during the restore. The resize to 32M was also done for us; that part happened during the recovery.
$ ls -l dbi01.dbf
-rw-r----- 1 oracle asmadmin 33562624 Sep 30 15:52 dbi01.dbf
From my point of view, a small but quite helpful feature.
One last remark: I did my demo on Oracle 12cR1 with the July 2016 PSU. In that version, the RMAN PREVIEW command has a bug which reports that you cannot recover into a consistent state, even when you can. After applying the following one-off patch on top of the July 2016 PSU, the RMAN PREVIEW command works as expected.
Patch 20315311: RMAN-5119: RECOVERY CAN NOT BE DONE TO A CONSISTENT STATE
Oracle has a dedicated team that works on difficult problems like mirroring E-Business Suite or ingesting Salesforce data into BI repositories for analysis. You can find their postings at www.ateam-oracle.com. One of their posts, from April 28th, detailed how to set up a VNC server using an ssh tunnel: communicating securely from a Windows desktop via a PuTTY tunnel to a TigerVNC server installed on a Linux instance in the Oracle Compute Cloud. The installation instructions cover setting up PuTTY on a Windows desktop as well as setting up the X Window user interface and TigerVNC on the Linux server.
There are basically two options for this configuration. The first option is to set up the VNC software to run on the Linux server and have it listen on all network interfaces. If you set up a Corente VPN to connect your office to the Oracle Cloud, nothing else needs to be done other than opening up port 5901 on your virtual private network. Once this is done, you will have access to the console from any computer in your office that is connected to the VPN subnet. You could also open up port 5901 to the public internet, but this is not recommended because you are exposing a potential security hole that someone can exploit. It is easy to configure this short term and turn it off when you are done testing the configuration. Alternatively, you can set up an ssh tunnel from a computer in your office, and anyone who connects to port 5901 on that IP address is tunneled through the ssh connection to port 5901 in the Oracle Cloud. This is a simple and secure way of connecting one desktop to one computer. It also provides a single access point for people to connect to a server in the cloud.
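As a sketch, the OpenSSH equivalent of the PuTTY tunnel might look like this (the user name and public IP are assumptions, not values from the original post):

```shell
# Assumed values: "opc" user and a placeholder public IP for the cloud instance.
CLOUD_IP="203.0.113.10"
TUNNEL_CMD="ssh -N -L 5901:localhost:5901 opc@${CLOUD_IP}"
# Running TUNNEL_CMD forwards local port 5901 to port 5901 on the cloud VM,
# so a VNC viewer pointed at localhost:5901 reaches the remote TigerVNC server.
# Printed here rather than executed, since it needs a live host:
echo "${TUNNEL_CMD}"
```

PuTTY users configure the same mapping under Connection > SSH > Tunnels (source port 5901, destination localhost:5901).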
One of the key questions you need to ask when configuring a console connection is how you are going to use it. Do you have a team of administrators who will all access a server in the cloud? Do they need simultaneous access? Do you need a VPN to connect other servers and services, or are you doing it just for a console connection? There is a cost of $75/month to run a VPN server because you have to pay for a compute instance to run the connection. In my opinion, paying $75/month to connect one person to one computer is not worth the cost; you can do the same thing with an SSH tunnel. If, on the other hand, you have 3-4 admins trying to access services and developers trying to access other servers, having them set up a dozen or so SSH tunnels every time they want to connect is not an efficient use of time.
In summary, many people only look at how to do something. They don't take a step back and ask why they are doing it. If the software you are installing is only available with a graphical installer, then you have to set up a connection once, possibly twice. If you can run the installer in silent mode with a configuration file to feed it the mouse clicks and text entry, go with the silent-mode install. Scripting allows you to automate the install and put it in the ora-init of your Orchestration, reducing time and the potential for errors from selecting a different option during re-installation. If you don't need a long-term VPN, go with a short-term SSH tunnel connection. You can always install the VNC software and not enable it on the Linux server in the Oracle Cloud. I did not go through screenshots of installation and configuration because the A-Team did a great job in their tutorial.
"Had a question from a client yesterday concerning the subject: I want to know which database session (APEX_PUBLIC_USER) is servicing which APEX session. Poking around in the V$ tables, I can see that in v$SQL, the module column will reveal the APEX Application and Page, but not the Session ID. Even if the session ID was in there, I don’t see an obvious way to join back to V$SESSION."
It's a bit of a puzzling problem for DBAs and developers: correlating a database session with a specific APEX application and session. As I wrote about earlier, all database sessions in the ORDS database session pool are connected as APEX_PUBLIC_USER (or ANONYMOUS, if you're using the embedded PL/SQL Gateway). If a user is experiencing slowness in their APEX application, how can a DBA look under the hood, identify the database session associated with that user's request, and also look at the active SQL statement and possibly any wait events for that session?
This question comes up a lot, and should really be covered in the Oracle Application Express documentation. But in the meantime, here's the definitive answer:
APEX populates the following information in GV$SESSION for ACTIVE sessions:
client_info: Workspace ID:Authenticated username
module: DB Schema/APEX:APP application id:page id
client_identifier: Authenticated username:APEX Session ID
For example, for a recent request I did on apex.oracle.com, I had the following values in the DB session that executed my request:
module: JOEL_DB/APEX:APP 17251:4
There is no permanent, fixed correlation between an APEX session and a database session. The assignment of a session in the session pool to service an APEX request is essentially unpredictable and not constant. That's why this correlation can only be done for active database sessions, which are actively servicing APEX requests.
There's one caveat: a developer could overwrite these values using the database-provided PL/SQL API, which I've seen customers do occasionally. Otherwise, for active database sessions, you'll see these three elements populated in GV$SESSION, and module & client_identifier will also be present in the Active Session History. Carlos Sierra has an excellent blog post about how to query the Active Session History and identify poorly performing APEX applications and their associated SQL.
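Putting the three fields together, a DBA could use a query along these lines (a sketch based on the fields described above; the module filter is an assumption derived from the format shown):

```sql
-- Find active sessions currently servicing APEX requests and show what
-- they are running; client_identifier carries username:APEX session id.
SELECT inst_id, sid, serial#, client_identifier, client_info,
       module, sql_id, event
  FROM gv$session
 WHERE status = 'ACTIVE'
   AND module LIKE '%/APEX:APP%';
```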
In the last post we looked at how to get a ZFS file system up and running on a CentOS 7 host and how to enable the auto mount of the ZFS file systems. In this post we’ll look at two of the features ZFS provides: Snapshots and clones.
A ZFS snapshot is a read-only copy of a file system. How can we benefit from that when it comes to PostgreSQL? There are several scenarios where this can be useful. Imagine you are developing an application and you want to test the deployment of a new release on top of a previous release. What you probably want is a production-like PostgreSQL instance with lots of data, so that you can test the upgrade path. In addition, it would be great if you could revert in seconds and start from scratch in case you run into trouble or missed an important point in the upgrade scripts. Using ZFS snapshots you can have all of this. Let's see.
Currently my PostgreSQL instance from the last post does not contain any user data, so let's generate some:
postgres=# create table my_app_table ( a int, b varchar(50) );
CREATE TABLE
postgres=# with aa as
postgres-# ( select *
postgres(#     from generate_series (1,1000000) a
postgres(# )
postgres-# insert into my_app_table
postgres-# select aa.a, md5(aa.a::varchar)
postgres-#   from aa;
INSERT 0 1000000
This is the release we want to test our upgrade scripts from, so let's create a snapshot of the current state of our instance:
[root@centos7 ~] zfs snapshot pgpool/pgdata@baserelease
[root@centos7 ~] zfs list -t snapshot
NAME                        USED  AVAIL  REFER  MOUNTPOINT
pgpool/pgdata@baserelease  16.6M      -   202M  -
The "@baserelease" part is the name of the snapshot; to be precise, everything after the "@" is the snapshot name.
Are you worried about consistency? This should not be an issue: PostgreSQL fsyncs the WAL, so the instance will just start, apply all the WAL records missing from the data files, and you're fine. Anyway, this is a testing scenario, so as long as you have a consistent starting point you are fine.
A simple upgrade script could be:
postgres=# alter table my_app_table add column c date; ALTER TABLE postgres=# update my_app_table set c = now(); UPDATE 1000000
What happened to the snapshot?
[root@centos7 ~] zfs list -t snapshot
NAME                        USED  AVAIL  REFER  MOUNTPOINT
pgpool/pgdata@baserelease  78.3M      -   202M  -
As soon as you modify data the snapshot will grow, no surprise.
So you ran your tests, discovered some things you could improve, and once you have made the improvements you want to start from the same point again. With a snapshot this is quite easy: just revert to it. Of course, you'll need to stop your PostgreSQL instance first:
postgres@centos7:/home/postgres/ [PG1] pg_ctl stop -D /pgpool/pgdata/ -m fast
waiting for server to shut down....LOG:  received fast shutdown request
LOG:  aborting any active transactions
LOG:  autovacuum launcher shutting down
LOG:  shutting down
done
server stopped
As soon as the instance is down the snapshot can be reverted:
[root@centos7 ~] zfs rollback pgpool/pgdata@baserelease
[root@centos7 ~] zfs list -t snapshot
NAME                        USED  AVAIL  REFER  MOUNTPOINT
pgpool/pgdata@baserelease     1K      -   202M  -
When you check the data after you started the instance again it is exactly as it was before:
postgres@centos7:/home/postgres/ [PG1] pg_ctl start -D /pgpool/pgdata/
postgres@centos7:/home/postgres/ [PG1] LOG:  database system was not properly shut down; automatic recovery in progress
postgres@centos7:/home/postgres/ [PG1] psql postgres
psql (9.5.4 dbi services build)
Type "help" for help.

postgres=# \d my_app_table
     Table "public.my_app_table"
 Column |         Type          | Modifiers
--------+-----------------------+-----------
 a      | integer               |
 b      | character varying(50) |
Notice the message about the automatic recovery; that is the WAL being replayed. Now you can just start your upgrade script again, revert in case of issues, start again, revert again, and so on.
Another use case: rapid cloning of PostgreSQL instances (clones are writable, snapshots are not). How does that work? This is where clones come into play. To be able to clone, you need a snapshot, as clones depend on snapshots. Another thing to keep in mind is that you cannot delete a snapshot while a clone still sits on top of it. Let's see how it works.
As mentioned, we need a snapshot:
[root@centos7 ~] zfs snapshot pgpool/pgdata@clonebase
On top of this snapshot we can now create a clone:
[root@centos7 ~] zfs create pgpool/clones
[root@centos7 ~] zfs clone pgpool/pgdata@clonebase pgpool/clones/1
[root@centos7 ~] zfs list
NAME              USED  AVAIL  REFER  MOUNTPOINT
pgpool            170M  9.46G    21K  /pgpool
pgpool/clones    20.5K  9.46G  19.5K  /pgpool/clones
pgpool/clones/1     1K  9.46G   169M  /pgpool/clones/1
pgpool/pgdata     170M  9.46G   169M  /pgpool/pgdata
Using the new clone we can bring up another PostgreSQL instance in seconds, containing the exact data from the clone's source:
postgres@centos7:/home/postgres/ [PG1] rm /pgpool/clones/1/*.pid
postgres@centos7:/home/postgres/ [PG1] sed -i 's/#port = 5432/port=5433/g' /pgpool/clones/1/postgresql.conf
postgres@centos7:/home/postgres/ [PG1] pg_ctl start -D /pgpool/clones/1/
postgres@centos7:/home/postgres/ [PG1] psql -p 5433 postgres
psql (9.5.4 dbi services build)
Type "help" for help.

postgres=#
Quite cool and easy.
Conclusion: I am not sure if I'd use ZFS for production databases on Linux, because I have not tested it enough. But for development and testing purposes there are quite a few benefits, such as snapshots and cloning, which can simplify your processes a lot. You could even use snapshots and clones as a basis for your backups, although I'd prefer barman or bart.
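For example, a snapshot-based backup could be sketched like this (the dataset name comes from the examples above; the target path is made up):

```shell
# Assumed names; zfs send needs root and a real pool, so the command is
# printed here instead of executed.
SNAP="pgpool/pgdata@baserelease"
TARGET="/backup/pgdata_baserelease.zfs.gz"
BACKUP_CMD="zfs send ${SNAP} | gzip > ${TARGET}"
echo "${BACKUP_CMD}"
# Restoring would be roughly: gunzip -c ${TARGET} | zfs receive pgpool/restored
```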
PS: To clean up:
[root@centos7 ~] zfs destroy pgpool/clones/1
[root@centos7 ~] zfs destroy pgpool/clones
This article, Running PostgreSQL on ZFS on Linux – Fun with snapshots and clones, first appeared on the dbi services blog.
- Set 1 - March 2014
- Set 2 - May 2014
- Set 3 - July 2014
- Set 4 - October 2014
- Set 5 - November 2014
- Set 6 - March 2015
- Set 7 - October 2015
- Set 8 - January 2016
- Set 9 - April 2016
We have now made available a tenth set of critical updates to AD and TXK. These updates are included in EBS 12.2.6. You have a choice: you can apply the EBS 12.2.6 Release Update Pack, or you can apply the AD and TXK updates separately.
We strongly recommend that you apply either EBS 12.2.6 or these new AD and TXK updates at your earliest convenience.
They can be downloaded from My Oracle Support, as shown by this example for AD:
Refer to the following My Oracle Support knowledge document for full installation instructions and associated tasks:
- Applying the Latest AD and TXK Release Update Packs to Oracle E-Business Suite Release 12.2 (Doc ID 1617461.1)
What's New in this Patchset?
This patchset includes a large number of critical fixes for stability issues that will affect all customers. It also includes the following new features:
- Delta synchronization for more efficient Online Patching cycles, with streamlined handling of seed data and the Applications file system
- Improved log file structure to make diagnostics easier and more readable
- Expanded Application Short Name to 16 characters
- Support for Logical Host Names
- Support for 64-bit JDK on Concurrent Manager tier
One day to go before the next SQL Saturday 2016 in Portugal.
I usually promote this event in France, but this time it is for the SQL Server Portuguese community. I will have the chance to talk about SQL Server AlwaysOn and troubleshooting scenarios. It will also be a good opportunity to meet Portuguese people like Niko Neugebauer (@NikoNeugebauer), André Batista (@klunkySQL) and many others.
Moreover, Porto is definitely a beautiful city and a good way to finish the summer vacations.
Hope to see you there!
This article, SQLSaturday #546 Porto – Troubleshooting scenarios with availability groups, first appeared on the dbi services blog.
The alter system kill session command requires two arguments that uniquely identify the Oracle session: the session identifier (SID) and the serial number.
First, identify the session to be killed:
select SID,SERIAL#,STATUS,SCHEMANAME,PROGRAM from v$session;
The SID and SERIAL# values of the Oracle session to be killed can then be substituted and the alter system kill session command issued.
SQL> ALTER SYSTEM KILL SESSION 'sid,serial#';
Sometimes Oracle is not able to kill the session immediately with the alter system kill session command alone. Upon issuing the command, the session will be 'marked for kill' and will then be killed as soon as possible.
If a session is only 'marked for kill' after issuing the alter system kill session command and is not killed immediately, the kill can be forced by adding the IMMEDIATE keyword:
SQL> ALTER SYSTEM KILL SESSION 'sid,serial#' IMMEDIATE;
In a RAC environment, you can additionally specify the instance ID to kill a session on another instance:

SQL> ALTER SYSTEM KILL SESSION 'sid,serial#,@inst_id';
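On RAC, GV$SESSION supplies the instance ID along with the SID and serial number; a sketch:

```sql
-- List user sessions cluster-wide with everything needed for the kill
-- command, including the instance id for the @inst_id variant.
SELECT inst_id, sid, serial#, status, schemaname, program
  FROM gv$session
 WHERE type = 'USER';
```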
Sessions can also be terminated automatically if you have configured IDLE_TIME in your user profile. IDLE_TIME specifies the permitted period of continuous inactive time during a session, expressed in minutes. Long-running queries and other operations are not subject to this limit.
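A hedged example of configuring such a limit (the profile and user names are made up):

```sql
-- Create a profile limiting idle time to 10 minutes and assign it to a user.
CREATE PROFILE app_users LIMIT IDLE_TIME 10;
ALTER USER scott PROFILE app_users;
-- Profile resource limits are only enforced when RESOURCE_LIMIT is TRUE:
ALTER SYSTEM SET RESOURCE_LIMIT = TRUE;
```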
Let's say a session has been idle for 10 minutes. The session will continue to show as idle even after the IDLE_TIME for that user, as specified in the user's profile, has expired. When the user attempts to run a transaction against the database after the IDLE_TIME has expired, the database will disconnect the user by terminating the session. After this, the session will no longer show in v$session. So, even if the session appears to be idle for a duration slightly longer than your 10 minutes, it is already "dead"; it just doesn't show as dead yet. PMON will eventually snipe the session, marking it dead in v$session.
Refer to this Oracle document for more information.
Once an Oracle session has changed to SNIPED status, we can kill that session without any problem. This can be done manually, but watching for these SNIPED sessions all the time can be tedious, so we can automate the job of killing SNIPED sessions as follows.
-- This is an optional table. Create this only if you want to Audit the killed session.
CREATE TABLE AUDIT_KILL_SNIPED_SESSIONS
( "SID"        NUMBER,
  "SERIAL#"    NUMBER,
  "TIME_STAMP" TIMESTAMP (6) DEFAULT SYSTIMESTAMP,
  "USERNAME"   VARCHAR2(50 BYTE)
) TABLESPACE "USERS";
Note: In order to create the procedure below, you will need the DBA role and the "ALTER SYSTEM" privilege. If you are creating this procedure in a non-DBA schema, you will need SELECT granted on V$SESSION as well as the "ALTER SYSTEM" privilege.
-- Procedure to kill sessions that are in SNIPED state
create or replace procedure KILL_SNIPED_SESSIONS as
  cursor SEL_SID is select SID, SERIAL#, USERNAME from v$session where status = 'SNIPED';
  SEL_REC SEL_SID%ROWTYPE;
  V_SQL   VARCHAR2(200);
  V_SQL1  VARCHAR2(400);
begin
  open SEL_SID;
  loop
    FETCH SEL_SID INTO SEL_REC;
    EXIT WHEN SEL_SID%NOTFOUND;
    V_SQL := 'ALTER SYSTEM KILL SESSION '''||SEL_REC.SID||','||SEL_REC.SERIAL#||''' IMMEDIATE';
    execute immediate V_SQL;
    --- LETS AUDIT THE KILLED SESSIONS DATA HERE
    -- COMMENT LINES BELOW THIS TO REMOVE AUDITING
    V_SQL1 := 'insert into AUDIT_KILL_SNIPED_SESSIONS (SID, SERIAL#, USERNAME) values ('
              ||SEL_REC.SID||','||SEL_REC.SERIAL#||','''||SEL_REC.USERNAME||''')';
    execute immediate V_SQL1;
    -- COMMENT UNTIL THIS LINE TO REMOVE AUDITING
  end loop;
  close SEL_SID;
end;
/
-- You can also set up a job to run every 30 minutes or every hour to execute this procedure:
BEGIN
  DBMS_SCHEDULER.CREATE_JOB (
    job_name            => '"EXEC_KILL_SNIPED_SESSIONS"',
    job_type            => 'STORED_PROCEDURE',
    job_action          => 'KILL_SNIPED_SESSIONS',
    number_of_arguments => 0,
    start_date          => TO_TIMESTAMP_TZ('2016-09-29 13:07:38.837143000 AMERICA/CHICAGO','YYYY-MM-DD HH24:MI:SS.FF TZR'),
    repeat_interval     => 'FREQ=HOURLY',
    end_date            => NULL,
    enabled             => FALSE,
    auto_drop           => FALSE,
    comments            => 'Job to run KILL_SNIPED_SESSIONS procedure that kills SNIPED sessions');

  DBMS_SCHEDULER.SET_ATTRIBUTE (
    name      => '"EXEC_KILL_SNIPED_SESSIONS"',
    attribute => 'logging_level',
    value     => DBMS_SCHEDULER.LOGGING_OFF);

  DBMS_SCHEDULER.ENABLE (
    name => '"EXEC_KILL_SNIPED_SESSIONS"');
END;
/
Regular readers of this blog know that we have three reference tools that are essential for every E-Business Suite upgrade:
- File Comparison Report (Note 1446430.1)
- Data Model Comparison Report (Note 1290886.1)
- Seed Data Comparison Report (Note 1327399.1)
These tools allow you to generate a report of the differences between one EBS release and another for specific EBS products.
For example, you could use the Data Model Comparison to get a list of EBS 184.108.40.206 database objects that were updated in EBS 12.1.3 for Bill of Materials (BOM):
These tools now cover EBS 12.2.6 and all earlier EBS 12.2.x releases.
For example, you can now compare EBS 12.2.6 to:
- 11.5.10 Minimum Baseline
- EBS File Comparison Report Now Available
- Identifying Data Model Changes Between EBS 12.1.3 and Prior EBS Releases
- EBS Seed Data Comparison Reports Now Available
- Oracle E-Business Suite Release 12.1.3 Now Available
- E-Business Suite Release 12.1.1 Consolidated Upgrade Patch 1 Now Available
- New Whitepaper: Planning Your E-Business Suite Upgrade from Release 11i to 12.1
- Oracle E-Business Suite 12.2.3 Now Available
My favorite of the three is the Workshop because it uses pictures to show how to install and configure, with step-by-step instructions. It starts out by installing Linux in the Oracle Cloud and configuring it to be the App Manager console, then gets the Orchestration up and running to start the gateway in the cloud. The single change I would make to this configuration is to set up the on-premises system using a Linux image in VirtualBox locally rather than doing it from Firefox on your desktop. If you follow the same steps that you did to spin up a Linux image and install the packages in the cloud, you can do the same steps in VirtualBox. This deviation starts on page 26 with an install of Linux on a local VirtualBox and goes back to the Workshop without skipping a beat. I was able to follow the 52-page Workshop more easily than the 76-page Cookbook. On the flip side, I do like the Cookbook document's focus on network configuration for the cloud Linux instance.
The net of all this discussion is that there are various ways to configure Corente. It is a time-consuming project; you can't just click a button and make it work. If you want to integrate a Cisco or Juniper router in your data center, there are online instructions on configuring your router. We will not go through this because we don't have access to hardware to configure and experiment with.
In summary, this should be the last posting on Corente. It is a very powerful tool that allows you to create a virtual private network between computers in your home, office, or data center and computers in the cloud. It allows you to configure a typical two-tier configuration in the Oracle Cloud and hide the database from the public internet while giving your DBAs and developers a direct connection to the database. It also allows you to replicate production systems running in your data center and create a high-availability site in the Oracle Cloud. This can be done using NFS or SMB file shares and rsync to keep files synchronized, or Data Guard to replicate database data between two servers. Corente VPN allows you to create a trusted and secure communication link between your data center and the Oracle Cloud.