RMAN 12c introduced many new features, such as expanded multisection support, simplified cross-platform migration, and many more. In this post, however, I would like to highlight a small but very helpful new feature which I demonstrated during my session at the SOUG day.
The automatic datafile creation
In earlier versions of Oracle, you might run into situations where you create a new tablespace, put some objects into it, and then, all of a sudden, the database crashes or the datafile is lost before it was ever backed up.
This is where the following command comes into play:
alter database create datafile <NAME>;
In Oracle 12c, however, this is done automatically for you if you use RMAN.
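For comparison, in pre-12c releases the manual workflow after losing a never-backed-up datafile looked roughly like this (a sketch only, using the file name and file number from the demo in this post):

```sql
-- Pre-12c manual recovery of a lost datafile that was never backed up (sketch).
-- First, recreate the empty datafile from the information in the control file:
alter database create datafile '/home/oracle/rman/dbi01.dbf';
-- Then recover it by replaying the redo generated since its creation,
-- and bring it back online:
recover datafile 11;
alter database datafile 11 online;
```

With 12c RMAN, the first step is the part you no longer have to remember.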
Let's create a tablespace and afterwards resize the datafile.
SQL> create tablespace dbi datafile '/home/oracle/rman/dbi01.dbf' size 16M;

Tablespace created.

SQL> alter database datafile '/home/oracle/rman/dbi01.dbf' resize 32M;

Database altered.
Now let's create a table in the new tablespace.
SQL> create table t_dbi tablespace dbi as select * from dba_objects;

Table created.

SQL> select count(*) from t_dbi;

  COUNT(*)
----------
    120130
Afterwards, we simulate an error and then check what Oracle says.
$ echo destroy > /home/oracle/rman/dbi01.dbf
SQL> alter system flush buffer_cache;

System altered.

SQL> select count(*) from t_dbi;
select count(*) from t_dbi
*
ERROR at line 1:
ORA-03135: connection lost contact
Process ID: 25538
Session ID: 387 Serial number: 55345
From the alert.log
Errors in file /u00/app/oracle/diag/rdbms/ocm121/OCM121/trace/OCM121_ora_25904.trc:
ORA-01157: cannot identify/lock data file 11 - see DBWR trace file
ORA-01110: data file 11: '/home/oracle/rman/dbi01.dbf'
OK, the datafile is gone and we have no backup. Let's do a preview to see how Oracle would restore the datafile.
RMAN> restore ( datafile 11 ) preview;

Starting restore at 30-SEP-2016 15:46:27
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=355 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=126 device type=DISK

datafile 11 will be created automatically during restore operation
using channel ORA_DISK_1
using channel ORA_DISK_2

List of Archived Log Copies for database with db_unique_name OCM121
=====================================================================

Key     Thrd Seq     S Low Time
------- ---- ------- - --------------------
519     1    529     A 30-SEP-2016 12:23:00
        Name: +FRA/OCM121/ARCHIVELOG/2016_09_30/thread_1_seq_529.454.923931563

recovery will be done up to SCN 7346834
Media recovery start SCN is 7346834
Recovery must be done beyond SCN 7346834 to clear datafile fuzziness
Finished restore at 30-SEP-2016 15:46:28
As you can see in the preview output, Oracle will create the datafile automatically for us. Let's try it.
RMAN> restore ( datafile 11 );

Starting restore at 30-SEP-2016 15:48:29
using channel ORA_DISK_1
using channel ORA_DISK_2

creating datafile file number=11 name=/home/oracle/rman/dbi01.dbf
restore not done; all files read only, offline, or already restored
Finished restore at 30-SEP-2016 15:48:31

RMAN> recover datafile 11;

Starting recover at 30-SEP-2016 15:49:17
using channel ORA_DISK_1
using channel ORA_DISK_2

starting media recovery
media recovery complete, elapsed time: 00:00:02
Finished recover at 30-SEP-2016 15:49:20

RMAN> alter database datafile 11 online;

Statement processed
Done. We can now access our table again without ever running the "alter database create datafile" command during the restore. The resize to 32M was also done for us, although it happened during the recovery phase rather than the restore.
$ ls -l dbi01.dbf
-rw-r----- 1 oracle asmadmin 33562624 Sep 30 15:52 dbi01.dbf
From my point of view, a small but quite helpful feature.
One last remark: I ran my demo on Oracle 12cR1 with the July 2016 PSU. In that version, the RMAN PREVIEW command has a bug which claims that you cannot recover to a consistent state, even when you can. After applying the following one-off patch on top of the July 2016 PSU, the RMAN PREVIEW command works as expected.
Patch 20315311: RMAN-5119: RECOVERY CAN NOT BE DONE TO A CONSISTENT STATE
Oracle has a dedicated team that works on difficult problems like mirroring E-Business Suite or ingesting SalesForce data into BI repositories for analysis. You can find their postings at www.ateam-oracle.com. One of their posts, from April 28th, details how to set up a VNC server with an ssh tunnel to communicate securely from a Windows desktop, using a putty tunnel to a tigerVNC server installed on a Linux instance in the Oracle Compute Cloud. The installation instructions cover setting up putty on the Windows desktop as well as setting up the X-Windows user interface and tigerVNC on the Linux server.
There are basically two options for this configuration. The first is to run the VNC software on the Linux server and have it listen on all network connections. If you set up a Corente VPN to connect your office to the Oracle Cloud, nothing else needs to be done other than opening port 5901 on your virtual private network. Once this is done, you will have access to the console from any computer in your office that is connected to the VPN subnet. You could also open port 5901 to the public internet, but this is not recommended because you are exposing a potential security hole that someone can exploit; it is easy to configure short term and turn off when you are done testing the configuration. Alternatively, you can set up an ssh tunnel from a computer in your office, and anyone who opens port 5901 to that IP address is tunneled through the ssh connection to port 5901 in the Oracle Cloud. This is a simple and secure way of connecting one desktop to one computer, and it also provides a single access point for people to use to connect to a server in the cloud.
One of the key questions to ask when configuring a console connection is how you are going to use it. Do you have a team of administrators who all need to access a server in the cloud? Do they need simultaneous access? Do you need a VPN to connect other servers and services, or is it just for a console connection? There is a cost of $75/month to run a VPN server, because you have to pay for a compute instance to run the connection. In my opinion, paying $75/month to connect one person to one computer is not worth the cost; you can do the same thing with an SSH tunnel. If, on the other hand, you have 3-4 admins trying to access services and developers trying to access other servers, having them set up a dozen or so SSH tunnels every time they want to connect is not an efficient use of time.
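For the single-admin case, the tunnel itself is one command. A minimal sketch follows; the user and host name are hypothetical placeholders, and the block only prints the command rather than opening a connection:

```shell
# Hypothetical values - replace with your cloud instance's user and host/IP.
VNC_PORT=5901
CLOUD_HOST="opc@cloud.example.com"

# -L forwards local port 5901 through the encrypted SSH channel to port 5901
# on the Linux server, so a VNC viewer pointed at localhost:5901 reaches the
# remote tigerVNC server without exposing 5901 to the public internet.
echo "ssh -L ${VNC_PORT}:localhost:${VNC_PORT} ${CLOUD_HOST}"
```

Dropping the echo runs the actual tunnel; putty users configure the equivalent forwarding in the tunnel settings, as described in the A-Team post.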
In summary, many people only look at how to do something. They don't take a step back and look at why they are doing something. If the software that you are installing is only available with a graphical interface installer then you have to set up a connection once possibly twice. If you can run the installer in silent mode with a configuration file to feed it the mouse clicks and text entry, go with the silent mode install. Scripting will allow you to automate the install and put it in the ora-init of your Orchestration thus reducing time and potential error conditions by selecting a different option during re-installation. If you don't have a need for a long term VPN go with a short term SSH tunnel connection. You can always install the VNC software and not enable it on the Linux server in the Oracle Cloud. I did not go through a screen shot of installation and configuration because the A-Team did a great job in their tutorial.
"Had a question from a client yesterday concerning the subject: I want to know which database session (APEX_PUBLIC_USER) is servicing which APEX session. Poking around in the V$ tables, I can see that in v$SQL, the module column will reveal the APEX Application and Page, but not the Session ID. Even if the session ID was in there, I don’t see an obvious way to join back to V$SESSION."
It's a bit of a puzzling problem for DBAs and developers: being able to correlate a database session with a specific APEX application and session. As I wrote about earlier, all database sessions in the database session pool of ORDS are connected as APEX_PUBLIC_USER (or ANONYMOUS, if you're using the embedded PL/SQL Gateway). If a user is experiencing slowness in their APEX application, how can a DBA look under the hood, identify the database session associated with that user's request, and examine the active SQL statement and any wait events for that session?
This question comes up a lot, and should really be covered in the Oracle Application Express documentation. But in the meantime, here's the definitive answer:
APEX populates the following information in GV$SESSION for ACTIVE sessions:
- client_info: Workspace ID:Authenticated username
- module: DB Schema/APEX:APP application id:page id
- client_identifier: Authenticated username:APEX Session ID
For example, for a recent request I did on apex.oracle.com, I had the following values in the DB session that executed my request:
module: JOEL_DB/APEX:APP 17251:4
There is no permanent, fixed correlation between an APEX session and a database session. The assignment of a session in the session pool to service an APEX request is essentially unpredictable and not constant. That's why this correlation can only be done for active database sessions, which are actively servicing APEX requests.
There's one caveat: a developer could overwrite these values using the database-provided PL/SQL APIs, which I've seen customers do occasionally. Otherwise, for active database sessions, you'll see these three elements populated in GV$SESSION, and module & client_identifier will also be present in the Active Session History. Carlos Sierra has an excellent blog post about how to query the Active Session History to identify poorly performing APEX applications and their associated SQL.
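Putting the three attributes together, a DBA could locate the database sessions currently servicing APEX requests with a query along these lines (a sketch; the module predicate assumes the "schema/APEX:APP application id:page id" format shown above):

```sql
-- Active sessions currently servicing APEX requests, with the application,
-- page, workspace, user and APEX session encoded in the three attributes.
select sid,
       serial#,
       client_info,        -- Workspace ID:Authenticated username
       module,             -- DB Schema/APEX:APP application id:page id
       client_identifier,  -- Authenticated username:APEX Session ID
       sql_id
  from gv$session
 where status = 'ACTIVE'
   and module like '%/APEX:APP%';
```

From there, sql_id joins to V$SQL (or the Active Session History) to see what each request is actually executing.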
In the last post we looked at how to get a ZFS file system up and running on a CentOS 7 host and how to enable the auto mount of the ZFS file systems. In this post we’ll look at two of the features ZFS provides: Snapshots and clones.
A ZFS snapshot is a read-only copy of a file system. How can we benefit from that when it comes to PostgreSQL? There are several scenarios where this can be useful. Imagine you are developing an application and you want to test the deployment of a new release on top of a previous release. What you probably want is a production-like PostgreSQL instance with lots of data, so you can test the upgrade path. In addition, it would be great if you could revert in seconds and start from scratch, just in case you run into trouble or missed one important point in the upgrade scripts. Using ZFS snapshots you can have all of this. Let's see.
Currently my PostgreSQL instance from the last post does not contain any user data, so let's generate some:
postgres=# create table my_app_table ( a int, b varchar(50) );
CREATE TABLE
postgres=# with aa as
postgres-# ( select *
postgres(#     from generate_series (1,1000000) a
postgres(# )
postgres-# insert into my_app_table
postgres-# select aa.a, md5(aa.a::varchar)
postgres-#   from aa;
INSERT 0 1000000
This is the release we want to test our upgrade scripts against, so let's create a snapshot of the current state of our instance:
[root@centos7 ~] zfs snapshot pgpool/pgdata@baserelease
[root@centos7 ~] zfs list -t snapshot
NAME                        USED  AVAIL  REFER  MOUNTPOINT
pgpool/pgdata@baserelease  16.6M      -   202M  -
"baserelease" is the name of the snapshot; to be precise, everything after the "@" is the snapshot name.
Are you worried about consistency? This should not be an issue, as PostgreSQL fsyncs the WAL: the instance should just start, replay all the WAL records that are missing from the data files, and you're fine. In any case, this is a testing scenario, so as long as you have a consistent starting point you are fine.
A simple upgrade script could be:
postgres=# alter table my_app_table add column c date;
ALTER TABLE
postgres=# update my_app_table set c = now();
UPDATE 1000000
What happened to the snapshot?
[root@centos7 ~] zfs list -t snapshot
NAME                        USED  AVAIL  REFER  MOUNTPOINT
pgpool/pgdata@baserelease  78.3M      -   202M  -
As soon as you modify data the snapshot will grow, no surprise.
So you ran your tests, discovered some things you could improve, and once you have improved them you want to start from the same point again. With a snapshot this is quite easy: just revert to it. Of course you'll need to stop your PostgreSQL instance first:
postgres@centos7:/home/postgres/ [PG1] pg_ctl stop -D /pgpool/pgdata/ -m fast
waiting for server to shut down....LOG:  received fast shutdown request
LOG:  aborting any active transactions
LOG:  autovacuum launcher shutting down
LOG:  shutting down
 done
server stopped
As soon as the instance is down the snapshot can be reverted:
[root@centos7 ~] zfs rollback pgpool/pgdata@baserelease
[root@centos7 ~] zfs list -t snapshot
NAME                        USED  AVAIL  REFER  MOUNTPOINT
pgpool/pgdata@baserelease     1K      -   202M  -
When you check the data after you started the instance again it is exactly as it was before:
postgres@centos7:/home/postgres/ [PG1] pg_ctl start -D /pgpool/pgdata/
postgres@centos7:/home/postgres/ [PG1] LOG:  database system was not properly shut down; automatic recovery in progress
postgres@centos7:/home/postgres/ [PG1] psql postgres
psql (9.5.4 dbi services build)
Type "help" for help.

postgres=# \d my_app_table
     Table "public.my_app_table"
 Column |         Type          | Modifiers
--------+-----------------------+-----------
 a      | integer               |
 b      | character varying(50) |
Notice the message about automatic recovery; that is the WAL being replayed. Now you can run your upgrade script again, revert in case of issues, start over, and so on.
Another use case: rapid cloning of PostgreSQL instances (clones are writable, snapshots are not). How does that work? This is where clones come into the game. To create a clone you need a snapshot, as clones depend on snapshots. Another thing to keep in mind is that you cannot delete a snapshot while a clone still sits on top of it. Let's see how it works.
As said, we need a snapshot:
[root@centos7 ~] zfs snapshot pgpool/pgdata@clonebase
On top of this snapshot we can now create a clone:
[root@centos7 ~] zfs create pgpool/clones
[root@centos7 ~] zfs clone pgpool/pgdata@clonebase pgpool/clones/1
[root@centos7 ~] zfs list
NAME              USED  AVAIL  REFER  MOUNTPOINT
pgpool            170M  9.46G    21K  /pgpool
pgpool/clones    20.5K  9.46G  19.5K  /pgpool/clones
pgpool/clones/1     1K  9.46G   169M  /pgpool/clones/1
pgpool/pgdata     170M  9.46G   169M  /pgpool/pgdata
Using the new clone we can bring up another PostgreSQL instance in seconds, containing the exact data from the source of the clone:
postgres@centos7:/home/postgres/ [PG1] rm /pgpool/clones/1/*.pid
postgres@centos7:/home/postgres/ [PG1] sed -i 's/#port = 5432/port=5433/g' /pgpool/clones/1/postgresql.conf
postgres@centos7:/home/postgres/ [PG1] pg_ctl start -D /pgpool/clones/1/
postgres@centos7:/home/postgres/ [PG1] psql -p 5433 postgres
psql (9.5.4 dbi services build)
Type "help" for help.

postgres=#
Quite cool and easy.
Conclusion: I am not sure I'd use ZFS for production databases on Linux, as I have not tested it enough. But for development and testing purposes there are quite a few benefits, such as snapshots and clones, which can simplify your processes a lot. You could even use snapshots and clones as a basis for your backups, although I'd prefer barman or bart.
PS: To clean up:
[root@centos7 ~] zfs destroy pgpool/clones/1
[root@centos7 ~] zfs destroy pgpool/clones
The article Running PostgreSQL on ZFS on Linux – Fun with snapshots and clones appeared first on Blog dbi services.
- Set 1 - March 2014
- Set 2 - May 2014
- Set 3 - July 2014
- Set 4 - October 2014
- Set 5 - November 2014
- Set 6 - March 2015
- Set 7 - October 2015
- Set 8 - January 2016
- Set 9 - April 2016
We have now made available a tenth set of critical updates to AD and TXK. These updates are included in EBS 12.2.6. You have a choice: you can apply the EBS 12.2.6 Release Update Pack, or you can apply the AD and TXK updates separately.
We strongly recommend that you apply either EBS 12.2.6 or these new AD and TXK updates at your earliest convenience.
They can be downloaded from My Oracle Support, as shown by this example for AD:
Refer to the following My Oracle Support knowledge document for full installation instructions and associated tasks:
- Applying the Latest AD and TXK Release Update Packs to Oracle E-Business Suite Release 12.2 (Doc ID 1617461.1)
What's New in this Patchset?
This patchset includes a large number of critical fixes for stability issues that will affect all customers. It also includes the following new features:
- Delta synchronization for more efficient Online Patching cycles, with streamlined handling of seed data and the Applications file system
- Improved log file structure to make diagnostics easier and more readable
- Expanded Application Short Name to 16 characters
- Support for Logical Host Names
- Support for 64-bit JDK on Concurrent Manager tier
Just one day to go before the next SQL Saturday 2016 in Portugal.
I usually promote these events in France, but this time it is for the Portuguese SQL Server community. I will have the chance to talk about SQL Server AlwaysOn and troubleshooting scenarios. It will also be a good opportunity to meet Portuguese community members like Niko Neugebauer (@NikoNeugebauer), André Batista (@klunkySQL) and many others.
Moreover, Porto is definitely a beautiful city and a good way to finish the summer vacations.
Hope to see you there!
The article SQLSaturday #546 Porto – Troubleshooting scenarios with availability groups appeared first on Blog dbi services.