Feed aggregator

Default Password Hashes for 11g Oracle Database

Pete Finnigan - Tue, 2017-03-14 17:26
I often get Oracle Security-related questions from people, randomly sent to my inbox, occasionally on social media, and less often on this site's forum. I get questions in these ways on average about four times per week. I....[Read More]

Posted by Pete On 14/03/17 At 06:16 PM

Categories: Security Blogs

Taste of KScope 2017 Webinars

Scott Spendolini - Tue, 2017-03-14 17:05

This Thursday, I’ll be participating in the Taste of KScope 2017 webinar series by presenting GET POST ORDS JSON: Web Services for APEX Decoded.  The webinar will begin at noon EDT on Thursday, March 16th.  The webinar is completely free, and you don’t need to be an ODTUG member to attend.

Here’s a summary of the abstract:

Web Services in the APEX world are becoming more and more popular.  However, there is still a lot of confusion as to what they are and how they could benefit the APEX developer.  After a review of the syntax and jargon associated with web services, this session will review and boil down web services to their basic components.  It will then demonstrate how APEX developers can start to use these powerful components - both to send and receive data from other sites.  

Not only will I be presenting this session at KScope later this year, but I’ve also done it a few times already, so most of the kinks are (hopefully) worked out.

You can register for the webinar here: https://attendee.gotowebinar.com/register/2300788935263147265

It's A Matter Of Perspective

Floyd Teter - Tue, 2017-03-14 14:09
So I suppose that if I'm going to blow the trumpet and announce the resurrection of this blog, I'd better write something meaningful...

I'm in Northern California at Oracle HQ this week. It's always fun to observe what's happening here in Silicon Valley.  For example, I can see the tech market is still good...lots of employment ads on billboards between the San Jose and San Francisco airports.  And the highly-publicized drought is clearly broken:  the area is as green as I've ever seen it.

I will say that I've also seen some divergent behavior in response to the breaking of the drought.  On one hand, I see lots of recently-installed xeriscape landscaping.  But on the other hand, I also see a bunch of recently repaired lawn grass - with lawn sprinklers watering every day.  I guess whether you're adapting with new water-wise landscaping or salvaging your lawn depends on your perspective.  Are drought-like conditions the new norm or is the weather in Northern California returning to normal after a long anomaly of dry weather?  I suppose it's all a matter of your perspective.

I see the same type of divergent behavior among SaaS customers.

Some customers see SaaS as driving a new business norm.  They embrace the trade-off: greater simplicity and lower operational costs in exchange for less flexibility in customizing business processes.  Those customers get more value from SaaS by accepting less freedom to change the way they do business.

Other customers seem to simply look at SaaS as the latest trend to arrive in the enterprise tech world.  They're willing to have a vendor host their technology platform, but still want the flexibility to customize the software in order to make it fit their existing business processes.

It's possible for either type of customer to get what they want.  I'll maintain that the former type gets more value from SaaS than the latter.  But, in the end, I suppose the choice of adapting to the new norm or attempting to salvage what you had before really depends on your perspective.

Significant Improvement for WebLogic Start-Up Time on macOS Sierra

Andrejus Baranovski - Tue, 2017-03-14 11:59
I have been facing really slow WebLogic start-up times after upgrading to recent versions of macOS Sierra. It turns out to be a common problem related to JVM start-up on macOS systems, and has nothing to do with WebLogic itself. The solution is to register a mapping between 127.0.0.1 and your computer name in the hosts file; read more on Stack Overflow - Jvm takes a long time to resolve ip-address for localhost. This issue seems to appear with newer JVMs.

Originally WebLogic was starting up in 157 seconds:


After the change was applied in the hosts file, start-up time improved a lot; it is 24 seconds now:


Changes in the hosts file: 127.0.0.1 is mapped to my computer name, alongside localhost. The same applies to the ::1 mapping:
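As a sketch, with mymac standing in for your own computer name, the relevant /etc/hosts entries end up looking like this:

```
127.0.0.1    localhost mymac
::1          localhost mymac
```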


You can get the computer name in System Preferences -> Sharing:


Hope this hint will be useful for developers working on macOS.

Content and Experience Cloud REST API Consolidation

WebCenter Team - Tue, 2017-03-14 11:20

Authored by Victor Owuor, Senior Director, Software Development, Oracle

You are probably aware of our efforts to rebrand the Content and Experience Cloud platform to offer a cohesive application suite that allows convenient development of applications which take advantage of our product offerings.  It is our intent to present a consolidated package of our feature set, abstracting away the different applications that comprise the Content and Experience Cloud product suite.  A consolidation of our REST API is a critical part of that effort.

In previous releases, the REST API was in two separate packages, one for “Social” and another for “Documents.”  That separation did not reflect the needs of our developers, who, for example, may need to obtain a conversation related to a document.  Another use case that spanned both of those packages is a developer who wants to embed a document in a conversation.  Those use cases are typical for developers, and it is our goal to streamline the experience when writing such applications.  Other developer documentation, such as the documentation for the Sites SDK or the DOCS Application Integration Framework, was also separate.  We have started making changes to the product and documentation to address those issues.


The first change is already evident in the documentation for Content and Experience Cloud, shown below.  We now have a single landing page for developers that clearly lists the separate aspects of the developer interfaces that we offer.  For the REST API, we have a link for Content Management that includes the documentation for the DOCS REST API and other content management API calls.  We also have a link for Collaboration that includes the documentation for what was previously the Social REST API and other Collaboration API calls.  Additionally, the same landing page includes information about the JavaScript SDK for developing Sites and the Application Integration Framework for extending Documents.

The REST service end-points will be as follows:

  1. Content Management
    • Documents - /documents/api/…
  2. Collaboration
    • Social - /social/api/…
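As a minimal sketch of how a client might address the two families of endpoints (the base URL, the `endpoint` helper, and the resource paths here are illustrative assumptions, not documented API values):

```python
# Sketch: route a resource category to its REST prefix on the
# consolidated Content and Experience Cloud API surface.
# The host name and trailing paths are placeholders for illustration.

PREFIXES = {
    "documents": "/documents/api",  # Content Management calls
    "social": "/social/api",        # Collaboration calls
}

def endpoint(base_url: str, category: str, path: str) -> str:
    """Build a full endpoint URL for the given API family."""
    prefix = PREFIXES[category]
    return f"{base_url}{prefix}/{path.lstrip('/')}"

print(endpoint("https://cec.example.com", "documents", "files"))
# → https://cec.example.com/documents/api/files
```

The point of the consolidation is that both prefixes now sit behind one developer surface, so a single helper like this can cover document and conversation calls alike.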

We have plans for additional work to make it easier to develop using the API.  The most important of those plans is our effort to harmonize the treatment of various common REST resources (such as people, groups, and documents) across the suite, which we hope to achieve later this year.  We are also working to harmonize the authentication and security model across all REST end-points in the same timeframe.

Please try the new REST API and documentation and share any feedback or reaction in the comments.

Oracle Unveils Oracle US Tennis Awards for Young Professionals

Oracle Press Releases - Tue, 2017-03-14 11:00
Press Release
Oracle Unveils Oracle US Tennis Awards for Young Professionals
Grants to assist former collegiate players in their pro careers

Indian Wells, Calif.—Mar 14, 2017

Oracle Corp. today announced the creation of the Oracle US Tennis Awards, player grants that are to be awarded annually at the BNP Paribas Open to assist young American players as they transition from college into the professional ranks.

The two $100,000 grants are to be awarded each year at the BNP Paribas Open to a male and female professional who have demonstrated exemplary sportsmanship and an aptitude for success on the pro tour. They must have played collegiate tennis prior to turning professional.

“Making the transition from college to the professional ranks is a real challenge,” said Oracle CEO Mark Hurd. “We hope these awards will provide young players with support to develop their games and improve their mental and physical fitness. Our goal is to grow the program and we invite input and support from other companies who are committed to U.S. athletics.”

The awards will be administered by the Intercollegiate Tennis Association, the governing body of college tennis.

“The ITA is proud of our partnership with Oracle,” said ITA CEO Timothy Russell. “Together we are growing the brand of college tennis and enhancing student-athlete experiences, and in doing so helping to return the leaders of tomorrow in America and around the world.”

Recipients will be selected by the newly created Oracle US Tennis Awards Advisory Council, a six-member body that includes individuals committed to the growth and improvement of American tennis. The inaugural members of the Advisory Council are:

  • Chris Evert: Former singles world No. 1; current ESPN tennis commentator; co-founder of the Evert Tennis Academy.
  • Ilana Kloss: Former singles world No. 19; commissioner of Mylan World Team Tennis.
  • Peggy Michel: Three-time Grand Slam doubles champion; played college tennis at Arizona State; current Assistant Tournament Director & Vice President of Sales at the BNP Paribas Open.
  • Dr. Timothy Russell: CEO of the ITA; college educator for three decades.
  • Martin Blackman: General Manager, USTA Player Development; played college tennis at Stanford.
  • Todd Martin: Former singles world No. 4; CEO of the International Tennis Hall of Fame and Tournament Director of the Dell Technologies Hall of Fame Open; played college tennis at Northwestern University.

“The transition into professional tennis is a great challenge that I experienced myself. The grants will be a great help to these young athletes,” said Todd Martin. “It’s great to see a company like Oracle step up to support American tennis.”

Added Chris Evert: “The people at Oracle understand that becoming a top player requires a strong support system. We are confident that these awards will help some young players through those daunting early years on the tour.”

Contact Info
Deborah Hellinger
Oracle
212.508.7935
deborah.hellinger@oracle.com
Dan Johnson
ITA
303.579.4878
djohnson@itatennis.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

About The ITA

The Intercollegiate Tennis Association (ITA) is committed to serving college tennis and returning the leaders of tomorrow. As the governing body of college tennis, the ITA oversees men's and women's varsity tennis at NCAA Divisions I, II and III, NAIA and Junior/Community College divisions. The ITA administers a comprehensive awards and rankings program for men's and women's varsity players, coaches and teams in all divisions, providing recognition for their accomplishments on and off the court. For more information on the ITA, visit the ITA website at www.itatennis.com, like the ITA on Facebook or follow @ITA_Tennis on Twitter and Instagram.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 


Oracle Linux and Software Collections make it a great 'current' developer platform

Wim Coekaerts - Tue, 2017-03-14 10:57
Oracle Linux major releases happen every few years. Oracle Linux 7 is the current version, released back in 2014; Oracle Linux 6 is from 2011, and so on. When a major release goes out the door, it freezes the various packages at a point in time as well: it locks down which major version of glibc ships, and so forth.

Now, that doesn't mean nothing new gets added over time. Security fixes and critical bug fixes get backported from newer versions into these packages, and a good number of enhancements and features also get backported over the years, very much so on the kernel side, but in a number of cases also in the various userspace packages. For the most part, however, the focus is on stability and consistency. This is also the case with the different tools, compilers, and languages. A concrete example: OL7 provides Python 2.7.5. This base release of Python will not change in newer OL7 updates; a big change would break compatibility, so it is kept stable at 2.7.5.

A very important thing to keep reminding people of, however, is that CVEs do get backported into these versions. I often hear someone ask whether we ship a newer version of, say, openssl, because some CVE or other is fixed in that newer version; typically that CVE is also fixed in the versions we ship with OL. There is a difference between openssl the open source project, where CVEs are fixed 'upstream', and openssl shipped as part of Oracle Linux, which is maintained and bug-fixed over time with backports from upstream. We take care of critical bugs and security fixes in the currently shipping versions.

Anyway, there are other Linux distributions out there that evolve much more frequently and, by doing so, tend to come out of the box with newer versions of libraries, tools, and packages, which makes them very attractive for developers who are not bound to long-term stability and compatibility. So the developer goes off, installs the latest version of everything, and writes their apps using that. That's a fine model in some cases, but when you have enterprise apps that might be deployed for many years and have a dependency on certain versions of scripting languages or libraries, you can't just replace those with something much newer, in particular a newer major version. I am sure many people will agree that if you have an application written in Python against 2.7.5 and run that in production, you're not going to let the sysadmin just rip that out, replace it with Python 3.5, and assume it all just works and is transparently compatible.
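As a tiny, self-contained illustration of why that swap is not transparent: even the division operator changed meaning between Python 2 and Python 3, so code validated on 2.7.5 can silently compute different results on 3.5:

```python
# Python 2: 7 / 2 evaluates to 3 (truncating integer division).
# Python 3: 7 / 2 evaluates to 3.5; the truncating behaviour moved to //.

print(7 / 2)    # 3.5 under Python 3 (was 3 under Python 2)
print(7 // 2)   # 3 under both
```

Dozens of such small semantic shifts are exactly why a distribution pins the base interpreter version for the life of a major release.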

So does that mean we are stuck? No. There is a yum repository called the Software Collection Library which we make available to everyone on our freely accessible yum server. That library gets updated on a regular basis (we are at version 2.3 right now), and it contains newer versions of many popular packages, typically newer compilers, toolkits, and so on (such as GCC, Python, PHP, Ruby...), things that developers want to use and for which they are looking for more recent versions.

The channel is not enabled by default; you have to edit /etc/yum.repos.d/public-yum-ol7.repo and set the ol7_software_collections repo to enabled=1. Once you do that, you can install the different versions that are offered. You can browse the repo using yum or just look online (similar channels exist for Oracle Linux 6). When you install these different versions, they get installed under /opt and they won't replace the existing versions. So if you have the Python installed by default with OL7 (2.7.5) and install Python 3.5 from the Software Collections, the new version goes into /opt/rh/rh-python35. You can then use the scl utility to selectively enable which version an application uses.
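As a sketch, the stanza to edit in /etc/yum.repos.d/public-yum-ol7.repo looks roughly like this (the name line is illustrative; only the enabled flag needs to change from 0 to 1):

```
[ol7_software_collections]
name=Software Collection Library packages for Oracle Linux 7 ($basearch)
# baseurl and gpgkey lines unchanged from the shipped file
enabled=1
```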
An example:

scl enable rh-python35 -- bash 

One little caveat to keep in mind: if you have an early version of OL7 or OL6 installed, we do not modify the /etc/yum.repos.d/public-yum-ol7.repo file after the initial installation (because we might overwrite changes you made), so it is always a good idea to get the latest version from our yum server. (You can find them here.) The channel/repo name might have changed, or a new one could have been added.

As you can see, Oracle Linux is, or can be, a very current developer platform. The packages are there; they are just provided in a model that preserves stability and consistency. There is no need to download upstream package source code, compile it yourself, and replace system toolkits and compilers, which can cause incompatibilities.

Postgres Barman and DMK

Yann Neuhaus - Tue, 2017-03-14 10:21

As PostgreSQL is more and more present in our clients' infrastructure, I wanted to describe the barman installation and configuration. Barman is a backup and recovery tool for PostgreSQL; I configured it using DMK, our tool for infrastructure administrators on Oracle, MySQL, and PostgreSQL.

I used two virtual servers running under RedHat Enterprise Linux 7.1, one for the PostgreSQL database server (pg1) and the second for barman (pg2).

First I install PostgreSQL 9.6 on both servers:

[root@pg1 ~]# wget https://download.postgresql.org/pub/repos/yum/9.6/redhat/
rhel-7-x86_64/pgdg-redhat96-9.6-3.noarch.rpm
--2017-02-06 15:08:05--  https://download.postgresql.org/pub/repos/yum/9.6/redhat
/rhel-7-x86_64/pgdg-redhat96-9.6-3.noarch.rpm
Resolving download.postgresql.org (download.postgresql.org)... 
217.196.149.55, 174.143.35.246, 87.238.57.227, ...
Connecting to download.postgresql.org (download.postgresql.org)|
217.196.149.55|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 4816 (4.7K) [application/x-redhat-package-manager]
Saving to: 'pgdg-redhat96-9.6-3.noarch.rpm'
 
100%[======================================>] 4,816       
 
2017-02-06 15:08:05 (2.71 MB/s) - pgdg-redhat96-9.6-3.noarch.rpm saved 
 
[root@pg1 ~]# sudo yum localinstall -y pgdg-redhat96-9.6-3.noarch.rpm
Examining pgdg-redhat96-9.6-3.noarch.rpm: pgdg-redhat96-9.6-3.noarch
Marking pgdg-redhat96-9.6-3.noarch.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package pgdg-redhat96.noarch 0:9.6-3 will be installed
--> Finished Dependency Resolution
 
Dependencies Resolved
 
================================================================================
 Package           Arch       Version     Repository                       Size
================================================================================
Installing:
 pgdg-redhat96     noarch     9.6-3       /pgdg-redhat96-9.6-3.noarch     2.7 k
 
Transaction Summary
================================================================================
Install  1 Package
 
Total size: 2.7 k
Installed size: 2.7 k
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : pgdg-redhat96-9.6-3.noarch                                   1/1
  Verifying  : pgdg-redhat96-9.6-3.noarch                                   1/1
 
Installed:
  pgdg-redhat96.noarch 0:9.6-3
 
Complete!

I install barman on the barman server (pg2):

[root@pg2 ~]# sudo yum install barman
pgdg96                                                   | 4.1 kB     00:00
(1/2): pgdg96/7Server/x86_64/group_gz                      |  249 B   00:00
(2/2): pgdg96/7Server/x86_64/primary_db                    | 129 kB   00:02
Resolving Dependencies
--> Running transaction check
---> Package barman.noarch 0:2.1-1.rhel7 will be installed
--> Processing Dependency: python-psycopg2 >= 2.4.2 for package:
barman-2.1-1.rhel7.noarch
--> Processing Dependency: python-argh >= 0.21.2 for package: 
barman-2.1-1.rhel7.noarch
--> Processing Dependency: python-dateutil for package: 
barman-2.1-1.rhel7.noarch
--> Processing Dependency: python-argcomplete for package: 
barman-2.1-1.rhel7.noarch
--> Running transaction check
---> Package python-argcomplete.noarch 0:0.3.7-1.rhel7 will be installed
---> Package python-argh.noarch 0:0.23.0-1.rhel7 will be installed
---> Package python-dateutil.noarch 1:2.5.3-3.rhel7 will be installed
--> Processing Dependency: python-six for package: 1:
python-dateutil-2.5.3-3.rhel7.noarch
---> Package python-psycopg2.x86_64 0:2.6.2-3.rhel7 will be installed
--> Processing Dependency: postgresql96-libs for package: 
python-psycopg2-2.6.2-3.rhel7.x86_64
--> Running transaction check
---> Package postgresql96-libs.x86_64 0:9.6.1-1PGDG.rhel7 will be installed
---> Package python-six.noarch 0:1.9.0-2.el7 will be installed
--> Finished Dependency Resolution
 
Dependencies Resolved
 
================================================================================
 Package                Arch       Version                 Repository      Size
================================================================================
Installing:
 barman                 noarch     2.1-1.rhel7             pgdg96         248 k
Installing for dependencies:
 postgresql96-libs      x86_64     9.6.1-1PGDG.rhel7       pgdg96         308 k
 python-argcomplete     noarch     0.3.7-1.rhel7           pgdg96          23 k
 python-argh            noarch     0.23.0-1.rhel7          pgdg96          33 k
 python-dateutil        noarch     1:2.5.3-3.rhel7         pgdg96         241 k
 python-psycopg2        x86_64     2.6.2-3.rhel7           pgdg96         131 k
 python-six             noarch     1.9.0-2.el7             ol7_latest      28 k
 
Transaction Summary
================================================================================
Install  1 Package (+6 Dependent packages)
 
Total download size: 1.0 M
Installed size: 3.6 M
Is this ok [y/d/N]: y
Downloading packages:
(1/7): barman-2.1-1.rhel7.noarch.rpm                       | 248 kB   00:03
(2/7): python-argcomplete-0.3.7-1.rhel7.noarch.rpm         |  23 kB   00:00
(3/7): python-argh-0.23.0-1.rhel7.noarch.rpm               |  33 kB   00:00
(4/7): postgresql96-libs-9.6.1-1PGDG.rhel7.x86_64.rpm      | 308 kB   00:04
(5/7): python-six-1.9.0-2.el7.noarch.rpm                   |  28 kB   00:00
(6/7): python-dateutil-2.5.3-3.rhel7.noarch.rpm            | 241 kB   00:01
(7/7): python-psycopg2-2.6.2-3.rhel7.x86_64.rpm            | 131 kB   00:01
--------------------------------------------------------------------------------
Total                                              163 kB/s | 1.0 MB  00:06
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : python-argh-0.23.0-1.rhel7.noarch                            1/7
  Installing : postgresql96-libs-9.6.1-1PGDG.rhel7.x86_64                   2/7
  Installing : python-psycopg2-2.6.2-3.rhel7.x86_64                         3/7
  Installing : python-argcomplete-0.3.7-1.rhel7.noarch                      4/7
  Installing : python-six-1.9.0-2.el7.noarch                                5/7
  Installing : 1:python-dateutil-2.5.3-3.rhel7.noarch                       6/7
  Installing : barman-2.1-1.rhel7.noarch                                    7/7
  Verifying  : python-psycopg2-2.6.2-3.rhel7.x86_64                         1/7
  Verifying  : python-six-1.9.0-2.el7.noarch                                2/7
  Verifying  : python-argcomplete-0.3.7-1.rhel7.noarch                      3/7
  Verifying  : postgresql96-libs-9.6.1-1PGDG.rhel7.x86_64                   4/7
  Verifying  : python-argh-0.23.0-1.rhel7.noarch                            5/7
  Verifying  : barman-2.1-1.rhel7.noarch                                    6/7
  Verifying  : 1:python-dateutil-2.5.3-3.rhel7.noarch                       7/7
 
Installed:
  barman.noarch 0:2.1-1.rhel7
 
Dependency Installed:
  postgresql96-libs.x86_64 0:9.6.1-1PGDG.rhel7
  python-argcomplete.noarch 0:0.3.7-1.rhel7
  python-argh.noarch 0:0.23.0-1.rhel7
  python-dateutil.noarch 1:2.5.3-3.rhel7
  python-psycopg2.x86_64 0:2.6.2-3.rhel7
  python-six.noarch 0:1.9.0-2.el7
Complete!

Everything is installed on both servers:

– PostgreSQL 9.6

– DMK (latest version)

– barman

Now we configure as follows:

The barman server is pg2 : 192.168.1.101

The database server is pg1 : 192.168.1.100

 

On the database server, we create a barman user:

postgres@:5432) [postgres] > create user barman superuser login encrypted password 
'barman';
CREATE ROLE

And a barman_streaming user:

postgres@: [postgres] > create user barman_streaming replication encrypted password 
'barman';
CREATE ROLE

We modify the following parameters: max_replication_slots (which specifies the maximum number of replication slots the server can support) and max_wal_senders (which specifies the maximum number of concurrently running WAL sender processes):

postgres@:5432) [postgres] > alter system set max_replication_slots=10;
ALTER SYSTEM
postgres@:5432) [postgres] > alter system set max_wal_senders=10;
ALTER SYSTEM

As those parameters have been modified, we need to restart the database. We use pgrestart, which is a DMK alias for pg_ctl -D ${PGDATA} restart -m fast:

postgres@pg1:/home/postgres/ [PG1] pgrestart
waiting for server to shut down.... done
server stopped
server starting
postgres@pg1:/home/postgres/ [PG1] 2017-02-06 15:59:14.756 CET - 1 - 17008 -  
- @ LOG:  redirecting log output to logging collector process
2017-02-06 15:59:14.756 CET - 2 - 17008 -  - 
@ HINT:  Future log output will appear in directory 
"/u01/app/postgres/admin/PG1/pg_log".

We modify the pg_hba.conf on the database server in order to allow connections from the barman server, as follows:

host    all             barman          192.168.1.101/24       md5
host    replication     barman_streaming 192.168.1.101/24      md5

We modify the .pgpass file on the barman server so that we are not asked for passwords:

postgres@pg2:/home/postgres/ [pg96] cat .pgpass
*:*:*:postgres:postgres
192.168.1.100:*:*:barman:barman
192.168.1.100:*:*:barman_streaming:barman

Finally we test the connection from the barman server to the database server:

postgres@pg2:/home/postgres/ [pg96] psql -c 'select version()'
 -U barman -h 192.168.1.100 -p 5432 postgres
                                                 version
 
--------------------------------------------------------------------------------

 PostgreSQL 9.6.1 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (
Red Hat 4.8.5-11), 64-bit
(1 row)
postgres@pg2:/home/postgres/ [pg96] psql -U barman_streaming -h 192.168.1.100 
-p 5432 -c "IDENTIFY_SYSTEM" replication=1
      systemid       | timeline |  xlogpos  | dbname
---------------------+----------+-----------+--------
 6384063115439945376 |        1 | 0/F0006F0 |
(1 row)

Now it’s time to create a configuration file pg96.conf in $DMK_HOME/etc/barman.d on the barman server:

[pg96]
description =  "PostgreSQL 9.6 server"
conninfo = host=192.168.1.100 port=5432 user=barman dbname=postgres
backup_method = postgres
streaming_conninfo = host=192.168.1.100 port=5432 user=barman_streaming 
dbname=postgres
streaming_wals_directory = /u99/received_wal
streaming_archiver = on
slot_name = barman

We create a barman.conf file in $DMK_HOME/etc as follows, mainly defining the barman user, the configuration files directory, the barman backup home, the barman lock directory, and the log directory:

postgres@pg2:/u01/app/postgres/local/dmk/etc/ [pg96] cat barman.conf
; Barman, Backup and Recovery Manager for PostgreSQL
; http://www.pgbarman.org/ - http://www.2ndQuadrant.com/
;
; Main configuration file
 
[barman]
; System user
barman_user = postgres
 
; Directory of configuration files. Place your sections in separate files with .conf extension
; For example place the 'main' server section in /etc/barman.d/main.conf
configuration_files_directory = /u01/app/postgres/local/dmk/etc/barman.d
 
; Main directory
barman_home = /u99/backup
 
; Locks directory - default: %(barman_home)s
barman_lock_directory = /u01/app/postgres/local/dmk/etc/
 
; Log location
log_file = /u01/app/postgres/local/dmk/log/barman.log
 
; Log level (see https://docs.python.org/3/library/logging.html#levels)
log_level = DEBUG
 
; Default compression level: possible values are None (default), bzip2, gzip, pigz, pygzip or pybzip2
compression = gzip
 
; Pre/post backup hook scripts
;pre_backup_script = env | grep ^BARMAN
;pre_backup_retry_script = env | grep ^BARMAN
;post_backup_retry_script = env | grep ^BARMAN
;post_backup_script = env | grep ^BARMAN
 
; Pre/post archive hook scripts
;pre_archive_script = env | grep ^BARMAN
;pre_archive_retry_script = env | grep ^BARMAN
;post_archive_retry_script = env | grep ^BARMAN
;post_archive_script = env | grep ^BARMAN
 
; Global retention policy (REDUNDANCY or RECOVERY WINDOW) - default empty
retention_policy = RECOVERY WINDOW OF 4 WEEKS
 
; Global bandwidth limit in KBPS - default 0 (meaning no limit)
;bandwidth_limit = 4000
 
; Immediate checkpoint for backup command - default false
;immediate_checkpoint = false
 
; Enable network compression for data transfers - default false
;network_compression = false
 
; Number of retries of data copy during base backup after an error - default 0
;basebackup_retry_times = 0
 
; Number of seconds of wait after a failed copy, before retrying - default 30
;basebackup_retry_sleep = 30
 
; Maximum execution time, in seconds, per server
; for a barman check command - default 30
;check_timeout = 30
 
; Time frame that must contain the latest backup date.
; If the latest backup is older than the time frame, barman check
; command will report an error to the user.
; If empty, the latest backup is always considered valid.
; Syntax for this option is: "i (DAYS | WEEKS | MONTHS)" where i is an
; integer > 0 which identifies the number of days | weeks | months of
; validity of the latest backup for this check. Also known as 'smelly backup'.
;last_backup_maximum_age =
 
; Minimum number of required backups (redundancy)
;minimum_redundancy = 1

 

In order to enable streaming of transaction logs and to use replication slots, we run the following command on the barman server:

postgres@pg2:/u01/app/postgres/local/dmk/etc/ [pg96] barman receive-wal 
--create-slot pg96
Creating physical replication slot 'barman' on server 'pg96'
Replication slot 'barman' created

Then we can test:

We can force a log switch on the database server, issued from the barman host:

postgres@pg2:/u01/app/postgres/local/dmk/etc/ [pg96] barman switch-xlog 
--force pg96
The xlog file 00000001000000000000000F has been closed on server 'pg96'

 

We start receive-wal:

postgres@pg2:/u99/received_wal/ [pg96] barman -c 
/u01/app/postgres/local/dmk/etc/barman.conf receive-wal pg96
Starting receive-wal for server pg96
pg96: pg_receivexlog: starting log streaming at 0/68000000 (timeline 3)
pg96: pg_receivexlog: finished segment at 0/69000000 (timeline 3)
pg96: pg_receivexlog: finished segment at 0/6A000000 (timeline 3)
pg96: pg_receivexlog: finished segment at 0/6B000000 (timeline 3)
pg96: pg_receivexlog: finished segment at 0/6C000000 (timeline 3)

 

We can check the barman configuration:

postgres@pg2:/u99/restore_test/ [pg96] barman check pg96
Server pg96:
                    PostgreSQL: OK
                    superuser: OK
                    PostgreSQL streaming: OK
                    wal_level: OK
                    replication slot: OK
                    directories: OK
                    retention policy settings: OK
                    backup maximum age: OK (no last_backup_maximum_age provided)
                    compression settings: OK
                    failed backups: FAILED (there are 1 failed backups)
                    minimum redundancy requirements: OK (have 3 backups, 
                    expected at least 0)
                    pg_basebackup: OK
                    pg_basebackup compatible: OK
                    pg_basebackup supports tablespaces mapping: OK
                    pg_receivexlog: OK
                    pg_receivexlog compatible: OK
                    receive-wal running: OK
                    archiver errors: OK

We can run a barman archive-wal command:

postgres@pg2:/home/postgres/ [pg96] barman archive-wal pg96
Processing xlog segments from streaming for pg96
                    00000003.history
                    000000030000000000000067
                    000000030000000000000068

And finally you can run a backup with the command:

postgres@pg2:/home/postgres/ [pg96] barman backup pg96
Starting backup using postgres method for server pg96 in 
/u99/backup/pg96/base/20170214T103226
Backup start at xlog location: 0/69000060 (000000030000000000000069, 00000060)
Copying files.
Copy done.
Finalising the backup.
Backup size: 60.1 MiB
Backup end at xlog location: 0/6B000000 (00000003000000000000006A, 00000000)
Backup completed
Processing xlog segments from streaming for pg96
                    000000030000000000000069

We can list the backups:

postgres@pg2:/u02/pgdata/ [pg96] barman list-backup pg96
pg96 20170214T103226 - Tue Feb 14 09:32:27 2017 - Size: 60.2 MiB - WAL Size: 0 B 
(tablespaces: tab1:/u02/pgdata/PG1/mytab)
pg96 20170207T061338 - Tue Feb  7 06:19:38 2017 - Size: 29.0 MiB - WAL Size: 0 B
pg96 20170207T060633 - Tue Feb  7 06:12:33 2017 - Size: 29.0 MiB - WAL Size: 0 B

 

We can test a restore, for example locally on the barman server:

postgres@pg2:/u02/pgdata/ [pg96] barman recover pg96 20170214T103226 /u99/restore_test/
Starting local restore for server pg96 using backup 20170214T103226
Destination directory: /u99/restore_test/
                    24648, tab1, /u02/pgdata/PG1/mytab
Copying the base backup.
Copying required WAL segments.
Generating archive status files
Identify dangerous settings in destination directory.
 
IMPORTANT
These settings have been modified to prevent data losses
 
postgresql.conf line 71: archive_command = false
postgresql.auto.conf line 4: archive_command = false

Your PostgreSQL server has been successfully prepared for recovery; the /u99/restore_test directory now contains:

postgres@pg2:/u99/restore_test/ [pg96] ll

total 64
-rw-------  1 postgres postgres  208 Feb 14 10:32 backup_label
-rw-------  1 postgres postgres  207 Feb 14 10:32 backup_label.old
drwx------ 10 postgres postgres   98 Feb 14 10:32 base
drwx------  2 postgres postgres 4096 Feb 14 10:32 global
drwx------  2 postgres postgres    6 Feb 14 10:32 mytab
drwx------  2 postgres postgres   17 Feb 14 10:32 pg_clog
drwx------  2 postgres postgres    6 Feb 14 10:32 pg_commit_ts
drwx------  2 postgres postgres    6 Feb 14 10:32 pg_dynshmem
-rw-------  1 postgres postgres 4416 Feb 14 10:32 pg_hba.conf
-rw-------  1 postgres postgres 4211 Feb 14 10:32 pg_hba.conf_conf
-rw-------  1 postgres postgres 1636 Feb 14 10:32 pg_ident.conf
drwx------  4 postgres postgres   65 Feb 14 10:32 pg_logical
drwx------  4 postgres postgres   34 Feb 14 10:32 pg_multixact
drwx------  2 postgres postgres   17 Feb 14 10:32 pg_notify
drwx------  2 postgres postgres    6 Feb 14 10:32 pg_replslot
drwx------  2 postgres postgres    6 Feb 14 10:32 pg_serial
drwx------  2 postgres postgres    6 Feb 14 10:32 pg_snapshots
drwx------  2 postgres postgres    6 Feb 14 10:32 pg_stat
drwx------  2 postgres postgres    6 Feb 14 10:32 pg_stat_tmp
drwx------  2 postgres postgres   17 Feb 14 10:32 pg_subtrans
drwx------  2 postgres postgres   18 Feb 14 10:32 pg_tblspc
drwx------  2 postgres postgres    6 Feb 14 10:32 pg_twophase
-rw-------  1 postgres postgres    4 Feb 14 10:32 PG_VERSION
drwx------  3 postgres postgres   81 Feb 14 10:39 pg_xlog
-rw-------  1 postgres postgres  391 Feb 14 10:39 postgresql.auto.conf
-rw-------  1 postgres postgres  358 Feb 14 10:32 postgresql.auto.conf.origin
-rw-------  1 postgres postgres 7144 Feb 14 10:39 postgresql.conf
-rw-------  1 postgres postgres 7111 Feb 14 10:32 postgresql.conf.origin
-rw-------  1 postgres postgres   56 Feb 14 10:32 recovery.done

If you need to restore your backup on the original pg1 database server, you have to use the --remote-ssh-command option as follows (you specify the host where you want to restore, and the PGDATA directory):

postgres@pg2:/home/postgres/.ssh/ [pg96] barman recover --remote-ssh-command "ssh postgres@pg1" pg96 20170214T103226 /u02/pgdata/PG1
Starting remote restore for server pg96 using backup 20170214T103226
Destination directory: /u02/pgdata/PG1
       24648, tab1, /u02/pgdata/PG1/mytab
Copying the base backup.
Copying required WAL segments.
Generating archive status files
Identify dangerous settings in destination directory.
 
IMPORTANT
These settings have been modified to prevent data losses
 
postgresql.conf line 71: archive_command = false
postgresql.auto.conf line 4: archive_command = false
 
Your PostgreSQL server has been successfully prepared for recovery!

You can also perform a point-in-time recovery (PITR).

In my PG1 database I create an employes table and insert some data:

(postgres@[local]:5432) [blubb] > create table employes (name varchar(10));
CREATE TABLE
(postgres@[local]:5432) [blubb] > insert into employes values ('fiona');
INSERT 0 1
(postgres@[local]:5432) [blubb] > insert into employes values ('cathy');
INSERT 0 1
(postgres@[local]:5432) [blubb] > insert into employes values ('helene');
INSERT 0 1
(postgres@[local]:5432) [blubb] > select * from employes;
  name  
--------
 fiona
 cathy
 helene

A few minutes later I insert some more records in the employes table:

(postgres@[local]:5432) [blubb] > insert into employes values ('larry');
INSERT 0 1
(postgres@[local]:5432) [blubb] > insert into employes values ('bill');
INSERT 0 1
(postgres@[local]:5432) [blubb] > insert into employes values ('steve');
INSERT 0 1
(postgres@[local]:5432) [blubb] > select * from employes;
  name  
--------
 fiona
 cathy
 helene
 larry
 bill
 steve

The first rows were created at 15:15; let's see if the barman PITR restore works correctly.

I stop the PG1 database :

postgres@pg1:/u02/pgdata/ [PG1] pgstop
waiting for server to shut down....... done
server stopped

I delete the PGDATA directory:

postgres@pg1:/u02/pgdata/ [PG1] rm -rf PG1

And from the barman server I run the PITR recovery command using the --target-time argument:

postgres@pg2:/home/postgres/ [pg96] barman recover --remote-ssh-command "ssh postgres@pg1" pg96 --target-time "2017-02-14 15:15:48" 20170214T141055 /u02/pgdata/PG1
Starting remote restore for server pg96 using backup 20170214T141055
Destination directory: /u02/pgdata/PG1
Doing PITR. Recovery target time: '2017-02-14 15:15:48'
       24648, tab1, /u02/pgdata/PG1/mytab
Copying the base backup.
Copying required WAL segments.
Generating recovery.conf
Identify dangerous settings in destination directory.
 
IMPORTANT
These settings have been modified to prevent data losses
 
postgresql.conf line 72: archive_command = false
postgresql.auto.conf line 4: archive_command = false
 
Your PostgreSQL server has been successfully prepared for recovery!

I restart my PG1 database: the data are correctly restored, to the state just before Larry, Bill and Steve were inserted into the employes table:

postgres@[local]:5432) [blubb] > select * from employes;
  name  
--------
 fiona
 cathy
 helene
(3 rows)

The article Postgres Barman and DMK first appeared on Blog dbi services.

Oracle 12.2 and Transparent Data Encryption

Yann Neuhaus - Tue, 2017-03-14 10:20

Since the new Oracle 12.2.0 version has been released, I decided to test Transparent Data Encryption (TDE), as new features are available. The following tests were run in a multitenant environment: a container database DB1 with two pluggable databases, DB1PDB1 and DB1PDB2.

The first step consists in creating a software keystore, a container that stores the Transparent Data Encryption master key. We define its location in the sqlnet.ora file with the ENCRYPTION_WALLET_LOCATION parameter:

ENCRYPTION_WALLET_LOCATION=
 (SOURCE=
  (METHOD=FILE)
   (METHOD_DATA=
    (DIRECTORY=/u00/app/oracle/local/wallet)))

We can verify in the view:

SQL> select * from v$encryption_wallet;

WRL_TYPE  WRL_PARAMETER                  STATUS	WALLET_TYPE	WALLET_OR   FULLY_BAC      CON_ID

FILE    /u00/app/oracle/local/wallet/     NOT_AVAILABLE		UNKNOWN      SINGLE       UNDEFINED

Then we create the software keystore using sqlplus. We must be connected with a user with the ADMINISTER KEY MANAGEMENT or SYSKM privilege:

SQL> connect c##sec_admin as syskm
Enter password: 
Connected.

SQL> administer key management create keystore '/u00/app/oracle/local/wallet' identified by manager; 

keystore altered.

Once the keystore is created, the ewallet.p12 file is generated in the keystore location:

oracle@localhost:/u00/app/oracle/local/wallet/ [db1] ls
afiedt.buf  ewallet.p12

Then, depending on the type of keystore we created, we must open it manually. We can check the v$encryption_wallet view to see whether the keystore is open.

If it is not, run the following command:

oracle@localhost:/u00/app/oracle/local/wallet/ [db1] sqlplus c##sec_admin as syskm

SQL*Plus: Release 12.2.0.1.0 Production on Mon Mar 13 11:59:47 2017

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Enter password: 

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> administer key management set keystore open identified by manager container = ALL;

keystore altered.

If we query the view:

SQL> select * from v$encryption_wallet;

WRL_TYPE.    WRL_PARAMETER                STATUS             WALLET_TYPE  WALLET_OR   FULLY_BAC   CON_ID

FILE     /u00/app/oracle/local/wallet/  OPEN_NO_MASTER_KEY    PASSWORD 	    SINGLE    UNDEFINED

Now we must set the software TDE master encryption key. Once the keystore is open, and as we are in a multitenant environment, we specify CONTAINER=ALL in order to set the key in all the PDBs:

SQL> administer key management set keystore close identified by manager;

keystore altered.

SQL> administer key management set keystore open identified by manager  container =all;

keystore altered.

SQL> administer key management set key identified by manager with backup using 'kex_backup' container =ALL;

keystore altered.

Now the v$encryption_wallet view is up to date:

SQL> select * from v$encryption_wallet;

WRL_TYPE   WRL_PARAMETER.               STATUS  WALLET_TYPE	    WALLET_OR FULLY_BAC   CON_ID

FILE.   /u00/app/oracle/local/wallet/.   OPEN	 PASSWORD 	    SINGLE      NO          1

When you start up your CDB and your PDBs, the steps must be done in the right order.

You shut down and start up the database:

oracle@localhost:/u00/app/oracle/admin/db1/ [db1] sq

SQL*Plus: Release 12.2.0.1.0 Production on Tue Mar 14 13:53:09 2017

Copyright (c) 1982, 2016, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 3405774848 bytes
Fixed Size		    8798456 bytes
Variable Size		  805310216 bytes
Database Buffers	 2583691264 bytes
Redo Buffers		    7974912 bytes
Database mounted.
Database opened.

You open the wallet:

SQL> administer key management set keystore open identified by manager container = all;

keystore altered.

The pluggable databases are not yet opened:

SQL> connect sys/manager@db1pdb1
ERROR:
ORA-01033: ORACLE initialization or shutdown in progress
Process ID: 0
Session ID: 0 Serial number: 0


Warning: You are no longer connected to ORACLE.

You start the pluggable databases:

SQL> connect / as sysdba
Connected.
SQL> alter pluggable database all open;

Pluggable database altered.

The wallet is closed on the pluggable databases:

SQL> connect sys/manager@db1pdb1 as sysdba
Connected.
SQL> select status from v$encryption_wallet;

STATUS
------------------------------
CLOSED

You first have to close the wallet, then open it again:

SQL> connect / as sysdba
Connected.
SQL> administer key management set keystore open identified by manager container = all;
administer key management set keystore open identified by manager container = all
*
ERROR at line 1:
ORA-28354: Encryption wallet, auto login wallet, or HSM is already open


SQL> administer key management set keystore close identified by manager;

keystore altered.

SQL> administer key management set keystore open identified by manager container = all;

keystore altered.

The wallet is opened on every pluggable database:

SQL> connect sys/manager@db1pdb1 as sysdba
Connected.
SQL> select status from v$encryption_wallet;

STATUS
------------------------------
OPEN

SQL> connect sys/manager@db1pdb2 as sysdba
Connected.
SQL> select status from v$encryption_wallet;

STATUS
------------------------------
OPEN

Once the software keystore is set, you can encrypt your data: individual table columns, entire tablespaces, or even the whole database.

For table columns, many data types can be encrypted. Oracle recommends not using column-level TDE with transportable tablespaces or on columns used in foreign key constraints. The default TDE algorithm is AES192.

Let's create a classic emp-style table and insert some values:

SQL> create table emp1 (name varchar2(30), salary number(7) encrypt);

Table created.


SQL> insert into emp1 values ('Larry', 1000000);

1 row created.

SQL> select * from emp1;

NAME				   SALARY
------------------------------ ----------
Larry				  1000000

If we now close the keystore, the encrypted data are no longer accessible:

SQL> administer key management set keystore close identified by manager container = all;

keystore altered.

SQL> connect psi/psi@db1pdb1
Connected.
SQL> select * from emp1;
select * from emp1
*
ERROR at line 1:
ORA-28365: wallet is not open


SQL> select name from emp1;

NAME
------------------------------
Larry

SQL> select name, salary from emp1;
select name, salary from emp1
                         *
ERROR at line 1:
ORA-28365: wallet is not open

We can also use non-default algorithms such as 3DES168, AES128, or AES256, for example:

SQL> create table emp2 (
  2  name varchar2(30),
  3  salary number(7) encrypt using 'AES256');

Table created.

If your table has a large number of rows with encrypted columns, you can use the NOMAC parameter to skip the TDE integrity check and save some disk space:

SQL> create table emp3 (
  2  name varchar2(30),
  3  salary number (7) encrypt 'NOMAC');

Table created.

For existing tables, you can add encrypted columns with an ALTER TABLE ... ADD statement, or encrypt an existing column with ALTER TABLE ... MODIFY:

SQL> create table emp4 (name varchar2(30));

Table created.

SQL> alter table emp4 add (salary number (7) encrypt);

Table altered.

SQL> create table emp5 (
  2  name varchar2(30),
  3  salary number(7));

Table created.

SQL> alter table emp5 modify (salary encrypt);

Table altered.

Finally, you can turn off the encryption for a column:

SQL> alter table emp5 modify (salary decrypt);

Table altered.

One of the main new features in 12.2 is tablespace encryption. You can now encrypt new and existing tablespaces, and you can also encrypt the whole database, including the SYSTEM, SYSAUX, TEMP and UNDO tablespaces, in online mode.

In previous Oracle versions, you could only encrypt a tablespace while it was offline or with the database in mount state; in 12.2 the encryption can be done online.

TEMP tablespace encryption works as in previous releases: you cannot convert an existing TEMP tablespace, but you can create a new encrypted temporary tablespace and make it the default temporary tablespace.

You can encrypt the UNDO tablespace, but Oracle recommends not to decrypt the tablespace once it has been encrypted.

First, the compatible parameter must be set to at least 11.2.0 for encrypting tablespaces, and to 12.2.0 for encrypting the SYSTEM, SYSAUX or UNDO tablespaces.
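As a reminder, compatible is a static parameter: it can only be changed in the spfile and requires an instance restart. A minimal sketch, assuming an spfile is in use:

```sql
-- compatible cannot be changed in memory: write it to the spfile, then restart
ALTER SYSTEM SET compatible = '12.2.0' SCOPE=SPFILE;
SHUTDOWN IMMEDIATE
STARTUP
```

Note that raising compatible is a one-way operation; lowering it again afterwards is not supported.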

SQL> create tablespace PSI_ENCRYPT
  2  datafile '/u01/oradata/db1/db1pdb1/psi_encrypt.dbf' size 10M
  3  encryption using 'AES128' encrypt;

Tablespace created.

Existing tablespaces can be converted online:

SQL> select file_name from dba_data_files where tablespace_name = 'PSI';

FILE_NAME
--------------------------------------------------------------------------------
/u01/oradata/db1/db1pdb1/psi.dbf

The compatible parameter is set to 12.2.0:

SQL> show parameter compatible

NAME				     TYPE	 VALUE
------------------------------------ ----------- ------------------------------
compatible			     string	 12.2.0

Now you can encrypt the data file with the following command; make sure enough free disk space is available:

SQL> alter tablespace PSI ENCRYPTION online using 'AES256' ENCRYPT FILE_NAME_CONVERT = ('psi.dbf', 'psi_encrypt.dbf');

Tablespace altered.
SQL> select file_name from dba_data_files where tablespace_name = 'PSI';

FILE_NAME
--------------------------------------------------------------------------------
/u01/oradata/db1/db1pdb1/psi_encrypt.dbf

You can also decrypt a tablespace online:

SQL> alter tablespace PSI ENCRYPTION ONLINE DECRYPT FILE_NAME_CONVERT = ('psi_encrypt.dbf', 'psi.dbf');

Tablespace altered.

SQL> select file_name from dba_data_files where tablespace_name = 'PSI';

FILE_NAME
--------------------------------------------------------------------------------
/u01/oradata/db1/db1pdb1/psi.dbf

Our PSI tablespace is no longer encrypted. Let's create a non-encrypted table, insert some values, encrypt the tablespace, then close the wallet and see what happens:

SQL> select file_name from dba_data_files where tablespace_name = 'PSI';

FILE_NAME
--------------------------------------------------------------------------------
/u01/oradata/db1/db1pdb1/psi.dbf

SQL> connect psi/psi@db1pdb1
Connected.
SQL> create table emp (name varchar2(30), salary number(7));

Table created.

SQL> insert into emp values ('Larry', 1000000);

1 row created.

SQL> commit;

Commit complete.

SQL> select * from emp;

NAME				   SALARY
------------------------------ ----------
Larry				  1000000

SQL> select tablespace_name from user_tables where table_name = 'EMP';

TABLESPACE_NAME
------------------------------
PSI

SQL> alter tablespace PSI ENCRYPTION online using 'AES256' ENCRYPT FILE_NAME_CONVERT = ('psi.dbf', 'psi_encrypt.dbf');

Tablespace altered.

SQL> select * from emp;

NAME				   SALARY
------------------------------ ----------
Larry				  1000000

oracle@localhost:/u01/oradata/db1/ [db1] sqlplus / as sysdba

SQL*Plus: Release 12.2.0.1.0 Production on Mon Mar 13 16:11:18 2017

Copyright (c) 1982, 2016, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> administer key management set keystore close identified by manager container =all;

keystore altered.

SQL> connect psi/psi@db1pdb1
Connected.
SQL> select * from emp;
select * from emp
              *
ERROR at line 1:
ORA-28365: wallet is not open

It works as expected: non-encrypted tables in a tablespace become encrypted when the tablespace is encrypted.

When the tablespace is encrypted, the strings command gives no result:

oracle@localhost:/u01/oradata/db1/db1pdb1/ [db1] strings psi_encrypt.dbf | grep -i Larry
oracle@localhost:/u01/oradata/db1/db1pdb1/ [db1]

When we open the wallet and decrypt the tablespace, we can find information in the datafile:

oracle@localhost:/u01/oradata/db1/db1pdb1/ [db1] strings psi.dbf | grep Larry
Larry

In the 12.2 Oracle version you can now convert the entire database online, i.e. the SYSTEM, SYSAUX, TEMP and UNDO tablespaces. The commands are the same as for a data tablespace, with the same precautions (enough free space, compatible set to 12.2.0); the only small difference is that you cannot specify an encryption key:

For example let’s encrypt the SYSTEM tablespace:

SQL> alter tablespace SYSTEM ENCRYPTION ONLINE ENCRYPT FILE_NAME_CONVERT = ('system01.dbf','system01_encrypt.dbf');

Tablespace altered.

SQL> select file_name from dba_data_files where tablespace_name = 'SYSTEM';

FILE_NAME
--------------------------------------------------------------------------------
/u01/oradata/db1/db1pdb1/system01_encrypt.dbf

For the temporary tablespace, we have to drop the existing one and create a new, encrypted tablespace as follows:

SQL> create temporary tablespace TEMP_ENCRYPT
  2  tempfile '/u01/oradata/db1/db1pdb1/temp_encrypt.dbf' size 100M
  3  ENCRYPTION ENCRYPT;

Tablespace created.

SQL> alter database default temporary tablespace TEMP_ENCRYPT;

Database altered.

SQL> drop tablespace TEMP;

Tablespace dropped.

For the undo tablespace:

SQL> alter tablespace UNDOTBS1 ENCRYPTION ONLINE ENCRYPT FILE_NAME_CONVERT = ('undotbs01.dbf','undotbs01_encrypt.dbf');

Tablespace altered.

SQL> connect sys/manager@db1pdb1 as sysdba
Connected.
SQL> administer key management set keystore close identified by manager;
administer key management set keystore close identified by manager
*
ERROR at line 1:
ORA-28439: cannot close wallet when SYSTEM, SYSAUX, UNDO, or TEMP tablespaces
are encrypted

On the pluggable db1pdb2, as the tablespaces are not encrypted, the wallet can be closed:

SQL> connect sys/manager@db1pdb2 as sysdba
Connected.
SQL> administer key management set keystore close identified by manager;

keystore altered.

I also wanted to test the expdp and impdp behaviour between pluggable databases. As we are in a multitenant environment, we have to ensure the wallet is open in the PDBs.

To export a table with encrypted columns, you have to add the ENCRYPTION parameter and, for security reasons, the ENCRYPTION_PWD_PROMPT parameter:

oracle@localhost:/home/oracle/ [DB1PDB1] expdp system@db1pdb1 tables=psi.emp directory=DATA_PUMP_DIR dumpfile=emp.dmp ENCRYPTION=ENCRYPTED_COLUMNS_ONLY ENCRYPTION_PWD_PROMPT=YES

Export: Release 12.2.0.1.0 - Production on Tue Mar 14 11:53:52 2017

Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.
Password: 

Connected to: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

Encryption Password: 
Starting "SYSTEM"."SYS_EXPORT_TABLE_01":  system/********@db1pdb1 tables=psi.emp directory=DATA_PUMP_DIR dumpfile=emp.dmp ENCRYPTION=ENCRYPTED_COLUMNS_ONLY ENCRYPTION_PWD_PROMPT=YES 
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "PSI"."EMP"                                 5.523 KB       1 rows
Master table "SYSTEM"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TABLE_01 is:
  /u00/app/oracle/admin/db1/dpdump/4A3D428970DA5D68E055000000000001/emp.dmp
Job "SYSTEM"."SYS_EXPORT_TABLE_01" successfully completed at Tue Mar 14 11:54:16 2017 elapsed 0 00:00:21

In the same way, if we want to import the emp table into the second pluggable database, the wallet must be open, otherwise it will not work:

oracle@localhost:/home/oracle/ [DB1PDB1] impdp system@db1pdb2 tables=psi.emp directory=DATA_PUMP_DIR dumpfile=emp.dmp ENCRYPTION_PWD_PROMPT=YES

Import: Release 12.2.0.1.0 - Production on Tue Mar 14 12:15:24 2017

Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.
Password: 

Connected to: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

Encryption Password: 
ORA-39002: invalid operation
ORA-39180: unable to encrypt ENCRYPTION_PASSWORD
ORA-28365: wallet is not open

You open the wallet:

SQL> administer key management set keystore open identified by manager;

keystore altered.

The impdp command runs fine:

oracle@localhost:/home/oracle/ [DB1PDB1] impdp system@db1pdb2 tables=psi.emp directory=DATA_PUMP_DIR dumpfile=emp.dmp ENCRYPTION_PWD_PROMPT=YES

Import: Release 12.2.0.1.0 - Production on Tue Mar 14 12:21:47 2017

Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.
Password: 

Connected to: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

Encryption Password: 
ORA-39175: Encryption password is not needed.
Master table "SYSTEM"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_TABLE_01":  system/********@db1pdb2 tables=psi.emp directory=DATA_PUMP_DIR dumpfile=emp.dmp ENCRYPTION_PWD_PROMPT=YES 
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Job "SYSTEM"."SYS_IMPORT_TABLE_01" completed with 1 error(s) at Tue Mar 14 12:21:55 2017 elapsed 0 00:00:05

But the generated dumpfile is not encrypted and you can find sensitive data in this file:

oracle@localhost:/u00/app/oracle/admin/db1/dpdump/ [db1] strings emp.dmp | grep -i Larry
Larry

Oracle offers a solution to encrypt the dump file itself: set the ENCRYPTION_MODE parameter to TRANSPARENT or DUAL in the expdp command. With TRANSPARENT, no password is needed; the dump file is encrypted transparently, but the keystore must be present and open on the target database. With DUAL, a password is required and the dump file is encrypted using the TDE master encryption key.

oracle@localhost:/home/oracle/ [db1] expdp system@db1pdb1 tables=psi.emp directory=DATA_PUMP_DIR ENCRYPTION=ALL ENCRYPTION_PWD_PROMPT=YES ENCRYPTION_ALGORITHM=AES256 ENCRYPTION_MODE=DUAL dumpfile=emp_encrypt.dmp

Export: Release 12.2.0.1.0 - Production on Tue Mar 14 12:44:18 2017

Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.
Password: 

Connected to: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

Encryption Password: 
Starting "SYSTEM"."SYS_EXPORT_TABLE_01":  system/********@db1pdb1 tables=psi.emp directory=DATA_PUMP_DIR ENCRYPTION=ALL ENCRYPTION_PWD_PROMPT=YES ENCRYPTION_ALGORITHM=AES256 ENCRYPTION_MODE=DUAL dumpfile=emp_encrypt.dmp 
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "PSI"."EMP"                                 5.531 KB       1 rows
Master table "SYSTEM"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TABLE_01 is:
  /u00/app/oracle/admin/db1/dpdump/4A3D428970DA5D68E055000000000001/emp_encrypt.dmp
Job "SYSTEM"."SYS_EXPORT_TABLE_01" successfully

And now we cannot retrieve sensitive data from the dump file:

oracle@localhost:/u00/app/oracle/admin/db1/dpdump/ [db1] strings emp_encrypt.dmp | grep -i Larry
oracle@localhost:/u00/app/oracle/admin/db1/dpdump/ [db1]
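For comparison, a TRANSPARENT-mode export (the other ENCRYPTION_MODE value mentioned above) would look like the following sketch. No encryption password is prompted; the open keystore alone protects the dump file. The dump file name here is illustrative, not taken from the test above:

```
expdp system@db1pdb1 tables=psi.emp directory=DATA_PUMP_DIR \
  ENCRYPTION=ALL ENCRYPTION_MODE=TRANSPARENT dumpfile=emp_transparent.dmp
```

A dump file produced this way can only be imported on a database whose keystore holds the same master key, which makes TRANSPARENT convenient for same-environment restores and DUAL more suitable when the file must travel.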

 

Conclusion:

Regarding Transparent Data Encryption in the latest 12.2.0.1 Oracle version, the feature I will mainly retain is the SYSTEM, SYSAUX, UNDO and TEMP encryption, which gives more security for sensitive data. But be careful: even though this functionality is documented, Oracle also writes:

“Do not attempt to encrypt database internal objects such as SYSTEM, SYSAUX, UNDO or TEMP tablespaces using TDE tablespace encryption. You should focus TDE tablespace encryption on tablespaces that hold application data, not on these core components of the Oracle database.”

The article Oracle 12.2 and Transparent Data Encryption first appeared on Blog dbi services.

12cR1 RMAN Restore/Duplicate from ASM to Non ASM takes a longer time waiting for the ASMB process

Syed Jaffar - Tue, 2017-03-14 08:51
Yet another exciting journey with Oracle bugs and challenges. Here is the story for you.

One of our recent successful migrations moved a single-instance Oracle EBS 12cR1 database to an Oracle SuperCluster M7 as a two-instance RAC database on the same DB version (12.1.0.2). Subsequently, the customer wanted to run through EBS cloning and set up an Oracle Active Data Guard configuration.

The target systems are not SuperCluster. The requirement was to clone and set up the Data Guard configuration as a single-instance database on a filesystem (non-ASM). After initiating the cloning procedure using RMAN's DUPLICATE TARGET DATABASE method, we noticed that RMAN was taking significant time to restore (ship) the data files to the remote server. Also, the following warning messages appeared in the alert.log:



ASMB started with pid=63, OS id=18085
WARNING: failed to start ASMB (connection failed) state=0x1 sid=''
WARNING: ASMB exiting with error
Starting background process ASMB
Sat Mar 11 13:53:24 2017
ASMB started with pid=63, OS id=18087
WARNING: failed to start ASMB (connection failed) state=0x1 sid=''
WARNING: ASMB exiting with error
Starting background process ASMB
Sat Mar 11 13:53:27 2017
ASMB started with pid=63, OS id=18089
WARNING: failed to start ASMB (connection failed) state=0x1 sid=''
WARNING: ASMB exiting with error
Starting background process ASMB
 

The situation raised a couple of concerns:
  1. Why is the RMAN restore so slow, although there is no network latency and the data files are not particularly large?
  2. Why is Oracle looking for an ASM instance in a non-clustered home (not even a standard Grid home)?
After some initial investigation, we came across the following MOS documents:
  • '12c RMAN Operations from ASM To Non-ASM Slow (Doc ID 2081537.1)'. 
  • WARNING: failed to start ASMB after RAC Database on ASM converted to Single Instance Non-ASM Database (Doc ID 2138520.1)
According to these MOS documents, this is expected behavior due to unpublished bug 19503821: RMAN CATALOG EXTREMELY SLOW WHEN MIGRATING DATABASE FROM ASM TO FILE SYSTEM.

You need to apply patch 19503821 to overcome the bug.


If you have a similar requirement, make sure you apply the patch in your environment before you proceed with the restore/duplicate procedure.
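Once the patch is installed, its presence in the Oracle home can be double-checked with OPatch, for example (a sketch; the exact inventory output varies by OPatch version):

```
$ORACLE_HOME/OPatch/opatch lsinventory | grep 19503821
```

If the patch number is not listed, re-run the installation before attempting the duplicate again.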

-- Excerpt from the above notes:

APPLIES TO:
Oracle Database - Enterprise Edition - Version 12.1.0.1 to 12.1.0.2 [Release 12.1]
 Information in this document applies to any platform.
 
SYMPTOMS:

1*. RAC Database with ASM has been converted or restored to Standalone Single Instance Non-ASM Database.
2*. From the RDBMS alert.log, it is showing continuous following messages.

3*.RMAN Restore/Duplicate from ASM to Non ASM in 12.1 take a longer time waiting for the ASMB process.
4*.Any RMAN command at the mount state which involves Non ASM location can take more time.

 SOLUTION:


Apply the patch 19503821, if not available for your version/OS then please log a SR with the support to get the patch for your version.

SQL Server 2016: Does Dynamic Data Masking work with Temporal Table?

Yann Neuhaus - Tue, 2017-03-14 06:14

At the last IT-Tagen 2016, I presented Dynamic Data Masking (DDM) and how it works.
To add a little fun, I applied DDM to a temporal table to see whether the history table also inherits the DDM rules.
In this blog, I explain the different steps to reproduce my last demo.

Step 1: Create the table and the temporal table in the database DDM_TEST
USE [DDM_TEST]
GO

CREATE TABLE [dbo].[Confidential](
[ID] [int] IDENTITY(1,1) NOT NULL PRIMARY KEY CLUSTERED,
[Name] [nvarchar](70) NULL,
[CreditCard] [nvarchar](16) NULL,
[Salary] [int] NULL,
[Email] [nvarchar](60) NULL,
[StartDate] datetime2 GENERATED ALWAYS AS ROW START NOT NULL,
[EndDate] datetime2 GENERATED ALWAYS AS ROW END NOT NULL,
PERIOD FOR SYSTEM_TIME (StartDate, EndDate)
) WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = [dbo].[ConfidentialHistory]))

The table has sensitive data such as the Salary and the CreditCard number.
As you can see, I add a history table, [dbo].[ConfidentialHistory].
I insert 6 rows into my table and select from both tables.

insert into [dbo].[Confidential]([Name],[CreditCard],[Salary],[Email]) values (N'Stephane',N'3546748598467584',113459,N'sts@dbi-services.com')
insert into [dbo].[Confidential]([Name],[CreditCard],[Salary],[Email]) values (N'David',N'3546746598450989',143576,'dab@dbi-services.com')
insert into [dbo].[Confidential]([Name],[CreditCard],[Salary],[Email])  values (N'Nathan',N'3890098321457893',118900,'nac@dbi-services.com')
insert into [dbo].[Confidential]([Name],[CreditCard],[Salary],[Email])  values (N'Olivier',N'3564890234785612',98000,'olt@dbi-services.com')
insert into [dbo].[Confidential]([Name],[CreditCard],[Salary],[Email])  values (N'Alain',N'9897436900989342',85900,'ala@dbi-services.com')
insert into [dbo].[Confidential]([Name],[CreditCard],[Salary],[Email])  values (N'Fabrice',N'908323468902134',102345,'fad@dbi-services.com')

select * from [dbo].[Confidential]
select * from [dbo].[ConfidentialHistory]

DDM_TemporalTable01
With just inserts, you have no entries in the history table.
After an update of Stephane's Salary, you can now see the old value in the history table.
To see both tables, I use the new SELECT option “FOR SYSTEM_TIME ALL”.
DDM_TemporalTable02
The context is in place. Now I will apply DDM.

Step 2: create the DDM rules

I apply masks on all columns of my table with different functions such as default, partial, and email.

Use DDM_TEST
ALTER Table Confidential
ALTER COLUMN NAME ADD MASKED WITH (FUNCTION='default()')
ALTER Table Confidential
ALTER COLUMN SALARY ADD MASKED WITH (FUNCTION='default()')
ALTER Table Confidential
ALTER COLUMN creditcard ADD MASKED WITH (FUNCTION='partial(1,"XXXX",2)')
ALTER Table Confidential
ALTER COLUMN email ADD MASKED WITH (FUNCTION='email()')

DDM_TemporalTable03
As you can see, when I read the table nothing is masked, because I am sysadmin of course!
Now, I begin the tests with a user who can only read the table.
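
The masking rules from Step 2 can be emulated outside the database to sanity-check expectations. Here is a rough Python sketch (my own emulation for illustration, not SQL Server's actual implementation) of the partial() and email() behaviors:

```python
def mask_partial(value, prefix=1, padding="XXXX", suffix=2):
    # Emulate partial(prefix, padding, suffix): keep the first `prefix`
    # and last `suffix` characters, replace everything in between.
    if len(value) <= prefix + suffix:
        return padding
    return value[:prefix] + padding + value[-suffix:]

def mask_email(value):
    # Emulate email(): expose the first letter, then a constant suffix.
    return value[:1] + "XXX@XXXX.com"

print(mask_partial("3546748598467584"))    # 3XXXX84
print(mask_email("sts@dbi-services.com"))  # sXXX@XXXX.com
```

Running the credit card number from the demo through this emulation gives the same shape of output you will see in the masked SELECTs below.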

Step 3: Test the case

The user that I create needs SELECT permission on both tables (the system-versioned table and the history table):

USE DDM_TEST;
CREATE USER TestDemo WITHOUT LOGIN
GRANT SELECT ON Confidential TO TestDemo
GRANT SELECT ON ConfidentialHistory TO TestDemo

I execute all SELECT queries as this user:
EXECUTE AS USER='TestDemo'
SELECT * FROM [dbo].[Confidential] 
REVERT
EXECUTE AS USER='TestDemo'
SELECT * FROM [dbo].[ConfidentialHistory]
REVERT
EXECUTE AS USER='TestDemo'
select * from [dbo].[Confidential]  FOR SYSTEM_TIME ALL
REVERT

DDM_TemporalTable04
As you can see, all three SELECTs mask the data for this user. Nice, isn't it?
In conclusion, Dynamic Data Masking works very well with temporal tables and can be used to mask all data, including historical data, from users.

 

The article SQL Server 2016: Does Dynamic Data Masking work with Temporal Table? appeared first on the dbi services blog.

Show greyscale icon as red

Jeff Kemp - Tue, 2017-03-14 03:03

I have an editable tabular form using Apex’s old greyscale edit link icons:

greyscale-icons

The users complained that they currently have to click each link to drill down to the detail records to find and fix any errors; they wanted the screen to indicate which detail records were already fine and which ones needed attention.

Since screen real-estate is limited here, I wanted to indicate the problems by showing a red edit link instead of the default greyscale one; as this application uses an old theme, I didn't feel like converting it to use Font Awesome (not yet, at least), nor did I want to create a whole new image and upload it. Instead, I tried a CSS trick to convert the greyscale image to a red shade.

I used this informative post to work out what I needed: https://css-tricks.com/color-filters-can-turn-your-gray-skies-blue/

WARNING: Unfortunately this trick does NOT work in IE (tested in IE11). Blast.

Firstly, I added a column to the underlying query that determines if the error needs to be indicated or not:

select ...,
       case when {error condition}
       then 'btnerr' end as year1_err
from mytable...

I set the new column type to Hidden Column.

The link column is rendered using a Link-type column, with Link Text set to:

<img src="#IMAGE_PREFIX#e2.gif" alt="">

I changed this to:

<img src="#IMAGE_PREFIX#e2.gif" alt="" class="#YEAR1_ERR#">

What this does is add the class "btnerr" to the img tag when there is an error for a particular record. Rows with no error will simply have class="", which does nothing.

Now, to make the greyscale image show up as red, I need to add an SVG filter to the HTML Header in the page:

<svg style="display:none"><defs>
  <filter id="redshader">
    <feColorMatrix type="matrix"
      values="0.7 0.7 0.7 0 0
              0.2 0.2 0.2 0 0
              0.2 0.2 0.2 0 0
              0   0   0   1 0"/>
  </filter>
</defs></svg>

I made up the values for the R G B lines with some trial and error. The filter is applied to the buttons with the btnerr class with this CSS in the Inline CSS property of the page:

img.btnerr {filter:url(#redshader);}
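
To see why these matrix values push a grey pixel toward red, here is a small Python sketch (mine, purely illustrative) of the feColorMatrix arithmetic on normalized RGBA channels:

```python
def apply_color_matrix(rgba, matrix):
    # Apply an SVG feColorMatrix (4 rows of 5 values) to an RGBA pixel
    # with channels in 0..1, clamping each output channel to 0..1.
    out = []
    for row in matrix:
        v = sum(c * m for c, m in zip(rgba, row[:4])) + row[4]
        out.append(min(1.0, max(0.0, v)))
    return out

redshader = [
    [0.7, 0.7, 0.7, 0, 0],
    [0.2, 0.2, 0.2, 0, 0],
    [0.2, 0.2, 0.2, 0, 0],
    [0,   0,   0,   1, 0],
]

# For a mid-grey pixel, R is boosted well above G and B:
print(apply_color_matrix([0.4, 0.4, 0.4, 1.0], redshader))
```

For grey g, the output is roughly (2.1g clamped, 0.6g, 0.6g), which is why the icon reads as red.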

The result is quite effective:

greyscale-colorize

But, as I noted earlier, this solution does not work in IE, so that’s a big fail.

NOTE: if this application was using the Universal Theme I would simply apply a simple font color style to the icon since it would be using a font instead of an image icon.


Filed under: APEX Tagged: APEX, CSS, tips-&-tricks

Webcast: "Build, Deploy and Manage Smartphone Apps for EBS"

Steven Chan - Tue, 2017-03-14 02:05

Build EBS smartphone appsOracle University has a wealth of free webcasts for Oracle E-Business Suite.  If you're looking for a primer on how to build your own mobile apps for EBS, see:

Vijay Shanmugam, Director Product Development, explains the technologies and approach used to build Oracle's smartphone applications for Oracle E-Business Suite. You will learn how to deploy and manage iOS and Android mobile applications from application stores, how to use enterprise deployment to distribute controlled versions of the mobile applications within your organization, and how to use a combination of Oracle E-Business Suite Mobile Foundation, Oracle E-Business Suite REST services, and Oracle Mobile Application Framework (MAF) to develop custom smartphone applications for Oracle E-Business Suite to meet your needs. This material was presented at Oracle OpenWorld 2016.

Categories: APPS Blogs


UTL FILE FRENAME (mv) doesn't work - ORA-29292

Tom Kyte - Mon, 2017-03-13 23:06
I have a shared sftp directory mounted \xx.xx.xx.xx\SFTP_AX\ /sftp/ cifs user,uid=54321,gid=54321,suid,username=user,password=pass,workgroup=application,file_mode=0775,dir_mode=0775,rw 0 0 Then a PL/SQL that moves my file to another mounted dir...
Categories: DBA Blogs

Google acquisitions samples with visualizations

Nilesh Jethwa - Mon, 2017-03-13 15:10

Google has acquired around 184 companies as of October 2015, with its largest acquisition being the purchase of Motorola Mobility, a mobile device manufacturing company, for $12.5 billion. Not all the acquisition figures are available, but aggregating all the publicly known amounts, Google has spent at least 28 billion USD on acquisitions.

With the recent re-structuring, Google became a subsidiary of Alphabet Inc., which now owns most of the parts.

Using the InfoCaptor dashboard software, we analyze the list of companies, products and services that Google has acquired since 2001.

Read more at http://www.infocaptor.com/dashboard/list-of-google-acquisitions-explained-with-visualizations

Firefox ESR 52 Certified with EBS 12.1 and 12.2

Steven Chan - Mon, 2017-03-13 12:04

Firefox ESR logo

Mozilla Firefox 52 Extended Support Release (ESR) is certified as a Windows-based client browser for Oracle E-Business Suite 12.1 and 12.2.

What is Mozilla Firefox ESR?

Mozilla offers an Extended Support Release based on an official release of Firefox for organizations that are unable to mass-deploy new consumer-oriented versions of Firefox every six weeks.  From the Mozilla ESR FAQ:

What does the Mozilla Firefox ESR life cycle look like?

Releases will be maintained for approximately one year, with point releases containing security updates coinciding with regular Firefox releases. The ESR will also have a two cycle (12 week) overlap between the time of a new release and the end-of-life of the previous release to permit testing and certification prior to deploying a new version.

Maintenance of each ESR, through point releases, is limited to high-risk/high-impact security vulnerabilities and in rare cases may also include off-schedule releases that address live security vulnerabilities. Backports of any functional enhancements and/or stability fixes are not in scope.

At the end of the support period for an ESR version:

  • the release will reach its end-of-life
  • no further updates will be offered for that version
  • an update to the next version will be offered through the application update service

E-Business Suite to be certified with Firefox Extended Support Releases Only

New personal versions of Firefox on the Rapid Release channel are released roughly every six weeks.  It is impractical for us to certify these new personal Rapid Release versions of Firefox with the Oracle E-Business Suite because a given Firefox release is generally obsolete by the time we complete the certification.

From Firefox 10 and onwards, Oracle E-Business Suite is certified only with selected Firefox Extended Support Release versions. Oracle has no current plans to certify new Firefox personal releases on the Rapid Release channel with the E-Business Suite.

Plug-in Support removed in Firefox Rapid Release 52

Mozilla has removed plug-in support in Firefox Rapid Release 52.  This means that the Rapid Release version of Firefox cannot run Forms-based content in EBS. 

Firefox Extended Support Release continues to offer plug-in support.  End-users who need to use Forms-based content in EBS must run the Firefox Extended Support Release. 

Will EBS offer an alternative to plug-ins?

Yes. We are working on an update to EBS that allows Forms-based content to run in browsers that do not have plug-in support.  See:

When will the new Java Web Start alternative be released?

Oracle's Revenue Recognition rules prohibit us from discussing certification and release dates, but you're welcome to monitor or subscribe to this blog. I'll post updates here as soon as they're available.

EBS patching policy for Firefox compatibility issues

Mozilla stresses their goal of ensuring that Firefox personal versions will continue to offer the same level of application compatibility as Firefox Extended Support Releases. 

Oracle E-Business Suite Development will issue new E-Business Suite patches or workarounds that can be reproduced with Firefox Extended Support Releases.  If you report compatibility issues with Firefox personal releases that cannot be reproduced with Firefox Extended Support Releases, your options are:

  1. Deploy a certified Firefox Extended Support Release version instead of the Firefox personal version
  2. Report the incompatibility between Firefox ESR and Firefox personal to Mozilla
  3. Use Internet Explorer (on Windows) or Safari (on Mac OS X) until Mozilla resolves the issue

EBS Compatibility with Firefox ESR security updates

Mozilla may release new updates to Firefox ESR versions to address high-risk/high-impact security issues.  These updates are considered to be certified with the E-Business Suite on the day that they're released.  You do not need to wait for a certification from Oracle before deploying these new Firefox ESR security updates.

Certified desktop operating systems
  • Windows 10 (32-bit and 64-bit)
  • Windows 8.1 (32-bit and 64-bit)
  • Windows 7 SP1 (32-bit and 64-bit)

Categories: APPS Blogs

1.5 million PageViews

Hemant K Chitale - Mon, 2017-03-13 10:01
My Oracle blog has now had 1.5 million PageViews.


The 1 million PageViews mark was hit in March 2015.
.
.
.

Categories: DBA Blogs

12cR1 RAC Posts -- 8c : Ignorable "Errors" during the DUPLICATE

Hemant K Chitale - Mon, 2017-03-13 09:56
In yesterday's post, I had shown a DUPLICATE DATABASE from RAC-ASM to SingleInstance-FileSystem.

During the course of the DUPLICATE DATABASE run, the Standby alert log seemingly reported errors. I chose to ignore the "errors" because I knew the DUPLICATE was running successfully.

For your reference, here are some of the good messages:

Sun Mar 12 23:17:55 2017
ALTER SYSTEM SET control_files='/u01/app/oracle/oradata/STBY/controlfile/o1_mf_ddbs0p3w_.ctl','/u01/app/oracle/fast_recovery_area/STBY/controlfile/o1_mf_ddbs0p40_.ctl' COMMENT='Set by RMAN' SCOPE=SPFILE;
Sun Mar 12 23:17:59 2017
ALTER SYSTEM SET control_files='/u01/app/oracle/oradata/STBY/controlfile/o1_mf_ddbs0p3w_.ctl','/u01/app/oracle/fast_recovery_area/STBY/controlfile/o1_mf_ddbs0p40_.ctl' COMMENT='Set by RMAN' SCOPE=SPFILE;

Sun Mar 12 23:21:41 2017
Switch of datafile 1 complete to datafile copy
checkpoint is 3315193
Switch of datafile 3 complete to datafile copy
checkpoint is 3315211
Switch of datafile 4 complete to datafile copy
checkpoint is 3315270
Switch of datafile 5 complete to datafile copy
checkpoint is 1754712
Switch of datafile 6 complete to datafile copy
checkpoint is 3315280
Switch of datafile 7 complete to datafile copy
checkpoint is 1754712
Switch of datafile 8 complete to datafile copy
checkpoint is 3315276
Switch of datafile 9 complete to datafile copy
checkpoint is 2978862
Switch of datafile 10 complete to datafile copy
checkpoint is 2978862
Switch of datafile 11 complete to datafile copy
checkpoint is 2978862


And here are the errors (about the online redo logs) that were continuously reported. These are messages I ignored because the Standby had no online redo logs yet.

Sun Mar 12 23:18:22 2017
Errors in file /u01/app/oracle/diag/rdbms/stby/STBY/trace/STBY_lgwr_16416.trc:
ORA-00313: open failed for members of log group 1 of thread 1
ORA-00312: online log 1 thread 1: '/u01/app/oracle/fast_recovery_area/STBY/onlinelog/group_1.257.931825281'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
ORA-00312: online log 1 thread 1: '/u01/app/oracle/oradata/STBY/onlinelog/group_1.283.931825279'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
Sun Mar 12 23:18:22 2017
Errors in file /u01/app/oracle/diag/rdbms/stby/STBY/trace/STBY_lgwr_16416.trc:
ORA-00313: open failed for members of log group 1 of thread 1
ORA-00312: online log 1 thread 1: '/u01/app/oracle/fast_recovery_area/STBY/onlinelog/group_1.257.931825281'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
ORA-00312: online log 1 thread 1: '/u01/app/oracle/oradata/STBY/onlinelog/group_1.283.931825279'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
Sun Mar 12 23:18:22 2017
Errors in file /u01/app/oracle/diag/rdbms/stby/STBY/trace/STBY_lgwr_16416.trc:
ORA-00313: open failed for members of log group 2 of thread 1
ORA-00312: online log 2 thread 1: '/u01/app/oracle/fast_recovery_area/STBY/onlinelog/group_2.258.931825287'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
ORA-00312: online log 2 thread 1: '/u01/app/oracle/oradata/STBY/onlinelog/group_2.284.931825283'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
Sun Mar 12 23:18:22 2017
Errors in file /u01/app/oracle/diag/rdbms/stby/STBY/trace/STBY_lgwr_16416.trc:
ORA-00313: open failed for members of log group 2 of thread 1
ORA-00312: online log 2 thread 1: '/u01/app/oracle/fast_recovery_area/STBY/onlinelog/group_2.258.931825287'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
ORA-00312: online log 2 thread 1: '/u01/app/oracle/oradata/STBY/onlinelog/group_2.284.931825283'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
Sun Mar 12 23:18:22 2017
Errors in file /u01/app/oracle/diag/rdbms/stby/STBY/trace/STBY_lgwr_16416.trc:
ORA-00313: open failed for members of log group 3 of thread 2
ORA-00312: online log 3 thread 2: '/u01/app/oracle/fast_recovery_area/STBY/onlinelog/group_3.259.931826417'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
ORA-00312: online log 3 thread 2: '/u01/app/oracle/oradata/STBY/onlinelog/group_3.290.931826413'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
Sun Mar 12 23:18:22 2017
Errors in file /u01/app/oracle/diag/rdbms/stby/STBY/trace/STBY_lgwr_16416.trc:
ORA-00313: open failed for members of log group 3 of thread 2
ORA-00312: online log 3 thread 2: '/u01/app/oracle/fast_recovery_area/STBY/onlinelog/group_3.259.931826417'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
ORA-00312: online log 3 thread 2: '/u01/app/oracle/oradata/STBY/onlinelog/group_3.290.931826413'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
Sun Mar 12 23:18:22 2017
Errors in file /u01/app/oracle/diag/rdbms/stby/STBY/trace/STBY_lgwr_16416.trc:
ORA-00313: open failed for members of log group 4 of thread 2
ORA-00312: online log 4 thread 2: '/u01/app/oracle/fast_recovery_area/STBY/onlinelog/group_4.260.931826421'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
ORA-00312: online log 4 thread 2: '/u01/app/oracle/oradata/STBY/onlinelog/group_4.291.931826417'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
Sun Mar 12 23:18:22 2017
Errors in file /u01/app/oracle/diag/rdbms/stby/STBY/trace/STBY_lgwr_16416.trc:
ORA-00313: open failed for members of log group 4 of thread 2
ORA-00312: online log 4 thread 2: '/u01/app/oracle/fast_recovery_area/STBY/onlinelog/group_4.260.931826421'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
ORA-00312: online log 4 thread 2: '/u01/app/oracle/oradata/STBY/onlinelog/group_4.291.931826417'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
Sun Mar 12 23:18:22 2017
Errors in file /u01/app/oracle/diag/rdbms/stby/STBY/trace/STBY_lgwr_16416.trc:
ORA-00313: open failed for members of log group 5 of thread 0
ORA-00312: online log 5 thread 0: '/u01/app/oracle/fast_recovery_area/STBY/onlinelog/group_5.303.937936343'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
ORA-00312: online log 5 thread 0: '/u01/app/oracle/oradata/STBY/onlinelog/group_5.292.937936339'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
Sun Mar 12 23:18:22 2017
Errors in file /u01/app/oracle/diag/rdbms/stby/STBY/trace/STBY_lgwr_16416.trc:
ORA-00313: open failed for members of log group 5 of thread 0
ORA-00312: online log 5 thread 0: '/u01/app/oracle/fast_recovery_area/STBY/onlinelog/group_5.303.937936343'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
ORA-00312: online log 5 thread 0: '/u01/app/oracle/oradata/STBY/onlinelog/group_5.292.937936339'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
Sun Mar 12 23:18:22 2017
Errors in file /u01/app/oracle/diag/rdbms/stby/STBY/trace/STBY_lgwr_16416.trc:
ORA-00313: open failed for members of log group 6 of thread 0
ORA-00312: online log 6 thread 0: '/u01/app/oracle/fast_recovery_area/STBY/onlinelog/group_6.304.937936363'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
ORA-00312: online log 6 thread 0: '/u01/app/oracle/oradata/STBY/onlinelog/group_6.298.937936361'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
Sun Mar 12 23:18:22 2017
Errors in file /u01/app/oracle/diag/rdbms/stby/STBY/trace/STBY_lgwr_16416.trc:
ORA-00313: open failed for members of log group 6 of thread 0
ORA-00312: online log 6 thread 0: '/u01/app/oracle/fast_recovery_area/STBY/onlinelog/group_6.304.937936363'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
ORA-00312: online log 6 thread 0: '/u01/app/oracle/oradata/STBY/onlinelog/group_6.298.937936361'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
Sun Mar 12 23:18:22 2017
Errors in file /u01/app/oracle/diag/rdbms/stby/STBY/trace/STBY_lgwr_16416.trc:
ORA-00313: open failed for members of log group 7 of thread 0
ORA-00312: online log 7 thread 0: '/u01/app/oracle/fast_recovery_area/STBY/onlinelog/group_7.305.937936377'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
ORA-00312: online log 7 thread 0: '/u01/app/oracle/oradata/STBY/onlinelog/group_7.299.937936375'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
Sun Mar 12 23:18:22 2017
Errors in file /u01/app/oracle/diag/rdbms/stby/STBY/trace/STBY_lgwr_16416.trc:
ORA-00313: open failed for members of log group 7 of thread 0
ORA-00312: online log 7 thread 0: '/u01/app/oracle/fast_recovery_area/STBY/onlinelog/group_7.305.937936377'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
ORA-00312: online log 7 thread 0: '/u01/app/oracle/oradata/STBY/onlinelog/group_7.299.937936375'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
Sun Mar 12 23:18:22 2017
Errors in file /u01/app/oracle/diag/rdbms/stby/STBY/trace/STBY_lgwr_16416.trc:
ORA-00313: open failed for members of log group 8 of thread 0
ORA-00312: online log 8 thread 0: '/u01/app/oracle/fast_recovery_area/STBY/onlinelog/group_8.306.937936389'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
ORA-00312: online log 8 thread 0: '/u01/app/oracle/oradata/STBY/onlinelog/group_8.300.937936389'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
Sun Mar 12 23:18:22 2017
Errors in file /u01/app/oracle/diag/rdbms/stby/STBY/trace/STBY_lgwr_16416.trc:
ORA-00313: open failed for members of log group 8 of thread 0
ORA-00312: online log 8 thread 0: '/u01/app/oracle/fast_recovery_area/STBY/onlinelog/group_8.306.937936389'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
ORA-00312: online log 8 thread 0: '/u01/app/oracle/oradata/STBY/onlinelog/group_8.300.937936389'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
Sun Mar 12 23:18:22 2017
Errors in file /u01/app/oracle/diag/rdbms/stby/STBY/trace/STBY_lgwr_16416.trc:
ORA-00313: open failed for members of log group 9 of thread 0
ORA-00312: online log 9 thread 0: '/u01/app/oracle/fast_recovery_area/STBY/onlinelog/group_9.307.937936405'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
ORA-00312: online log 9 thread 0: '/u01/app/oracle/oradata/STBY/onlinelog/group_9.301.937936403'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
Sun Mar 12 23:18:22 2017
Errors in file /u01/app/oracle/diag/rdbms/stby/STBY/trace/STBY_lgwr_16416.trc:
ORA-00313: open failed for members of log group 9 of thread 0
ORA-00312: online log 9 thread 0: '/u01/app/oracle/fast_recovery_area/STBY/onlinelog/group_9.307.937936405'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
ORA-00312: online log 9 thread 0: '/u01/app/oracle/oradata/STBY/onlinelog/group_9.301.937936403'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3

alter database clear logfile group 1
Clearing online log 1 of thread 1 sequence number 41
Sun Mar 12 23:21:42 2017
Errors in file /u01/app/oracle/diag/rdbms/stby/STBY/trace/STBY_ora_16474.trc:
ORA-00313: open failed for members of log group 1 of thread 1
ORA-00312: online log 1 thread 1: '/u01/app/oracle/fast_recovery_area/STBY/onlinelog/group_1.257.931825281'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
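
If you want to separate these expected messages from genuine problems while scanning the alert log, a rough filter along these lines could help (my own illustrative sketch, not an Oracle tool; adjust the list of ignorable codes to your situation):

```python
# ORA codes that are expected while the standby has no online redo logs yet.
IGNORABLE = ("ORA-00312", "ORA-00313", "ORA-27037")

def split_alert_lines(lines):
    # Partition alert-log lines into (ignorable, suspicious): any ORA-
    # message whose code is not in the expected list gets flagged.
    ignorable, suspicious = [], []
    for line in lines:
        if line.startswith("ORA-") and not line.startswith(IGNORABLE):
            suspicious.append(line)
        else:
            ignorable.append(line)
    return ignorable, suspicious

sample = [
    "ORA-00313: open failed for members of log group 1 of thread 1",
    "Linux-x86_64 Error: 2: No such file or directory",
    "ORA-01578: ORACLE data block corrupted",
]
ign, susp = split_alert_lines(sample)
print(susp)  # only the unexpected ORA message remains flagged
```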



It is also interesting to see how CONTROL_FILES, DB_FILE_NAME_CONVERT and LOG_FILE_NAME_CONVERT are reported in the alert log:

  
control_files = "/u01/app/oracle/oradata/STBY/controlfile/o1_mf_ddbs0p3w_.ctl"
control_files = "/u01/app/oracle/fast_recovery_area/STBY/controlfile/o1_mf_ddbs0p40_.ctl"
db_file_name_convert = "+DATA/RAC"
db_file_name_convert = "/u01/app/oracle/oradata/STBY"
db_file_name_convert = "+FRA/RAC"
db_file_name_convert = "/u01/app/oracle/fast_recovery_area/STBY"
log_file_name_convert = "+DATA/RAC"
log_file_name_convert = "/u01/app/oracle/oradata/STBY"
log_file_name_convert = "+FRA/RAC"
log_file_name_convert = "/u01/app/oracle/fast_recovery_area/STBY"

Each component of the entry in the init parameter file is reported on a separate line in the alert log.
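
Conceptually, each convert parameter is an ordered list of from/to prefix pairs. A minimal sketch of the substitution (my own illustration; the datafile name below is hypothetical) looks like this:

```python
def convert_name(path, pairs):
    # Apply (from_prefix, to_prefix) pairs the way DB_FILE_NAME_CONVERT /
    # LOG_FILE_NAME_CONVERT do: the first matching prefix wins; an
    # unmatched path is left unchanged.
    for src, dst in pairs:
        if path.startswith(src):
            return dst + path[len(src):]
    return path

db_convert = [
    ("+DATA/RAC", "/u01/app/oracle/oradata/STBY"),
    ("+FRA/RAC", "/u01/app/oracle/fast_recovery_area/STBY"),
]

print(convert_name("+DATA/RAC/datafile/system.271.931824933", db_convert))
# -> /u01/app/oracle/oradata/STBY/datafile/system.271.931824933
```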
.
.
.

Categories: DBA Blogs

Replacing the Google Search Appliance (GSA) with ​Redstone’s Distributed Index and Search UI

WebCenter Team - Mon, 2017-03-13 08:55
Replacing the Google Search Appliance (GSA) with ​Redstone’s Distributed Index and Search UI
When: Tuesday, March 21st at 3:00 PM CT

WebCenter Content customers – now that Google has announced the discontinuation of their Google Search Appliance, are you looking for a proven, lower-cost alternative? 

Redstone has the solution for you!

During this live webcast, Redstone will provide an overview of our Distributed Index and Search UI solution.  We’ll demonstrate how your organization can seamlessly transition away from the GSA.  You’ll be able to provide your end users with a great search experience at a lower cost.

Additionally, you’ll hear from special guest Be The Match operated by the National Marrow Donor Program and their journey from the GSA to Distributed Index.

After the live demonstration, we’ll field questions from the audience.

Be The Match operated by the National Marrow Donor Program

  • Heather Helm, Product Owner/Business Sponsor
  • Andrew Chilson, Manager, IT Enterprise Application Systems
