
DBA Blogs

Packt - Time to learn Oracle and Linux

Surachart Opun - Sat, 2016-01-23 00:01
What is your resolution for learning? Learn Oracle, learn Linux, or both. It's good news for anyone interested in improving their Oracle and Linux skills: Packt is running a 50% discount promotion on eBooks & videos from today until 23rd Feb, 2016.

 Oracle (Code: XM6lxr0)
 Linux (Code: ILYTW)

Written By: Surachart Opun http://surachartopun.com
Categories: DBA Blogs

Oracle Partner Day Central Europe - 10th March 2016

JANUARY 2016 ...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Links for 2016-01-21 [del.icio.us]

Categories: DBA Blogs

Partner Webcast – Implementing Microservices Architecture Using Oracle Fusion Middleware

In IT circles, mobile, big data, and the Internet of Things (IoT) continue to draw lots of attention. But each of those trends had a beginning—a point at which what was a flat line on a trend graph...

Categories: DBA Blogs

SQL On The Edge #7 – Azure Data Factory

Pythian Group - Thu, 2016-01-21 08:46

As the amount of data generated around us continues to grow exponentially, organizations have to keep coming up with solutions suited to this new technological landscape. Data integration has been part of this challenge for many years now, and many tools have been developed specifically for these needs. Some tools are geared specifically toward moving data from point A to point B; others provide a full ETL (Extract-Transform-Load) solution that can work with many products using all kinds of different drivers.
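The extract-transform-load pattern these tools implement can be reduced to a toy sketch; all data and function names below are invented purely for illustration:

```python
# Toy illustration of the extract-transform-load (ETL) pattern.
# All data and names here are invented for demonstration.

def extract():
    # "Point A": pretend these rows came from a source system.
    return [
        {"name": "alice", "amount": "10.50"},
        {"name": "bob", "amount": "3.25"},
    ]

def transform(rows):
    # Normalize names and convert string amounts to numbers.
    return [{"name": r["name"].title(), "amount": float(r["amount"])}
            for r in rows]

def load(rows, target):
    # "Point B": append to an in-memory target instead of a real database.
    target.extend(rows)

warehouse = []
load(transform(extract()), warehouse)
print(warehouse[0])  # {'name': 'Alice', 'amount': 10.5}
```

Real ETL tools add scheduling, drivers, and error handling around exactly this extract/transform/load skeleton.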

For many years, the first party tool of choice for SQL Server professionals has been SSIS. Interestingly, even though it’s called SQL Server Integration Services, SSIS is really a general purpose ETL tool. If you want to extract data from Oracle, transform it with the full expressive capabilities of .NET and then upload it to a partner’s FTP as a flat file, you can do it in SSIS!

As we continue our journey into the cloud and hybrid environments, more tools will start coming up that will work as an ETL PaaS offering. You won’t have to manage the pipeline’s OS, hardware or underlying software, you’ll just create your data pipelines and be off to the races.

What is it?
Azure Data Factory (ADF) is Microsoft’s cloud offering for data integration and processing as a service. You don’t have to install any bits or manage any software; you’re only responsible for creating the pipelines. Since it’s built to run inside Azure, the tool also has some pre-made hooks that make it really easy to interoperate with other Azure services such as blob storage, HDInsight or Azure Machine Learning.

On premises, you would need a machine (VM or physical), a license for your ETL tool (let’s say SSIS), and then you would need to keep SSIS patched, keep the machine up to date, and think about software and hardware refreshes and so on. Using ADF, you can focus on the pipeline itself and not have to worry about what underlying software and hardware is actually making it work. The service supports a wide array of sources and targets (and continues to grow), plus robust options for scheduling the pipeline or running continuously to look for new slices of data.

When should you use it?
If you’re thinking about creating a new SSIS package and find that your sources are all web or cloud based then ADF is a good choice. Build a prototype of your pipeline, make sure that it supports your expected transformations and then you can operationalize it on the cloud. As a PaaS offering, it takes away the time, cost and effort of having to deal with the underlying bits and you can just focus on delivering quality data pipelines in a shorter timeframe.

Service Limitations

Like all new things in Azure, there are still some service limitations. The biggest one at the moment is that the service is only available in the West US and North Europe regions. If you don’t have resources in those regions and will be moving a lot of data, then I would advise learning the service and prototyping, but not putting the pipelines into production yet. The reason is that any data movement from outside the region will incur an outbound transfer cost. If your resources are in those regions, then there’s no charge and you can ignore this warning.
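To see why region placement matters, here is a back-of-the-envelope egress cost estimate. The per-GB rate below is purely hypothetical; check current Azure bandwidth pricing for real numbers:

```python
# Rough egress cost estimate for cross-region data movement.
# NOTE: rate_per_gb is a hypothetical figure, not a real Azure price.
def egress_cost(gb_moved: float, rate_per_gb: float) -> float:
    return round(gb_moved * rate_per_gb, 2)

# Moving 500 GB per day out of the service's region for 30 days:
monthly = egress_cost(gb_moved=500 * 30, rate_per_gb=0.08)
print(monthly)  # 1200.0
```

Even a modest per-GB rate adds up quickly at pipeline volumes, which is why co-locating your resources with the service region avoids the charge entirely.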

Demo
In the Demo video we’ll look at the user interface of Azure Data Factory, how to add a source and target, scheduling and checking the status of the pipeline. Enjoy!

Discover more about our expertise with SQL Server in the Cloud.

Categories: DBA Blogs

Transformation Hackathon Romania - Results are in!

A few weeks ago we ran the Oracle Transformation Hackathon in Bucharest, Romania. We had two days of great fun innovating with Oracle technologies, as you can see below. Now, the results are in! The...

Categories: DBA Blogs

Oracle Enterprise Manager 13c is Available for you!

Oracle announced Oracle Enterprise Manager 13c, the latest version of Oracle Enterprise Manager. Oracle EM is the best tool to monitor your infrastructure, ranging from multiple database...

Categories: DBA Blogs

Recover from ORA-01172 & ORA-01151

DBASolved - Tue, 2016-01-19 07:48

This morning I was working on an Oracle Management Repository (OMR) for a test Enterprise Manager that is used by a few consultants I work with. When I logged into the box, I found that the OMR was down. When I went to start the database, I was greeted with ORA-01172 and ORA-01151.

These errors basically say:

ORA-01172 – recovery of thread % stuck at block % of file %
ORA-01151 – use media recovery to recover block, restore backup if needed

So how do I recover from this? The solution is simple; I just needed to perform the following steps:

1. Shutdown the database

SQL> shutdown immediate;
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.

2. Mount the database

SQL> startup mount;
ORACLE instance started.
Total System Global Area 1.0033E+10 bytes
Fixed Size 2934696 bytes
Variable Size 1677723736 bytes
Database Buffers 8321499136 bytes
Redo Buffers 30617600 bytes
Database mounted.

3. Recover the database

SQL> recover database;
Media recovery complete.

4. Open the database with “alter database”

SQL> alter database open;
Database altered.

At this point, you should be able to access the database (OMR) and then have the EM environment up and running.

Enjoy!

about.me:http://about.me/dbasolved


Filed under: Database
Categories: DBA Blogs

RAC - Missing Cvuqdisk package

Learn DB Concepts with me... - Mon, 2016-01-18 11:07
If you are trying to install RAC, you might run into a missing cvuqdisk* rpm error. This package ships with the Grid Infrastructure install software. Once you have downloaded the Grid Infrastructure software, you will find the cvuqdisk* package in the rpm directory.


grid@RAC01:[/u01/app/ORCL_SFTW/grid_software] $ ls

install  readme.html  response  rpm  runcluvfy.sh  runInstaller  sshsetup  stage  welcome.html

Now change directory to rpm:

grid@RAC01:[/u01/app/ORCL_SFTW/grid_software] $ cd rpm

grid@RAC01:[/u01/app/ORCL_SFTW/grid_software/rpm] $ exit
logout

atoorpu@RAC01:[/u01/app] $ cd /u01/app/ORCL_SFTW/grid_software/rpm


You need to be logged in as root to install the package.

atoorpu@RAC01:[/u01/app/ORCL_SFTW/grid_software/rpm] $ sudo rpm -Uvh cvuqdisk-1.0.9-1.rpm
Preparing...                ########################################### [100%]
   1:cvuqdisk               ########################################### [100%]


NOTE: Now let's verify whether the package was installed. Check this on both servers.
 
atoorpu@RAC01:[/u01/app/ORCL_SFTW/grid_software/rpm] $ rpm -qa |grep cvuqdisk-1.0*
cvuqdisk-1.0.9-1.x86_64
Categories: DBA Blogs

Count rows in all tables in a schema- oracle

Learn DB Concepts with me... - Sun, 2016-01-17 22:28
The PL/SQL block below counts the rows in all tables in a given schema:


PL/SQL:

set serveroutput on

declare
    v_count integer;
begin

    for r in (select table_name, owner from all_tables
              where owner in ('&owner'))
    loop
        execute immediate 'select count(*) from ' ||r.owner||'.'|| r.table_name
            into v_count;
        --INSERT INTO STATS_TABLE(TABLE_NAME,SCHEMA_NAME,RECORD_COUNT,CREATED) VALUES (r.table_name,r.owner,v_count,SYSDATE);
        DBMS_OUTPUT.PUT_LINE('TABLE OWNER : '||lpad(to_char(r.owner),10)||', Table_name : '||lpad(to_char(r.table_name),10)||
        ', No of rows: '||rpad(to_char(v_count),6));
    end loop;

end;
/


 Sample output:

TABLE OWNER :      ATEST, Table_name :  TEST_TAB1, No of rows: 11   
TABLE OWNER :      ATEST, Table_name :     LOOKUP, No of rows: 3    
TABLE OWNER :      ATEST, Table_name : BIG_TABLE2, No of rows: 100000
TABLE OWNER :      ATEST, Table_name :  BIG_TABLE, No of rows: 100000

Note:
When run, this will prompt you for the schema whose table rows you want to count. You can alter this block to count the tables in multiple schemas by changing the part below to a list of schemas:

for r in (select table_name, owner from all_tables
              where owner in ('&owner'))

That is, change the '&owner' to 'SCHEMA1','SCHEMA2','SCHEMA3'...
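The same "enumerate the tables, then COUNT(*) each one" pattern works from any scripting language with a DB-API connection. A sketch using an in-memory SQLite database (the table names and data are invented; against Oracle you would query ALL_TABLES instead of sqlite_master):

```python
# Count rows in every table, demonstrated with an in-memory SQLite database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE test_tab1 (id INTEGER);
    CREATE TABLE lookup (id INTEGER);
    INSERT INTO test_tab1 VALUES (1), (2), (3);
    INSERT INTO lookup VALUES (1);
""")

# Enumerate the tables (ALL_TABLES in Oracle, sqlite_master here),
# then issue one COUNT(*) per table.
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]
counts = {t: conn.execute(f"SELECT COUNT(*) FROM {t}").fetchone()[0]
          for t in tables}
print(counts)  # {'test_tab1': 3, 'lookup': 1}
```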
Categories: DBA Blogs

Partner Webcast – Oracle Integration Cloud Service: Real Time Synchronization for cloud and on–premise applications

Today’s organizations are facing a rapid transition from on-premises-only application integration to a more diverse integration landscape that involves combining on-premises with public and private...

Categories: DBA Blogs

Defining Resources in #GoldenGate Studio 12c

DBASolved - Fri, 2016-01-15 16:12

As I’ve been working with the beta of GoldenGate Studio 12c, I have tried to do simple things to see what will break and what is needed to make the process work. One of the things that I like about the studio is that prior to creating any solutions, mappings or projects, you can define what databases and GoldenGate instances will be used during the design process. What I want to show you in this blog post is how to create the database resource and the GoldenGate instance resource.

Creating a Resource:

To create a database resource, after opening GoldenGate Studio, go to the Resource tab. On this tab, you will see that it is empty. This is because no resources have been created yet.

In the left hand corner of the Resources tab, you should see a folder with a small arrow next to it. When you click on the arrow, you are provided with a context menu that provides you with three options for resources (Databases, Global Mappings, and GoldenGate Instances).


Database Resources:

Now that you know how to select the resource you want to create, let's create a database resource. To do this, select the database resource from the context menu. This will open up a one-page wizard/dialog for you to fill out the connection information for the database you want to use as a resource.

You will notice there are a few fields that need to be populated. Provide the relevant information needed to connect to the database. Once all the information has been provided, you can test the connection to validate that it works before clicking OK.

Once you click OK, the database resource will be added to the Resources tab under the Database header.

Notice that the database is automatically connected to once it is created. This allows you to immediately start using the resource for mappings and global mappings.

GoldenGate Instance Resources:

The GoldenGate Instance resources are a little more complex to configure. This is due to the requirement that the GoldenGate environment has to have the GoldenGate Monitoring Agent (aka JAgent, 12.1.3.0) running. This is the same JAgent that is used with the OEM plug-in. If you need more information on how to install and configure the JAgent, you can find it here.

Now, to create a new GoldenGate Instance resource, you follow the same approach as you would to create a database resource; instead of selecting database; select GoldenGate Instance. This will open up the GoldenGate Instance wizard/dialog for you to fill out. Provide all the information requested.

In setting up the GoldenGate Instance, there are a few things that you need to provide. In my opinion, the names of the items requested in the GoldenGate Information section are misleading. To make this a bit easier, I’m providing an explanation of what each field means.

GoldenGate Version: This is the version of GoldenGate running with the JAgent
GoldenGate Database Type: Database which GoldenGate is running against. There are multiple options here
GoldenGate Port: This is the port number of the manager process
Agent Username: This is the username that is defined in $GGAGENT_HOME/cfg/Config.properties
Agent Password: This is the password that is created and stored in the datastore for the JAgent
Agent Port: This is the JMX port number that is defined in $GGAGENT_HOME/cfg/Config.properties

After providing all the required information, you can then perform a test connection. If the connection is successful, then you can click “ok” to create the GoldenGate Instance resource. If the connection fails, then you need to confirm all your settings.

Once all the resources you need for designing your GoldenGate architecture are created, you will see them all under the Resources tab.

Knowing how to create resources in GoldenGate Studio will help you design your replication flows.

Enjoy!

about.me:http://about.me/dbasolved


Filed under: Golden Gate
Categories: DBA Blogs

How to Migrate from On-Premises to Azure SQL Database

Pythian Group - Fri, 2016-01-15 15:14

The Azure SQL Database is improving its capabilities day by day. The “cloud-first” strategy used by Microsoft is also an incentive to start using Azure's SQL Database-as-a-Service (DBaaS) offering.

In this article I’ll explain all the steps to move your database from on-premises to Azure, using three different approaches. You will need to choose the right one based on your migration strategy and on the database that you are migrating. Don’t forget that not all the features supported on-premises are supported on Azure, so some additional work may be needed prior to the migration.

I’ll show how to migrate a database to Azure SQL Database by using two general methods:

  • Using the SQL Server Management Studio – Recommended when there are no code compatibility issues blocking the cloud migration.
  • Using the SQL Server Data Tools – This approach is highly recommended when there are migration barriers, as the process of detecting and fixing the issues is simpler and more direct.

If you are in doubt about which one to use, the recommendation is to start by using the SQL Server Management Studio approach and, in case of failures, proceed with the SQL Server Data Tools.

Migrate Using SQL Server Management Studio

SQL Server Management Studio (SSMS) offers two direct ways to transfer a database to an Azure SQL Database. To proceed, connect to the SQL Server instance and run either the “SQL Database Deployment Wizard” or the “Export data-tier application” option from SQL Server Management Studio.

img1

If you cannot find the preferred option, you will need to update your SQL Server Management Studio (SSMS), which is now a free standalone product. You can do this by downloading the latest version.

The primary difference between the two options is that the “Deploy” option requires an existing database server in Azure and deploys the on-premises database directly to that location, while the “Export” option creates a file to be imported afterwards from the Azure portal. The exported file can be loaded straight to an Azure Blob Storage account, which avoids an extra step to copy the file (recommended).

NOTE: For both options, an Azure Blob Storage account with a container and an Azure SQL server are needed.

Migration Steps Using the Deployment Wizard
  1. Right-click the database and select Deploy Database to Microsoft Azure SQL Database.
  2. Fill in the required fields.
    The server information is for the target (Azure SQL Database server). The settings that define the price tier are also configured at this stage. The BACPAC file will be created locally and then applied on the Azure SQL server, so we need a temporary place on the server to store it.
  3. Click Next.

img3

  4. Review the settings and click Finish.
  5. Wait for the process to complete.
    At this stage the wizard will validate the database, create the BACPAC file, and apply it to the Azure SQL server to create the database.

img5

  6. The database is now ready; use the server admin account to access it on the Azure SQL server.

 

Migration Steps using the Export Data-Tier Application Process
  1. Right-click the database and select Export Data-tier Application.
  2. Save the file in an Azure Blob Storage account. You will need the account name and access key.
  3. Select the container and click Next.
  4. Click Finish, and wait for the processing to complete.
  5. Once the process completes, a “Success” message is shown as in the screen below. Otherwise, there are items that need to be resolved before the database can be converted into an Azure SQL Database.
  6. Connect to the Azure portal and choose SQL Servers.
  7. Select the SQL Server location where the database should be created, and then click the Import Database icon as shown below.
  8. Complete the required settings, including the BACPAC file location, price tier, and server administrator’s password, and then click Create.
  9. Once the process completes, the database will appear in the list.

011316_1528_HOWDOYOUMIG11.png

Migrate Using SQL Server Data Tools

When using SSMS to migrate the database via a BACPAC, we don’t have the flexibility needed to properly detect and fix any issues found. For this purpose, SQL Server Data Tools – Business Intelligence is a better option for analyzing the database objects. To proceed with this option, follow the steps below.

 

Creating the Main Project
  1. Using the SQL Server Data Tools BI, click the SQL Server Object Explorer tab and connect to the on-premises instance:

011316_1528_HOWDOYOUMIG12.png

  2. Right-click the database to be migrated to Azure, and then click Create New Project.
  3. Add a name for the project and select a path to save the project files.
  4. Click Next and wait for the processing to complete.

011316_1528_HOWDOYOUMIG13.png

  5. After the project is created, right-click the project root, go to Properties, and change the Target Platform to Azure SQL Database. Save and close.

011316_1528_HOWDOYOUMIG14.png

  6. Right-click the project and click Rebuild. If problems are detected, all the errors will be shown in the Error List.

011316_1528_HOWDOYOUMIG15.png

  7. Go to File->New->Project, give the project a name (I will name it AWAzure) and, in the Solution option, click Add to solution:

011316_1528_HOWDOYOUMIG16.png

 

 

Creating the New Schema

In order to filter out the non-supported features and find the code to be corrected, the next step is to create a Schema Comparison. Follow the steps shown:

011316_1528_HOWDOYOUMIG17.png

  1. Now, select the options. Click the icon shown.

011316_1528_HOWDOYOUMIG18.png

  2. In the Schema Compare Options window, click to clear the following known non-supported items:
  • Aggregates
  • Application Roles
  • Assemblies
  • Asymmetric Keys
  • Broker Providers
  • Certificates
  • Contracts
  • Defaults
  • Extended Properties
  • Filegroups
  • FileTables
  • Full-Text Stoplists
  • Full-Text Catalogs
  • Full-Text Indexes
  • Message Types
  • Partition Functions
  • Partition Schemes
  • Queues
  • Remote Service Bindings
  • Rules
  • Sequences
  • Services
  • Symmetric Keys
  • User-Defined Types (CLR)
  • XML Indexes
  • XML Schemas Collections
  3. Click OK and save the Schema Comparison, as it can be useful later.
  4. Select the source: the on-premises database.

011316_1528_HOWDOYOUMIG19.png

  5. Select the target: the empty SQL Server project just created.

011316_1528_HOWDOYOUMIG20.png

We will have the following:

011316_1528_HOWDOYOUMIG21.png

  6. Now, click Compare. Wait for the process to complete, and then click Update (click Yes in the confirmation pop-up) to update the selected target.
  7. Next, go to the AWAzure (target) project, right-click the root, go to Properties, and change the Target Platform to Azure SQL Database.
  8. Click Save and close the screen.

 

Resolving Problems

Now it’s time to resolve the problems. Check the Error List tab and double-click each item to open the offending code. Resolve the issue and save the file.

011316_1528_HOWDOYOUMIG22.png

Use the filter to ensure you are dealing with the right project.

011316_1528_HOWDOYOUMIG23.png

 

 

Deploying the Schema

After the schema revision, we can publish the database.

  1. To publish the database, right-click the AWAzure project, and click Publish.
  2. Edit the target instance and connect to the Azure SQL server:

011316_1528_HOWDOYOUMIG24.png

  3. Fill in the database name and click Publish.

img6

Moving the Data

The schema is deployed. Now it is time to move the data. To do this, use the Import and Export Wizard, from the SQL Server Management Studio.

  1. Connect to the on-premises instance, right-click the database used as the data source, and follow the steps shown:

img7

  2. In the wizard, confirm the server name and the source database, and then click Next.

img8

Now, do the same for the Azure SQL Database.

  3. In the Destination field, select SQL Server Native Client 11.0, fill in the server name, and select the target database.
  4. Click Next.
  5. For this step, keep the first option selected, and then click Next.

Select all the tables and views from the source. Notice that SQL Server will automatically map the target tables on Azure.

About data hierarchy: if foreign key constraints are used in the database, the data migration should be done in phases to avoid failures. This needs to be analyzed prior to the final migration.
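One way to derive a phased load order that respects foreign keys is a topological sort of the dependency graph, so parent tables load before the children that reference them. A sketch (the table names and FK relationships are hypothetical):

```python
# Derive a table load order that respects foreign key constraints
# by topologically sorting the dependency graph.
from graphlib import TopologicalSorter

# Map each table to the tables it references (its parents).
fk_graph = {
    "OrderLines": {"Orders", "Products"},
    "Orders": {"Customers"},
    "Customers": set(),
    "Products": set(),
}

load_order = list(TopologicalSorter(fk_graph).static_order())
print(load_order)  # parents always appear before their children
```

Loading tables in this order (one phase per dependency level) avoids FK violations without having to disable the constraints on the target.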

img11

  6. Make sure that all the tables are highlighted and click Edit Mappings.
  7. Select Enable Identity Insert and then click OK.
  8. Then, in the main wizard window, click Next.

img12

  9. Make sure the Run immediately check box is selected and click Next.

img13

  10. In the following screen, review the options, and then click Finish.

img14

  11. Monitor the data transfer and close the wizard.

img15

 

That’s it. I hope the steps were clear and this article was useful. If you have questions, do not hesitate to post a comment or contact me on Twitter (@murilocmiranda). “See” you in another article.

 

Discover more about our expertise in SQL Server

Categories: DBA Blogs

Issues with Oracle Secure External Password Stores

Pythian Group - Fri, 2016-01-15 15:09
Background

In the previous article, I covered the basics of how to remove database passwords (credentials) from Oracle monitoring or backup scripts and how to instead secure them using a “Secure External Password Store” (SEPS) and Oracle Wallet.

While this mechanism is far better than putting a plain-text credential in a script file, one of the more advanced options, specifically tying the files to the local host with “-auto_login_local”, introduces bugs in Oracle 12cR1 software that are not present in other versions.

This article goes deeper into how to harden the approach, lock down script access to the local server, and work around Oracle Wallet limitations and bugs.

 

Issues with the “-auto_login_local” Option

Oracle suggests using the “-auto_login_local” option to secure an Oracle Wallet and only allow it to be used on the server on which it was created and by the user that created it. See MOS document: “How To Prevent The Secure Password Store Wallet From Being Moved to Another Host (Doc ID 1114599.1)”

This is supposed to protect against a bad actor obtaining a copy of the file, say from a backup, and being able to use it (and the credentials contained within it) from another machine. Unfortunately, there are a number of issues with this option:

    1. There are ways to work around the protection it provides.
    2. The option fundamentally doesn’t work with 12.1.0.2 (while it does with 11.2.0.4 and 12.1.0.1). This is clearly an Oracle bug.

 

Bypassing the “-auto_login_local” Parameter

The “-auto_login_local” parameter is supposed to protect the Wallet from being used on another server. However, testing proves that it is really easy to work around.

I covered the basics of SEPS and Oracle Wallets in my previous article. To enable the -auto_login_local option, we simply modify the existing Wallet file using the orapki utility:

$ orapki wallet create -wallet "/u01/app/oracle/wallet" -auto_login_local
Oracle PKI Tool : Version 11.2.0.4.0 - Production
Copyright (c) 2004, 2013, Oracle and/or its affiliates. All rights reserved.

Enter wallet password:

$ mkstore -wrl "/u01/app/oracle/wallet" -listCredential
Oracle Secret Store Tool : Version 11.2.0.4.0 - Production
Copyright (c) 2004, 2013, Oracle and/or its affiliates. All rights reserved.

Enter wallet password:
List credential (index: connect_string username)
1: ORCL scott

$

 

Testing on the local machine shows that the connection using the Wallet works as expected:

$ sqlplus /@ORCL

SQL*Plus: Release 11.2.0.4.0 Production on Wed Jan 13 15:27:54 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> select SYS_CONTEXT('userenv','IP_ADDRESS') IP_ADDRESS,
  2         SYS_CONTEXT('userenv','DB_NAME') DB_NAME,
  3         SYS_CONTEXT('userenv','CURRENT_USER') CURRENT_USER
  4  from dual;

IP_ADDRESS      DB_NAME      CURRENT_USER
--------------- ------------ ------------
192.168.1.123   ORCL         SCOTT

SQL>

 

It’s not overly simple for a bad actor already inside the network to obtain all of the information they’d need to access the database remotely, but it is possible. Say, for the sake of an example, that a bad actor obtained access to a backup of the OS. From that they could see the DBA scripts and how they connect, obtain the network files such as the sqlnet.ora and tnsnames.ora files, and obtain the Oracle Wallet files.

If a SEPS and Oracle Wallet was not being used, they’d presumably also be able to work out the database credentials as they’d either be hard-coded in the DBA script files or obfuscated in some other plain text file (not hard to reverse engineer).

Copying the cwallet.sso and ewallet.p12 (and maybe the tnsnames.ora) files to a secondary server simulates the actions of the “bad actor”.

But trying to make the same connection from the secondary server (which the “bad actor” controls) shows the “ORA-12578: TNS:wallet open failed” error:

$ sqlplus /@ORCL

SQL*Plus: Release 12.1.0.2.0 Production on Wed Jan 13 15:38:50 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

ERROR:
ORA-12578: TNS:wallet open failed


Enter user-name:

 

This is the expected error when the “-auto_login_local” option is used. However, it’s simple to work around.

MOS Note 1114599.1 suggests that the /etc/hosts file may cause this error. So the first thing to try is changing the name in the hosts file to that of the legitimate DB server:

# cp /etc/hosts /etc/hosts.backup
# cat /etc/hosts.backup | sed -e "s/HACKED_OS/DBSERVER/ig" > /etc/hosts
# su - oracle -c "sqlplus /@ORCL"

SQL*Plus: Release 12.1.0.2.0 Production on Wed Jan 13 15:59:53 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

ERROR:
ORA-12578: TNS:wallet open failed


Enter user-name:

 

Clearly that didn’t help the situation at all. Undoing that and instead trying to rename the compromised server (separately as root) gives a different error:

# cp /etc/hosts.backup /etc/hosts
# hostname
HACKED_OS

# hostname DBSERVER
# hostname
DBSERVER

# su - oracle -c "sqlplus /@ORCL"

SQL*Plus: Release 12.1.0.2.0 Production on Wed Jan 13 15:53:02 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

ERROR:
ORA-21561: OID generation failed


Enter user-name:

 

But if we do both:

# cp /etc/hosts /etc/hosts.backup
# cat /etc/hosts.backup | sed -e "s/HACKED_OS/DBSERVER/ig" > /etc/hosts
# hostname DBSERVER
# su - oracle -c "sqlplus /@ORCL"

SQL*Plus: Release 12.1.0.2.0 Production on Wed Jan 13 16:05:53 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Wed Jan 13 2016 16:04:45 -07:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> select SYS_CONTEXT('userenv','IP_ADDRESS') IP_ADDRESS,
  2         SYS_CONTEXT('userenv','DB_NAME') DB_NAME,
  3         SYS_CONTEXT('userenv','CURRENT_USER') CURRENT_USER
  4  from dual;

IP_ADDRESS      DB_NAME      CURRENT_USER
--------------- ------------ ------------
192.168.1.200   ORCL         SCOTT

SQL>

 

So if we change both the hostname via the hostname command (or in the /etc/sysconfig/network file) and update the /etc/hosts file, then the -auto_login_local security is bypassed and we can log into the database from a compromised machine using the credentials stored in the Oracle Wallet!

It’s important to note here that I’m connecting to a 12.1.0.2 database but using a Wallet file that was created using the 11.2.0.4 software.

 

ORA-12578 with Oracle Database 12.1.0.2

To make matters worse, with Oracle 12.1.0.2 the -auto_login_local option doesn’t even work at all.

Back on the database server (legitimate DBA activity – not simulating a “bad actor”), creating the Oracle Wallet file using 12.1.0.2 software seems to prevent connectivity locally:

$ orapki wallet create -wallet "/u01/app/oracle/wallet" -auto_login_local
Oracle PKI Tool : Version 12.1.0.2
Copyright (c) 2004, 2014, Oracle and/or its affiliates. All rights reserved.

Enter wallet password:

$ sqlplus /@ORCL

SQL*Plus: Release 12.1.0.2.0 Production on Wed Jan 13 16:21:05 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

ERROR:
ORA-12578: TNS:wallet open failed

 

This is unexpected behaviour and clearly shows an Oracle bug. Removing the -auto_login_local option (by using -auto_login instead) shows that the Oracle Wallet does indeed work on this server:

$ orapki wallet create -wallet "/u01/app/oracle/wallet" -auto_login
Oracle PKI Tool : Version 12.1.0.2
Copyright (c) 2004, 2014, Oracle and/or its affiliates. All rights reserved.

Enter wallet password:

$ sqlplus /@ORCL

SQL*Plus: Release 12.1.0.2.0 Production on Wed Jan 13 16:22:30 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Wed Jan 13 2016 16:20:38 -07:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> select SYS_CONTEXT('userenv','IP_ADDRESS') IP_ADDRESS,
  2         SYS_CONTEXT('userenv','DB_NAME') DB_NAME,
  3         SYS_CONTEXT('userenv','CURRENT_USER') CURRENT_USER
  4  from dual;

IP_ADDRESS      DB_NAME      CURRENT_USER
--------------- ------------ ------------
192.168.1.123   ORCL         SCOTT

SQL>

 

Hence, there clearly is a bug specific to the 12.1.0.2 software whereby the ORA-12578 error is returned when it shouldn’t be. Repeating the same procedure using 12.1.0.1 or 11.2.0.4 software does not exhibit the same error.

And it’s important to understand that it doesn’t matter which version of the database the connection is to. The problem is specific only to which software was used to create the Wallet file. So creating the Wallet with 11.2.0.4 software just to use against a 12.1.0.2 database works without issue.

 

Hardening Using Other Strategies

Due to the above mentioned issues, other strategies can be used to harden the connections and credential management for use by DBA scripts.

 

Using localhost or 127.0.0.1

The simplest way to prevent the Wallet files from being usable on another server is to change the OracleNET Service Name to an EZconnect string that uses localhost or 127.0.0.1. For example, on the DB server:

$ mkstore -wrl "/u01/app/oracle/wallet" -create -createCredential localhost:1521/ORCL scott
Oracle Secret Store Tool : Version 12.1.0.2
Copyright (c) 2004, 2014, Oracle and/or its affiliates. All rights reserved.

Enter password:
Enter password again:
Your secret/Password is missing in the command line
Enter your secret/Password:
Re-enter your secret/Password:
Create credential oracle.security.client.connect_string1

$ sqlplus /@localhost:1521/ORCL

SQL*Plus: Release 12.1.0.2.0 Production on Wed Jan 13 16:33:27 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Wed Jan 13 2016 16:31:50 -07:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> show user
USER is "SCOTT"
SQL>

 

Now if we try using the Oracle Wallet files on a compromised server (with the /etc/hosts and /etc/sysconfig/network spoofing as described previously), the connection attempt routes through the localhost back to the compromised server and not to the database server. Hence a connection attempt gives:

[oracle@HACKED_OS ~]$ sqlplus /@localhost:1521/ORCL

SQL*Plus: Release 12.1.0.2.0 Production on Wed Jan 13 16:34:27 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

ERROR:
ORA-12541: TNS:no listener

 

Thus, by using an EZconnect connection string and localhost instead of the actual server’s hostname, FQDN, or IP address, we’ve avoided the 12.1.0.2 bug and provided more thorough protection than the -auto_login_local option provides anyway.

And of course we could have used 127.0.0.1 instead of localhost – the results are the same.

Finally, remember that the connection string forms the primary key of the 3DES protected data in the Wallet file which can’t be modified without knowing the Wallet’s password.

 

Connecting Through a Dedicated Listener and “Valid Node Checking”

Another way to prevent the Oracle Wallet from being used to access the database from an unauthorized server (actually any server other than the DB server) is to have the scripts connect through a dedicated listener. The dedicated listener’s port can then be restricted using either a firewall or the listener’s “valid node checking” functionality.

For example, the dedicated listener could be configured with the following in the listener.ora file:

MONITORING_LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1599)))
  )
SID_LIST_MONITORING_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (ORACLE_HOME = /u01/app/oracle/product/12.1.0/dbhome_1)
      (SID_NAME = ORCL)
    )
  )
PASSWORDS_MONITORING_LISTENER= (F251EDED29514235)

 

Then for added (though possibly redundant due to the use of localhost) protection, the following entries could be added to the server’s sqlnet.ora:

TCP.VALIDNODE_CHECKING = YES
TCP.INVITED_NODES = (localhost)

 

As a result, local connections using localhost:1599/ORCL will work, while connections from the compromised server will receive:

ERROR:
ORA-12537: TNS:connection closed

 

 

Preventing Use by Another OS User

Another challenge is to prevent another OS user on the same server from using the Oracle Wallet to connect to the database.

Of course the wallet files should be well secured using OS directory and file security. It can further be obfuscated by making the wallet directory a hidden directory (starting with a period).
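For example, locking the wallet directory down to the owning OS user alone might look like the following (the hidden-directory path is illustrative, not from the article):

```shell
# Restrict the (hidden) wallet directory and its files to the owning
# OS user only; directory and file paths here are illustrative.
chmod 700 /u01/app/oracle/.wallet
chmod 600 /u01/app/oracle/.wallet/ewallet.p12 \
          /u01/app/oracle/.wallet/cwallet.sso
```

Other OS users then can’t even list the directory’s contents, let alone read the wallet files.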

If the -auto_login_local option is used then other users on the same server will not be able to use the Oracle Wallet credentials and instead will get the “ORA-12578: TNS:wallet open failed” error. Hence, creating the Oracle Wallet using a version other than 12.1.0.2 (regardless of the database version) and enabling the -auto_login_local option is still the best solution.

Beyond OS directory and file restrictions and the -auto_login_local option, the only other method for restricting access from other OS users on the same server would be a database scoped logon trigger or secured application role.
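As a sketch of that last option, a logon trigger can reject connections to the script account from any OS user other than the expected one. The trigger, account, and OS user names below are hypothetical:

```shell
# Hypothetical logon trigger (run as a DBA) that only permits the
# "monitoring_user" account to connect from the "oracle" OS account.
sqlplus / as sysdba << 'EOF'
CREATE OR REPLACE TRIGGER restrict_monitoring_logon
AFTER LOGON ON monitoring_user.SCHEMA
BEGIN
   IF SYS_CONTEXT('userenv', 'OS_USER') <> 'oracle' THEN
      RAISE_APPLICATION_ERROR(-20001,
         'monitoring_user may only connect from the oracle OS account');
   END IF;
END;
/
EOF
```

Note that highly privileged users (such as SYS) are not blocked by logon trigger errors, so this complements rather than replaces file permissions.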

 

Conclusions

Using an Oracle Secure External Password Store (SEPS) and Oracle Wallet files is the best way to handle database credentials and passwords in OS scripts. However, a number of significant problems exist:

  1. The -auto_login_local parameter can be bypassed on a compromised server by changing the hostname (in /etc/hosts and /etc/sysconfig/network).
  2. The -auto_login_local parameter doesn’t work at all when created with 12.1.0.2 software.

 

That being said, we can still harden our script’s database access by following some additional suggestions:

  • Create the Oracle Wallet using 11.2.0.4 or 12.1.0.1 software even if connecting to 12.1.0.2 databases.
  • If the Oracle Wallet files were created using 11.2.0.4 or 12.1.0.1, protect from usage by other users by using the -auto_login_local parameter.
  • Prevent use from other servers by not using an OracleNET Service Name in Oracle Wallets and instead using an EZconnect connection string using either localhost or 127.0.0.1 (not the proper DB server’s hostname, FQDN, or IP address).
  • Another strategy is to use a dedicated listener on a dedicated port with listener “valid node checking” to only permit connections from the local server.
  • As a last resort, prevent non-authorized IPs or OS Users from connecting using a logon trigger or secure application role within the DB.

 

Discover more about our expertise in Oracle.

Categories: DBA Blogs

Securing Oracle Monitoring and Backup Scripts

Pythian Group - Fri, 2016-01-15 14:41
Background

Almost every DBA writes and uses various custom scripts to monitor and backup their Oracle databases. However, finding the optimal and most secure way to connect to the database is often not prioritized.

The short summary is that having your script put the username/password (credentials) in any sort of variables or command arguments is the “bad way”. Using an “Oracle Secure External Password Store” (SEPS) or Oracle Wallet is the “better way”. Yet this technology, which has been around since Oracle 10gR2 and which does not require the Advanced Security Option, is often not used.

 

Common Mistakes

Many DBAs will store credentials in their actual scripts. Sometimes obfuscating the actual password through some custom mechanism or script. For example, the following is a simplified version of an RMAN backup script found at a client site:

#!/bin/bash
export DB_USER=backup_user
export DB_PASS=`~/.secure_pwd_extractor`

$ORACLE_HOME/bin/rman << EOF
   connect target $DB_USER/$DB_PASS
   shutdown immediate
   startup mount
   backup database;
   alter database open;
EOF

 

The client thought that it was somewhat secure as the actual password wasn’t used as a command line argument to RMAN and was stored in a Linux “hidden file” (starts with a period), which was protected by properly setting OS permissions. However, it dynamically extracted the password from a plain text file (based on the DB_USER environment variable). Another key problem was the fact that the environment variable was exported and hence was part of the environmental profile under which the database was started.

The exported environment variables in this case can be a little bit of a security risk in a couple of ways:

First of all, the complete operating environment including the exported environment variables under which the database is running are recorded by the listener when service is registered. Hence, they are visible in a listener “services” command with “set displaymode verbose“:

[Screenshot: listener “services” output (with “set displaymode verbose”) showing the exported DB_USER and DB_PASS values]

Secondly, they may be recorded in OS process files. For example, the pmon process’ operating environment or even the RMAN process’ while running:

[Screenshot: the pmon process’ environment file showing the exported credentials]

[Screenshot: the running RMAN process’ environment showing the exported credentials]
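On Linux this exposure is easy to demonstrate: any process started from that shell carries the exported variable in its /proc/<pid>/environ file. A hypothetical minimal reproduction (not from the original post):

```shell
# Any child process of the shell inherits the exported variable,
# and it stays readable in /proc for as long as the process runs.
export DB_PASS='s3cret'
sleep 30 &
tr '\0' '\n' < /proc/$!/environ | grep '^DB_PASS='
# prints: DB_PASS=s3cret
kill $!
```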

But most significantly the credentials can be extracted by anyone with access to the script file and/or the underlying credentials file.

 

A Better Approach

A better way to store database credentials for monitoring or backup scripts is to use a “Secure External Password Store” (SEPS) which relies on having the necessary credentials securely stored in an Oracle Wallet file.

Typically a DBA might create their own plain text password listing file for use by scripts and batch jobs. Usually with three columns: 1) DB_NAME; 2) DB_USERNAME; 3) DB_PASSWORD. The Oracle Wallet is structured exactly the same way except:

  1. The file is protected with 3DES encryption.
  2. The DB_NAME is really an OracleNET Service Name meaning you can have multiple aliases for the same database.
  3. The passwords are never exposed.
  4. A separate Wallet password is required to manipulate the file’s contents.
  5. Control on whether the Oracle Wallet file is tied to the local machine or whether it can be copied to and used on other machines.

 

The advantages of this approach include:

  • No clear text password in any scripts or files.
  • No possible exposure of passwords by the listener or process operating environments.
  • Control on whether the Oracle Wallet file can be copied and used on another machine.

 

The last point is actually a complex one. A Wallet can be created as an “auto_login” wallet (done by default). To secure it to only work on the local server, it can be changed to “auto_login_local“. However, there are various issues, limitations, and 12c bugs with the additional functionality that Oracle provides. A separate article goes into this in detail.

 

Setup

Setting up a “Secure External Password Store” and Oracle Wallet is actually quite quick and easy:

1) Adjust the sqlnet.ora file to point to an Oracle Wallet location. For example, add the following to the sqlnet.ora file (assuming that the specified directory exists):

WALLET_LOCATION =
   (SOURCE =
     (METHOD = FILE)
     (METHOD_DATA =
       (DIRECTORY = /u01/app/oracle/wallet)
     )
   )
SQLNET.WALLET_OVERRIDE = TRUE

 

2) Create the Oracle Wallet files and add a credential. Two files will actually be created in the specified directory:

  • ewallet.p12 – the actual password protected Wallet file.
  • cwallet.sso – an additional file for auto-login credentials.

 

This can be done as either two separate commands or all in a single command:

$ mkstore -wrl "/u01/app/oracle/wallet" -create
Oracle Secret Store Tool : Version 12.1.0.2
Copyright (c) 2004, 2014, Oracle and/or its affiliates. All rights reserved.

Enter password:
Enter password again:

$ mkstore -wrl "/u01/app/oracle/wallet" -createCredential ORCL scott
Oracle Secret Store Tool : Version 12.1.0.2
Copyright (c) 2004, 2014, Oracle and/or its affiliates. All rights reserved.

Your secret/Password is missing in the command line
Enter your secret/Password:
Re-enter your secret/Password:
Enter wallet password:
Create credential oracle.security.client.connect_string1

 

Or as a single command:

$ mkstore -wrl "/u01/app/oracle/wallet" -create -createCredential ORCL scott
Oracle Secret Store Tool : Version 12.1.0.2
Copyright (c) 2004, 2014, Oracle and/or its affiliates. All rights reserved.

Enter password:
Enter password again:
Your secret/Password is missing in the command line
Enter your secret/Password:
Re-enter your secret/Password:
Create credential oracle.security.client.connect_string1
$

 

Notice that the Wallet is secured by a password. And then the SCOTT credentials are stored within the Wallet. The Wallet password is required to manipulate contents – not for scripts to access the stored credentials.

The first parameter after the “-createCredential” argument is an OracleNET Service Name. Just like with any database connection, here we can specify an OracleNET Service Name (from the tnsnames.ora file), or a full connection string, or an EZconnect string.

Hence, we could add a second and third connection to the same database as:

$ mkstore -wrl "/u01/app/oracle/wallet" -createCredential localhost:1521/ORCL monitoring_user
Oracle Secret Store Tool : Version 12.1.0.2
Copyright (c) 2004, 2014, Oracle and/or its affiliates. All rights reserved.

Your secret/Password is missing in the command line
Enter your secret/Password:
Re-enter your secret/Password:
Enter wallet password:
Create credential oracle.security.client.connect_string2

$ mkstore -wrl "/u01/app/oracle/wallet" -createCredential 127.0.0.1:1521/ORCL batch_reporting
Oracle Secret Store Tool : Version 12.1.0.2
Copyright (c) 2004, 2014, Oracle and/or its affiliates. All rights reserved.

Your secret/Password is missing in the command line
Enter your secret/Password:
Re-enter your secret/Password:
Enter wallet password:
Create credential oracle.security.client.connect_string3

$

 

And to list the contents of the Oracle Wallet:

$ mkstore -wrl "/u01/app/oracle/wallet" -listCredential
Oracle Secret Store Tool : Version 12.1.0.2
Copyright (c) 2004, 2014, Oracle and/or its affiliates. All rights reserved.

Enter wallet password:
List credential (index: connect_string username)
3: 127.0.0.1:1521/ORCL batch_reporting
2: localhost:1521/ORCL monitoring_user
1: ORCL scott
$

 

Now any of the three can be used (from the same OS account: “oracle”) depending on which OracleNET Service Name is referenced:

$ sqlplus /@ORCL

SQL*Plus: Release 12.1.0.2.0 Production on Wed Jan 13 08:59:12 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Wed Jan 13 2016 08:56:56 -07:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> show user
USER is "SCOTT"
SQL>
$ sqlplus /@localhost:1521/ORCL

SQL*Plus: Release 12.1.0.2.0 Production on Wed Jan 13 08:59:41 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Wed Jan 13 2016 08:57:25 -07:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> show user
USER is "MONITORING_USER"
SQL>

$ sqlplus /@127.0.0.1:1521/ORCL

SQL*Plus: Release 12.1.0.2.0 Production on Wed Jan 13 09:00:14 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Wed Jan 13 2016 08:43:44 -07:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> show user
USER is "BATCH_REPORTING"
SQL>
$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Wed Jan 13 09:01:12 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> show user
USER is "SYS"
SQL>
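With the credentials now in the wallet, the problematic RMAN backup script shown at the start of this article can be rewritten without any credential handling at all. A sketch, assuming the backup account’s credential was stored against the ORCL alias:

```shell
#!/bin/bash
# Wallet-based rewrite of the earlier backup script: no username or
# password appears in the script, its environment, or the process list.
$ORACLE_HOME/bin/rman << EOF
   connect target /@ORCL
   shutdown immediate
   startup mount
   backup database;
   alter database open;
EOF
```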

 

However, one might point out that the Oracle Wallet file itself must be secured just as a plain text password file would need to be, and that the risks of either being obtained by a “bad actor” are the same. That being said, there are still some benefits to the SEPS method:

  • Passwords are never maintained in plain text in scripts or (hidden) password files.
  • No risk of password exposure at the operating system process list.
  • No risk of password exposure in operating system process environmental files.
  • No risk of exposure from any bad actor with access to the script.
  • No risk of password exposure in OS backups maintained by sysadmins or backup vendors (though the files themselves may still be usable).

Regardless, using the SEPS and Oracle Wallet shouldn’t make anything less secure. One could argue that security and risk is equal, but definitely not worse.

However, there are a few operational disadvantages:

  • The OracleNET Service Name forms the primary key of the entries in the Wallet and hence must be unique. So if another user credential is required for the same DB, an OracleNET alias will be required (as trying to add another user using the same OracleNET Service Name will generate the “Credential already exists” error based on the OracleNET Service Name not being unique).
  • Doesn’t work after connecting to SQLPLUS using the “CONNECT” command.
  • Scripts are now dependent on listener availability for establishing a TNS connection instead of a BEQ. Though a workaround may be to connect directly to a DISPATCHER port.

 

Advantages Over OS Authentication

Using a SEPS and an Oracle Wallet may seem functionally similar to just using “OS Authentication”. However it does pose a few differences or advantages.

Most DBAs operate and implement scripts under the “Oracle software owner” account which is typically called “oracle” on Unix or Linux systems. And hence most are able to connect to the database using a SYSDBA connection. So one solution would be to use a dedicated OS user account specifically for monitoring or database backup scripts. And then rely on OS authentication for database connections. However this is often not done. And if running scripts from a remote server or centralized monitoring server then the REMOTE_OS_AUTHENT=TRUE parameter would have to be set which poses other security risks.

Instead, using a SEPS allows for non-OS authenticated connections to a dedicated and minimally privileged database account even from the software owner (Oracle) account locally, or from any other account remotely.
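Such a dedicated account might be created with only the grants the scripts actually require. The account name, password, and grants below are purely illustrative:

```shell
# Hypothetical minimally privileged account for monitoring scripts
# (run as a DBA); name, password, and grants are illustrative only.
sqlplus / as sysdba << 'EOF'
CREATE USER monitoring_user IDENTIFIED BY "ChangeMe2016";
GRANT CREATE SESSION TO monitoring_user;
GRANT SELECT ANY DICTIONARY TO monitoring_user;
EOF
```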

 

Other Questions

Q: Do I need to re-create my Wallet file as part of a 12c upgrade?
A: NO.

Q: Do I need to backup the Wallet files?
A: Absolutely. Just back them up as you would other Oracle environmental files, such as the sqlnet.ora. Otherwise they’ll need to be re-created.

Q: Is SEPS compatible with “Proxy Authenticated Connections”?
A: YES. See Proxy Authenticated Database connections

Q: Can the Wallet be used with RMAN?
A: YES. It definitely can and should be used by RMAN scripts.

Q: Can the Wallet be used with JDBC thin or ODBC connections?
A: YES to both. See MOS documents 1441745.1 and 1430666.1 for further details.

Q: Are SEPS and Wallet connections compatible with the CDB architecture and PDBs?
A: Of course. PDBs connect through Oracle Services and OracleNET Service Names.

Q: Can we tell from within the database whether a connected session referenced an Oracle Wallet and used SEPS?
A: NO. There doesn’t seem to be any indication from within the database. As far as the DB is concerned, it just seems to be a password authenticated connection. A SQLNET trace shows that the Wallet files are accessed, but that information doesn’t surface in any DB metric or SYS_CONTEXT USERENV data.

 

Preventing the Secure Password Store Wallet File from Being Moved to Another Host

Oracle suggests that we can add additional security by tying the Oracle Wallet file to a specific host. See MOS document 1114599.1. However, this imposes some restrictions and exposes bugs with 12.1.0.2 specifically. As this discussion is complex, this follow-up article has been created.

A good workaround (and a key point from the follow-up article) is to simply use an EZconnect connection in the Oracle Wallet file and to specify localhost or 127.0.0.1.

 

Summarizing Best Practices
  • Continue to properly secure the directory and file permissions of the Wallet files as you would a plain text password file. Further, consider making the entire wallet directory a hidden directory starting with a period. Remember that if the wallet file can be copied to another server, the credentials within it can potentially continue to be used. See the follow-up article for suggested techniques for securing access further.
  • Have scripts connect to the database using dedicated purpose based user accounts with minimal privileges (don’t use SYS or SYSTEM).
  • Use OracleNET Service Name aliases (not duplicate entries, but aliases to an existing entry) in the tnsnames.ora file to allow multiple credentials for the same database.
  • For additional security add Wallet entries based on EZconnect strings using localhost or 127.0.0.1 instead of relying on OracleNET Service Names from the tnsnames.ora file.

 

Discover more about our expertise in the world of Oracle.

Categories: DBA Blogs

Log Buffer #457: A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2016-01-15 14:19

This Log Buffer Edition begins with some great blog posts from Oracle, goes through SQL Server and then ends with MySQL.

Oracle:

  • Ruby-oci8 is a ruby interface for an Oracle Database.
  • Another python graph – one wait event.
  • This article compares FBL and HDL – two of the commonly used data loading tools in Fusion HCM to highlight key differences and similarities.
  • Better Data Modeling: Customizing Oracle Sql Developer Data Modeler (#SQLDevModeler) to Support Custom Data Types.
  • Sample code: Oracle Grid Infrastructure action script for Windows.

SQL Server:

  • Being a database administrator can be very challenging at times when you have to troubleshoot performance issues.
  • Another Reason to Use NOEXPAND hints in Enterprise Edition.
  • Error: Microsoft .NET framework 3.5 service pack 1 is Required.
  • Removing Duplicates from Strings in SQL Server.
  • .NET Core is more interesting than the name might suggest. Whereas the .NET framework provides a consistent runtime for all the applications on a machine.

MySQL:

  • OpenSSH CVE-2016-0777: Details and Mitigation.
  • MySQL Group Replication for MySQL 5.7.10.
  • MySQL 5.7 auto-generated root password.
  • MySQL Support People – Those Who Were There First.
  • Planning the defaults for MySQL 5.8.

 

Learn more about Pythian’s expertise in Oracle, SQL Server & MySQL.

Categories: DBA Blogs