Feed aggregator

how to store output of dynamic sql into a cursor.

Tom Kyte - Thu, 2016-06-30 12:46
Hi tom, I want to know how to declare a cursor which contains the data obtained by dynamically executing a query, where the query was taken from another cursor. CURSOR c1 IS SELECT QRY from QUERY_MASTER WHERE Q_ID IN (1,2); where QRY(an SQ...
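A common pattern for this (a sketch based on the names in the question; QUERY_MASTER.QRY is assumed to hold a complete SELECT statement) is to open a SYS_REFCURSOR with OPEN ... FOR on the query text:

DECLARE
  l_rc SYS_REFCURSOR;
BEGIN
  -- loop over the stored query texts and open each one dynamically
  FOR q IN (SELECT qry FROM query_master WHERE q_id IN (1, 2)) LOOP
    OPEN l_rc FOR q.qry;
    -- fetch from l_rc here; the caller must know the column list of each query
    CLOSE l_rc;
  END LOOP;
END;
/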
Categories: DBA Blogs

How to export xml data ( clob column ) to a csv file

Tom Kyte - Thu, 2016-06-30 12:46
Hi, Let say, I have a table employee ( id number, name varchar2, notes clob ). I use dbms_util to write the data to a file as csv. The id (number) and name( Varchar2) columns are writing to csv file as expected. But when it comes to notes (clo...
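For the CLOB column, one approach (a sketch; the directory MY_DIR and the table layout are assumptions taken from the question) is to write it in chunks with DBMS_LOB.SUBSTR and UTL_FILE, since a single VARCHAR2 buffer caps out at 32767 bytes:

DECLARE
  l_clob   CLOB;
  l_file   UTL_FILE.file_type;
  l_offset PLS_INTEGER := 1;
  l_amount PLS_INTEGER := 32767;
BEGIN
  SELECT notes INTO l_clob FROM employee WHERE id = 1;
  l_file := UTL_FILE.fopen('MY_DIR', 'employee.csv', 'w', 32767);
  -- write the CLOB piecewise so no single write exceeds the buffer limit
  WHILE l_offset <= DBMS_LOB.getlength(l_clob) LOOP
    UTL_FILE.put(l_file, DBMS_LOB.substr(l_clob, l_amount, l_offset));
    UTL_FILE.fflush(l_file);
    l_offset := l_offset + l_amount;
  END LOOP;
  UTL_FILE.fclose(l_file);
END;
/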
Categories: DBA Blogs

Title on Report.

Tom Kyte - Thu, 2016-06-30 12:46
Hello All- May be another set of eyes!! How do I get the title to show on the spool output report file? Below is my report lay out. Thanks! <code> set heading on SPOOL U:\...\tt.txt ttitle left " Test report " skip 1 COLUMN...
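One thing worth checking (a hedged sketch, since the full script above is truncated): TTITLE is printed at the top of each page, so PAGESIZE must not be 0 and the TTITLE command must come before the query; the spool path below is a placeholder:

SET HEADING ON
SET PAGESIZE 50
TTITLE LEFT "Test report" SKIP 1
SPOOL tt.txt
SELECT 'sample row' AS col1 FROM dual;
SPOOL OFF
TTITLE OFF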
Categories: DBA Blogs

convert string data to columns

Tom Kyte - Thu, 2016-06-30 12:46
Hi, I need some help converting the data in a string column into separate columns. I have the data in one column like: #1#5#3#2#5#7 #5#3#5#4#3#2 #1#7#5#3#6 #3#5#2#1#5 #6#5#4#7#1#2 and I need each numeric value in its own column like this...
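One way to split such strings (a sketch; the table name t and column name str are placeholders) is REGEXP_SUBSTR, taking the n-th group of characters between the # delimiters:

SELECT REGEXP_SUBSTR(str, '[^#]+', 1, 1) AS col1,
       REGEXP_SUBSTR(str, '[^#]+', 1, 2) AS col2,
       REGEXP_SUBSTR(str, '[^#]+', 1, 3) AS col3,
       REGEXP_SUBSTR(str, '[^#]+', 1, 4) AS col4,
       REGEXP_SUBSTR(str, '[^#]+', 1, 5) AS col5,
       REGEXP_SUBSTR(str, '[^#]+', 1, 6) AS col6
FROM   t;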
Categories: DBA Blogs

Oracle Cloud Improves Financial Process Visibility and Cuts Upgrade Costs for University of Kansas

Oracle Press Releases - Thu, 2016-06-30 11:05
Press Release
Oracle Cloud Improves Financial Process Visibility and Cuts Upgrade Costs for University of Kansas

Redwood Shores, Calif.—Jun 30, 2016

The University of Kansas (KU) has paired its top-notch educational programs with the technology to advance them. With Oracle Cloud, KU will drive strategic student educational initiatives, focus on operational efficiencies and reduce costs associated with lengthy upgrade cycles.

KU selected Oracle Enterprise Resource Planning (ERP) Cloud, Oracle Business Intelligence Cloud, and Oracle Planning and Budgeting Cloud to integrate with its existing Oracle PeopleSoft Human Capital Management (HCM) and PeopleSoft Campus Solutions applications.

“We have a strong relationship with Oracle, so they were at the top of the list when we were looking to upgrade and transform our current ERP solution,” said Diane Goddard, KU’s vice provost for administration and finance. “Oracle Cloud really stood out to us because it is on the cutting edge of innovation, offers us increased functionality, and we can easily integrate with our existing PeopleSoft HCM and PeopleSoft Campus Solutions applications.” 

With the Oracle Cloud solutions in place, KU will have a complete, modern system that delivers scalability, reliability, and rapid time to value. The university will now have a comprehensive view of its finances across the campus with end-to-end capabilities—from planning and budgeting to reporting—enabling the university to gain insight into critical information in real-time with an integrated analytics platform for improved decision-making. KU will also be able to improve the accuracy, efficiency, and consistency of its planning and budgeting processes.

“Oracle is successfully working with major universities, such as KU, to transform their current on-premise platforms to more innovative, flexible, and unified cloud platforms,” said Mark Armstrong, vice president, product development, Oracle Higher Education. “With Oracle Cloud, KU is upgrading and integrating with its current systems to create, enrich, and extend its financial processes with less effort and cost.”

Moving to Oracle Cloud positions KU to take advantage of the most advanced functionality and features while freeing IT staff from the responsibility of routine upgrades and maintenance.

“Outsourcing day-to-day system responsibilities to the professionals at Oracle allows KU Information Technology staff to focus their time and energy on strategic initiatives that drive institutional excellence,” said Eric Freeze, KU IT deputy technology officer.

Oracle Cloud delivers the industry’s broadest suite of enterprise-grade cloud services, including Software as a Service (SaaS), Platform as a Service (PaaS), Infrastructure as a Service (IaaS), and Data as a Service (DaaS).

About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.


Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Katie Barron

  • +1-202-904-1138

What is SID in Oracle?

Laurent Schneider - Thu, 2016-06-30 10:56

In the doc you’ll find, it’s the Oracle system identifier.

Okay, let's imagine the following setup, where every name is deliberately different:

$ echo $ORACLE_SID
ORA001

listener.ora:
    (SID_DESC =
      (GLOBAL_DBNAME = GLO001.EXAMPLE.COM)
      (SID_NAME = ORA001)
    )

init.ora:
    instance_name=INS001
    db_unique_name=UNI001
    service_names=SVC001.EXAMPLE.COM,SVC002.EXAMPLE.COM

SQL> select * from GLOBAL_NAME;

GLOBAL_NAME
------------------------------
GLO001.EXAMPLE.COM

What is my SID? Actually there is more than one correct answer.

In the environment, Oracle SID is ORA001. This matches SID_NAME in listener.ora. It does not have to match database name, unique name, global name or instance_name.

$ lsnrctl services
Services Summary...
Service "GLO001.EXAMPLE.COM" has 1 instance(s).
  Instance "ORA001", status UNKNOWN, has 1 handler(s) for this service...
      "DEDICATED" established:7 refused:0

As the instance is not running, I have only my listener.ora static connections.


$ sqlplus "sys/***@
  )" as sysdba
Connected to an idle instance.
$ sqlplus "sys/***@
  )" as sysdba
Connected to an idle instance.

Let’s start

SQL> startup

and check my services

$ lsnrctl services
Services Summary...
Service "SVC001.EXAMPLE.COM" has 1 instance(s).
  Instance "INS001", status READY, has 1 handler(s) for this service...
      "DEDICATED" established:0 refused:0 state:ready
Service "SVC002.EXAMPLE.COM" has 1 instance(s).
  Instance "INS001", status READY, has 1 handler(s) for this service...
      "DEDICATED" established:0 refused:0 state:ready
Service "UNI001.EXAMPLE.COM" has 1 instance(s).
  Instance "INS001", status READY, has 1 handler(s) for this service...
      "DEDICATED" established:0 refused:0 state:ready
Service "GLO001.EXAMPLE.COM" has 1 instance(s).
  Instance "ORA001", status UNKNOWN, has 1 handler(s) for this service...
      "DEDICATED" established:13 refused:0
The command completed successfully

I now have 4 service names:

  1. the global name in listener.ora
  2. the unique name in init.ora
  3. both service names in init.ora

And 2 SIDs:

  1. the SID in listener.ora
  2. the instance name in init.ora

While we often have sid = oracle_sid = service_name = service_names = global_name = instance_name = db_name = db_unique_name, using distinct names when you switch from SID to SERVICE_NAME can help you identify legacy applications that still connect with the SID.

If you read the doc carefully, you may have noticed that SID is no longer documented as a valid clause of CONNECT_DATA in 11g and 12c.

In 10gR2:
Use the parameter SID to identify the Oracle8 database instance by its Oracle System Identifier (SID). If the database is Oracle9i or Oracle8, use the SERVICE_NAME parameter rather than the SID parameter.

This is probably a documentation bug; I would rephrase it as "If the database is Oracle9i or Oracle8i or later".

In 11g and 12c, the comment disappeared. Oracle 8i was released last century, but SID didn't completely disappear from tnsnames. Yet.
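For illustration, the two styles of tnsnames.ora entries look like this (a sketch using the names from this post; dbhost and the port are placeholders):

# legacy style: CONNECT_DATA uses the SID (matches SID_NAME in listener.ora)
ORA001 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost)(PORT = 1521))
    (CONNECT_DATA = (SID = ORA001))
  )

# preferred style: CONNECT_DATA uses a SERVICE_NAME
SVC001 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = SVC001.EXAMPLE.COM))
  )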

Integrating Telstra SMS API with the Apigee Edge Service Broker for Pivotal Cloud Foundry (PCF)

Pas Apicella - Thu, 2016-06-30 07:35
Apigee and Pivotal partnered to provide comprehensive API management capabilities that expedite the scalable delivery of apps on the powerful Pivotal Cloud Foundry platform. Apigee Edge is available for rapid deployment as a partner service in the Pivotal Network.

The following link talks about this service in detail


In this blog post we walk through how we would use this service on Pivotal Cloud Foundry 1.7 to expose the Telstra SMS API.

1. First we have to deploy our application which provides access to the Public SMS Api from Telstra. This is deployed to Pivotal Cloud Foundry (PCF). The GitHub project is as follows.


2. Once deployed, the application provides two REST endpoints and also includes a Swagger UI

Note: This Telstra API only works for Australian-based mobile numbers, and you will need a https://dev.telstra.com/ account to invoke the free Telstra SMS Service. The API is explained in detail at this link: https://dev.telstra.com/content/sms-api-0

3. Now at this point we will need to add the "Apigee Edge Service Broker for PCF" tile to Pivotal Ops Manager. You can download it from the URL below and follow the instructions to install the tile onto Pivotal Ops Manager.


4. Once installed it will be shown as a tile on Pivotal Ops Manager as per the image below

5. To ensure it's part of our marketplace services we simply log into our PCF instance using the command line as shown below

OR from the Pivotal Apps Manager

6. Use the create-service command to create an instance of the Apigee Edge service broker as shown below

papicella@papicella:~/pivotal/services/apigee$ cf create-service apigee-edge org api-connectors-service -c api-connectors.json
Creating service instance api-connectors-service in org system / space pas as papicella@pivotal.io...

The api-connectors.json file referenced above contains:
{"org":"papicella", "env":"prod", "user":"papicella@pivotal.io", "pass":"yyyyyyy", "host": "apigee.net", "hostpattern": "${apigeeOrganization}-${apigeeEnvironment}.${proxyHost}"}

The JSON specifies the Apigee Edge details needed to route traffic:

  • org -- Organization of the Apigee Edge proxy through which requests should be routed. You'll find this value at the top of the Edge UI while looking at the Dashboard.
  • env -- Environment of the Apigee Edge proxy through which requests should be routed. You'll find this value at the top of the Edge UI while looking at the Dashboard.
  • user -- Username of an Edge user who has access to create proxies. This is the username you use to log into the Edge UI.
  • pass -- Password of an Edge user who has access to create proxies. The password you use to log into the Edge UI.
  • host -- Edge host name to which requests to your API proxies can be sent.
  • hostpattern -- Pattern for generating the API proxy URL. For example, #{apigeeOrganization}-#{apigeeEnvironment}.#{proxyHost} for cloud accounts.
7. Use the bind-route-service command to create an Edge API proxy and bind your Cloud Foundry application to the proxy. This tells the Go router to redirect requests to the Apigee Edge proxy before sending them to the Cloud Foundry application.

papicella@papicella:~/pivotal/services/apigee$ cf bind-route-service pcfdemo.net api-connectors-service --hostname apples-springboot-telstrasms
Binding route apples-springboot-telstrasms.pcfdemo.net to service instance api-connectors-service in org system / space pas as papicella@pivotal.io...

Note: The hostname is the name of your REST-based application.

8. With the service created it will then exist within the space as per the image below

9. Click on the service as shown below

10. Click on Manage as shown below

11. In the Apigee management console, under APIs > API proxies, locate the name of the proxy you just created as shown below

12. Click the PCF proxy's name to view its overview page.

13. Click the Trace tab, then click the Start Trace Session button.

14. Back at the command line, access the REST-based application endpoint for the Telstra SMS Service as shown below.

papicella@papicella:~/pivotal/services/apigee$ curl "http://apples-springboot-telstrasms.pcfdemo.net/telstra/sms?to=0411151350&appkey=apples-key&appsecret=apples-password"

15. Return to the Apigee Management Console to verify that the trace output shows a successful HTTP 200 call. The new proxy is just a pass-through, but it's ready for you or someone on your team to add policies to define security, traffic management, and more.

More Information

Pivotal Cloud Foundry and Apigee - http://apigee.com/about/solutions/pivotal-cloud-foundry-apigee
Pivotal Cloud Foundry - http://pivotal.io/platform

Categories: Fusion Middleware

Oracle Midlands : Event #16

Tim Hall - Thu, 2016-06-30 06:15

Don’t forget Oracle Midlands Event #16 next week!


It’ll be interesting to see what this new kid on the block Jonathan Lewis has to say for himself!

Please show your support and come along. It’s free thanks to the sponsorship by RedStackTech.




GoldenGate 12.2 – Installation of the monitoring agent

Yann Neuhaus - Thu, 2016-06-30 02:50

As described in my last post, GoldenGate Studio requires the monitor agent on each GoldenGate instance. The goal of this agent is to allow GoldenGate Studio to interact with GoldenGate, for example, to deploy a new solution.

So let's start with the installation of the agent.


The first step is to download the monitor agent (Oracle GoldenGate Monitor). It is available here.



The second step is to install the product.

After you have transferred the installer to the server, decompress it.

oracle@srvoracle:/tmp/ [DB1] cd monitor/
oracle@srvoracle:/tmp/monitor/ [DB1] ll
total 420092
-rw-r--r-- 1 oracle oinstall 430166267 Feb 29 13:32 fmw_12.
oracle@srvoracle:/tmp/monitor/ [DB1] unzip fmw_12. 
Archive:  fmw_12.
  inflating: fmw_12.  
oracle@srvoracle:/tmp/monitor/ [DB1] ll
total 840392
-rw-r--r-- 1 oracle oinstall 430166267 Feb 29 13:32 fmw_12.
-r-xr-xr-x 1 oracle oinstall 430387063 Oct 14 08:33 fmw_12.

For launching the installer it is mandatory to have at least Java version 8 (1.8). If this is not available, it can be downloaded here.

To start the installation, launch the fmw_12.

oracle@srvoracle:/home/oracle/Downloads/jdk1.8.0_73/bin/ [DB1] ./java -jar /tmp/monitor/fmw_12. 
Launcher log file is /tmp/OraInstall2016-02-29_01-39-26PM/launcher2016-02-29_01-39-26PM.log.
Extracting files.......
Starting Oracle Universal Installer

Checking if CPU speed is above 300 MHz.   Actual 2494.801 MHz    Passed
Checking monitor: must be configured to display at least 256 colors.   Actual 16777216    Passed
Checking swap space: must be greater than 512 MB.   Actual 4095 MB    Passed
Checking if this platform requires a 64-bit JVM.   Actual 64    Passed (64-bit not required)
Checking temp space: must be greater than 300 MB.   Actual 28817 MB    Passed

Preparing to launch the Oracle Universal Installer from /tmp/OraInstall2016-02-29_01-39-26PM
Log: /tmp/OraInstall2016-02-29_01-39-26PM/install2016-02-29_01-39-26PM.log
Logs successfully copied to /u01/app/oraInventory/logs.

The OUI (Oracle Universal Installer) will start. On the first screen just click on the next button.


On the next screen, we can choose the option for the updates. In my case, I leave the option to skip the auto updates.


Fill in the desired software location for the GoldenGate agent.


Select the option to install only the monitor agent.


The OUI will test the system configuration and the java version.


The OUI provides a summary of the configuration. Click on the next button if all is OK.


The installation is done.


At the end, the OUI provides a summary of the installation with the location of the logs.


The GoldenGate agent is now installed.

Configuration

Create instance

To create the instance of the agent, go where the binaries have been installed. In this example, it is /u01/app/oracle/product/jagent/oggmon/ogg_agent.

After that, launch the script createMonitorAgentInstance.sh.

oracle@srvoracle:/u01/app/oracle/product/jagent/oggmon/ogg_agent/ [DB1] ./createMonitorAgentInstance.sh 
Please enter absolute path of Oracle GoldenGate home directory : /u01/app/oracle/product/12.1.0/gg_1
Please enter absolute path of OGG Agent instance : /u01/app/oracle/product/
Please enter unique name to replace timestamp in startMonitorAgent script (startMonitorAgentInstance_20160229140552.sh) : 2
Sucessfully created OGG Agent instance.
Create password

The agent needs a password to work. All the passwords will be stored in a wallet. For this, go to ORACLE_HOME_AGENT/bin. In my case, /u01/app/oracle/product/

Launch the script pw_agent_util.sh.

oracle@srvoracle:/u01/app/oracle/product/ [DB1] ./pw_agent_util.sh -jagentonly
Please create a password for Java Agent: 
Please confirm password for Java Agent: 
Feb 29, 2016 2:18:55 PM oracle.security.jps.JpsStartup start
INFO: Jps initializing.
Feb 29, 2016 2:18:56 PM oracle.security.jps.JpsStartup start
INFO: Jps started.
Wallet is created successfully.
Enable monitoring

To enable the monitoring, launch ggsci and edit the GLOBALS parameter file.

oracle@srvoracle:/u01/app/oracle/product/ [DB1] ggi 

GGSCI (srvoracle) 2> edit params ./GLOBALS

GGSCI (srvoracle) 4> view params ./GLOBALS

ENABLEMONITORING
GGSCHEMA ggadmin
CHECKPOINTTABLE ggadmin.checkpoint

Now restart the ggsci command and the jagent appears when doing an “info all”.

oracle@srvoracle:/u01/app/oracle/product/ [DB1] ggi

GGSCI (srvoracle) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     STOPPED
JAGENT      STOPPED

To finalize, delete the datastore and recreate one.

oracle@srvoracle:/u01/app/oracle/product/ [DB1] ggi

GGSCI (srvoracle) 1> delete datastore
Are you sure you want to delete the datastore? yes

2016-02-29 14:33:30  INFO    OGG-06492  Datastore deleted.

GGSCI (srvoracle) 2> create datastore
Profile 'Trace' added.

2016-02-29 14:33:55  INFO    OGG-06489  Datastore created.

Now you can start the manager process and the jagent.


Now that the agents have been installed on each instance, all the prerequisite for GoldenGate Studio are met.

In the next blog, I will show you how to use GoldenGate Studio to deploy a solution.


Cet article GoldenGate 12.2 – Installation of the monitoring agent est apparu en premier sur Blog dbi services.

SQL Server 2016: Always Encrypted – part 2

Yann Neuhaus - Thu, 2016-06-30 02:39

In my last blog post about SQL Server 2016 Always Encrypted, here, I showed how to use this new functionality, but also that you have to separate the different execution contexts across an application server, a database server and a security server, to avoid the certificate being available to all users, which would break the segregation.

Let’s see how to build those environment.
In my security server named SQL2016-2, I first create a Self-signed certificate with PowerShell after having import the PKI module. The certificate is stored in the local machine with:

  • DnsName, DNS name of the certificate: Demo_AlwaysEncrypted.cert
  • CertStoreLocation, specifies the certificate store in which to store the new certificate: Cert:\LocalMachine\MY
  • Provider, specifies the name of the KSP or CSP that this cmdlet uses to create the certificate: Microsoft Strong Cryptographic Provider
  • KeyAlgorithm, specifies the name of the algorithm that creates the asymmetric keys that are associated with the new certificate: RSA
  • KeyLength, specifies the length, in bits, of the key that is associated with the new certificate: 2048
  • HashAlgorithm, specifies the name of the hash algorithm to use to sign the new certificate: sha256

PowerShell script:

Import-Module PKI
New-SelfSignedCertificate -DnsName "Demo_AlwaysEncrypted.cert" -CertStoreLocation Cert:\LocalMachine\MY -Provider "Microsoft Strong Cryptographic Provider" -KeyAlgorithm RSA -KeyLength 2048 -HashAlgorithm sha256


I have now my new certificate.
I open my SQL Server instance and navigate in my database SecurityDB to Security, Always Encrypted Keys and right click on Column Master Keys and select “New Column Master Key…”.
I create a new Column Master Key with the certificate I have just created on my local Machine and I name it CMK2:


The code to create my Master Key is the following; you can get it by clicking on the Script button in the above screen:

       CREATE COLUMN MASTER KEY [CMK2]
       WITH
       (
              KEY_STORE_PROVIDER_NAME = N'MSSQL_CERTIFICATE_STORE',
              KEY_PATH = N'LocalMachine/My/A3DF88FA53ED5347ABCF709D7A15621CDC358456'
       );

I now create a Column Encryption Key named CEK2 with the Column Master Key I have created before and named CMK2:


The code to create my Encryption Key is the following:

       CREATE COLUMN ENCRYPTION KEY [CEK2]
       WITH VALUES
       (
              COLUMN_MASTER_KEY = [CMK2],
              ALGORITHM = 'RSA_OAEP',
              ENCRYPTED_VALUE = 0x01700000016C006F00630061006C006D0061006300680069006E0065002F006D0079002F00610033006400660038003800660061003500330065006400350033003400370061006200630066003700300039006400370061003100350036003200310063006400630033003500380034003500360098721948A1283B0E5375B4E7F0A5F2EED05EDB0A87814BF66EABCBE2B4E75B5A759CF48D7D87D6A2C44A6DB2B587992E412EAF20D3FE826844FE76B1BC764E769C911486150E08EB972DBF406FF51738EFAA7F831A27A540F948CBC673273B1205A2CEF33AE77A0AF01DB3589A4A746151DC8C699DAB7AD1B99910A598EA7258670F1D5C0600704A9BB440F86761F41E8C72E52FFF6A7348B65F62E60D90B60CF1FC00B1B473F40E82B140219A317760CBC8591E07215AD114FAF80F08361C607823DB44037C455514B3789661560613638D058D7965846962B9FD3D9ECF0AC62325043AB40D892683744DC994E868C5F0E3FD14277D8941977426AC59DEA7EEF1394641DB314C2DCB083D3D5A12B97A131986D9834CBF2837908D3EF54C9104D92371C8A9EDC9B22555D8D615440D21B03393303FD4A9C0047282091178F31323A6EF4313576470349F32F106D38ABC3B0A1C4F47E870EA43F7B90D85D4F6C6686368E4E5EFC416EA08241F75F2D7B4F4B700114CA545AA4310829B2691A6D2C553F8F8B7E3983FC99F4E0A28BD79C84F50B7F6ABB17419DEAE06ACA694D68B00B48DBC0D66B46847A389C8A587F138CEF7791845A999EA45849D2FAA45BDA88FE134C70E592937981A7B2A18397CC615BB435A1EDDD66CF58119D89B24190744F4326718CBCDC1182AD20C8023EECB3994569844270DC90020B20D70B5CCC5
       );

I will now create those two keys on my SQL Server VM, named SQL2016-1, in order to encrypt my table columns with them. To do that, I will execute the scripts generated above against my SQL Server instance. I have created a second database named TestDB2 where I will execute my scripts:


After execution of the script I have my Column Master Key and my Column Encryption Key in the Security\Always Encrypted Keys path of my TestDB2 database.

I will now create the same table that I have on my TestDB database (see my previous blog post) in my new database TestDB2. For that I will execute the following script with the same encrypted columns (CustAccountNumber, CustAccountBalance, CustCreditCardNumber) but with my new Column Encryption Key CEK2:
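A sketch of such a script (the encryption type chosen for each column, DETERMINISTIC versus RANDOMIZED, is an assumption here; encrypted character columns must use a BIN2 collation):

CREATE TABLE dbo.Customer
(
    CustNb uniqueidentifier,
    CustName varchar(50),
    CustSurname varchar(50),
    CustAccountNumber varchar(50) COLLATE Latin1_General_BIN2
        ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = CEK2,
                        ENCRYPTION_TYPE = DETERMINISTIC,
                        ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'),
    CustAccountBalance int
        ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = CEK2,
                        ENCRYPTION_TYPE = RANDOMIZED,
                        ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'),
    CustCreditCardNumber varchar(50) COLLATE Latin1_General_BIN2
        ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = CEK2,
                        ENCRYPTION_TYPE = RANDOMIZED,
                        ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'),
    CustCountry varchar(200)
);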


My table is created with the specified columns encrypted.
I will now export my certificate from my security server to my application server with PowerShell.
First, I retrieve my certificate from the store and export it to a file with a password:

PowerShell script:

# I retrieve the Certificate from the store
$MyCert = Get-ChildItem Cert:\LocalMachine\My -DnsName "Demo_AlwaysEncrypted.cert"
# I create a password to protect my certificate
$pwd = ConvertTo-SecureString -String "Demo" -Force -AsPlainText
# I export my certificate as PFX Certificate Archive
Export-PfxCertificate -Cert $MyCert -FilePath C:\Temp\MyCert.pfx -Password $pwd


I’m now able to copy my certificate to my application server and import it:

PowerShell script:

# I import the certificate from my file
$pwd = ConvertTo-SecureString -String "Demo" -Force -AsPlainText
Import-PfxCertificate -CertStoreLocation Cert:\LocalMachine\My -FilePath C:\Temp\MyCert.pfx -Password $pwd


My certificate is now in the store of my application server, and at this point I will be able to insert and read plain text data from my application server. I will use a PowerShell script to insert data. I will not explain all of this script, but it's good to know that:

  • The connection string needs to enable Always Encrypted for the client application, here PowerShell, with this parameter: Column Encryption Setting=enabled
  • The application passes plain text data in SqlParameter objects to the driver. The driver will encrypt the values and send encrypted values to the SQL Server VM

PowerShell script:

$sqlConn = New-Object System.Data.SqlClient.SqlConnection
$sqlConn.ConnectionString = "Server=SQL2016-1\SQL2016_1;Integrated Security=true; Initial Catalog=TestDB2; Column Encryption Setting=enabled;"
$sqlcmd = New-Object System.Data.SqlClient.SqlCommand
$sqlcmd.Connection = $sqlConn
$sqlcmd.CommandText = "INSERT INTO dbo.Customer (CustNb, CustName, CustSurname, CustAccountNumber, CustAccountBalance, CustCreditCardNumber, CustCountry) VALUES (@CustNb, @CustName, @CustSurname, @CustAccountNumber, @CustAccountBalance, @CustCreditCardNumber, @CustCountry)"
$sqlcmd.Parameters.Add((New-Object Data.SqlClient.SqlParameter("@CustNb",[Data.SQLDBType]::uniqueidentifier)))
$sqlcmd.Parameters["@CustNb"].Value = [GUID]::NewGuid()
$sqlcmd.Parameters.Add((New-Object Data.SqlClient.SqlParameter("@CustName",[Data.SQLDBType]::VarChar,50)))
$sqlcmd.Parameters["@CustName"].Value = "Durand"
$sqlcmd.Parameters.Add((New-Object Data.SqlClient.SqlParameter("@CustSurname",[Data.SQLDBType]::VarChar,50)))
$sqlcmd.Parameters["@CustSurname"].Value = "Laurent"
$sqlcmd.Parameters.Add((New-Object Data.SqlClient.SqlParameter("@CustAccountNumber",[Data.SQLDBType]::VarChar,50)))
$sqlcmd.Parameters["@CustAccountNumber"].Value = "1234-234589"
$sqlcmd.Parameters.Add((New-Object Data.SqlClient.SqlParameter("@CustAccountBalance",[Data.SQLDBType]::Int)))
$sqlcmd.Parameters["@CustAccountBalance"].Value = 1256890
$sqlcmd.Parameters.Add((New-Object Data.SqlClient.SqlParameter("@CustCreditCardNumber",[Data.SQLDBType]::VarChar,50)))
$sqlcmd.Parameters["@CustCreditCardNumber"].Value = "1234-4245-4563-4564"
$sqlcmd.Parameters.Add((New-Object Data.SqlClient.SqlParameter("@CustCountry",[Data.SQLDBType]::VarChar,200)))
$sqlcmd.Parameters["@CustCountry"].Value = "France"


I have now inserted a row in my Customer table. If I try to do the same insert without the parameter Column Encryption Setting=enabled, I receive an error message:


Now, if I want to read my table from my application server I can execute a simple select * with this PowerShell script and my parameter Column Encryption Setting=enabled:

PowerShell script:

$sqlConn = New-Object System.Data.SqlClient.SqlConnection
$sqlConn.ConnectionString = "Server=SQL2016-1\SQL2016_1;Integrated Security=true; Initial Catalog=TestDB2; Column Encryption Setting=enabled;"
$sqlcmd = $sqlConn.CreateCommand()
$query = "SELECT * FROM dbo.customer"
$sqlcmd.CommandText = $query
$adp = New-Object System.Data.SqlClient.SqlDataAdapter $sqlcmd
$data = New-Object System.Data.DataSet
$adp.Fill($data) | Out-Null
$data.Tables[0]


I see plain text data. But without the parameter:


I’m able to read the table, but the encrypted columns appear as BLOBs and are not readable.
If I go back to my SQL Server VM, even with SYSADMIN rights and with the connection string parameter Column Encryption Setting=enabled, I am not allowed to query the table:


The error indicates that, on my SQL Server VM, I don't have access to the certificate used to encrypt the column master key, and so I'm not able to decrypt the encrypted columns.
If I don’t use the parameter Column Encryption Setting=enabled during my connection via SQL Server Management Studio and try to query my table:


I can see my table row(s), but the encrypted columns appear as BLOBs and so are protected from unexpected readers, which is exactly the goal of this new feature: preventing SQL Server DBAs or users with certain rights from viewing critical data.
In order to validate this security, the certificate must not be saved in the certificate store of the SQL Server VM.
That’s all Folks! ;-)


Cet article SQL Server 2016: Always Encrypted – part 2 est apparu en premier sur Blog dbi services.

EBS 12.2 Data Masking Template Certified with EM 13c

Steven Chan - Thu, 2016-06-30 02:05

(Contributing author:  Nirzari Raichura)

We're pleased to announce the certification of the E-Business Suite 12.2 Data Masking Template for the Data Masking Pack with Enterprise Manager Cloud Control 13c.

You can use the Oracle Data Masking Pack with Oracle Enterprise Manager Cloud Control 13c to mask sensitive data in cloned Oracle E-Business Suite environments. Due to data dependencies, scrambling E-Business Suite data is not a trivial task. The data needs to be scrubbed in a way that allows the application to continue to function.

You may scramble data in E-Business Suite 12.2 cloned environments with EM13c using the My Oracle Support Note and template.

What's New with Data Masking for EBS 12.2 and EM13c?

Online Patching with Oracle E-Business Suite 12.2 introduces the use of the Oracle Database feature Edition-Based Redefinition and editioning views.  The following updates occurred as part of this certification effort for compatibility with editioning views:

  • A new data masking template for Oracle E-Business Suite 12.2
  • New enhancements to the Data Masking Pack delivered with Oracle Enterprise Manager Cloud Control 13c
What does masking do in an Oracle E-Business Suite environment?

Based upon the knowledge of the Oracle E-Business Suite architecture and sensitive columns, application data masking does the following:

  • De-identifies the data:  Scramble identifiers of individuals, also known as personally identifiable information (PII).  Examples include information such as name, account, address, location, and driver's license number.
  • Masks sensitive data:  Mask data that, if associated with personally identifiable information (PII), would cause privacy concerns.  Examples include compensation, health and employment information.  
  • Maintains data validity:  Provide a fully functional application.


Categories: APPS Blogs

Interactive Grids (Apex 5.1 EA) and TAPIs

Jeff Kemp - Wed, 2016-06-29 22:37

DISCLAIMER: this article is based on Early Adopter 1.


I’ve finally got back to looking at my reference TAPI Apex application. I’ve greatly simplified it (e.g. removed the dependency on Logger, much as I wanted to keep it) and included one dependency (CSV_UTIL_PKG) to make it much simpler to install and try. The notice about compilation errors still applies: it is provided for information/entertainment purposes only and is not intended to be a fully working system. The online demo for Apex 5.0 has been updated accordingly.

I next turned my attention to the Apex 5.1 Early Adopter, in which the most exciting feature is the all-new Interactive Grid, which may replace IRs and tabular forms. I have installed my reference TAPI Apex application, and everything still works fine without changes.

I wanted my sample application to include both the old Tabular Forms as well as the new Interactive Grid, so I started by making copies of some of my old “Grid Edit” (tabular form) pages. You will find these under the “Venues” and “Event Types” menus in the sample application. I then converted the tabular form regions to Interactive Grids, and after some fiddling I found that I need to make a small change to my Apex API to suit them. The code I wrote for the tabular forms doesn’t work for IGs; in fact, the new code is simpler, e.g.:

PROCEDURE apply_ig (rv IN VENUES$TAPI.rvtype) IS
  r VENUES$TAPI.rowtype;
BEGIN
  CASE v('APEX$ROW_STATUS')
  WHEN 'I' THEN
    r := VENUES$TAPI.ins (rv => rv);
    sv('VENUE_ID', r.venue_id);
  WHEN 'U' THEN
    r := VENUES$TAPI.upd (rv => rv);
  WHEN 'D' THEN
    VENUES$TAPI.del (rv => rv);
  END CASE;
END apply_ig;

You may notice a few things here:

(1) APEX$ROW_STATUS for inserted rows is ‘I’ instead of ‘C’; also, it is set to ‘D’ for deleted rows (unlike under tabular forms, where it wasn’t set at all for deleted rows).

(2) After inserting a new record, the session state for the Primary Key column(s) must be set if the insert might have set them – including if the “Primary Key” in the region is ROWID. Otherwise, Apex 5.1 raises No Data Found when it tries to retrieve the new row.

(3) I did not have to make any changes to my TAPI at all :)

Here’s the example from my Event Types table, which doesn’t have a surrogate key, so we use ROWID instead:

PROCEDURE apply_ig (rv IN EVENT_TYPES$TAPI.rvtype) IS
  r EVENT_TYPES$TAPI.rowtype;
BEGIN
  CASE v('APEX$ROW_STATUS')
  WHEN 'I' THEN
    r := EVENT_TYPES$TAPI.ins (rv => rv);
    sv('ROWID', r.p_rowid);
  WHEN 'U' THEN
    r := EVENT_TYPES$TAPI.upd (rv => rv);
  WHEN 'D' THEN
    EVENT_TYPES$TAPI.del (rv => rv);
  END CASE;
END apply_ig;

Converting Tabular Form to Interactive Grid

The steps needed to convert a Tabular Form based on my Apex API / TAPI system are relatively straightforward, and only needed a small change to my Apex API.

  1. Select the Tabular Form region
  2. Change Type from “Tabular Form [Legacy]” to “Interactive Grid”
  3. Delete any Region Buttons that were associated with the Tabular form, such as CANCEL, MULTI_ROW_DELETE, SUBMIT, ADD
  4. Set the Page attribute Advanced > Reload on Submit = “Only for Success”
  5. Under region Attributes, set Edit > Enabled to “Yes”
  6. Set Edit > Lost Update Type = “Row Version Column”
  7. Set Edit > Row Version Column = “VERSION_ID”
  8. Set Edit > Add Row If Empty = “No”
  9. If your query already included ROWID, you will need to remove this (as the IG includes the ROWID automatically).
  10. If the table has a Surrogate Key, set the following attributes on the surrogate key column:
    Identification > Type = “Hidden”
    Source > Primary Key = “Yes”
  11. Also, if the table has a Surrogate Key, delete the generated ROWID column. Otherwise, leave it (it will be treated as the Primary Key by both the Interactive Grid as well as the TAPI).
  12. Set any columns Type = “Hidden” where appropriate (e.g. for Surrogate Key columns and VERSION_ID).
  13. Under Validating, create a Validation:
    Editable Region = (your interactive grid region)
    Type = “PL/SQL Function (returning Error Text)”
    PL/SQL = (copy the suggested code from the generated Apex API package) e.g.

        RETURN VENUES$TAPI.val (rv =>
          VENUES$TAPI.rvtype
            (venue_id     => :VENUE_ID
            ,name         => :NAME
            ,map_position => :MAP_POSITION
            ,version_id   => :VERSION_ID
            ));
  14. Under Processing, edit the automatically generated “Save Interactive Grid Data” process:
    Target Type = PL/SQL Code
    PL/SQL = (copy the suggested code from the generated Apex API package) e.g.

        VENUES$APEX.apply_ig (rv =>
          VENUES$TAPI.rvtype
            (venue_id     => :VENUE_ID
            ,name         => :NAME
            ,map_position => :MAP_POSITION
            ,version_id   => :VERSION_ID
            ));

I like how the new Interactive Grid provides all the extra knobs and dials needed to interface cleanly with an existing TAPI implementation. For example, you can control whether it will attempt to Lock each Row for editing, and it even allows you to supply Custom PL/SQL to implement the locking. Note that the lock is still only taken when the page is submitted (unlike Oracle Forms, which locks the record as soon as the user starts editing it), which is why we need to prevent lost updates:

Preventing Lost Updates

The Interactive Grid allows the developer to choose the type of Lost Update protection (Row Values or Row Version Column). The help text for this attribute should be required reading for any database developer. In my case, I might choose to turn this off (by setting Prevent Lost Updates = “No” in the Save Interactive Grid Data process) since my TAPI already does this; in my testing, however, it didn’t hurt to include it.

Other little bits and pieces

I found it interesting that the converted Interactive Grid includes some extra columns automatically: APEX$ROW_SELECTOR (Type = Row Selector), APEX$ROW_ACTION (Type = Actions Menu), and ROWID. These give greater control over what gets included, and you can delete these if they are not required.

Another little gem is the new Column attribute Heading > Alternative Label: “Enter the alternative label to use in dialogs and in the Single Row View. Use an alternative label when the heading contains extra formatting, such as HTML tags, which do not display properly.”


If you’d like to play with a working version of the reference application, it’s here (at least, until the EA is refreshed) (login as demo / demo):


I’ve checked in an export of this application to the bitbucket repository (f9674_ea1.sql).

Filed under: APEX Tagged: APEX, apex-5.1, interactive-grid, TAPI

Transpose the data in oracle

Tom Kyte - Wed, 2016-06-29 18:26
Hi Tom, I have a table (TABLE1): CREATE TABLE TABLE1 ( STDNO NUMBER(10), SUBJ VARCHAR2(5), MARKS NUMBER(10) ); with this data in it: INSERT INTO TABLE1 VALUES (1, 'A', 50); INSERT INTO TABLE1 VALUES (1, '...
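A typical answer (a sketch; the subject values 'A', 'B', 'C' are assumed, since the inserts in the excerpt are truncated) uses conditional aggregation to turn rows into columns:

SELECT stdno,
       MAX(CASE WHEN subj = 'A' THEN marks END) AS marks_a,
       MAX(CASE WHEN subj = 'B' THEN marks END) AS marks_b,
       MAX(CASE WHEN subj = 'C' THEN marks END) AS marks_c
FROM   table1
GROUP  BY stdno;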
Categories: DBA Blogs

Transposing the data from rows to columns and to split the columns into new line during particular data intervals

Tom Kyte - Wed, 2016-06-29 18:26
Hi Tom, Can you suggest me the solution for transposing the data from rows to columns, and to split the data into new column during particular interval. Here is the sample tables and data that i am working on and the desired output as well: ...
Categories: DBA Blogs

last blog for a while

Pat Shuff - Wed, 2016-06-29 17:02
This will be my last blog entry for a while. We have been going through the different database options for Platform as a Service. My intent was to go through all of the options and look at each of them individually. Unfortunately, I am in the process of changing groups. My hope is to restart this blog on blogs.oracle.com/iaas and blogs.oracle.com/paas. I am in the process of changing jobs within Oracle, and the process is taking longer than expected. I have had fun on this journey that started back in April. Hopefully everyone has enjoyed the trip as well. My Oracle email will probably be turned off after June 30th, so if you need to get in touch with me use the same format for email but at gmail.com. My hope is to resume this blog at the other two locations starting in August. pat

Quarterly EBS Upgrade Recommendations: June 2016 Edition

Steven Chan - Wed, 2016-06-29 14:00

We've previously provided advice on the general priorities for applying EBS updates and creating a comprehensive maintenance strategy.   

Here are our latest upgrade recommendations for E-Business Suite updates and technology stack components. These quarterly recommendations are based upon the latest updates to Oracle's product strategies, latest support timelines, and newly-certified releases.

You can research these yourself using this Note:

Upgrade Recommendations for June 2016

  1. EBS 11i users should upgrade to 12.1.3 or 12.2.  Before upgrading, 11i users should be on the minimum 11i patching baseline.

  2. EBS 12.0 users should upgrade to 12.1.3 or 12.2.  Before upgrading, 12.0 users should be on the minimum 12.0 patching baseline.

  3. EBS 12.1 users should upgrade to 12.1.3 RPC4 or 12.2.  Before upgrading, 12.1 users should be on the minimum 12.1 patching baseline.

  4. EBS 12.1 users should switch their PKI certificates to SHA-2.

  5. EBS 12.2 users should upgrade to EBS 12.2.5, the latest certified Database release, the April 2016 AD tools, StartCD 50, and the latest certified FMW release.

  6. EBS 11i and 12.1 users should upgrade to the latest certified Database release.

  7. EBS 11i, 12.0, 12.1, and 12.2 customers should switch from Secure Socket Layer (SSL) to Transport Layer Security (TLS).

  8. EBS 11i, 12.0, 12.1, and 12.2 users must sign their environment's JAR files now.

  9. EBS 11i, 12.0, 12.1, 12.2 users should apply the April 2016 Critical Patch Update.

  10. Oracle Single Sign-On 10g users should migrate to OAM 11gR2 Patchset 3

  11. Oracle Internet Directory users should upgrade to Oracle Internet Directory 11g

  12. Oracle Discoverer users should migrate to Oracle Business Intelligence Enterprise Edition (OBIEE), Oracle Business Intelligence Applications (OBIA), or Discoverer 11g

  13. Oracle Portal 10g users should migrate to Oracle WebCenter 11g or upgrade to Portal 11g

  14. Firefox users should upgrade to Firefox Extended Support Release 45.

  15. Windows desktop users should migrate from older Java releases (including JInitiator) to JRE 1.8.0_91/92 (or later JRE 1.8 updates), JRE 1.7.0_101 (or later 1.7 updates), or JRE 1.6.0_115 (or later 1.6 updates).

  16. Windows XP and Office 2003 users should upgrade to later versions.

  17. Windows Vista users should upgrade to a later version of Windows.

  18. Windows Internet Explorer users should upgrade to IE 11.

  19. EBS customers on Exalogic and Exadata should follow the latest recommendations.

Categories: APPS Blogs

passing variables in sqlplus scripts

Learn DB Concepts with me... - Wed, 2016-06-29 13:14

ACCEPT par1 prompt "ENTER PARAMETER #1: "

ACCEPT par2 prompt "ENTER PARAMETER #2: "

execute pkg_TEST_VARIABLES.TEST_PASS_VARIABLES ( &&par1, &&par2);
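The package itself isn't shown in the post; a minimal stand-in that matches the call above (assuming numeric parameters) could look like this:

CREATE OR REPLACE PACKAGE pkg_TEST_VARIABLES AS
  PROCEDURE TEST_PASS_VARIABLES (p_par1 IN NUMBER, p_par2 IN NUMBER);
END pkg_TEST_VARIABLES;
/

CREATE OR REPLACE PACKAGE BODY pkg_TEST_VARIABLES AS
  PROCEDURE TEST_PASS_VARIABLES (p_par1 IN NUMBER, p_par2 IN NUMBER) IS
  BEGIN
    -- echo the values received from the ACCEPT prompts
    DBMS_OUTPUT.put_line('par1 = ' || p_par1 || ', par2 = ' || p_par2);
  END TEST_PASS_VARIABLES;
END pkg_TEST_VARIABLES;
/

If the parameters are strings rather than numbers, quote the substitution variables in the call: execute pkg_TEST_VARIABLES.TEST_PASS_VARIABLES ('&&par1', '&&par2');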
Categories: DBA Blogs

Why in Oracle BPM/SOA Suite Attaching a Fault Policy to a Synchronous Service Is Not a Good Idea

Jan Kettenis - Wed, 2016-06-29 11:50
In this posting I will explain why in the Oracle BPM/SOA Suite you should not attach fault policies to synchronous services.

The other day I investigated some BPM process instance that had an unrecoverable error. It was calling a synchronous service, that on its turn was calling another synchronous service, that on its turn was calling a external, synchronous service exposed through the Oracle Service Bus. That latter call failed (due to a timeout). As all the composites were having a fault policy attached to them, it was expected that the instance was recoverable. Instead there was some JTA transaction error that rolled back all the way up to last dehydration point in the (top level) BPM process, and from there was retried two times before it finally gave up and went into a coma. Big surprise!

What I recommended to them to prevent this in the future, was the following:

  1. For all synchronous services detach the fault policy. Only attach fault policies to asynchronous and fire&forget services,
  2. Where possible, do asynchronous calls from the BPM process (instead of synchronous), 
  3. Wherever possible make all synchronous services idempotent or make them asynchronous / fire&forget.

If you want to understand why, read on!

Let's assume we have the following chain of services:

I created a FailingService that I can let succeed or fail depending on the input.

Now what happens when BPMProcess calls ServicePS with request 'fail' is the following:

  1. ServiceA errors because of the fault thrown by FailingService
  2. BPMProcess and ServicePS both error because of a timeout
  3. Because of fault policy attached to all them, all go into recoverable state (human intervention)

As ServiceA is in a recoverable state, why not try to recover and see if that will fix the flow?

Now what happens when a retry is done from ServiceA with payload 'normal' is the following:

  1. ServiceA completes successfully
  2. However, BPMProcess & ServicePS are still in recoverable state

The explanation for this is that, because ServicePS was already in a recoverable state, it will not receive the response from ServiceA, as it is no longer listening.

Now let's see what happens when we try to recover the instance of ProcessPS and change the payload from 'fail' to 'normal'

  1. Although the payload was changed to 'normal', we still end up with a new errored instance of ServiceA, as the request of the call from A to FailingService did not change with it (i.e. still 'fail')

To be completely honest, I do not (yet) completely understand why ServiceA would still call the FailingService with payload 'fail', but obviously this also is not the way to fix the flow. Even if the call would have been successful, the FailingService would have been called twice, and let's just hope it is idempotent!

To prevent we get more of these duplicate calls, we recover the (top level) BPM instance instead.

Now what happens when a retry is done from BPMProces with payload 'normal' is the following:

  1. There are new (successful) calls to ServicePS -> ServiceA -> FailingService
  2. However, there are still running instances of ServicePS and ServiceA (they are still in a recoverable state)

The explanation being that these instances were still running after the previous (failed) attempt. So now we still have to abort these running instances to prevent duplicate calls. All in all not very convenient.

The solution is to never let an asynchronous service use a fault policy that either initiates human intervention or does 1 or more retries. The point being that in the meantime the consumer will have timed out, and never receive the response even it succeeds later on.

The best layer to handle errors with synchronous services is a layer that has 'knowledge' about the context of the process. Normally that is the business process itself. The reason being that a policy that fits one process may not fit another.

On the other hand, there are some good arguments for not letting system errors bubble all the way up to a business process. Instead you should consider handling it in the next layer below it - in this case being ServicePS - by making all calls from the business process to ServicePS either asynchronous, or fire&forget (the latter when successful continuation of the process is not depending upon the call). ServicePS will then handles the error using fault policies. You have two options to recover:

  • You recover the instance of ProcessPS, or (when that fails for whatever reason) 
  • Abort the instance of ProcessPS, and do an alter flow on the business process by moving the token from the Receive back to the Send activity. 

As a matter of fact, this customer actually created this ServicePS as a process-specific layer that sits in between the business process and any other service. A similar layer may not be feasible in your case, in which the solution would be to let the error bubble all the way up to the process instance and handle it there (using fault policies).

select grants on tables to users or roles

Learn DB Concepts with me... - Wed, 2016-06-29 11:28

select grants on tables to users or roles

select grantee, owner, table_name, privilege
from dba_tab_privs
where table_name = 'TABLENAME';

select grants on table to a user thru role or direct priv

select Grantee,'Granted Through Role' as Grant_Type, role, table_name
from role_tab_privs rtp, dba_role_privs drp
where rtp.role = drp.granted_role
and table_name = 'TABLENAME'
union
select Grantee,'Direct Grant' as Grant_type, null as role, table_name
from dba_tab_privs
where table_name = 'TABLENAME' ;
Categories: DBA Blogs

Remote DBA Benefits Comparison Series - Part 2 - Service Quality and Flexibility

Chris Foot - Wed, 2016-06-29 10:21

In part 1, we talked about the cost reduction benefits that remote DBA service providers offer customers. In part 2 of the Remote DBA Benefits Comparison Series, we’ll discuss the continuity and quality of service and support flexibility benefits customers experience when utilizing a remote DBA service provider. We’ll also compare these benefits to in-house and consulting alternatives.

