Feed aggregator

More Fine-Grained 'ALTER USER' Privilege

Tom Kyte - Mon, 2017-06-19 15:26
I am currently looking for a more fine-grained approach to user management within an Oracle 11g Release 2 (soon to be Database: There is an idea to give some users the permission to manage some aspects of a user account such as: -...
Categories: DBA Blogs

Oracle 12c Index unusable after partition drop

Tom Kyte - Mon, 2017-06-19 15:26
In Oracle 11g, if a global index was maintained during execution of the DROP partition command, the command could take hours to complete. Oracle said that in 12c the DROP partition command executes immediately and the index remains usable. I had a database table in Or...
Categories: DBA Blogs

Security Alert CVE-2017-3629 Released

Oracle Security Team - Mon, 2017-06-19 15:21

Oracle just released Security Alert CVE-2017-3629 to address three vulnerabilities affecting Oracle Solaris:

- Vulnerability CVE-2017-3629 affects Oracle Solaris version 10 and version 11.3 and has a CVSS Base Score of 7.8.
- CVE-2017-3630 affects Oracle Solaris version 10 and version 11.3 and has a CVSS Base Score of 5.3.
- CVE-2017-3631 only affects Oracle Solaris 11.3 and has a CVSS Base Score of 5.3.

Oracle recommends affected Oracle Solaris customers apply the fixes released with this Security Alert as soon as possible.

For More Information:
The Advisory for Security Alert CVE-2017-3629 is located at http://www.oracle.com/technetwork/security-advisory/alert-cve-2017-3629-3757403.html

Installing SQLDeveloper and SQLCL on CentOS

The Anti-Kyte - Mon, 2017-06-19 14:02

As is becoming usual in the UK, the nation has been left somewhat confused in the aftermath of yet another “epoch-defining” vote.
In this case, we’ve just had a General Election campaign in which Brexit – Britain’s Exit from the EU – played a vanishingly small part. However, the result is now being interpreted as a judgement on the sort of Brexit that is demanded by the Great British Public.
It doesn’t help that, beyond prefixing the word “Brexit” with an adjective, there’s not much detail on the options that each term represents.
Up until now, we’ve had “Soft Brexit” and “Hard Brexit”, which could describe the future relationship with the EU but equally could be how you prefer your pillows.
Suddenly we’re getting Open Brexit and even Red-White-and-Blue Brexit.
It looks like the latest craze sweeping the nation is Brexit Bingo.
This involves drawing up a list of adjectives and ticking them off as they get used as a prefix for the word “Brexit”.
As an example, we could use the names of the Seven Dwarfs. After all, no-one wants a Dopey Brexit, ideally we’d like a Happy Brexit but realistically, we’re likely to end up with a Grumpy Brexit.

To take my mind off all of this wacky word-play, I’ve been playing around with CentOS again. What I’m going to cover here is how to install Oracle’s database development tools and persuade them to talk to a locally installed Express Edition database.

Specifically, I’ll be looking at :

  • Installing the appropriate Java Developer Kit (JDK)
  • Installing and configuring SQLDeveloper
  • Installing SQLCL

Sounds like a Chocolate Brexit with sprinkles ? OK then…


I’m running on CentOS 7 (64-bit), using the default Gnome 3 desktop.
CentOS is part of the Red Hat family of Linux distros which includes Red Hat, Fedora and Oracle Linux. If you’re running on one of these distros, or on something that’s based on one of them then these instructions should work pretty much unaltered.
If, on the other hand, you’re running a Debian-based distro (e.g. Ubuntu, Mint etc.) then you’ll probably find instructions written for those distros rather more useful.

I’ve also got Oracle Database 11gR2 Express Edition installed locally. Should you feel so inclined, you can perform that install on CentOS using these instructions.

One other point to note, I haven’t bothered with any Oracle database client software on this particular machine.

Both SQLDeveloper and SQLCL require Java so…

Installing the JDK

To start with, we’ll need to download the JDK version that SQLDeveloper needs to run against. At the time of writing (SQLDeveloper 4.2), this is Java 8.

So, we need to head over to the Java download page
… and download the appropriate rpm package. In our case :


Once the file has been downloaded, open the containing directory in Files, right-click our new rpm and open it with Software Install :

Now press the install button.

Once it’s all finished, you need to make a note of the directory that the jdk has been installed into as we’ll need to point SQLDeveloper at it. In my case, the directory is :


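If you prefer the command line to Software Install, the rpm can also be installed with yum. A minimal sketch, assuming a typical Java 8 package name (the filename and version here are assumptions, so adjust them to whatever you actually downloaded):

```shell
# Install the downloaded rpm (hypothetical filename - match yours)
sudo yum -y localinstall ~/Downloads/jdk-8u131-linux-x64.rpm

# Oracle's JDK rpm installs under /usr/java - note the path for later
ls -d /usr/java/jdk1.8.0_*
```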
Speaking of SQLDeveloper…


Head over to the SQLDeveloper Download Page and get the latest version. We’re looking for the ??? option. In my case :


While we’re here, we may as well get the latest SQLCL version as well. The download for this is a single file as it’s platform independent.

Once again, we can take advantage of the fact that Oracle provides us with an rpm file by right-clicking it in Files and opening with Software Install.

Press the install button and wait for a bit…

Once the installation is complete, we need to configure SQLDeveloper to point to the JDK we’ve installed. To do this, we need to run :

sh /opt/sqldeveloper/sqldeveloper.sh

…and provide the jdk path when prompted, in this case :


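Incidentally, the path you type at the prompt gets saved to SQLDeveloper’s product.conf under your home directory, so you can also seed it by hand. A sketch, assuming SQLDeveloper 4.2.0 and the JDK path from earlier (both version numbers are assumptions, so match your installed versions):

```shell
# product.conf lives under ~/.sqldeveloper/<version>/ - create the directory if needed
mkdir -p ~/.sqldeveloper/4.2.0

# SetJavaHome tells SQLDeveloper which JDK to use (hypothetical JDK path)
echo 'SetJavaHome /usr/java/jdk1.8.0_131' >> ~/.sqldeveloper/4.2.0/product.conf
```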
The end result should look something like this :

In my case I have no previous install to import preferences from so I’ll hit the No button.

Once SQLDeveloper opens, you’ll want to create a connection to your database.

To do this, go to the File Menu and select New/Connection.

To connect as SYSTEM to my local XE database I created a connection that looks like this :

Once you’ve entered the connection details, you can hit Test to confirm that all is in order and you can actually connect to the database.
Provided all is well, hit Save and the Connection will appear in the Tree on the left side of the tool from this point forward.

One final point to note, as part of the installation, a menu item for SQLDeveloper is created in the Programming Menu. Once you’ve done the JDK configuration, you can start the tool using this menu option.


As previously noted, SQLCL is a zip file rather than an rpm, so the installation process is slightly different.
As with SQLDeveloper, I want to install SQLCL in /opt .
To do this, I’m going to need to use sudo so I have write privileges to /opt.

To start with then, open a Terminal and then start Files as sudo for the directory that holds the zip. So, if the directory is $HOME/Downloads …

sudo nautilus $HOME/Downloads

In Files, right-click the zip file and select Open With Archive Manager.

Click the Extract Button and extract to /opt

You should now have a sqlcl directory under /opt.
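For the terminally inclined, the whole extraction can be done without the sudo nautilus step. A sketch, assuming the zip landed in $HOME/Downloads:

```shell
# Extract SQLCL into /opt (root needed for write access to /opt)
sudo unzip -q ~/Downloads/sqlcl-*.zip -d /opt

# Confirm the launcher is in place
ls -l /opt/sqlcl/bin/sql
```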

To start sqlcl, run

/opt/sqlcl/bin/sql /nolog

…and you should be rewarded with…

There, hopefully that’s all gone as expected and you’ve not been left with a Sneezy Brexit.

Filed under: Linux, Oracle, SQLDeveloper Tagged: jdk, sqlcl, SQLDeveloper, sudo nautilus

Oracle's Security Fixing Practices

Oracle Security Team - Mon, 2017-06-19 13:53

In a previous blog entry, we discussed how Oracle customers should take advantage of Oracle's ongoing security assurance effort in order to help preserve their security posture over time. In today's blog entry, we're going to discuss the highlights of Oracle's security fixing practices and their implications for Oracle customers.

As stated in the previous blog entry, the Critical Patch Update program is Oracle's primary mechanism for the delivery of security fixes in all supported Oracle product releases and the Security Alert program provides for the release of fixes for severe vulnerabilities outside of the normal Critical Patch Update schedule. Oracle always recommends that customers remain on actively-supported versions and apply the security fixes provided by Critical Patch Updates and Security Alerts as soon as possible.

So, how does Oracle decide to provide security fixes? Where does the company start (i.e., for what product versions do security fixes get first generated)? What goes into security releases? What are Oracle's objectives?

The primary objective of Oracle's security fixing policies is to help preserve the security posture of ALL Oracle customers. This means that Oracle tries to fix vulnerabilities in severity order for each Oracle product family. In certain instances, security fixes cannot be backported; in other instances, lower severity fixes are required because of dependencies among security fixes. Additionally, Oracle treats customers equally by providing customers with the same vulnerability information and access to fixes across actively-used platform and version combinations at the same time. Oracle does not provide additional information about the specifics of vulnerabilities beyond what is provided in the Critical Patch Update (or Security Alert) advisory and pre-release note, the pre-installation notes, the readme files, and FAQs. The only and narrow exception to this practice is for the customers who report a security vulnerability. When a customer is reporting a security vulnerability, Oracle will treat the customer in much the same way the company treats security researchers: the customer gets detailed information about the vulnerability as well as information about expected fixing date, and in some instances access to a temporary patch to test the effectiveness of a given fix. However, the scope of the information shared between Oracle and the customer is limited to the original vulnerability being reported by the customer.

Another objective for Oracle's security fixing policies is not so much about producing fixes as quickly as possible, as it is about making sure that these fixes get applied by customers as quickly as possible. Prior to 2005 and the introduction of the Critical Patch Update program, security fixes were published by Oracle as they were produced by development, without any fixed schedule (as Oracle would today release a Security Alert). Feedback we received was that this lack of predictability was challenging for customers, and as a result, many customers reported that they no longer applied fixes. Customers said that a predictable schedule would help them ensure that security fixes were picked up more quickly and consistently. As a result, Oracle created the Critical Patch Update program to bring predictability to Oracle customers. Since 2005, and in spite of a growing number of product families, Oracle has never missed a Critical Patch Update release.

It is also worth noting that Critical Patch Update releases for most Oracle products are cumulative. This means that by applying a Critical Patch Update, a customer gets all the security fixes included in a specific Critical Patch Update release as well as all the previously-released fixes for a given product-version combination. This allows customers who may have missed Critical Patch Update releases to quickly "catch up" to current security releases.

Let's now have a look at the order with which Oracle produces fixes for security vulnerabilities. Security fixes are produced by Oracle in the following order:

  • Main code line. The main code line is the code line for the next major release version of the product.
  • Patch set for non-terminal release version. Patch sets are rollup patches for major release versions. A terminal release version is a version for which no additional patch sets are planned.
  • Critical Patch Update. These are fixes against initial release versions or their subsequent patch sets.

This means that, in certain instances, security fixes can be backported for inclusion in future patch sets or products that are released before their actual inclusion in a future Critical Patch Update release. This also means that systems updated with patch sets or upgraded with a new product release will receive the security fixes previously included in the patch set or release.

One consequence of Oracle's practices is that newer Oracle product versions tend to provide an improved security posture over previous versions, because they benefit from the inclusion of security fixes that have not been or cannot be backported by Oracle.

In conclusion, the best way for Oracle customers to fully leverage Oracle's ongoing security assurance effort is to:

  1. Remain on actively supported release versions and their most recent patch set—so that they can have continued access to security fixes;
  2. Move to the most recent release version of a product—so that they benefit from fixes that cannot be backported and other security enhancements introduced in the code line over time;
  3. Promptly apply Critical Patch Updates and Security Alert fixes—so that they prevent the exploitation of vulnerabilities patched by Oracle, which are known by malicious attackers and can be quickly weaponized after the release of Oracle fixes.

For more information:
- Oracle Software Security Assurance website
- Security Alerts and Critical Patch Updates

Real ODA “one button patching”, to

Geert De Paep - Mon, 2017-06-19 13:46

Today I had to patch a virtualized Oracle Database Appliance X5-2 HA at a customer site. The current version, installed in June 2016, had to be upgraded. I've heard a lot of scary stories from colleagues and friends about possible issues during patching, so I was a little reluctant to do this. Anyway, no choice; what needs to be done, needs to be done.

I had one big advantage: this ODA was installed by myself and I was 100% sure that nobody had ever done anything ‘unsupported’ with it. During the installation in June 2016, I took great care to follow the books accurately and to ensure that nothing went wrong. This was a really clean install without any issues, now in production for 1 year.

Now one year later I have to say that I am very happy with “my ODA”. The patching today went smoothly. Not a single issue was encountered during the patching. Actually I was a little surprised because, as I said, I had heard about many issues others hit during patching.

More specifically I did the following:

I followed MOS: README for patch 25499210

Luckily this version can be directly upgraded (lower versions need to go in two or more steps).


I did the Server part “locally”, i.e. one node at a time, and hence patched twice. Although I had downtime for the full ODA, I preferred to do it node by node. If something went completely wrong on node 1, I would still have a valid node 2.

Warnings and tips:

  • Ensure each node (oda_base) has more than 15 GB free on /u01
  • Verify that rpm orclovn-user-6.0.r7494-1.el5 exists in Dom0 on each node (cf. one of those issues – someone I know had this rpm missing in Dom0…)
  • Run the patching one node at a time, not both together
  • It is best to stop any VMs manually before starting the patching (but at the end you will have to start them manually again as well)
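The first two checks above lend themselves to a quick scripted pre-check; a sketch (the rpm name is the one quoted in the tip):

```shell
# On each oda_base node: confirm more than 15 GB free on /u01
df -h /u01

# In Dom0 on each node: confirm the ovn rpm is present
rpm -q orclovn-user-6.0.r7494-1.el5
```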

This command performs the patching:

/opt/oracle/oak/bin/oakcli update -patch --server --local

You may be interested in the timings, if you need to do this as well and have to plan downtime:

These are the current and proposed versions (I think the number of components to be patched will influence the time it takes):

Component Name            Installed Version         Proposed Patch Version
---------------           ------------------        -----------------
Controller_INT            4.230.40-3739             Up-to-date
Expander                  0018                      Up-to-date
SSD_SHARED {
[ c1d20,c1d21,c1d22,      A29A                      Up-to-date
c1d23 ]
[ c1d16,c1d17,c1d18,      A29A                      Up-to-date
c1d19 ]}
HDD_LOCAL                 A720                      A72A
HDD_SHARED                P901                      PAG1
ILOM                      r101649                   r114580
BIOS                      30050100                  30100400
IPMI                                                Up-to-date
OL                        6.7                       6.8
OVM                       3.2.9                     3.2.11
                          (21948354,21948344)       (24732082,24828633)
                          (21948354,21948344)       (24732082,24828633)

Each node took about 75 minutes to do the patching. However at the end a reboot is triggered and you should allow about 15 minutes extra before everything is up again and you can verify that all went well. But good to know: while node 1 is patched, node 2 remains fully available, and vice versa.

Shared storage

After this, the shared storage part needs to be patched using

/opt/oracle/oak/bin/oakcli update -patch --storage

You run this command only on node 1.

This took 24 minutes, but at the end both nodes reboot (both at the same time, so this is not rolling and you have database unavailability, even with RAC). So calculate some extra time here.

Database homes

Finally the database homes (I have 4 of them) needed to be patched. In fact only 3 of the homes needed patching in my case, because we first want to validate the patch before patching production (the 4th home). On node 1 you can do:

/opt/oracle/oak/bin/oakcli update -patch --database

Note, I didn’t use the “--local” option in this case, so the Oracle homes on both nodes are patched in one command.


  • I don’t know if it is required, but it is safest to set this first (again, one of those issues others hit):
    • export NLS_LANG=American_America.US7ASCII
  • You don’t have to stop the databases in advance. This will be done, one node at a time, while patching.
  • “datapatch” will be run automatically during the patching. So after the patching all databases are open again and no additional steps are required.

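Putting those tips together, the database home patching step looks something like this (the NLS_LANG export is the precaution mentioned above, not something I can confirm is strictly required):

```shell
# Precautionary setting before patching (reported to avoid datapatch issues)
export NLS_LANG=American_America.US7ASCII

# Patch the database homes on both nodes in one command (no --local here)
/opt/oracle/oak/bin/oakcli update -patch --database
```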
After starting the command, a list of homes to be patched is displayed and you can choose whether to patch them all together. In my case I answered N and then I could choose which homes NOT(!) to patch. Yes, it is somewhat strange: instead of asking which homes I want to patch, I had to enter the opposite:

Would you like to patch all the above homes: Y | N ? : N
Please enter the comma separated list of Database Home names that you do NOT want to patch:OraDb12102_home4,OraDb12102_home1,OraDb12102_home2

In the meantime I filed an enhancement request with Oracle Support to add a confirmation after entering these homes. Because at the moment, after pressing RETURN, the script immediately starts patching. I don’t want to know what will happen if you have a typo in your list of home names. It might be that the script will patch the wrong home in that case… So be careful.

I noticed that datapatch was automatically run for each database, and I could verify this with:

SELECT patch_id, version, status, bundle_id, bundle_series
       FROM dba_registry_sqlpatch;

patch_id   version   status    bundle_id  bundle_series
21948354             SUCCESS   160119     PSU
24732082             SUCCESS   170117     PSU

Regarding timings: in my case it took about 20 minutes for each home. However I had 4 databases in each home. If you have more, or fewer, databases, timings may differ somewhat, but at least you have an idea.


In summary, all patch steps went through without a single error or issue. So this is the real ODA “one button patching”. Very nice.

However I don’t know how many patchings go like this. It looks to me that you can easily do (unsupported) things on your ODA that may cause issues during patching, e.g. installing custom rpms, changing OS settings, not creating databases using the oakcli-provided command or manually adding other Oracle products to the oraInventory. Yes I know, it is hard to treat an appliance like an appliance if all backdoors are open…

So if you install or manage an ODA, understand the importance of keeping it clean and get rewarded during patching. You really don’t want issues during patching, especially if your downtime is e.g. in the middle of the night.

Fixes for ADF Cloud User Experience Rapid Development Kit (RDK) UI Layout Issues

Andrejus Baranovski - Mon, 2017-06-19 13:45
If you were evaluating the Oracle RDK UI template, you probably noticed an information popup coming up when the RDK home page is loaded. The popup is loaded through a showPopupBehavior listener, which is executed on the Welcome page load event. Such a popup is not required in practice, and is usually disabled. But as soon as you disable it, there will be layout issues with the Welcome page. The user information widget will not align the name, and menu navigation items will not be ordered correctly:

This is not nice. And you will get such behaviour only when the popup is not loaded:

I looked into it in more detail and saw there is a second HTTP PPR request executed when the popup is loaded. It seems this second HTTP request triggers a partial response and this forces the UI to load correctly:

Fortunately I found a simple fix for that. You need to set layout="horizontal" for the springboard panelGroupLayout component located in the Welcome page:

This change gets the job done, and now the Welcome page layout is rendered correctly from the start, even without loading the popup and forcing a second HTTP PPR request:

There is another issue, related to panelGridLayout usage in ADF Task Flows loaded through the Film Strip page. You can check my previous example about customising/extending the RDK template - Extending ADF Cloud User Experience Rapid Development Kit (RDK). Let's assume a use case with an ADF Task Flow implementing two fragments (search and edit functionality):

Search screen renders ADF list implemented using panelGridLayout:

Navigate to edit screen:

Try to navigate back to the search screen and you will get an empty list displayed:

The fix is simple. RDK is using the property stretchChildren="first" in the FilmStrip page, and this seems to break the UI layout for regions with a panelGridLayout component:

Remove the stretchChildren="first" property from the FilmStrip page showDetailItem component assigned with id="sdi1":

With this fix applied, try to navigate from edit to search:

This time the search page layout with the panelGridLayout component is displayed as it should be:

Download extended RDK application code with applied fixes - AppsCloudUIKit_v3.zip.

Agile Development with PL/SQL

Gerger Consulting - Mon, 2017-06-19 13:12
Agile Development gives us the ability to work on multiple features at the same time and change which ones to ship at any point in time, quickly. This might be challenging for PL/SQL teams to accomplish, to say the least. This short video shows how Gitora, version control tool for PL/SQL, helps Oracle PL/SQL developers solve this problem.

If you prefer reading a step by step guide instead of watching a video, please click here.
Categories: Development

Unify: Could it be any easier?

Rittman Mead Consulting - Mon, 2017-06-19 09:00

Rittman Mead’s Unify is the easiest and most efficient method to pull your OBIEE reporting data directly into your local Tableau environment. No longer will you have to worry about database connection credentials, Excel exports, or any other roundabout way to get your data where you need it to be.

Unify leverages OBIEE’s existing metadata layer to provide quick access to your curated data through a standard Tableau Web Data Connector. After a short installation and configuration process, you can be building Tableau workbooks from your OBIEE data in minutes.

This blog post will demonstrate how intuitive and easy it is to use the Unify application. We will only cover using Unify and its features, as once the data gets into Tableau it can be used the same as any other Tableau Data Source. The environment shown already has Unify installed and configured, so we can jump right in and start using the tool immediately.

To start pulling data from OBIEE using Unify, we need to create a new Web Data Connector Data Source in Tableau. This data source will prompt us for a URL to access Unify. In this instance, Unify is installed as a desktop application, so the URL is http://localhost:8080/unify.

Once we put in the URL, we’re shown an authentication screen. This screen will allow us to authenticate against OBIEE using the same credentials. In this case, I will authenticate as the weblogic user.

Once authenticated, we are welcomed by a window where we can construct an OBIEE query visually. On the left hand side of the application, I can select the Subject Area I wish to query, and users are shown a list of tables and columns in the selected Subject Area. There are additional options along the top of the window, and I can see all saved queries on the right hand side of the window.

The center of the window is where we can see the current query, as well as a preview of the query results. Since I have not started building a query yet, this area is blank.

Unify allows us to either build a new query from scratch, or select an existing OBIEE report. First, let’s build our own query. The lefthand side of the screen displays the Subject Areas and Columns which I have access to in OBIEE. With a Subject Area selected, I can drag columns, or double click them, to add them to the current query. In the screenshot above, I have added three columns to my current query, “P1 Product”, “P2 Product Type”, and “1 - Revenue”.

If we wanted to, we could also create new columns by defining a Column Name and Column Formula. We even have the ability to modify existing column formulas for our query. We can do this by clicking the gear icon for a specific column, or by double-clicking the grey bar at the top of the query window.

It’s also possible to add filters to our data set. By clicking the Filter icon at the top of the window, we can view the current filters for the query. We can then add filters the same way we would add columns, by double-clicking or dragging the specific column. In the example shown, I have a filter on the column “D2 Department” where the column value equals “Local Plants Dept.”.

Filters can be configured using any of the familiar methods, such as checking if a value exists in a list of values, numerical comparisons, or even using repository or session variables.

Now that we have our columns selected and our filters defined, we can execute this query and see a preview of the result set. By clicking the “Table” icon in the top header of the window, we can preview the result.

Once we are comfortable with the results of the query, we can export the results to Tableau. It is important to understand that the preview data is trimmed down to 500 rows by default, so don’t worry if you think something is missing! This value, and the export row limit, can be configured, but for now we can export the results using the green “Unify” button at the top right hand corner of the window.

When this button is clicked, the Unify window will close and the query will execute. You will then be taken to a new Tableau Workbook with the results of the query as a Data Source. We can now use this query as a data source in Tableau, just as we would with any other data source.

But what if we have existing reports we want to use? Do we have to rebuild the report from scratch in the web data connector? Of course not! With Unify, you can select existing reports and pull them directly into Tableau.

Instead of adding columns from the lefthand pane, we can instead select the “Open” icon, which will let us select an existing report. We can then export this report to Tableau, just as before.

Now let’s try to do something a little more complicated. OBIEE doesn’t have the capability to execute queries across Subject Areas without common tables in the business model; however, Tableau can perform joins between two data sources (so long as we select the correct join conditions). We can use Unify to pull two queries from OBIEE from different Subject Areas, and perform a data mashup with the two Subject Areas in Tableau.

Here I’ve created a query with “Product Number” and “Revenue”, both from the Subject Area “A - Sample Sales”. I’ve saved this query as “Sales”. I can then click the “New” icon in the header to create a new query.

This second query is using the “C - Sample Costs” Subject Area, and is saved as “Costs”. This query contains the columns “Product Number”, “Variable Costs”, and “Fixed Costs”.

When I click the Unify button, both of these queries will be pulled into Tableau as two separate data sources. Since both of the queries contain the “Product Number” column, I can join these data sources on the “Product Number” column. In fact, Tableau is smart enough to do this for us:

We now have two data sets, each from a different OBIEE subject area, joined and available for visualization in Tableau. Wow, that was easy!

What about refreshing the data? Good question! The exported data sources are published as data extracts, so all you need to do to refresh the data is select the data source and hit the refresh button. If you are not authenticated with OBIEE, or your session has expired, you will simply be prompted to re-authenticate.

Using Tableau to consume OBIEE data has never been easier. Rittman Mead’s Unify allows users to connect to OBIEE as a data source within a Tableau environment in an intuitive and efficient method. If only everything was this easy!

Interested in getting OBIEE data into Tableau? Contact us to see how we can help, or head over to https://unify.ritt.md to get a free Unify trial version.

Categories: BI & Warehousing

lock sys

Laurent Schneider - Mon, 2017-06-19 08:55

In the old days, locking SYS did not have much effect.

SQL> alter user sys identified by *** account lock;
User altered.
SQL> select account_status 
  from dba_users 
  where username='SYS';
SQL> conn / as sysdba
SQL> conn sys/** as sysdba
SQL> conn sys/***@db01 as sysdba

Well, in the very old days (Oracle7), or with the O7_DICTIONARY_ACCESSIBILITY parameter (deprecated in 12cR2), SYS could be locked. But this is out of the scope of this post.

In 12cR2, it is now possible to lock SYS.

SQL> alter user sys 
  identified by *** 
  account lock;
User altered.
SQL> select account_status 
  from dba_users 
  where username='SYS';
SQL> conn / as sysdba
SQL> conn sys/** as sysdba
SQL> conn sys/***@db01 as sysdba
ORA-28000: the account is locked

I like it 🙂 Oracle recommends you create other users to perform DBA tasks.

SQL> grant dba, sysdba 
  to user0001 
  identified by ***;
Grant succeeded.

Still, probably intentionally left so, or simply forgotten, Oracle recommends locking all Oracle-supplied accounts except for SYS and SYSTEM (ref: Changing Passwords for Oracle Supplied Accounts)

Also note, you’ll get an ORA-40365 if you use an old-style password file

SQL> alter user sys identified by *** account lock;
alter user sys identified by *** account lock
ERROR at line 1:
ORA-40365: The SYS user cannot be locked 
  while the password file is in its current format.

Oracle Health Sciences Reimagines Clinical Development Technology with New Cloud-Based eClinical Environment of the Future

Oracle Press Releases - Mon, 2017-06-19 07:00
Press Release
Oracle Health Sciences Reimagines Clinical Development Technology with New Cloud-Based eClinical Environment of the Future Unifies clinical development operations and information to help life sciences companies bring therapies to market faster and more cost-effectively

DIA – Chicago, IL.—Jun 19, 2017

Oracle today introduced Oracle Health Sciences Clinical One Platform, a cloud-based eClinical environment that is intended to redefine the way technology supports clinical research, and its first capability in the new environment, Clinical One Randomization and Supplies Management.

Developing a potentially life-saving drug from a promising molecule to an FDA-approved therapy can take more than a decade and cost billions of dollars due to redundant processes, increasing volumes and variety of patient data, and older technology systems that don’t communicate with one another. Oracle Health Sciences is tackling these issues with its Clinical One Platform, a new cloud-based eClinical solution that will unify clinical development operations and information in a single environment with shared functions and an easy-to-use interface for sites, clinical coordinators and their counterparts.

Unlike other products in the market, which are point solutions that address only a single part of the drug development lifecycle or a single area within clinical research, Clinical One Platform is a holistic, unified cloud environment. It is being built from the ground up to address the needs of the entire drug development lifecycle – everyone and every process required to get a drug to market. The vision for its Clinical One Platform is to bring more drugs to market faster and offer hope for more cures by eliminating redundancies, creating process efficiencies and sharing information—in a way that has never been done before.

“Pharmaceutical companies continue to look for new and innovative ways to bring therapies to market faster and more cost effectively. Technology providers who can offer a unified, cloud-based eClinical platform that enables companies to seamlessly share clinical trial data throughout all phases of the drug development lifecycle and across all functions, will be poised to take advantage of the expanding eClinical market,” said Alan S. Louie, Ph.D., Research Director, IDC.

In addition to unveiling its Clinical One Platform, Oracle also launched its first capability in the new environment, Clinical One Randomization and Supplies Management. Designed with self-service in mind and to eliminate the need for customization, this capability will feature an intuitive user interface, enabling clinical teams to design, validate and deploy a study in days with the click of a button. This enables clinical coordinators to quickly add patients to a trial, collect screening information and ensure eligibility for randomization in record time.

“The pharmaceutical industry has been stitching together systems to support the drug development lifecycle for decades, creating a Frankenstein effect. Our vision, and why we created the Clinical One Platform clinical environment, is to provide our life sciences customers with a modern, collaborative cloud environment that eliminates massive redundancies and help enable them to set up a trial in days instead of weeks, while allowing broader clinical teams to leverage what’s already been done,” said Steve Rosenberg, general manager, Oracle Health Sciences. “But ultimately, it’s about trying to accelerate the pace of finding more cures faster and more cost-effectively as millions of patients wait with hope.”

Additional Resources
  • Visit www.oracle.com/clinical-one
  • Visit us at the DIA Annual Meeting at Booth #1511
Contact Info
Valerie Beaudett
Oracle Corporation
+1 650.400.7833
Christina McDonald
+1 212.614.4221
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.


Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Valerie Beaudett

  • +1 650.400.7833

Christina McDonald

  • +1 212.614.4221

Node-oracledb 2.0.13-Development is now on GitHub

Christopher Jones - Mon, 2017-06-19 06:19

Node-oracledb 2.0.13-Development is now on GitHub. Node-oracledb is the Node.js interface for Oracle Database.

Top features: Version 2 is based on the new ODPI-C abstraction layer. Additional data type support.

The full Changelog is here. The node-oracledb 2.0.13-Development documentation is here.

I'd recommend testing and reporting any issues as early as possible during the 2.0 Development release cycle. This is a development release so we are aware of some rough edges. I'll start a GitHub issue to track them.

Full installation instructions are here but note that node-oracledb 2.0 is not yet on npm so you need to install from GitHub with:

npm install oracle/node-oracledb.git#dev-2.0

All you then need are Oracle client 12.2, 12.1 or 11.2 libraries (e.g. the Oracle Instant Client 'Basic' or 'Basic Light' package) in your PATH or LD_LIBRARY_PATH or equivalent. Users of macOS must put the Oracle client libraries in ~/lib or /usr/local/lib. The use of ODPI-C makes installation a bit easier. Oracle header files are no longer needed. The OCI_LIB_DIR and OCI_INC_DIR environment variables are not needed. A compiler with C++11 support, and Python 2.7 are still needed, but a single node-oracledb binary now works with any of the Oracle client 11.2, 12.1 or 12.2 releases, improving portability when node-oracledb builds are copied between machines. You will get run time errors if you try to use a new Oracle Database feature that isn't supported by older client libraries, so make sure you test in an environment that resembles your deployment one.

Other changes in this release are:

  • Lob.close() now marks LOBs invalid immediately rather than during the asynchronous portion of the close() method, so that all other attempts are no-ops.

  • Incorrect application logic in version 1 that attempted to close a connection while certain LOB, ResultSet or other database operations were still occurring gave an NJS-030, NJS-031 or NJS-032 "connection cannot be released" error. Now in version 2 the connection will be closed but any operation that relied on the connection being open will fail.

  • Some NJS and DPI error messages and numbers have changed. This is particularly true of DPI errors due to the use of ODPI-C.

  • Stated compatibility is now for Node.js 4, 6 and 8.

  • Added support for fetching columns types LONG (as String) and LONG RAW (as Buffer). There is no support for streaming these types, so the value stored in the DB may not be able to be completely fetched if Node.js and V8 memory limits are reached. You should convert applications to use LOBs, which can be streamed.

  • Added support for TIMESTAMP WITH TIME ZONE date type. These are mapped to a Date object in node-oracledb using LOCAL TIME ZONE. The TIME ZONE component is not available in Node.js's Date object.

  • Added support for ROWID without needing an explicit fetchAsString. Data is now fetched as a String by default.

  • Added support for UROWID. Data is fetched as a String.

  • Added query support for NCHAR and NVARCHAR2 columns. Binding for DML may not insert data correctly, depending on the database character set and the database national character set.

  • Added query support for NCLOB columns. NCLOB data can be streamed or fetched as String. Binding for DML may not insert data correctly, depending on the database character set and the database national character set.

  • Removed node-oracledb size restrictions on LOB fetchAsString and fetchAsBuffer queries, and also on LOB binds. Node.js and V8 memory restrictions will still prevent large LOBs being manipulated in single chunks. The v1 limits really only affected users who linked node-oracledb with 11.2 client libraries.

  • Statements that generate errors are now dropped from the statement cache. Applications running while table definitions change will no longer end up with unusable SQL statements due to stale cache entries. Applications may still get one error, but that will trigger the now invalid cache entry to be dropped so subsequent executions will succeed. ODPI-C has some extra smarts in there to make it even better than I describe. I can bore you with them if you ask - or you can check the ODPI-C source code. Note that Oracle best-practice is never to change table definitions while applications are executing. I know some test frameworks do it, but ....

All these improvements are courtesy of ODPI-C's underlying handling. The use of ODPI-C is a great springboard for future features since it already has support for a number of things we can expose to Node.js. The ODPI-C project was an expansion of the previous DPI layer used solely by node-oracledb. Now ODPI-C is in use in Python cx_Oracle 6, and is being used in various other projects. For example Tamás Gulácsi has been working on a Go driver using ODPI-C. (Check out his branch here). Kubo Takehiro started an Oracle Rust driver too - before he decided that he preferred programming languages other than Rust!

Our stated plan for node-oracledb is that maintenance of node-oracledb 1.x will end on 1st April 2018, coinciding with the end-of-life of Node 4, so start testing node-oracledb 2.0.13-Development now.

Things to do after you cloned a Virtual Machine

Frank van Bortel - Mon, 2017-06-19 02:43
Clean up a cloned VM After you made a clone of your (base) VM, you will need to do some stuff. MAC-address First of all, I suspect you have a different MAC-address than the original machine. VMWare does that, as long as you have your MAC address assigned automatically. VirtualBox will ask you whether to re-initialize the MAC-address while cloning. The problem is the udev process, responsible…

New OA Framework 12.2.5 Update 13 Now Available

Steven Chan - Mon, 2017-06-19 02:00

Web-based content in Oracle E-Business Suite Release 12 runs on the Oracle Application Framework (also known as OA Framework, OAF, or FWK) user interface libraries and infrastructure. Since the initial release of Oracle E-Business Suite Release 12.2 in 2013, we have released a number of cumulative updates to Oracle Application Framework to fix performance, security, and stability issues.

These updates are provided in cumulative Release Update Packs, and cumulative Bundle Patches that can be applied on top of the Release Update Packs. In this context, cumulative means that the latest RUP or Bundle Patch contains everything released earlier.

The latest OAF update for Oracle E-Business Suite Release 12.2.5 is now available:

Where is this update documented?

Instructions for installing this OAF Release Update Pack are in the following My Oracle Support knowledge document:

Who should apply this patch?

All Oracle E-Business Suite Release 12.2.5 users should apply this patch.  Future OAF patches for EBS Release 12.2.5 will require this patch as a prerequisite. 

What's new in this update?

This bundle patch is cumulative: it includes 39 fixes in total, including all fixes released in previous EBS Release 12.2.5 bundle patches.

This latest bundle patch includes fixes for following bugs/issues:

  • When Descriptive Flexfield (DFF) is in advanced table under query bean, some of its segments may not be displayed and instead, other columns of the table get repeated.
  • The user is unable to switch to user Views from default View when the query bean views panel has HGrid results table.
  • 22590683    FORWARD-PORT 21465867 TO 12.2.5

Related Articles

Categories: APPS Blogs

Python cx_Oracle 6.0 RC 1 is now on PyPI

Christopher Jones - Sun, 2017-06-18 22:06

Release Candidate 1 of Python cx_Oracle 6.0 in now on PyPI. Test now and report any feedback.

Python cx_Oracle is the Python interface for Oracle Database. Version 6 is based on the new ODPI-C abstraction layer, which is now also in Release Candidate phase. This layer has allowed cx_Oracle code itself to be greatly simplified.

There are a few small tweaks in cx_Oracle RC1 since the final Beta. Read about them in the Release Notes. A couple of the changes are that the method Cursor.setoutputsize() is now a no-op, since ODPI-C automatically manages buffer sizes of LONG and LONG RAW columns. Also unicode can be used (in addition to string) for creating session pools and for changing passwords in Python 2.7.

The use of ODPI-C has allowed Python Wheels to be created, making installation easier.

Install cx_Oracle 6.0 RC 1 from PyPI with:

python -m pip install cx_Oracle --pre

All you then need are Oracle client 12.2, 12.1 or 11.2 libraries (e.g. the Oracle Instant Client 'Basic' package) in your path at runtime.

cx_Oracle Documentation is here.

Happy Pythoning!

Steps for Moving ASM Disk from FRA to DATA

Pakistan's First Oracle Blog - Sun, 2017-06-18 21:11
Due to some unexpected data load, the space in DATA diskgroup became critically low on one of the production systems during middle of night on the weekend. There was no time to get a new disk and we needed the space to make room for new data load scheduled to be run after 3 hours.

Looked at the tablespaces space in DATA diskgroup and there wasn't much hope in terms of moving or shrinking or deleting anything. Also the upcoming data load was direct path load which always writes above the high water mark in segments, so shrinking wasn't of much help.

Looked at the FRA diskgroup and found out that there was plenty of space there, so I decided to rob Peter to pay Paul. The plan was to remove a disk from FRA diskgroup and add it to DATA. This all was done online and these were general steps:

Steps for Moving ASM Disk from FRA to DATA :

1) Remove Disk from FRA diskgroup

SQL> alter diskgroup FRA drop disk FRA_06;

Diskgroup altered.

2) Wait for Rebalance to finish

SQL> SELECT group_number, operation, state, power, est_minutes FROM v$asm_operation;
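Before re-adding the disk, a quick sanity check can confirm that ASM has actually released it (a sketch, using the path from this example; column values as documented for V$ASM_DISK):

```sql
-- Sketch: once the rebalance completes, the dropped disk should show
-- HEADER_STATUS = 'FORMER', meaning ASM has released it for reuse.
select name, path, header_status, mode_status
from   v$asm_disk
where  path = '/dev/myasm/superdb_fra_06';
```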

3) Add disk to the DATA diskgroup

SQL> alter diskgroup DATA add disk '/dev/myasm/superdb_fra_06' name DATA_06 rebalance power 8;

4) Wait for Rebalance to finish

SQL> SELECT group_number, operation, state, power, est_minutes FROM v$asm_operation;

This provided a much needed breather for the weekend and data load ran successfully. We will be making sure that we provision more disks to the DATA diskgroup and return the FRA disk to FRA with thanks.

Categories: DBA Blogs

ODPI-C RC1 released on GitHub

Christopher Jones - Sun, 2017-06-18 19:33

The ODPI-C abstraction layer for Oracle Database applications has just entered Release Candidate phase. You know what this means - it's seriously time you should do some testing and report any issues.

What is it?

ODPI-C is an open source library of C code that simplifies the use of common Oracle Call Interface (OCI) features for Oracle Database drivers and user applications. It sits on top of OCI and requires Oracle client libraries.

Download ODPI-C from here.

Documentation is here.

The release notes are here. Along with some small bug fixes, this release has a memory optimization to reduce memory usage when the client character set is the same as the database character set - thus no unnecessary memory is allocated to cater for what otherwise is the potential expansion when converting between character sets.

In conjunction with this release, the Python cx_Oracle 6 API also went into Release Candidate phase. Later today the first Development release of node-oracledb v2 will be pushed to GitHub. Both these updated APIs use ODPI-C, so give some bigger usage examples you can follow.

Also recently Tamás Gulácsi has been working on a Go driver using ODPI-C. Check out his branch too.

The river floes break in spring... take 2

Greg Pavlik - Sun, 2017-06-18 16:19
Alexander Blok
The river floes break in spring...
March 1902
translation by Greg Pavlik 

The river floes break in spring,
And for the dead I feel no sorrow -
Toward new summits I am rising,
Forgetting crevasses of past striving,
I see the blue horizon of tomorrow.

What regret, in fire and smoke,
the lament of the cross,
With each hour, with each stroke -
Or instead - the heavens’ gift stoked,
from the bush burnt without loss!


Весна в реке ломает льдины,
И милых мертвых мне не жаль:
Преодолев мои вершины,
Забыл я зимние теснины
И вижу голубую даль.

Что сожалеть в дыму пожара,
Что сокрушаться у креста,
Когда всечасно жду удара
Или божественного дара
Из Моисеева куста!
Март 1902
Translator's note: I updated this after some reflection. The original translation used the allegorical imagery that would have been common in patristic writing and hence Russian Orthodoxy. For example, I used the image of Aaron's rod in lieu of the word "cross", which appears in Russian (креста). The rod of Aaron was commonly understood to be a type of the cross in traditional readings of Old Testament Scriptures. Similarly, the final line of Blok's poem "Из Моисеева куста" literally translates to "from Moses's Bush". In my original translation, I rendered the final line "from the bush of Moses, the Mother of God". Since at least the 4th century, the burning bush was interpreted as a type of Mary, the Theotokos (or God-bearer) in the patristic literature (see for example, Gregory of Nyssa, The Life of Moses). In Russian iconography there is even an icon type of the Mother of God called the Unburnt Bush. While the use of "rod" and "Mother of God" allowed me to maintain the rhyme pattern (rod/God in place of креста/куста) of the original poem, it created an awkward rhythm in the poem, especially in the final line. It also added explicit allusions to patristic images that are not explicitly present in the original poem, perhaps fundamentally altering the author's intention. A neat experiment but also one that I think ultimately failed.

The new translation returns to a more literal translation without allegory: "креста" means simply cross and that is how the poem now reads. The final line has been abbreviated from my original translation, though somewhat less literal - "Из Моисеева куста" is now rendered as "from the bush burnt without loss" rather than the literal "from Moses's bush" or the more awkward original translation "From the Bush of Moses, the Mother of God". The new translation I believe captures more closely the original meaning and manages to maintain at least the rhyme pattern of the original (now cross/loss in place of креста/куста). Overall, this is far from a perfect translation but I think it is an improvement.
One final comment about Blok himself that perhaps illustrates why I am still conflicted about the changes to final line: Blok was a master of the Symbolist movement in Russian poetry, wherein he worked unconventional rhythms and rhyming into his poetry. On that score, I feel somewhat more at liberty to ignore the meter of the original and attempt to express something of a musical quality in English. However, Blok was also deeply influenced by the great philosopher Vladimir Soloviev, a proponent of Sophiology in the Russian intellectual tradition. This led to him writing many of his early poetic compositions about the Fair Lady, Sophia the embodiment of Wisdom. It is with this in mind that I feel some regret at removing the reference to the Mother of God, a human embodiment/enhypostatization of Divine Wisdom.

Bash: The most useless command (4)

Dietrich Schroff - Sun, 2017-06-18 14:04
The blog statistics show that there are many people reading the posts about useless commands. And here is the next candidate:
cowsay
Now you are thinking: what is cowsay?
       Cowsay  generates  an  ASCII picture of a cow saying something provided by the user.  If run with no arguments, it accepts standard input, word-wraps
       the message given at about 40 columns, and prints the cow saying the given message on standard output.

Okay. Here we go:
$ echo what a cool command | cowsay
 _____________________
< what a cool command >
 ---------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
One thing to add here: moo 

Now, here's an idea...

Frank van Bortel - Sun, 2017-06-18 03:41
Gaining control Or rather - regaining control. Over my own data, and what's done with it. Currently, I use several services, of which I know they are monitored. Several of these services fall under US legislation, although I'm not a US citizen. This allows several agencies to go through my documents, email and other stuff, whether I like that or not (I do not). Of course, for some of this, I…


Subscribe to Oracle FAQ aggregator