Feed aggregator

Plan Shapes

Jonathan Lewis - Mon, 2016-06-20 06:58

There are a number of articles, webinars, and blogs online about how to read execution plans, but many of them seem to stop after the minimum description of the simplest type of plan, so I thought I'd throw out a brief comment on a couple of the slightly more complicated things that are likely to appear fairly commonly, because you sometimes find plans with very similar shapes but extremely different interpretations.

First: select with scalar subqueries in the select list (there’s no need to worry about what the table definitions look like):


rem
rem     Script:         plan_shapes.sql
rem     Author:         Jonathan Lewis
rem     Dated:          May 2016
rem

select
        id, n1,
        (select /*+  no_unnest */ max(n1) from t2 where t2.id = t1.n1)  t2_max,
        (select /*+  no_unnest */ max(n1) from t3 where t3.id = t1.n1)  t3_max
from
        t1
where
        id between 100 and 119
;


--------------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
--------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |       |      1 |        |     20 |00:00:00.01 |       8 |
|   1 |  SORT AGGREGATE                      |       |     20 |      1 |     20 |00:00:00.01 |      63 |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| T2    |     20 |      2 |     40 |00:00:00.01 |      63 |
|*  3 |    INDEX RANGE SCAN                  | T2_I1 |     20 |      2 |     40 |00:00:00.01 |      23 |
|   4 |  SORT AGGREGATE                      |       |     20 |      1 |     20 |00:00:00.01 |      83 |
|   5 |   TABLE ACCESS BY INDEX ROWID BATCHED| T3    |     20 |      3 |     60 |00:00:00.01 |      83 |
|*  6 |    INDEX RANGE SCAN                  | T3_I1 |     20 |      3 |     60 |00:00:00.01 |      23 |
|   7 |  TABLE ACCESS BY INDEX ROWID BATCHED | T1    |      1 |     21 |     20 |00:00:00.01 |       8 |
|*  8 |   INDEX RANGE SCAN                   | T1_I1 |      1 |     21 |     20 |00:00:00.01 |       4 |
--------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("T2"."ID"=:B1)
   6 - access("T3"."ID"=:B1)
   8 - access("ID">=100 AND "ID"<=119)

We have a select statement, with two columns in the select list generated by (correlated) scalar subqueries.

The overall shape of the plan shows the driving query as the last child plan for the SELECT (operations 7-8). The first and second child plans are the plans for the two scalar subqueries in turn (and the order the sub-plans appear is the order of the scalar subqueries in the select list). In this case the main query returned 20 rows (A-Rows=20), and the scalar subqueries executed 20 times each. There are a few other details we could mention, but the key feature of the plan is that the driver is the last sub-plan.
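For reference, one common way to capture run-time figures like Starts, A-Rows and Buffers is to run the statement with rowsource statistics enabled and then pull the plan from memory; a minimal sketch (not necessarily how the output above was produced):

select  /*+ gather_plan_statistics */
        id, n1,
        (select /*+  no_unnest */ max(n1) from t2 where t2.id = t1.n1)  t2_max,
        (select /*+  no_unnest */ max(n1) from t3 where t3.id = t1.n1)  t3_max
from
        t1
where
        id between 100 and 119
;

select * from table(dbms_xplan.display_cursor(null, null, 'allstats last'));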

Second: update with scalar subqueries:


update t1
set
        n1 = (select max(n1) from t2 where t2.id = t1.n1),
        v1 = (select max(v1) from t3 where t3.id = t1.n1)
where
        id between 1000 and 1999
;

---------------------------------------------------------------------------------------------------------
| Id  | Operation                             | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
---------------------------------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT                      |       |      1 |        |      0 |00:00:00.13 |   10361 |
|   1 |  UPDATE                               | T1    |      1 |        |      0 |00:00:00.13 |   10361 |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED | T1    |      1 |   1001 |   1000 |00:00:00.01 |      22 |
|*  3 |    INDEX RANGE SCAN                   | T1_I1 |      1 |   1001 |   1000 |00:00:00.01 |       4 |
|   4 |   SORT AGGREGATE                      |       |    916 |      1 |    916 |00:00:00.04 |    3672 |
|   5 |    TABLE ACCESS BY INDEX ROWID BATCHED| T2    |    916 |      2 |   1832 |00:00:00.03 |    3672 |
|*  6 |     INDEX RANGE SCAN                  | T2_I1 |    916 |      2 |   1832 |00:00:00.01 |    1840 |
|   7 |   SORT AGGREGATE                      |       |    916 |      1 |    916 |00:00:00.05 |    4588 |
|   8 |    TABLE ACCESS BY INDEX ROWID BATCHED| T3    |    916 |      3 |   2748 |00:00:00.04 |    4588 |
|*  9 |     INDEX RANGE SCAN                  | T3_I1 |    916 |      3 |   2748 |00:00:00.01 |    1840 |
---------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("ID">=1000 AND "ID"<=1999)
   6 - access("T2"."ID"=:B1)
   9 - access("T3"."ID"=:B1)


In this statement we update two columns by executing (correlated) scalar subqueries. The most important feature of interpreting this plan is that it is the exact opposite of the earlier select statement. In this plan the first subplan is the driving part of the statement – it’s the part of the plan that tells us how to find rows to be updated (and we find 1,000 of them); the 2nd and 3rd sub-plans correspond to the subqueries in the two columns whose value we set. In this case we happen to get some benefit from scalar subquery caching so the two subqueries each run 916 times. (The fact that both subqueries run the same number of times is not a coincidence – the caching benefit is dependent on the value(s) used for driving the subqueries, and that’s the t1.n1 column in both cases.)

Finally: because people do all sorts of complicated things, and it’s easy to get deceived if you read the plan without seeing the SQL, one more example (starting with the plan):


---------------------------------------------------------------------------------------------------------
| Id  | Operation                             | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
---------------------------------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT                      |       |      1 |        |      0 |00:00:00.14 |   11257 |
|   1 |  UPDATE                               | T1    |      1 |        |      0 |00:00:00.14 |   11257 |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED | T1    |      1 |   1001 |   1000 |00:00:00.01 |      22 |
|*  3 |    INDEX RANGE SCAN                   | T1_I1 |      1 |   1001 |   1000 |00:00:00.01 |       4 |
|   4 |   SORT AGGREGATE                      |       |   1103 |      1 |   1103 |00:00:00.06 |    5519 |
|   5 |    TABLE ACCESS BY INDEX ROWID BATCHED| T3    |   1103 |      3 |   3306 |00:00:00.04 |    5519 |
|*  6 |     INDEX RANGE SCAN                  | T3_I1 |   1103 |      3 |   3306 |00:00:00.01 |    2213 |
|   7 |   SORT AGGREGATE                      |       |    916 |      1 |    916 |00:00:00.11 |    9191 |
|   8 |    TABLE ACCESS BY INDEX ROWID BATCHED| T2    |    916 |      2 |   1832 |00:00:00.03 |    3672 |
|*  9 |     INDEX RANGE SCAN                  | T2_I1 |    916 |      2 |   1832 |00:00:00.01 |    1840 |
---------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("ID">=1000 AND "ID"<=1999)
   6 - access("T3"."ID"=:B1)
   9 - access("T2"."ID"=:B1)


This plan looks very like the previous plan – it’s an update so the first sub-plan will be identifying the rows to be updated, but then what?

The easiest assumption – always one worth considering, provided you remember that there are other possibilities – is that we have two subqueries executing for each row we update – but in this example there’s a little clue in the numbers that this isn’t what the statement does. The driving query (operations 2 – 3) identifies 1,000 rows, so how can operations 4 – 6 start 1,103 times each? Something more complicated is going on.

I’ve combined the features of the first two examples. I have a single subquery in the set clause, but it contains a scalar subquery in its select list – so operations 4 – 9 are the sub-plan for a single select statement – and a select statement with a scalar subquery in the select list puts the main plan last (operations 7 – 9) and its scalar subquery sub-plan first (operations 4 – 6). Here’s the SQL:


update t1
set
        (n1,v1) = (
                select
                        max(n1),
                        max((select /*+  no_unnest */ max(v1) from t3 where t3.id = t2.n1))
                from    t2
                where   t2.id = t1.n1
        )
where
        id between 1000 and 1999
;

What happens here is that the driving body of the update statement identifies 1,000 rows, so the scalar subquery against t2 should execute 1,000 times; thanks to scalar subquery caching, though, it only executes 916 times. Each time it executes it finds 2 rows, and for each of those rows it executes the scalar subquery against t3 which, fortunately, also benefits from its own scalar subquery caching and so runs only 1,103 times in total.

I could go on, of course, with increasingly complex examples – for example scalar subqueries that contain decode() calls with scalar subqueries as their inputs; where clauses which compare the results of scalar subqueries, and so on. It can get quite difficult to see, even with the SQL in front of you, what the plan is trying to tell you so I’ll leave you with one thought: always use the qb_name() hint to name every single subquery so that, if interpretation of the plan gets a bit difficult, you can use the ‘alias’ formatting option in the call to dbms_xplan to connect each table reference in the plan with the query block it came from in the query.
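As a minimal sketch of that suggestion (the query block names below are arbitrary):

select
        /*+ qb_name(main) */
        id, n1,
        (select /*+ qb_name(subq_t2) no_unnest */ max(n1) from t2 where t2.id = t1.n1)  t2_max,
        (select /*+ qb_name(subq_t3) no_unnest */ max(n1) from t3 where t3.id = t1.n1)  t3_max
from
        t1
where
        id between 100 and 119
;

select * from table(dbms_xplan.display_cursor(null, null, 'allstats last alias'));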

 

 


Oracle and Mellanox to Collaborate on High Speed Cloud Networking Standards

Oracle Press Releases - Mon, 2016-06-20 06:00
Press Release
Oracle and Mellanox to Collaborate on High Speed Cloud Networking Standards
Companies to Work on Open Standards, Interoperability, and Backward Compatibility for InfiniBand products

Frankfurt – ISC—Jun 20, 2016

Oracle and Mellanox today announced a partnership to drive interoperability between their products and to jointly develop further industry standards for InfiniBand—a high speed Cloud networking technology.

Products supporting the InfiniBand standard not only provide extreme bandwidth and ultra-low latency for fast, agile, and secure cloud infrastructure, but they are also based upon industry standards and available from multiple vendors. Oracle and Mellanox Enhanced Data Rate (EDR) 100G InfiniBand products are engineered to the InfiniBand™ Architecture Specification Release 1.3 and allow customers to deploy a 100Gb/s fabric that is backwards compatible with the previous generations of InfiniBand-based systems (FDR, QDR, DDR, and SDR), enabling customers to protect their investments in InfiniBand-enabled applications.

Oracle’s core strategy is to enable its cloud solutions and Engineered Systems with open technologies that deliver the highest performance, efficiency, and security. Due to its superior performance, efficiency, and scalability, InfiniBand is a dominant fabric for building Super Computers, which rapidly process massive amounts of data. Oracle has chosen InfiniBand as the foundational technology for Engineered Systems to provide unparalleled performance and scale to Oracle’s suite of business critical applications.

“Customers using InfiniBand as the interconnect for clouds and high performance computing can be certain that as new speeds and features are incorporated into the InfiniBand architecture that they will be able to continue to work with the supplier of their choice and that new products will seamlessly integrate with their existing InfiniBand infrastructures,” said Raju Penumatcha, senior vice president, Netra and Networking, Oracle. “This continued partnership represents an enduring commitment to provide customers with the technologies and standards needed for true enterprise-class cloud infrastructures that enable them to accelerate application performance and quickly respond to changing business needs.”

“With a standards-based architecture, and the most advanced roadmap that is guided by the InfiniBand Trade Association, and a mature, open source software stack, InfiniBand offers customers the best of performance and interoperability with the support of a large community of suppliers, developers, and users,” said Gilad Shainer, vice president, marketing at Mellanox Technologies. “This partnership enables us to continue to optimize and expand InfiniBand’s role as a key technology for any cloud environment.”

Oracle EDR InfiniBand Fabric is on display this week at: ISC High Performance (Booth 910). For more information, please visit: www.oracle.com/goto/InfiniBand.

Mellanox EDR InfiniBand solutions are on display this week at the International Supercomputing Conference (booth 905). For more information, please visit: www.mellanox.com.

Contact Info
Sarah Fraser
Oracle
+1.650.743.0660
sarah.fraser@oracle.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

About Mellanox

Mellanox Technologies is a leading supplier of end-to-end InfiniBand and Ethernet interconnect solutions and services for servers and storage. Mellanox interconnect solutions increase data center efficiency by providing the highest throughput and lowest latency, delivering data faster to applications and unlocking system performance capability. Mellanox offers a choice of fast interconnect products: adapters, switches, software, cables and silicon that accelerate application runtime and maximize business results for a wide range of markets including high-performance computing, enterprise data centers, Web 2.0, cloud, storage, telecom and financial services. More information is available at: http://www.mellanox.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Sarah Fraser

  • +1.650.743.0660

PeopleTools October 2014 CPU Security Patch

The prior blog post (PeopleSoft Security Patches) reviewed PeopleSoft CPU patching. Worthy of its own post is the October 2014 CPU. A show of hands back in April at our PeopleSoft database security presentation at Collaborate 2016 (PeopleSoft Database Security) further confirmed Integrigy’s research that a surprising number of PeopleSoft installations have not applied this patch.

The PeopleTools October 2014 CPU (8.52.24, 8.53.17, 8.54.04) fixes a critical issue with the security of the database passwords for the Connect and Access Ids. This patch MUST be applied in order to safeguard the password for the Access Id (e.g. SYSADM) – regardless of how complex you have made it. The details of the specific vulnerability are best not given further explanation on the Internet.

This said, if you have not already applied the October 2014 CPU or any CPU since (they are cumulative) and you have questions and/or concerns, please contact us at info@integrigy.com.

Michael A. Miller, CISSP-ISSMP, CCSP

References

PeopleSoft Database Security

PeopleSoft Security Quick Reference

Oracle PeopleSoft, Oracle Critical Patch Updates
Categories: APPS Blogs, Security Blogs

database option - multi tenant

Pat Shuff - Mon, 2016-06-20 02:07
Before we dive into what multi-tenant databases are, let's take a step back and define a few terms. With an on premise system we can have a computer loaded with a database or a series of databases. Historically the way that this was done was by booting the hardware with an operating system and loading the database onto the operating system. We load the OS onto the root file system or "/" in Unix/Solaris/Linux. We create a /u01 directory to hold the ORACLE_HOME or binaries for the database. Traditionally we load the data into /u02 or keep everything in /u01. Best practices have shown us that splitting the database installation into four parts is probably a good idea. Keeping everything in the root partition is not a good idea because you can fill up your database and lock up the operating system at the same time. We can put the binaries into /u01 and do a RAID-5 or RAID-10 stripe for these binaries. We can then put all of our data into /u02 and put the /u02 file system on flash or another high speed disk to improve performance, since it has high read and write performance requirements. We can RAID-5 or RAID-10 this data to ensure that we don't lose data, or use a more advanced striping technology provided by a hardware disk vendor. We then put our backups into /u03 and do a simple mirror for this partition. We can go with a lower performing disk to save money on the installation and only keep data for a few days/weeks/months then delete it, since we have multiple copies of this data. We might replicate it to another data center or copy the data to tape and put it into cold storage for compliance requirements as well as disaster recovery fallbacks. If we are going to replicate the data to another data center we will create a /u04 area for change logs and redo logs that will be shipped to our secondary system and applied there to reduce recovery time. Backups give us recovery to the last backup. A live system running Data Guard or Active Data Guard gets us back to within a few seconds or a transaction or two, rather than hours or days.

The biggest problem with this solution is that purchasing a single system to run a single database is costly and difficult to manage. We might be running at 10% processor utilization the majority of the time but run at 90% utilization for a few hours a week or a few days a month. The system is idle most of the time and we are paying for the high water mark rather than the average usage. Many administrators overload a system with databases that have different peak usage times and run multiple database instances on the same box. If, for example, our accounting system peaks on the 25th through the 30th and our sales system peaks on the 5th through the 10th, we can run these two systems on the same box, resource limit each instance during the peak periods, and let them run at 20% the rest of the month. This is typically done by installing two ORACLE_HOMEs in the /u01 directory. The accounting system goes into /u01/app/oracle/production/12.1.0/accounting and the sales system goes into /u01/app/oracle/production/12.1.0/sales. Both share the /u02 file system as well and put their data into /u02/app/oracle/oradata/12.1.0/accounting and /u02/app/oracle/oradata/12.1.0/sales. Backups are done to two different locations and the replication and redo logs are similarly replicated to different locations.

Having multiple ORACLE_HOMEs has been a way of solving this problem for years. The key drawback is that patching can get troublesome if specific options are used or installed. If, for example, both use ASM (Automatic Storage Management) you can't patch one database without patching ASM for both. This makes patch testing difficult on production systems because suddenly sales and accounting are tied together and upgrades have to be done at the same time.

Virtualization introduced a solution to this by allowing you to install different operating systems on the same computer and sublicense the software based on the virtual processors assigned to the application. You are suddenly able to separate the storage interfaces and operating system patches and treat these two systems as two separate systems running on the same box. Unfortunately, the way that the Oracle database is licensed has caused problems and tension with customers. The software does not contain a license key or hardware limit and will run on whatever is available. Virtualization engines like VMWare and HyperV allow you to soft partition the hardware and dynamically grow with demand. This is both good and bad. It is good because it makes it simpler to respond to increased workloads. It is bad because licensing is suddenly flexible and Oracle treats the maximum number of cores in the cluster as the high water mark that needs to be licensed. This is called soft partitioning. Operating systems like Solaris and AIX have hard partitions, and virtualization engines like OracleVM and Xen provide hard partitions. Customers have traditionally solved this by running an Oracle instance on a single socket or dual socket system to limit the core count. This typically means that the most critical data is running on the oldest and slowest hardware to limit price. Alternatively they run the database on a full blade and license all cores in this blade. This typically causes a system to be overlicensed and underutilized. The admin might limit the core count to 8 cores but there could be 32 cores in the blade and all 32 cores must be licensed. Using a virtualization engine to limit the resources between database instances is not necessarily practical and does not offer fine enough resolution. Going with multiple ORACLE_HOME locations has been a growing trend since you have to license all of the cores under current licensing policies.

Another big problem with the multiple ORACLE_HOME or multiple operating system approach is that you have multiple systems to manage and patch. If we use the 32 core system to run four instances of application databases we have four patches to make for the virtualization engine, the operating systems, and the databases. An optimum solution would be to run one operating system on all 32 cores and spread the four databases with one ORACLE_HOME across each and resource limit each instance so that they don't become a noisy neighbor for the other three. We can then use resource manager to assign shares to each instance and limit the processor, memory, and network bandwidth based on rules so that noisy neighbors don't stop us from getting our job done. We get our shares and can be the noisy neighbor if no one else is using resources.

With the 12c release of the database, Oracle introduced an option called multi-tenant. Let's think of a company like SalesForce.com. They don't spin up a new instance for each company that they do business with. They don't install a new ORACLE_HOME for each company. They don't spin up a new operating system and install a new database instance for each company. This would not make economic sense. A five person company would have to spend about $3K/month with SalesForce to cover just the cost of the database license. On the flip side, custom code must be written to prevent a user from company A from reading customer contact information belonging to company B. A much simpler way would be to spin up a pluggable database for company A and another for company B. No custom code is required since the records for the two companies are stored in different directories and potentially different disk locations. If we go back and look at our partitioning blog entry we notice that we have our data stored in /u02/app/oracle/oradata/ORCL/PDB1. The ORCL directory is the location of our container database. This contains all of the configuration information for our database. We define our listener at this location. We create our RMAN backup scripts here. We define our security and do auditing at this level. Note that we have a PDB1 subdirectory under this. This is our pluggable database for company A. We would have a PDB2 for company B, and the system01.dbf file in that directory is different from the system01.dbf file located in the PDB1 directory. This allows us to create unique users in both directories and not have a global name issue. With SalesForce all usernames must be unique because users are stored in a master database. I cannot, for example, create a user called backupadmin that allows users to log in to company A and back up the data set if there is a user with that same name for any other company worldwide. This creates scripting issues and problems. We can't create a single backup script that works across all companies and must create a unique user and script for each company.

The main concept behind the multi-tenant option is to allow you to run more databases on the same box and reduce the amount of work required to support them. By putting common tasks like backup and restore at the container level, all pluggables on the system are backed up in a central location but separated by pluggable container so that there is no data mingling. Data can be replicated quickly and easily without having to resort to backup and restore onto a new instance. The system global area (SGA) is common for the container database. Each pluggable container gets its own program global area (PGA) that manages I/O buffers, compiled sql statements, and cached data.

Note that we have one redo log and undo log area. As changes are made they are copied to a secondary system. We don't have to configure Data Guard for each pluggable instance, just for the container database. When we plug an instance into a container it inherits the properties of the container. If we had a container configured to be RAC enabled, all pluggables in the database instance would be RAC enabled. We can use the resource manager in the container database to limit the shares that each pluggable instance gets and reduce the noisy neighbor overlap that happens on a virtual machine configuration. We also reduce the patching, backup, and overall maintenance required to administer the database instance.
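As a rough sketch of that idea (the plan and pluggable database names below are made up, and the share/limit values are purely illustrative), a CDB resource plan created in the root container could look like this:

begin
  dbms_resource_manager.create_pending_area;
  dbms_resource_manager.create_cdb_plan(
    plan    => 'tenant_plan',
    comment => 'share CPU between pluggable databases');
  dbms_resource_manager.create_cdb_plan_directive(
    plan               => 'tenant_plan',
    pluggable_database => 'accounting',
    shares             => 3,
    utilization_limit  => 70);
  dbms_resource_manager.create_cdb_plan_directive(
    plan               => 'tenant_plan',
    pluggable_database => 'sales',
    shares             => 1,
    utilization_limit  => 50);
  dbms_resource_manager.validate_pending_area;
  dbms_resource_manager.submit_pending_area;
end;
/
alter system set resource_manager_plan = 'tenant_plan';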

To create a pluggable instance we need to make sure that we have requested the High Performance or Extreme Performance Edition of the database. The Standard Edition and Enterprise Edition do not support multi-tenant. It is important to note that to get this same feature on Amazon you cannot use RDS because they prohibit you from using this option. You must use IaaS and go with Amazon EC2 to get this feature to work. Microsoft Azure does not offer the Oracle database at the platform level so your only option is Azure Compute.

The pluggable creation is simple and can be done from the command line through sqlplus. The 12c Database Documentation details this process.

CREATE PLUGGABLE DATABASE salespdb ADMIN USER salesadm IDENTIFIED BY password;

or

CREATE PLUGGABLE DATABASE salespdb ADMIN USER salesadm IDENTIFIED BY password ROLES=(DBA);

or more complex

CREATE PLUGGABLE DATABASE salespdb ADMIN USER salesadm IDENTIFIED BY password
  STORAGE (MAXSIZE 2G MAX_SHARED_TEMP_SIZE 100M)
  DEFAULT TABLESPACE sales 
    DATAFILE '/disk1/oracle/dbs/salespdb/sales01.dbf' SIZE 250M AUTOEXTEND ON
  PATH_PREFIX = '/disk1/oracle/dbs/salespdb/'
  FILE_NAME_CONVERT = ('/disk1/oracle/dbs/pdbseed/', '/disk1/oracle/dbs/salespdb/');
Note that we can make the creation simple or define all of the options and file locations. In the last example we create the pluggable instance by cloning the existing pdbseed. In our example this would be located in /u02/app/oracle/oradata/ORCL. We would pull from the pdbseed directory and push into the salespdb directory. All three examples would do this but the third details all options and configurations.

When we create the instance from the sqlplus command line, the database assumes a directory name for the new PDB's files, so we might want to use the more complex configuration instead. When we executed this from the command line we got a long string of numbers for the directory name of our new pluggable instance called salespdb.

We could do the same thing through sql developer and have it guide us through the renaming steps. It prompts us for the new file name showing where the seed is coming from. We could have just as easily have cloned the salespdb and used it as our foundation rather than creating one from the pdbseed. We right click on the container database header and it prompts us to create, clone, or unplug a pluggable. If we select create we see the following sequence.
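From the command line, the same kind of clone might be sketched as follows (the clone name and directory names are made up; in 12.1 the source pluggable has to be opened read only before it can be cloned):

-- connect to the root container as sysdba
alter pluggable database salespdb close immediate;
alter pluggable database salespdb open read only;

create pluggable database salesdev from salespdb
  file_name_convert = ('/u02/app/oracle/oradata/ORCL/salespdb/',
                       '/u02/app/oracle/oradata/ORCL/salesdev/');

alter pluggable database salespdb close immediate;
alter pluggable database salespdb open read write;
alter pluggable database salesdev open;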

One thing that we did not talk about was economics. If you wanted to run multi-tenant on premise you need to purchase a database license at $47.5K per two processors and the multi-tenant option at $23K per two processors as well. This comes in at $60.5K for the license and $13,310 per year for support. Using our four year cost of ownership this comes in at $2,495 per month for the database license. The High Performance edition comes in at $4K per month. Along with this you get about $5K in additional features like diagnostics, tuning, partitioning, compression, and a few other features that we have not covered yet. If you are going to run these options on Amazon or Azure you will need to budget the $2.5K for the database license and more if you want the other features on top of the processor and storage costs for those cloud services. You should also budget the outgoing data charges that you do not have to pay for with the non-metered database service in the Oracle Cloud. Going with the multi-tenant option is cheaper than running the database on two servers and easier than running two ORACLE_HOME instances on the same machine. Going with the High Performance Edition gets you all of these options and offloads things like scale up, backup, initial configuration, and restart of services if a process fails.

In summary, multi-tenant is a good way of overloading services on a single server. The resource management features of the container allow us to dynamically change the allocation to a pluggable database and give more resources to instances that need it and limit noisy neighbors. With the High Performance edition and Extreme Performance Edition we get multi-tenant as a foundation for the service. Our primary interface to create a pluggable instance is either SQL Developer, Enterprise Manager, or sqlplus. We can easily clone an existing instance for a dev/test replica or export an instance and plug it into another system. We will look at this more in depth tomorrow.

Next Round Of ANZ “Let’s Talk Database” Events (July/August 2016)

Richard Foote - Mon, 2016-06-20 00:51
I’ll be presenting the next round of “Let’s Talk Database” events around Australia and NZ this winter in July/August 2016. These are free events but due to limited places have often “sold out” in the past, so booking early is recommended to avoid disappointment. All events run between 9:00am – 12:30pm and are followed by a networking lunch. We always have […]
Categories: DBA Blogs


An Eye Opener - Oracle Data Visualization

Making sense of your data shouldn't be tough! Visualizing data is a big part of making it understandable, actionable and in general useful. Oracle Data Visualization is stunningly visual and...

Categories: DBA Blogs

12c online datafile move and resize

Yann Neuhaus - Sat, 2016-06-18 12:24

I’ve described in previous posts how the 12c online datafile move works: your session process does the copy and tells the other writers (dbwr and direct-path inserts) to write blocks to both files (the old one and the new one, for blocks that have already been copied). Readers read the old file until the copy is completed.
The target file is created at the start of the move, with the same size, and then is filled as long as the copy phase runs. What happens if the size of the source file increases?

I run a datafile move from one session. The datafile size is 14.6 GB.

SQL> alter database move datafile '/u02/app/oracle/oradata/CDB/sysaux011460.dbf' to '/u03/app/oracle/oradata/CDB/sysaux014244.dbf';

We see the start of the operation in the alert.log:

2016-06-18 14:23:09.254000 +00:00
Moving datafile /u02/app/oracle/oradata/CDB/sysaux011460.dbf (3) to /u03/app/oracle/oradata/CDB/sysaux014244.dbf
2016-06-18 14:23:10.600000 +00:00

manual resize

If I want to resize the datafile manually, I can’t:

SQL> alter database datafile '/u02/app/oracle/oradata/CDB/sysaux011460.dbf' resize 15G
*
ERROR at line 1:
ORA-63000: operation disallowed: data file
/u02/app/oracle/oradata/CDB/sysaux011460.dbf is being moved

automatic resize

So what happens if the datafile is autoextensible and I add data to it? I’ve run some ‘allocate extent’ and inserts and got the resize to occur:


2016-06-18 14:23:10.600000 +00:00
Resize operation completed for file# 3, old size 15319040K, new size 15329280K
2016-06-18 14:23:12.126000 +00:00
Resize operation completed for file# 3, old size 15329280K, new size 15400960K
2016-06-18 14:23:13.836000 +00:00

and let that continue

Resize operation completed for file# 3, old size 18513920K, new size 18585600K
2016-06-18 14:27:08.730000 +00:00
Resize operation completed for file# 3, old size 18585600K, new size 18657280K
2016-06-18 14:27:11.079000 +00:00
2016-06-18 14:28:03.905000 +00:00
Resize operation completed for file# 3, old size 18657280K, new size 18728960K
2016-06-18 14:28:05.179000 +00:00
Resize operation completed for file# 3, old size 18872320K, new size 18944000K

until the filesystem is full

2016-06-18 14:28:28.647000 +00:00
ORA-1653: unable to extend table SYS.TEST01 by 128 in tablespace SYSAUX
ORA-1653: unable to extend table SYS.TEST01 by 128 in tablespace SYSAUX
Errors in file /u01/app/oracle/diag/rdbms/cdb/CDB/trace/CDB_ora_26342.trc:
ORA-27061: waiting for async I/Os failed
Linux-x86_64 Error: 28: No space left on device
Additional information: 4294967295
Additional information: 1048576
ORA-1653: unable to extend table SYS.TEST02 by 128 in tablespace SYSAUX
ORA-1653: unable to extend table SYS.TEST02 by 1024 in tablespace SYSAUX

My datafile is now 18GB.
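A sketch of the kind of activity that can drive this growth (the table names match the ORA-1653 messages above, everything else is an assumption):

create table test01 tablespace sysaux as select * from dba_objects;
create table test02 tablespace sysaux as select * from dba_objects;
-- repeat to keep the segments, and the autoextensible datafile, growing
alter table test01 allocate extent (size 100m);
insert /*+ append */ into test02 select * from test02;
commit;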

file size

I’m checking the size for both files here:

[oracle@CDB]$ ls -l /u0?/app/oracle/oradata/CDB/sysaux*
-rw-r----- 1 oracle oinstall 19398664192 Jun 18 14:38 /u02/app/oracle/oradata/CDB/sysaux011460.dbf
-rw-r----- 1 oracle oinstall 15686705152 Jun 18 14:38 /u03/app/oracle/oradata/CDB/sysaux014244.dbf

The source one is 19398664192 bytes = 18GB, which is exactly what I got in the last resize message from the alert.log, but the target one is still 14.6GB, which is the size it was created with at the beginning of the move. The double write occurs only for the blocks that have already been copied, and the move has not reached the 14.6 GB mark yet.

We can see that from the disk usage. ‘ls’ displays the declared size but ‘du’ counts the actual size – only the blocks that have been written so far.


[oracle@CDB]$ du -k /u0?/app/oracle/oradata/CDB/sysaux*
18944012 /u02/app/oracle/oradata/CDB/sysaux011460.dbf
6894604 /u03/app/oracle/oradata/CDB/sysaux014244.dbf

The target file is only at 6894604 KB = 6.5 GB so far, but it keeps increasing:

[oracle@CDB]$ du -k /u0?/app/oracle/oradata/CDB/sysaux*
18944012 /u02/app/oracle/oradata/CDB/sysaux011460.dbf
7013388 /u03/app/oracle/oradata/CDB/sysaux014244.dbf

Note: I verified that if the move datafile session is suspended, the target file disk usage does not increase even when we have activity on the tablespace.

The move continues, and at the point where it reaches a block beyond the initial size the target file is resized:


[oracle@CDB]$ ls -l /u0?/app/oracle/oradata/CDB/sysaux*
-rw-r----- 1 oracle oinstall 19398664192 Jun 18 15:18 /u02/app/oracle/oradata/CDB/sysaux011460.dbf
-rw-r----- 1 oracle oinstall 19398664192 Jun 18 15:18 /u03/app/oracle/oradata/CDB/sysaux014244.dbf

And finally, the move is completed without any problem:


Sat Jun 18 15:52:57 2016
Move operation committed for file /u03/app/oracle/oradata/CDB/sysaux011460.dbf

Conclusion: no problem

Online datafile move is compatible with autoextensible datafile resize, without any problem. Of course, you should not plan a resize at the same time as a large load, for performance reasons, but it works. Actually this feature is very reliable: no locks, efficient, and the cleanup is well done even in case of a crash (the info is in the controlfile).

 

Cet article 12c online datafile move and resize est apparu en premier sur Blog dbi services.

ADF BC Range Paging and ADF UI Table Pagination Use Case

Andrejus Baranovski - Sat, 2016-06-18 09:45
ADF UI table pagination and ADF BC range paging sound like a perfect combination, but to make them work together perfectly a bit of extra effort is required. In a search/edit implementation, the table can't remember the updated record when navigating back to the search screen (the VO runs with Range Paging and the UI table is displayed with pagination). I will explain how to solve this and show how to keep the UI table displaying the current page instead of jumping back to the first page.

Here is the working use case (download sample application - SearchEditApp.zip). Navigate to any page except first in UI table and select a record:


On edit, the selected record should be opened in the fragment. Both buttons (Save and Close, and Close) return back to the search screen, where the current record and page should be preserved:


Close button will trigger Rollback operation, but current row should not be lost (see solution here: ADF Rollback Operation and Stay On Current Row):


Current row and page remains selected on return to search fragment:


A huge advantage of Range Paging mode for the VO is faster navigation between UI table pages. This is especially important now that we support pagination: it is much easier for the user to navigate to the last page (one click, without scrolling), and we must ensure fast navigation in the table. This can be achieved with Range Paging - it forces the VO to generate a SQL query with ROWNUM and, just as important, it fetches only a subset of data (by default, ADF BC would fetch all rows up to the last one when navigating to the last page). Here we navigate to the last page:


An example of the SQL query generated by a VO enabled with Range Paging, and the subset of row data fetched for the last page:
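The generated statement follows the usual ROWNUM-window shape; a sketch only (the column list, bind names and hint are assumptions, not the exact statement from the sample application):

select * from (
    select /*+ first_rows */ iq.*, rownum rn
    from (
        select emp.employee_id, emp.first_name, emp.last_name
        from   employees emp
        order by emp.employee_id
    ) iq
    where rownum <= :range_end
)
where rn > :range_start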


To force the UI table to display the correct page again (on navigation back to search from edit) when Range Paging is enabled in the VO, make sure to set the first property for the UI table (see solution here: ADF Rollback Operation and Stay On Current Row). We should keep track of the current range start before navigating away from the search screen and reset it back to the same value when navigating back from edit. I'm keeping this value in a pageFlowScope variable managed through the bean method invoked on edit:


This is the method - it keeps track of current range start and updates pageFlowScope variable (later referenced from first property on UI table):


Bonus - see an example of how to refresh VO data (this is useful when you want to display the latest data to the user before he starts editing it) and keep the current row. When Range Paging is enabled, the current row can be retained by re-setting the current row key obtained from the iterator binding (this executes SQL with a find-row-by-key and returns only the current row).

Object Oriented Pl/Sql

Darwin IT - Sat, 2016-06-18 05:01
Years ago, in my Oracle days, I wrote an article on Oracle (Object) Types and how they make Pl/Sql so much more powerful. It was in Dutch, since I wrote it for our monthly internal consulting magazine called 'Snapshot'. Because it was in Dutch and I regularly refer to it on our blog or in questions on forums, I have wanted to rewrite it for years. So let's go. Oracle Types were introduced in the Oracle 8/8i era and enabled me to build stuff that was not possible using regular Pl/Sql.

In Oracle 8/8i the implementation lacked constructors, but it was already very powerful. From Oracle 9i onwards the possibilities were extended a lot and brought to where they still are today, I think. So everything that I post here is possible from Oracle 9i and later.

And you may ask: why bother with an extension to Pl/Sql dating from about 10 to 15 years ago? And why, if I'm into SOA Suite and/or OSB? Well, I really think that Pl/Sql in combination with Object Types is the best tool at hand for creating APIs to Oracle Database applications. And the DB Adapter's capability of calling Pl/Sql functions with Object Type parameters is very strong. Together they form the best integration pattern for the Oracle Database. Even for data retrieval, it's much stronger than stand-alone queries or views.

Do these Object Types make Pl/Sql an object oriented language? I'm not going to discuss that at length. I think it is much more like the way Turbo Pascal 5.5 was OO: a 3GL with object extensions. So if you're an OO purist: you're right upfront, as far as I'm concerned. But object types make the life of a Pl/Sql programmer a lot more fun. And I feel that even after all those years the capabilities aren't used as much as they could be.

So let's dive into it, starting with the basics.
A type with a constructor

A type with a constructor and some methods can be created as follows:

create or replace type car_car_t as object
(
-- Attributes
license varchar2(10)
, category number(1)
, year number(4)
, brand varchar2(20)
, model varchar2(30)
, city varchar2(30)
, country varchar2(30)
-- Member functions and procedures
, constructor function car_car_t(p_license in varchar2)
return self as result
, member function daily_rate(p_date date)
return number
, member procedure print
)
/

I don't intend to give a lecture on object orientation here, but if you look at the type specification it is immediately clear that an Oracle Type is a kind of record type in its own right, except that besides attributes it also contains executable additions: methods.

Methods are functions and/or procedures that operate on the attributes of the type. Within a method you can see the attributes as 'global' package variables.

As said, a very convenient addition in the Types implementation is the possibility to define your own constructors. They're declared as:
  , constructor function car_car_t(p_license in varchar2)
return self as result

A constructor starts with the keyword constructor and is always a function that returns the 'self' object as a result. Also, the constructor is always named the same as the type itself. Implicitly there's always a constructor with all attributes as parameters. This was already the case in Oracle 8, and from Oracle 9i/10g onwards it is still delivered for free. But besides the default constructor you can define several of your own. This enables you to instantiate an object based on a primary key value, for example. Based on that key you can do a select into the attributes from a particular table. Or instantiate a type based on the read of a file. Or make it parameter-less so that you can just instantiate a dummy object that can be assigned values in a process. This is especially convenient if you have a very large object, where not all the attributes are mandatory.
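For example, a type could declare several constructors next to the implicit attribute-based one; a minimal sketch with a made-up type name:

create or replace type demo_car_t as object
( license  varchar2(10)
, brand    varchar2(20)
  -- parameter-less constructor: returns an empty 'dummy' instance
, constructor function demo_car_t
  return self as result
  -- primary-key based constructor: selects its attributes from a table
, constructor function demo_car_t(p_license in varchar2)
  return self as result
)
/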
Often I add a print method or a to_xml or to_string method. This enables you to print all the attributes or return an XML with them, including the call of the same method in child objects. Child objects are attributes based on other types or collections.
The implementation of the methods is in the Type Body:
create or replace type body car_car_t is

-- Member procedures and functions
constructor function car_car_t(p_license in varchar2)
return self as result
is
begin
select license
, category
, year
, brand
, model
, city
, country
into self.license
, self.category
, self.year
, self.brand
, self.model
, self.city
, self.country
from cars
where license = p_license;
return;
end;
member function daily_rate(p_date date)
return number
is
l_rate number;
cursor c_cae( b_license in varchar2
, b_date in date)
is select cae.dailyrate
from carsavailable cae
where b_date between cae.date_from and nvl(cae.date_to, b_date)
and cae.car_license = b_license
order by cae.date_from;
r_cae c_cae%rowtype;
begin
open c_cae( b_license => self.license
, b_date => p_date);
fetch c_cae into r_cae;
close c_cae;
l_rate := r_cae.dailyrate;
return l_rate;
end;
member procedure print
is
l_daily_rate number;
begin
dbms_output.put_line( 'License : '||self.license);
dbms_output.put_line( 'Category : '||self.category);
dbms_output.put_line( 'Year : '||self.year);
dbms_output.put_line( 'Brand : '||self.brand);
dbms_output.put_line( 'Model : '||self.model);
dbms_output.put_line( 'City : '||self.city);
dbms_output.put_line( 'Country : '||self.country);
l_daily_rate := daily_rate(p_date => sysdate);
if l_daily_rate is not null
then
dbms_output.put_line('Daily Rate: '||l_daily_rate);
else
dbms_output.put_line('No cars available');
end if;
end;

end;
/

Here you see that I used a primary key based constructor to do a select from the cars table into the attributes, and a simple return. I do not have to specify what I want to return, since it somehow returns 'itself', that is: an instance of the type. So the return variable is more or less implicit.

The print method enables me to test the object easily after the instantiation:
declare 
-- Local variables here
l_car car_car_t;
begin
-- Test statements here
l_car := car_car_t(:license);
l_car.print;
end;
Collections

An object doesn't come alone very often. The same goes for object instances: we talk to the database, so in most cases we have more than one instance of an entity.

A set of object instances is called a collection, and it is defined as:
create or replace type car_cars_t as table of car_car_t;


So a collection is actually a table of objects  of a certain type. Oracle is even able to query on such a collection, but I'll elaborate on that later.

Note, by the way, that there now is a reference to, or put differently, a dependency on the particular object type. This means that the object specification of, in this case, 'car_car_t' can't be changed anymore without dropping all the references to it. This may become quite inconvenient when changing a large hierarchy of object types, so you'd better create an install script right away that can recreate (drop and create) the complete tree.

The 'body', the source code, can be recompiled. This is important, because the specification defines the structure of the object (the class) and other objects depend on this interface. Maybe Oracle should define an interface type, so this could be made a bit more loosely coupled.

This applies especially when it comes to table definitions (in the database) where you can add an object-type based column. After all, a physical table can't become invalid; what should become of the data in that case? It would become 'undefined'. For the rest, you can see a Collection as an ordinary Pl/Sql table, comparable to an "index by binary_integer" table, but with the difference that it is a (stand-alone) object in itself, containing other objects. This means that to be able to use a Collection it has to be instantiated. This can be done implicitly by means of a query:
select cast(multiset(
select license
, category
, year
, brand
, model
, city
, country
from cars
) as car_cars_t)
from dual

What this actually does is that the result set of the select on the table cars is redefined as a Collection. The multiset function denotes that what is returned is actually a data set of zero or more rows. The cast function is used to denote as what datatype/object type the multiset should be considered. You could say that a collection layer is laid over the result set. I haven't been able to test it, but I am curious about the performance effects. What would be the difference between this query and, for example, a for-cursor-loop? Now that it is seasoned with a collection sauce, you can handle this result set as if it were a Pl/Sql table in memory after all.
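As a point of comparison (a sketch only, not tested against this schema), the same collection can also be filled with a bulk collect that uses the default attribute-based constructor:

declare
  l_cars car_cars_t;
begin
  select car_car_t(license, category, year, brand, model, city, country)
  bulk collect into l_cars
  from   cars;
  dbms_output.put_line('Fetched '||l_cars.count||' cars');
end;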


Of course you can instantiate and fill the collection more explicitly, like:
declare 
l_cars car_cars_t;
begin
l_cars := car_cars_t();
for r_car in (select license from cars)
loop
l_cars.extend;
l_cars(l_cars.count) := car_car_t(r_car.license);
end loop;
if l_cars.count > 0
then
for l_idx in l_cars.first..l_cars.last
loop
dbms_output.put_line('Car '||l_idx||':');
l_cars(l_idx).print;
end loop;
end if;
end;

In this case the collection is instantiated in the first line. Then a for loop is started based on the select of the primary key on the cars table, which is the license column. In the loop, on each iteration the collection is extended, and then a new instance of car_car_t, created using the primary key constructor with the car's license, is assigned to the last row of the collection, denoted with the implicit count attribute of the collection.
In the second loop an example is given which shows how easily you can traverse the collection and print each row object.

Object-Functions and Views

The creation and population of a collection can also be put in a function, of course:
create or replace function get_cars 
return car_cars_t is
l_cars car_cars_t;
begin
select cast(multiset(
select license
, category
, year
, brand
, model
, city
, country
from cars
) as car_cars_t)
into l_cars
from dual;
return(l_cars);
exception
when no_data_found then
l_cars := car_cars_t();
return l_cars;
end get_cars;
/

This function get_cars has no input parameters; you could restrict the query based on model or year, for instance. It returns the car_cars_t collection. If there are no cars available this query raises a no_data_found exception, since it does a select-into. But since it happens in a function, the nice thing is that you can catch the exception and just return an empty collection.

But the fun part is that you can use the result of that function as the source of a query. So, you saw in the function that you can lay a collection sauce over a result set, but the other way around is also possible: a collection can be queried:
select car.license
, car.category
, car.year
, car.brand
, car.model
, car.city
, car.country
, car.daily_rate(sysdate) daily_rate
from
table(get_cars) car

The trick is in the table function. That directs Oracle to consider the outcome of the function as a result set. The example also shows that the methods are available too, but of course only if the method is a function. By the way, in this example the attributes and the method results are simple scalar datatypes, but they could also be types or collections, and those are available in the query. The attributes of object attributes can be referenced in the query with the dot notation ('.'). In other words: hierarchically deeper attributes can be fetched as a column value and returned this way.

In this case we use a function as the base for the query. It is then also possible to create a view on top of it, as long as the function and the object types that are returned are 'visible' to the user/schema that owns the view and/or uses the view.
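Such a view could be sketched like this (the view name is made up):

create or replace view cars_rate_v as
select car.license
,      car.brand
,      car.model
,      car.daily_rate(sysdate) daily_rate
from   table(get_cars) car;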

But to stretch it some more: not only the result of a function can be used as the base of a query; a local variable or package variable can also be supplied as the source of a query:
declare
l_cars car_cars_t;
l_rate number;
begin
l_cars := get_cars;
select car.daily_rate(sysdate - 365) daily_rate
into l_rate
from table(l_cars) car
where license = '79-JF-VP';
dbms_output.put_line( l_rate);
end;
Isn't this a lot easier than traversing a pl/sql table in search of that one particular row?
Now, you could think: isn't this a full-table scan then? Yes indeed. But this full-table scan is done completely in memory and is therefore very fast. And let's be honest: who creates a pl/sql table of more than a gigabyte? Although using the examples above this can be done quite easily. So a bit of performance-aware programming is recommended.

Pipelining

In the previous paragraph I already mentioned performance. With the collection-function method above you could program your own 'External Tables' in Oracle 8i already (External Tables were introduced in 9i). So you could, for example, read a file in a Pl/Sql function using UTL_File, process it into a collection and return this.

Then you could create a view around it with the table function and do a query on a file! Impressive, huh? A very important disadvantage of this method is that the function is executed completely, as one logical/functional unit. So the complete file is read, and the complete collection is built and returned to the caller as a whole. That means that when doing a select on that function, the function is executed completely before you'll get your result. This is especially inconvenient when the post-processing on the result of the function is expensive as well. This is why pipelining was introduced in Oracle 9i.

A pipelined function is functionally identical to the collection-returning function I described above. The most important difference is that it is declared as a pipelined function (duh!), and moreover that the intermediate results are piped (sorry, in Dutch this is funny, but I did not make up the term) and thus returned as soon as they become available.

This looks like:
create or replace function get_cars_piped(p_where in varchar2 default null)
return car_cars_t
pipelined
is
l_car car_car_t;
type t_car is record
( license cars.license%type);
type t_cars_cursor is ref cursor;
c_car t_cars_cursor;
r_car t_car;
l_query varchar2(32767);
begin
l_query := 'Select license from cars '||p_where;
open c_car for l_query;
fetch c_car into r_car;
while c_car%found
loop
l_car := car_car_t(p_license => r_car.license);
pipe row(l_car);
fetch c_car into r_car;
end loop;
close c_car;
return;
end get_cars_piped;

So you indeed see the keyword 'pipelined' in the specification of the function, and after that, in the loop, that each separate object is returned using the 'pipe row' statement. You could say that 'pipe row' is like an intermediate return. Besides that you get in this function, completely free and on the house, an example of the use of a ref cursor. With that it is possible to build up a flexible cursor of which you can adapt the query. You can call this function as follows:
select *
from table(get_cars_piped('where license != ''15-DF-HJ'''))

I found that it is not possible to call this function in a pl/sql block directly. If you think about it, it seems logical. What happens in the statement is that the sql engine calls the pl/sql function, receives each row directly and is able to process it. This way it is possible to execute the function and process the result simultaneously. Pl/Sql in itself does not support threads or pipelines. Pl/Sql expects the result of a function call as a whole and can advance with processing only when the function is completely done.
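What does work from Pl/Sql is to keep the table() call inside a SQL statement; a sketch (the filter string is arbitrary):

begin
  for r_car in ( select license, brand
                 from   table(get_cars_piped('where category = 1')) )
  loop
    dbms_output.put_line(r_car.license||' - '||r_car.brand);
  end loop;
end;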
Object Views

Now you have seen how to create a Collection sauce over a result set and how a Collection can be queried using select statements. Another important addition in Oracle 9i is the so-called object view (I say important, but I haven't seen them much out in the open). Object views are views that can return object instances, in contrast to regular views that return rows with columns.
An object view is defined as follows:
create or replace view car_ov_cars
of car_car_t
with object oid (license)
as
select license
, category
, year
, brand
, model
, city
, country
from cars

Typical of an object view is that you specify the object type the view is based on and what the object identifier (oid) is. That is the attribute or set of attributes that acts as the primary key of the object.

You could query this view as a regular view, but the strength is in the ability to fetch a row in the form of an object. This is done using the function 'value':
declare
l_car car_car_t;
begin
select value(t)
into l_car
from car_ov_cars t
where license = '79-JF-VP';
l_car.print;
end;

This delivers you an object instance from the view without any hassle. Very handy if you're using the objects extensively.
References

When you have an extensive object model, you might run into objects with one or more collections as attributes. Those collections can in turn hold multiple instances of other object types. This can become quite memory intensive. Besides that, you can run into the need to implement circular references, for example a department with a manager who is an employee him/herself and directs one or more other employees. It could be that you wanted to model that as an employee with an attribute typed as a collection type on the employee type. It could be convenient to have a looser coupling between objects.

For that the concept of References was called into life. In fact, a reference is nothing more than a pointer to another object-instance. And that uses less memory than a new object instance. You can refer to an object-instance in an object-table or an object-view. And that is where the object-identifier from the previous paragraph comes in handy.

A collection of references is defined as:
create or replace type car_cars_ref_t as table of ref car_car_t;

You can populate this with the make_ref function:
declare
  l_cars car_cars_ref_t;
  l_car  car_car_t;
begin
  -- Build up the collection of references
  select cast(multiset(select make_ref( car_ov_cars
                                      , cae.car_license
                                      )
                       from carsavailable cae) as car_cars_ref_t)
  into l_cars
  from dual;
  -- Process the collection
  if l_cars.count > 0
  then
    for l_idx in l_cars.first..l_cars.last
    loop
      dbms_output.put_line( 'Car '||l_idx||': ');
      -- Fetch the object value based on the reference
      select deref(l_cars(l_idx))
      into l_car
      from dual;
      -- Print the object
      l_car.print;
    end loop;
  end if;
end;

Here you see that make_ref needs a reference to an object-view and the particular object identifier. The underlying query then delivers a reference to the objects that need to be processed. That query can be different from the query of the object-view.

What it actually means is that you first determine which objects are to be taken into account. For those objects you determine a reference/pointer based on the object-view. And then you can get the actual instance using the reference at a later stage.

The latter is done using the deref function. This deref function expects a reference and delivers the actual object instance. The deref is only available in a SQL flavour, by the way. You cannot use it in Pl/Sql directly. Under the covers a 'select deref()' query is translated into a select on the object-view.

It is important, then, to design your object model and object view in such a way that the actual query on that object view is indexed properly. Experience shows that it can be quite difficult to determine why the optimiser does or doesn't use the index with derefs. In that respect the deref is a nasty abstraction.

The ref that you see in the ref-collection declaration can be used in the declaration of attributes as well. When you want to use an object as an attribute in another object, for instance an object car in the object garage, then you can use the keyword ref to denote that you don't want the object itself but a reference:
create or replace type car_garage_t as object
(
car ref car_car_t
)

Then there is also a ref function that creates references to separate objects:
select ref(car) reference
, license
from car_ov_cars car

This function is actually a counterpart of the value-function.
The difference between the functions ref and make_ref is that 'ref' gets as a parameter the object for which a reference must be determined. Make_ref, however, is based on an object-view or object-table and determines the reference based upon the primary-key or object-id in the object-view or -table.

The ref function is used when you need to create a reference to an object that is a direct result of a query on the object-view. But if you want to determine the primary keys of the objects you want to process, based upon a query on other tables and/or views, then make_ref comes in handy. In that case you deliver the primary-keys of the objects to process separately, and then make_ref uses the object-view and the primary-key values to determine the references.
MAP and Order methods
Now sometimes you need to order objects. Which one is bigger or smaller and how do I sort them? Obviously this is important when comparing objects but also when querying object-views and object-tables.
For the comparison of objects you can create a map-method:
map member function car_size
return number
is
begin
return 1000; -- or a calculation of the content of the car, or the price or fuel-consumption
end;

In the implementation you can do a calculation on the attributes of the object. The result needs to be of a scalar datatype (number, date, varchar2) and 'normative' for the object with regards to other objects of the same object-type. The map-method can then be used by Oracle to do comparisons like l_car1 > l_car2, and comparisons that are implied in select-clauses such as DISTINCT, GROUP BY, and ORDER BY. Imagine how compact your code can be if you implement methods like these.
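As a minimal sketch (reusing the car_car_t constructor with the p_license parameter shown earlier), such a comparison in PL/SQL could look like this; Oracle invokes the map method car_size behind the scenes:

declare
  l_car1 car_car_t := car_car_t(p_license => '15-DF-HJ');
  l_car2 car_car_t := car_car_t(p_license => '79-JF-VP');
begin
  -- the map method is called implicitly to evaluate the comparison
  if l_car1 > l_car2
  then
    dbms_output.put_line('Car 1 is the larger one');
  else
    dbms_output.put_line('Car 2 is the larger one');
  end if;
end;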

You can also make use of an Order method:
order member function car_order(p_car car_car_t)
return number
is
l_order number := 0;
c_smaller constant number := -1;
c_larger constant number := 1;
begin
if license < p_car.license
then
l_order := c_smaller;
elsif license > p_car.license
then
l_order := c_larger;
end if;
return l_order;
end;

The difference with the map-method is that the map-method returns a value that only has meaning for the object itself. The implicit parameter is only the 'self'-object. Oracle determines the results of the map-method for the two objects to be compared and compares the two results. With the order-method Oracle will provide one object as a parameter to the order-method of the other object. Therefore the order-method always needs an extra parameter besides the implicit self-parameter. In the function's implementation you code the comparison between those two objects yourself, and that can be a lot more complex than above. You return a negative value if the self-object is smaller than the provided object and a positive value if the self-object turns out larger. A value of 0 denotes equality of the two objects. The order-method is used with l_car1 > l_car2 comparisons and always needs to have a numeric return datatype.

An object type can have at most one map-method or one order-method, but not both.
Conclusion
Maybe it all dazzles you by now. But if you got through to here, then I'm impressed. It might seem like a bit of boring stuff. And it might seem quite daunting when you start with it. Most functionality you need to build can be done in the Oracle 7 way. But certain solutions can become a lot more powerful if you do them using object-types. I have gratefully used them for years now. But then I am someone who likes to solve a similar problem in a different way the next time it comes around.

Because of object-types Pl/Sql becomes a lot more powerful and it provides you with more handles to solve some nasty performance problems, or pieces of functionality that really aren't solvable in the Oracle 7 way.

And as said in the intro: Oracle Types are really a game-saver for SOA Suite and Service Bus integrations with the Database Adapter. Because using a hierarchy of objects you'll be able to fetch a complete piece of the database with one database call. I even created a Type-Generation framework (I called it Darwin-Hibernate) that can create types based on the datamodel in the database. It then creates constructors and collection-objects over foreign-keys, which allows you to instantiate a complete structure based on the constructor of the top-level object. For instance a Patient with all its medical records, addresses, etc.

All the examples already work with Oracle 9i. But under 10g, 11g, and 12c they will run a lot smoother and faster because of the performance optimizations of the Pl/Sql-engine (Oracle 9i was not quite a performance topper).

This wasn't a story about Object Oriented Pl/Sql, actually. I didn't talk about super- and sub-types. You can read about that in Chapter 12 of the Pl/Sql User's Guide and Reference of Oracle 10g (I really ran into that page when Googling for it...). Or this page in 11g.
But I wanted to get you started with Object Types, and show you what you can do with them and how powerful Pl/Sql has become with them.

For some more advanced stuff you can read my earlier article about type inheritance, regarding EBS, but interesting enough for non-EBS developers. And another one. And yet another one.

Have fun with Pl/Sql (you might think by now that I really feel Pl/Sql needs this uplift), because I think with Object Types Pl/Sql is really fun. The scripts and datamodel for the examples can be found here.

Salesforce Buys SteelBrick—What Does it Mean for Oracle CX Cloud and CPQ?

Linda Fishman Hoyle - Fri, 2016-06-17 16:10

A Guest Post by VP Chris Shutts, Oracle Cloud Development (pictured left)

Just like Oracle, Salesforce from time to time snaps up hot start ups and large established players to round out its enterprise applications portfolio. These acquisitions normally do not involve SFDC’s Force.com partners. So, in December 2015 when Salesforce announced its intent to acquire SteelBrick (a CPQ vendor on the Force.com platform), a ripple went through the rest of the Force.com ecosystem.

This acquisition was not good news for other Force.com-based CPQ solutions such as Apttus. SteelBrick is a relatively simple solution, mostly focused on small and mid-market companies requiring configuration capability in their sales process. Apttus has also tried to serve this market in recent years, making the two companies direct competitors in the ecosystem; cash-hungry Apttus may now need to make staffing and R&D decisions that would leave it less competitive.

In any case, our own CPQ Cloud, based on the BigMachines’ acquisition, is rock solid and has been enjoying very strong market momentum. The solution is frequently sold and/or added to existing Sales Cloud, Commerce, and SCM Cloud instances.

Upsides for Oracle

Trefis Team published Here’s The Reason Behind Salesforce.com’s Acquisition of SteelBrick in Forbes.com and said, “The purchase doubtlessly adds additional firepower to Salesforce’s industry-leading Sales Cloud…” However, I see real upsides for Oracle. The SteelBrick acquisition unquestionably helps validate the importance of configuration, pricing, and quoting in the overall sales process.

Also SteelBrick’s solution and Oracle’s CPQ Cloud are not in the same league. Oracle CPQ Cloud is a CPQ application delivered in the cloud that can handle complex products, high transaction volumes, and large user populations. CPQ applications help solve complicated business problems, and therefore require lots of technology investment and time to perfect. Our product has matured for 16 years and delivers innovation and value to great companies, such as HP, ADP, Vodafone, Schneider-Electric, and Siemens.

Oracle Database in the Cloud

Peeyush Tugnawat - Fri, 2016-06-17 11:47

Combine the power and flexibility of the number 1 Database in the world with all the benefits of a modern Cloud.

Oracle Database Cloud Service provides the power and flexibility of the Oracle Database in the cloud with your choice of a dedicated database instance with full administrative control, or a dedicated schema with a complete development and deployment platform managed by Oracle. Database as a Service provides dedicated virtual machines with fully configured and running Oracle Database instances.

Database Schema Service offers dedicated database schemas, RESTful Web Services for access to data, and a powerful declarative development and deployment environment for custom applications. Oracle Database Cloud Service enables businesses to reap all the benefits of the cloud including subscription-based, self-service access to reliable, scalable, and secure cloud environments.

Try it free for 30 days. Visit https://cloud.oracle.com/database for more details.


OAM 11g : Authorization headers are not passed to downstream applications

Online Apps DBA - Fri, 2016-06-17 11:31

Readers, Just another post on OAM 11g issue that I have recently seen. OAM version could be 11g R1 or later. It is common practice for OAM to pass headers to downstream applications for userid or any other user/session attributes for SSO to work. Recently while working on EBS 12.2 and OAM PS3 SSO integration, I […]

The post OAM 11g : Authorization headers are not passed to downstream applications appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

VirtualBox 5.0.22 Released

Tim Hall - Fri, 2016-06-17 08:02

VirtualBox 5.0.22 has been released.

The downloads and changelog are in the usual places.

Judging by the progress of 5.1, now in beta 3, it won’t be long before we’ve got something new to play with.

NoSQL for Oracle Developers

Gerger Consulting - Fri, 2016-06-17 07:16
Oracle Developers! Want to learn more about NoSQL, but don't know where to start? Look no further.

Attend our free webinar by the venerable Pramod Sadalage. He'll answer all the questions you have about NoSQL but were too afraid to ask. :-) 

After all, he wrote the book.


 Sign up at this link. 
Categories: Development

Cardinality trick

Jonathan Lewis - Fri, 2016-06-17 07:02

In the absence of a virtual column or function-based index, the optimizer uses a basic selectivity guess of 1% for a predicate of the form: “function(column) = constant”; but there is (at least) one special case where it gets clever; simple type conversion:


create table t1 nologging
as
select  cast(
                case
                        when mod(rownum,1000) = 0 then 0
                        when mod(rownum,100)  = 0 then 1
                                                  else 9
                end as varchar2(1)
        ) v1
from
        all_objects
where   rownum <= 50000
;

execute dbms_stats.gather_table_stats(user,'t1')

set autotrace on explain

select count(*) from t1 where v1 = 9;
select count(*) from t1 where sign(v1) = 1;

set autotrace off

If you think about the table creation script you’ll agree that there are 49,500 rows where v1 = ‘9’ so the first query could (in theory) produce an estimated cardinality of 49,500. However I’ve got a datatype error in the predicate and I haven’t created a histogram – and that’s not very helpful in two different ways. In general Oracle will use a guessed selectivity of 1% after applying a function to a column with equality, which would make it report an estimated cardinality of 500 for my sample query, but in this case Oracle uses the number of distinct values for the column (i.e. 3) to infer a value for the number of distinct values for the function and uses that in the first query:


---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |     2 |    25   (4)| 00:00:01 |
|   1 |  SORT AGGREGATE    |      |     1 |     2 |            |          |
|*  2 |   TABLE ACCESS FULL| T1   | 16667 | 33334 |    25   (4)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(TO_NUMBER("V1")=9)

On the other hand, while the optimizer “knows” that the number of distinct values for the varchar2 will match the number of distinct numerical equivalents (not that that’s actually true), it has no idea how many of the varchar2 values will equate to negative, zero, or positive values, so the 1% selectivity re-appears for the second query:


---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |     2 |    25   (4)| 00:00:01 |
|   1 |  SORT AGGREGATE    |      |     1 |     2 |            |          |
|*  2 |   TABLE ACCESS FULL| T1   |   500 |  1000 |    25   (4)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(SIGN(TO_NUMBER("V1"))=1)

It shouldn’t surprise you to see that you would also get 500 as the estimated cardinality if the predicate were to read “sign(v1) = 2” — a value that the sign() function can’t take. The optimizer is using a generic rule, it doesn’t know the specifics of the function you’re using.

Footnote:

If you’re wondering when the number of distinct character values doesn’t match the number of distinct numeric values (and assuming all the character values are valid for conversion to numeric) just remember that the same number can be represented in different ways, for example you might change the original cast() that I used to generate the data to:

        cast(
                case
                        when mod(rownum,1000) = 0 then '0'
                        when mod(rownum, 100) = 0 then '1'
                        when mod(rownum,   2) = 0 then '9'
                                                  else '09'
                end as varchar2(2)
        ) v1

Now we have 4 distinct character values (so the optimizer’s estimate would drop to 12,500) but only 3 distinct numeric equivalents.

This, by the way, is why the optimizer transforms a predicate like “character_column = {numeric value}” into “to_number(character_column) = {numeric value}”, rather than converting it to “character_column = to_char({numeric value})”. A character string can only represent one numeric value while a numeric value can be displayed as an infinite number of different character strings (assuming the availability of the appropriate number of typing monkeys).

 


CPU_COUNT

Yann Neuhaus - Fri, 2016-06-17 06:48

When you have fewer CPU threads than the number of processes that have something to run on CPU, the OS will schedule them to share the CPU resource. Increasing the workload at that point will not increase the throughput because you have reached the capacity of your system, and response time will increase because of queuing. Actually, performance will even decrease because of the overhead of context switching when trying to share the processors.
When you don't want the OS scheduler to do the resource-sharing job, you can, and should, use Instance Caging. For sure, the database instance can do resource sharing more intelligently than the OS, as it knows the kind of workload and the performance requirement of each process.

I did some tests on an 8 CPU machine running SLOB with 32 concurrent sessions, then 31, then 30,… down to the last run with 1 session, each for 5 minutes. This is what you see in the right-most dark green triangle here:
[Image: CaptureSLOBCPUCOUNT01]
After a very short period of library cache contention, while all 32 sessions are parsing their statements, each run goes decreasing. The dark green here is labelled as 'CPU + CPU wait' and is coming from ASH, where all sessions are in state 'ON CPU' even when they are actually in the OS runqueue. Of course, I've only 8 CPU threads, so I cannot have 32 sessions running on CPU.

The runs on the left, where you can see the same pattern but with some light green, are from the same runs but with Instance Caging active. I've set a resource manager plan and I've set CPU_COUNT to 8 (the first run on the left), then 7, … down to 1. The dark green is still the 'ON CPU' state, and with Instance Caging Oracle allows at maximum CPU_COUNT processes in that state. The remaining processes are switched to a waiting state, instrumented as 'resmgr: cpu quantum' and displayed in light green.

My goal is to show that you can increase the throughput with Instance Caging. I measured the logical reads per second and made an Excel chart from them. The blue lines are from different CPU_COUNT settings from 8 to 1. The orange line is from runs without setting CPU_COUNT, which means that instance caging is not enabled. On the X axis you have the number of concurrent SLOB sessions I've run. What you see from the blue lines is that the throughput increases linearly with the number of concurrent sessions until it reaches the limit: either the CPU_COUNT limit or the physical limit when CPU_COUNT is not set. Note that the CPU threads are not cores here. Tests were done on Oracle Public Cloud 4 OCPUs (aka OC5 compute shape), which are actually 8 threads from E5-2690 v2 Intel processors. This is why running on two threads here does not double the throughput. Actually, when running 8 sessions on 8 threads the throughput is only x6 from running one session on one thread.

[Image: CaptureSLOBCPUCOUNT]

The second goal is to compare Oracle instance caging with the OS scheduler when the instance is using the full capacity of the server. On the top you can see the darker blue line, which is when CPU_COUNT is set to the actual number of CPU threads (CPU_COUNT=8). The orange line is when no CPU_COUNT is set: instance caging is disabled. The maximum throughput, 3.6 MLR/s, is reached when we run the same number of sessions as the number of CPU threads. What you see here is that when the server is overloaded, scheduling at instance level is more efficient than scheduling at OS level. Without instance caging, the orange line, the LR/s degrades because of context switching overhead. So the recommendation here is to always do instance caging even if you have only one instance on your server.

Why is the instance caging algorithm better than the OS scheduler? Because it is focused on the database processes' workload. Here are the graphs of the 'resmgr: cpu quantum' wait times.

[Image: CaptureSLOBCPUCOUNT02]

On the left, I've run with CPU_COUNT=8. When I have 32 concurrent sessions, each of them spends 3/4 of its time waiting for CPU. Those waits are about 300 milliseconds. When I have only 9 sessions, each one has to spend only a small part of its response time waiting. They wait about 25 milliseconds on 'resmgr: cpu quantum'. The wait time is not fixed and depends on the load. This makes sense: when you know you will have to spend a long time waiting, it's better to have longer waits in order to avoid too many context switches. On the right, it's the same but with CPU_COUNT=1, which gives x8 less CPU time to the processes. They will have to spend more time waiting. And we see that the wait time is adjusted: it can go up to 4-second time slices. The OS scheduler will never do that, putting a process on the runqueue to wait for several seconds, because the scheduler tries to focus on the response time. It's different with instance caging. When you know that you will have to spend a long time waiting, then it's better to optimize throughput by lowering the context switching.

The recommendation is to enable instance caging: set a resource manager plan and set cpu_count. It is not an extra-cost option: there is no additional cost for it. And it will always be better than letting the OS manage CPU starvation.
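As a minimal sketch (the plan name and the value 8 are just examples, to be adapted to your own system), enabling instance caging comes down to two settings:

-- activate a resource manager plan (any plan will do, for example the default one)
alter system set resource_manager_plan = 'DEFAULT_PLAN' scope=both;
-- cap the instance at 8 CPU threads
alter system set cpu_count = 8 scope=both;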

On Standard Edition 2, it’s even easier: Oracle Corp. enabled instance caging for you ;)

 

The article CPU_COUNT appeared first on Blog dbi services.

database options - review and looking forward

Pat Shuff - Fri, 2016-06-17 02:07
For the past two weeks we have been looking at database as a service (DBaaS) offered as platform as a service (PaaS) on the Oracle Public Cloud. We started this journey on the 2nd of this month by looking at the differences between Standard Edition, Enterprise Edition, High Performance Edition, and Extreme Performance Edition. To quickly review, Standard Edition is the basic database with table encryption as the only option. This is a full feature database without the ability to replicate data with any tool other than copying files and RMAN backup. You can't do things like transportable table spaces, streams, Data Guard, or any other replication technologies to make a system more highly available. Enterprise Edition is a more full featured database that allows for data replication from the installation and comes with Advanced Security (TDE) as a basis for the installation. This edition does not come with Data Guard but does have the option for transportable tablespaces, external references, and ways of replicating data manually from the database that Standard Edition does not contain. We then looked at the High Performance Edition which comes with
  • Transparent Data Encryption
  • Diagnostics
  • Tuning
  • Partitioning
  • Advanced Compression
  • Advanced Security
  • Data Guard
  • Label Security
  • Multitenant
  • Audit Vault
  • Database Vault
  • Real Application Testing
  • OLAP
  • Spatial and Graphics
We then looked at Extreme Performance Edition that contains all of the above plus
  • Active Data Guard
  • In Memory
  • Real Application Clusters (RAC)
  • RAC One
We then went into a simple description of each option and prepared for the following blogs that will go into more detail and code samples of not only what the options are but look at how to use them and try to tie them back to business benefits. Part of our description was a financial analysis of running a database in infrastructure as a service (IaaS) vs PaaS and the time and efficiency benefits that we get from PaaS over IaaS.

We wrapped up the week on the 3rd with a blog detailing what it takes to get a Windows desktop prepared to use a database in the cloud. The list of software is relatively generic and is not unique to Windows. We could just as easily have selected MacOSX or Linux but selected a Windows 2012 Server running in the Oracle Public Compute Cloud as IaaS. We did this primarily so that we would have a teaching platform that can be saved with a snapshot, reloaded for hands on classes, and accessible from a customer site to demonstrate cloud services. The software that we loaded on the Windows platform includes

  • To access cloud storage
    • Mozilla Firefox
    • RestClient extension for Firefox
    • Google Chrome
    • Postman extension for Chrome
    • CloudBerry for OpenStack (Windows only right now)
  • To access files in our instance
    • Putty
    • Filezilla
    • Cygwin (only needed on Windows)
  • To access our database instance
    • SQL Developer
    • Microsoft Visual C++ libraries (only needed on Windows)
We installed this on the Oracle Public Cloud because we have free accounts on this platform and we can keep them persistent. It would normally cost $150/month to keep this instance active if we purchased it as IaaS. We could just as easily have done this on Amazon EC2 or Microsoft Azure at a similar cost. We provisioned 30 GB of disk for the operating system and 10 GB for the binaries. We requested a 2 vCPU with 30 GB of RAM. If we were doing this on Amazon or Azure we probably would have gone for a smaller memory footprint but this is the base configuration for IaaS and Windows with 2 vCPUs in the Oracle Public Cloud. The idea is that a class of 15-30 people can log into this system with different user names and do minimal configuration to get started on workshops. We can refresh the users but not refresh the system for classes the following day or week. To provision this instance we went to the Oracle Cloud Marketplace to get a pre-configured Windows 2012 Server instance. We then downloaded the list of software and install them on the desktop.

On the 6th we dove into database partitioning to figure out that we can reduce storage costs and improve performance by fitting active data into memory rather than years or months of data that we typically throw away with a select statement. We talked about using partitioning to tier storage on premise and how this makes sense in the cloud but does not have as much impact as it does on premise. We talked about different partitioning strategies and how it can be beneficial to use tools like Enterprise Manager and the partition advisor to look at how you are accessing the data and how you might partition it to improve performance. On the 7th we looked at code samples for partitioning and talked about table extents and file system storage. We talked about the drawbacks of Amazon RDS and how not having file system access, having to use custom system calls, and not having sys access to the database causes potential issues with partitioning. We walked through a range partition example where we segmented the data by date and stored the different dates in different tablespaces.
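As an illustration only (the table, column, and tablespace names below are hypothetical, not the ones used in the workshop), a date-based range partition that places each year in its own tablespace looks roughly like this:

create table sales_history
( sale_id   number
, sale_date date
, amount    number
)
partition by range (sale_date)
( partition p_2014 values less than (to_date('01-01-2015','DD-MM-YYYY')) tablespace ts_2014
, partition p_2015 values less than (to_date('01-01-2016','DD-MM-YYYY')) tablespace ts_2015
, partition p_max  values less than (maxvalue) tablespace ts_current
);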

On the 8th we focused on database compression. This in conjunction with partitioning allows us to take older data that we typically don't access and compress for query or historical storage. We talked about the different compression methodologies

  • using historic access patterns (heat map and ADO options)
  • using row compression (by analyzing update and insert operations as they occur)
  • file compression (duplicate file links and file compression of LOBs, BLOBs, and CLOBs)
  • backup data compression
  • Data Guard compression of redo logs before transmission
  • index compressions
  • network transmission compression of results to client systems
  • hybrid columnar compression (Exadata and ZFS only)
  • storage snapshot optimization (ZFS only)
We did not really dive into the code for compression but referred to a variety of books and blogs that have good code samples. We did look at the compression advisor and talked about how to use it to estimate how your mileage could potentially vary. On the 9th we dove into an Oracle by Example tutorial on compression and followed the example using DBaaS. The examples that we followed were for an 11g instance but could have been done in a 12c instance if we had the demo tables installed on the 12c instance.
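To make the idea concrete, here is a sketch of compressing older partitions, reusing the hypothetical sales_history table from the partitioning sketch above:

-- basic compression for a partition that is mostly read-only history
alter table sales_history move partition p_2014 compress;
-- row compression for a partition that still receives DML
-- (compress for oltp requires the Advanced Compression option)
alter table sales_history move partition p_2015 compress for oltp;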

On the 10th we focused on database tuning options and dove into how to use SQL Tuning Advisor. In the example that we used we referenced an external table that was not part of the select statement which caused an unnecessary table index and full table scan. The example we used was again for 11g to utilize the sample database that we have installed but could just as easily have worked with 12c. On the 13th we dove a little deeper into tuning with a focus on Enterprise Manager and the Performance Advisor utilities. We followed the Oracle by Example Tuning Tutorial to diagnose a performance problem with a sql statement.
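For reference, a minimal sketch of driving the SQL Tuning Advisor from the command line with DBMS_SQLTUNE (the sql_id and the task name are placeholders):

declare
  l_task varchar2(128);
begin
  -- create and run a tuning task for a statement already in the cursor cache
  l_task := dbms_sqltune.create_tuning_task(sql_id    => '0abcd1efgh234',
                                            task_name => 'demo_tuning_task');
  dbms_sqltune.execute_tuning_task(task_name => l_task);
end;

-- then review the advisor's findings and recommendations
select dbms_sqltune.report_tuning_task('demo_tuning_task') from dual;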

On the 14th we looked at transparent data encryption (TDE) and how to enable and disable table encryption in the cloud. We talked about the risks of not encrypting by default and tried to draw lessons from Target Corporation and how failure to protect credit card data with encryption led to job losses across the company.
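As a small sketch (the table and column are hypothetical, and it assumes the encryption wallet/keystore is already configured and open), column-level TDE is declared in the table definition itself:

create table customers
( customer_id number
, name        varchar2(100)
, card_number varchar2(20) encrypt using 'AES256'
);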

On the 15th we looked at the backup and restore utilities in the cloud and how they differ from traditional RMAN utilities. You can use RMAN just like you do today and replicate your backup and restore as you do today but there are automation tools that monitor RMAN and kick off alternate processes if the backup fails. There are also tools to help restore for those unfamiliar with RMAN and the depths and details of this powerful package.

Today we are reviewing what we have done in the last week and a half and are looking forward to the next week or two. We are going to finish out the database options. On Monday we are going to dive into multi tenant and talk about pluggable databases. This discussion will probably spill over into Tuesday with an overview happening on Monday and code samples and demos on Tuesday. We will need to use a 12c database since this is a new feature that is specific to 12c only. We might split our code samples into using SQL Developer to clone and manage PDBs on Tuesday and cover the same functionality with Enterprise Manager on Wednesday. Following this discussion we will do a high level discussion on Data Guard and look at the change log and log replication strategies that can be used for physical and logical replication. The following days we will look at code samples and configurations from the command line, enterprise manager, and sql developer. We will look at what it will take to setup a primary on premise and standby in the cloud. We will also look at what is required to have both in the cloud and what it takes to flip primary and standby to emulate a failure or maintenance action then flip the two back.

Once we cover Data Guard we will be in a position to talk about Real Application Testing. In essence Data Guard copies all of the writes that happen on the primary and replays them on the standby. Real Application Testing records the reads as well and replays the reads and writes to help measure performance differences between configurations. This is good for code change testing, patch testing, configuration change testing, and other compare/contrast changes to your production system in a safe environment.

Once we finish the high availability and data replication options we will dive into OLAP and Spatial options. OLAP reorganizes the data for data warehouse analysis and spatial allows you to run geographical select statements like show me all crimes that happened within a mile of this address. Both are optimizations on select statements to help optimize usage of the database in specific instances.
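For the spatial case, a query like "show me everything within a mile of this address" translates roughly into the following sketch (a hypothetical incidents table with an SDO_GEOMETRY column called shape; the coordinates are placeholders for the geocoded address):

select i.incident_id
from   incidents i
where  sdo_within_distance(
         i.shape,
         sdo_geometry(2001, 8307, sdo_point_type(-122.40, 37.78, null), null, null),
         'distance=1 unit=MILE') = 'TRUE';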

We will wrap up our coverage by looking at Audit Vault and Database Vault. Both of these options are additional levels of security that not only help us protect data but restrict and track access to data. Many financial and healthcare institutions require interfaces like this to show separation of duty as well as traceability to see who accessed what when.

Once we finish the High Performance Edition options we will dive into the Extreme Performance Edition options looking at Active Data Guard, In Memory, RAC and RAC One. Going through all of the options will probably take us through the month of June. We will probably look at the other PaaS options listed in cloud.oracle.com starting some time in July and see how they relate or differ from the DBaaS services that we are currently covering.

OTN Summit - July 12th, 13th and 14th!

OTN TechBlog - Thu, 2016-06-16 18:28

The Oracle Technology Network team invites you to attend the OTN Summit on July 12, 13 and 14, 2016 to explore the latest demos and insights from Oracle and community experts.

Inderjeet Singh, Executive Vice President, Fusion Middleware Development, will kick off the OTN Summit to discuss the Oracle Cloud based on questions from the Oracle Community.


This free, online event also includes:


· 3 presentations per track for each of the Database, Java, Middleware and Systems categories

· Over 12 hours of content in one 3.5 hour event

· Access to 15 community and Oracle product experts who will be online to answer your questions

Don’t miss out on the latest trends, sign up today for the live event!

July 12
 - 9:30 am - 1 pm (PST)

July 13
 - 9:30 am - 1 pm (BST)

July 14
 - 9:30 am - 1 pm (IST)


You can view the full agenda and abstracts here.




