Feed aggregator

Some ideas about Oracle Database on Docker

Yann Neuhaus - Tue, 2018-05-08 15:55

This is not a ‘best practice’ but just some ideas about building Docker images to provide an Oracle Database. I started with the images provided by Oracle (https://github.com/oracle/docker-images/tree/master/OracleDatabase/SingleInstance), which are great for validating the Docker environment. Then I customized them for my needs, and here are the different points about this customization.

Do not send a huge context at each build attempt

I work by iteration. Sending a 3GB context on every build attempt is a no-go for me, so I quickly stopped putting the Oracle installation .zip in the build context. I already blogged about this.

There are several ways to avoid sending a big context, such as putting the .zip on an NFS or HTTP server and ADDing it or RUNning wget from there. I prefer to build one small image with this .zip that I will use later.

In my current directory I have the linuxx64_12201_database.zip which I explicitly send to the context with this .dockerignore:

*
!linuxx64_12201_database.zip

And I build a franck/oracle122/zip image with it:

FROM oraclelinux:7-slim
ADD linuxx64_12201_database.zip /var/tmp

When done, I’ll not have to send the context again and I will build my container from this one with another Dockerfile:

FROM franck/oracle122/zip
RUN yum -y install oracle-database-server-12cR2-preinstall unzip tar wget openssl vi && rm -rf /var/cache/yum

Do not yum install at each build attempt

Following the same idea, I build another intermediate image with the yum install above. The reason is that once I have it, I don't need internet access anymore. I did that before boarding an 8-hour flight: I built the above Dockerfile as franck/oracle122/prereq while on the airport wifi and use it later as the base for the final Dockerfile:

.dockerignore:

*

Dockerfile:

FROM franck/oracle122/prereq
# then follows all the work which does not need a large context or an internet connection
...

Even if you are not on a plane, it is always good to avoid internet access. You probably had to get some doors opened in the firewall in order to pull the base image. Now that you have it, you should keep it, or one day the security team will close the door again and you will waste a few hours. That also means that you do not start from a :latest image but from a specific version.
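One way to keep the base image around once you have pulled it, so that a later build does not depend on registry access, is to save it to a tarball. This is a sketch, using the tag from this post:

docker pull oraclelinux:7-slim
docker save oraclelinux:7-slim | gzip > oraclelinux-7-slim.tar.gz
# later, on a host without registry access:
gunzip -c oraclelinux-7-slim.tar.gz | docker load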

Do the long steps first

The Dockerfile provided by Oracle starts with all ENV and a COPY to add all scripts into the container. The problem is that each time you want to change a script, the build has to start from this step. And then the long operations have to be done again: unzip, install,…

I have a small context here (only the scripts and configuration files) but I ADD or COPY them only when needed. For example, here, a modification in install.rsp will re-do the runInstaller step, but the unzip one will not have to be done again because the cache is re-used:

WORKDIR /var/tmp
RUN unzip linuxx64_12201_database.zip
COPY install.rsp /var/tmp
RUN ./database/runInstaller -silent -force -waitforcompletion -responsefile /var/tmp/install.rsp -ignoresysprereqs -ignoreprereq ; true

The script that will run the container is added only at the end so that I can modify and re-build quickly without re-doing the previous steps.

VOLUME ["/opt/oracle/pdbs"]
EXPOSE 1521 5500
COPY docker_cmd.sh /opt/oracle
CMD exec /opt/oracle/docker_cmd.sh ${CDB} ${PDB}

Another step that I do at the end is removing the files I do not need in the container, because that's a guess-and-try approach and I want to build quickly. Of course, this may not be optimal for the size of all those layers, but I can reduce the final image later. The main feature of Docker build is the layers, and I use them to develop the Dockerfile without wasting my time. For the wasted storage, I use ZFS with block-level CoW, dedup and compression. For the final image, I'll --squash it.
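For reference, the squash happens at build time. A sketch of the command (note that --squash requires experimental features to be enabled on the Docker daemon):

# build the final image with all layers squashed into one
docker build --squash -t franck/oracle122 .
# check the result
docker image ls franck/oracle122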

Remove all unnecessary files

The detail will probably go into a future blog post. But as soon as runInstaller is done and the latest bundle patch is applied, I can remove a lot of directories that I do not need anymore:

rm -rf $ORACLE_HOME/inventory $ORACLE_HOME/.patch_storage

As soon as the database has been created with DBCA, I do not need the DBCA templates anymore:

rm -rf $ORACLE_HOME/assistants

As this container will run only the instance, I can remove:

rm -rf $ORACLE_HOME/sqldeveloper $ORACLE_HOME/suptools $ORACLE_HOME/jdk

And depending on the options I will provide in the database, I remove the big ones:

rm -rf $ORACLE_HOME/apex $ORACLE_HOME/javavm $ORACLE_HOME/md

There is also a lot to remove from $ORACLE_HOME/lib (I need only a few *.so* that I can determine with strace, perf, lsof, ldd) and from $ORACLE_HOME/bin (basically, I need oracle, tnslsnr, lsnrctl, and sqlplus). Those are executables and you can strip them to reduce the size further. Definitely remove the last relink ones renamed as oracleO, …

Those are just examples, your list will depend on your version and usage, but this may reduce the image to 1GB or less. Of course, this is not supported, but the goal is to provide a small development database, not a reliable and efficient one for production.

Use ZFS for the storage driver

An Oracle Database is full of large files that are updated sparsely. Just forget about OVERLAY and OVERLAY2, which copy the whole file to the new layer when you update a single byte of a file. I do not consider BTRFS seriously. In my opinion, ZFS is the only filesystem to consider for storing Docker images with large files. Enforce deduplication and compression to overcome the inflation of layering and the ignorance of sparse files. I think that recordsize=32k is a good idea from what I've seen about how Docker applies writes to layers. More detail in a future blog post.

Note that layering issues are not only for build efficiency but also for container run. You will see that I put some datafiles in the image. Then, at database open, some blocks are changed (at least the headers) and I do not want a full file copy to the runnable layer. Block level CoW is required for that.
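As an illustration, a ZFS-backed setup could look like the following sketch. The pool, device and dataset names are mine, not from the post:

# create a pool and a dataset for /var/lib/docker with compression, dedup
# and a small recordsize (block-level CoW comes for free with ZFS)
zpool create -f dockerpool /dev/xvdb
zfs create -o mountpoint=/var/lib/docker \
           -o compression=lz4 -o dedup=on -o recordsize=32k \
           dockerpool/docker

# then select the zfs storage driver in /etc/docker/daemon.json:
#   { "storage-driver": "zfs" }
# and restart the Docker daemon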

Create the CDB in the container

The container is the place to store all the software, and most of CDB$ROOT and PDB$SEED is part of the software distribution. This is what takes time when creating a database (catalog, catproc, ...) and I definitely refuse to give a container to a developer where he will have to wait 10 minutes at 'docker run' because the database has to be created on the external volume. A 'docker run' must be fast. And the external volume must contain only the data that has to be persisted, not 500MB of dbms_% package code, which will be all the same for all containers from the same image.

This means that I create the CDB during the build:

RUN /opt/oracle/product/12.2.0.1/dbhome_1/bin/dbca -silent -createDatabase -templateName General_Purpose.dbc -gdbName ${CDB} -sid ${CDB} -initParams db_unique_name=${CDB},service_names=${CDB},shared_pool_size=600M,local_listener='(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=0.0.0.0)(PORT=1521)))' -createAsContainerDatabase true -numberOfPdbs 0 -sysPassword oracle -systemPassword oracle -emConfiguration NONE -datafileDestination /opt/oracle/oradata -recoveryAreaDestination /opt/oracle/fast_recovery_area -recoveryAreaSize 1024 -storageType FS -sampleSchema false -automaticMemoryManagement false -totalMemory 1024 -databaseType OLTP -enableArchive false -redoLogFileSize 10 -useLocalUndoForPDBs true -createListener LISTENER:1521 -useOMF true -dbOptions OMS:false,JSERVER:false,SPATIAL:false,IMEDIA:false,ORACLE_TEXT:false,SAMPLE_SCHEMA:false,CWMLITE:false,DV:false,APEX:false
RUN rm -rf /opt/oracle/product/12.2.0.1/dbhome_1/assistants/dbca/*

No PDB here, as the PDB will be created at 'docker run' time in the external volume. I use a template with datafiles here, but I may prefer to run the whole creation scripts to have more control over it. I may even hack some .bsq and .sql files in RDBMS admin to reduce the size. I'm in archivelog mode here because I want to be able to flashback the PDB. The container is ephemeral: if it becomes too large (archived logs, audit files, ...) just remove it and start another one, or add a script to remove the old ones (those not required by guaranteed restore points).

Create the PDB in the external volume

The PDB is the only thing that must be persistent, and controlled by the developer. I create it with the following in my docker_cmd.sh script (which is called from the Dockerfile CMD line, providing the CDB name and PDB name as arguments):

create pluggable database $2 admin user admin identified by oracle create_file_dest='/opt/oracle/pdbs';
alter pluggable database $2 open;
show pdbs

The PDB is bound to the external volume (VOLUME ["/opt/oracle/pdbs"]) thanks to the 12.2 CREATE_FILE_DEST clause, so the developer can create datafiles only there. Then the 'docker run' is fast: just a clone of PDB$SEED.

The developer will connect only to the PDB. He has nothing to do in CDB$ROOT. If there is a need to change something in CDB$ROOT, I’ll provide a new image. I may even define lockdown profiles rules to limit the PDB and define a listener where only the PDB registers.

Unplug the PDB at container stop

When the developer stops the container, I want to leave something consistent in the external volume. The way to do that quickly is a PDB unplug. An unplug to a PDB archive (a .pdb zip with all the datafiles) would be nicer, but that takes too much time to create, so I unplug to a .xml file. This is what I do on stop (SIGTERM and SIGINT):

alter pluggable database all close;
column pdb_name format a30
select pdb_id,pdb_name,status from dba_pdbs;
begin
for c in (select pdb_name from dba_pdbs where pdb_id>2) loop
dbms_output.put_line('-- Unplugging '||c.pdb_name);
execute immediate 'alter pluggable database "'||c.pdb_name||'" unplug into ''/opt/oracle/pdbs/'||c.pdb_name||'.xml''';
end loop;
for c in (select pdb_name from dba_pdbs where pdb_id>2 and status='UNPLUGGED') loop
dbms_output.put_line('-- Dropping '||c.pdb_name);
execute immediate 'drop pluggable database "'||c.pdb_name||'" keep datafiles';
end loop;
end;
/
-- All pluggable databases have been unplugged:
host ls /opt/oracle/pdbs/*.xml
-- Shutdown the CDB:
shutdown immediate;
-- You can plug those PDBs by passing the volume to a new container

The script iterates on all PDBs but I see no reason to create more than one. I unplug the PDB and drop it, and then shutdown the instance. We need the unplug to be completed before the stop timeout. The container may be killed before the drop or shutdown, but as long as we have the .xml we can plug the PDB into a new container.

Plug the PDB at container re-start

I mentioned earlier that at start I create the pluggable database named by ${PDB}, but this is only when there is no /opt/oracle/pdbs/${PDB}.xml. If this file is found, it means that we are providing a PDB that was unplugged by a previous container stop. When the start detects this file, the following is run:

whenever sqlerror exit failure;
create pluggable database "${PDB}" using '/opt/oracle/pdbs/${PDB}.xml';
host rm /opt/oracle/pdbs/${PDB}.xml
alter pluggable database "${PDB}" open;
select message from pdb_plug_in_violations;

Finally, because I may start a container which has a newer Release Update than the one which unplugged my PDB, I run:

$ORACLE_HOME/OPatch/datapatch

One PDB per container

My scripts process all PDBs but I think that in most cases we need to have a one-to-one relationship between the container and the PDB. The idea is to provide a database that is ready to use and where no administration/troubleshooting is required. The key here is to keep it simple. If you need to provide a large CDB with several PDBs, then Docker is not the solution to your problem. A virtual machine is a better answer for that.

SPfile? Password file?

The image build provided by Oracle stores the persistent configuration files with the database, in the external volume, through symbolic links from ?/dbs. But with my design, I don't need to. The configuration of the instance running in the container stays within the container. The password for SYS is in the container. So the SPfile and password file stay in the container. The runnable image is not read-only; it is writeable, and we can write there as long as the changes do not have to persist beyond the container's end of life.

The ‘scope=spfile’ parameters that can be modified by the developer will be PDB parameters. They are persisted because they go to the .xml file at unplug. Only in the case of a crash, without a clean unplug, may those parameters be stored only in the container. That's a special case: a crashed container is not dead and just waits to be re-started.

Crash recovery needs the same container

There’s one big flaw with my CDB-in-container/PDB-in-volume design. The whole set of database datafiles must be consistent: they are checkpointed together and are all protected by the same redo stream, which is located in the container. But what's the real problem with that? If the container is cleanly stopped, the PDB is unplugged and there is a clear separation between my external volume and the container. And both are consistent.

However, if the container crashes, the datafiles in my external volume are fuzzy and need recovery. This cannot be done without the CDB files, which are in the container. This has only one consequence: the user must know that if the container was not cleanly stopped, she will need to start the PDB with the same container. I don't think this is a real problem. I just ensure that the user gets the warning (a big README file in the external volume for example, created at start and removed at clean stop) and that the container will always be able to recover (no 100% full filesystem at start; anyway, I have some log cleanups at start).
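For example, something like this in the start/stop logic (the file name and wording are just an illustration, not from the post):

# at start: warn that a crashed container must be re-started, not removed
cat > /opt/oracle/pdbs/README.txt <<'EOF'
If this container was not stopped cleanly, do NOT remove it.
Re-start the SAME container so that instance recovery can make the PDB
datafiles consistent before they are plugged anywhere else.
EOF

# at clean stop, after a successful unplug:
rm -f /opt/oracle/pdbs/README.txt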

Handle all errors and signals

My startup script handles 3 situations.
The first one is the first start after creation of the container: this creates the pluggable database.
The second one is the re-start after a clean stop: this plugs the existing pluggable database.
The third one is crash recovery after a kill: this just runs the automatic instance recovery.

Then the startup script will run in a loop, either tailing the alert.log or displaying some status info every minute.
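For the loop itself, a common pattern is to tail the log in the background and wait on it, so that the traps described below can still fire. A sketch (the alert.log path follows the standard ADR layout under /opt/oracle and may need adjusting to your ORACLE_BASE and CDB name):

# stay in the foreground as PID 1 but remain responsive to signals
tail -F /opt/oracle/diag/rdbms/${CDB}/${CDB}/trace/alert_${CDB}.log &
wait $!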

But before all of that, the startup script must handle the termination signals.

The clean stop is handled by the following signals:

trap 'shutdown SIGINT' SIGINT
trap 'shutdown SIGTERM' SIGTERM
trap 'shutdown EXIT' 0

SIGINT is for ^C when running the container, SIGTERM is for 'docker stop', and signal 0 is when the container exits by itself. This can happen when my 'tail -f' on the alert log is killed, for example. All of them call my shutdown() function, which attempts a clean stop (unplug the PDBs).

When the stop timeout has expired, or when we do a 'docker kill', there's no time for that. The only thing I do here before a shutdown abort is an 'alter system checkpoint' to try to reduce the recovery needed, and display a WARNING message saying that the killed container must not be removed but re-started asap to recover the PDB in the external volume, maybe explicitly naming the container and the command to re-start it.

I do that with an abort() function called by the following:

trap 'abort SIGKILL' SIGKILL
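The abort() function itself is not shown in the post; a minimal sketch of what it could look like, based on the description above (names, wording and OS authentication for sqlplus are my assumptions):

abort() {
  # checkpoint to reduce the recovery needed, then stop fast
  sqlplus -s / as sysdba <<'EOF'
alter system checkpoint;
shutdown abort
EOF
  echo "WARNING: container killed - do not remove it."
  echo "Re-start the SAME container asap to recover the PDB in /opt/oracle/pdbs"
}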

A kill -9 of the instance, or a container crash, cannot be handled; recovery is needed just as in the SIGKILL case. This is the reason for keeping a permanent README file near the PDB, explaining that the container which crashed should be restarted as soon as possible to recover it.

Conclusion

This is not a recipe for how to build an Oracle Database Docker image, just some ideas. The most important thing is to know the requirements. If you provide Oracle on Docker just because the developers ask for it, the solution will probably be wrong: too large, too long to provision, inefficient, and too complex... They will not use it, and they will tell everybody that Oracle is not cool because it cannot be dockerized.
With my PDB-in-volume / CDB-in-container design, I have:

  • Docker images with the ephemeral software, one per version (down to patches), and with different sets of components installed
  • External volume (personal directory in a NFS share, or a local one) with the persistent data and settings
  • Containers running the software on the data, linking them together for the whole lifecycle of the container

Think of them as two USB sticks, one with the software (binaries and packages), and one with the data (user metadata and data). When plugged together in the same container, they run one version of the software against one state of the data. If the container crashes, you just run it again without unplugging either of the sticks. When you are done with your test or development iteration, you stop the container and remove it. Then you can unplug the sticks and run another combination of data and software.

 

The article Some ideas about Oracle Database on Docker appeared first on Blog dbi services.

Reminder: Upgrade Oracle Enterprise Manager 11 to EM 13c

Steven Chan - Tue, 2018-05-08 13:00

You can use the Application Management Pack plug-in to manage your E-Business Suite instances using Oracle Enterprise Manager.

Extended Support for Enterprise Manager Grid Control 11.1 ended on April 30, 2018. EM 11 is now in Sustaining Support. You will still have access to existing resources, but no new updates, fixes, or certifications will be produced for EM 11.

All Enterprise Manager 11 customers should upgrade to the latest EM release certified with EBS. As of today, the latest EM release certified with EBS is Enterprise Manager Cloud Control 13c.


Categories: APPS Blogs

Developer Cloud Service May Release Adds K8S, OCI, Code Editing and More

OTN TechBlog - Tue, 2018-05-08 11:00

Just a month after the recent release of Oracle Developer Cloud Service - which added support for pipelines, Docker, and Terraform - we are happy to announce another update to the service that adds even more options to help you extend your DevOps and CI/CD processes to support additional use cases.

Here are some highlights of the new version:

Extended build server software

You can now create build jobs and pipelines that leverage:

  • Kubernetes - use the kubectl command line to manage your Docker containers
  • OCI command line - to automate provisioning and configuration of Oracle Compute
  • Java 9 - for your latest Java project deployments
  • Oracle Development Tools - Oracle Forms and Oracle JDeveloper 12.2.3 are now available to automate deployment of Forms and ADF apps

 

Build Server Software Options

SSH Connection in Build

You can now define an SSH connection as part of your build configuration, allowing you to securely connect and execute shell scripts on Oracle Cloud Services.

In Browser Code Editing and Versioning 

A new "pencil" icon let's you edit code in your private git repositories hosted in Developer Cloud Service directly in your browser. Once you edited the code you can commit the changes to your branch directly providing commit messages.

Code editing in the browser

PagerDuty Webhook

Continuing our principle of keeping the environment open, we added new webhook support to allow you to send events to the popular PagerDuty solution.

Increased Reusability

We are making it easier to replicate things that already work for your team. For example, you can now create a new project based on an existing project you exported. You can copy an agile board over to a new one. And if you created a useful issue search, you can share it with others on your team.

There are many other features that will improve your daily work; have a look at the What's New in DevCS document for more information.

Happy development!

Oracle Marketing Cloud Transforms Sales Process with Powerful New Data Integrations

Oracle Press Releases - Tue, 2018-05-08 07:00
Press Release
Oracle Marketing Cloud Transforms Sales Process with Powerful New Data Integrations New Oracle Eloqua integrations with 6sense, Demandbase, LookBookHQ and Mintigo streamline and optimize sales process

REDWOOD SHORES, Calif.—May 8, 2018

Helping sales teams engage customers and accelerate and close more deals, Oracle today announced new product integrations with the Oracle Marketing Cloud. The new integrations between Oracle Eloqua and 6sense, Demandbase, LookBookHQ and Mintigo deliver powerful data-driven insights that empower sales teams to identify high-potential buyers, drive engagement and close deals faster.

A new integration between Oracle Eloqua and 6sense helps sales teams increase pipeline conversion. The product integration enables Oracle Eloqua customers to leverage AI-powered in-market predictions from the 6sense Demand Orchestration Platform and achieve a unified view of account activity that includes the prospect’s anonymous website intent, competitor research and other relevant need-based intent.

“Understanding as much as possible of an account’s behavior provides a competitive edge to sales organizations to convert pipeline to revenue,” said Amar Doshi, VP of Product, 6sense. “Many of our customers that use Oracle’s Eloqua Profiler and rely on 6sense AI have asked for an integrated experience to support their account-based marketing and sales efforts. We’re excited to be a part of this customer-driven initiative for B2B companies.”

A new integration between Oracle Eloqua and Demandbase helps sales teams quickly and easily access the insights needed to accelerate and close deals. The new product integration improves the sales process by delivering AI-enabled intent data and insights from Demandbase’s Conversion Solution within Oracle Eloqua so that sales teams can contact the right accounts with the right messages at the right time to move deals forward.  

“Marketers need to work hand-in-hand with sales teams across the entire buying cycle,” said Dom Lindars, VP of product, Demandbase. “We’re excited to be part of the launch for Oracle’s Sales Tools Extensions with our Conversion Solution, which will allow sales teams to truly understand the behavior and needs of their target accounts and increase their close rates.”

A new integration between Oracle Eloqua and LookBookHQ gives sales teams rich, actionable insight into buyer education, based on how leads and accounts are engaging with content. The new product integration provides a full summary of content engagement "after the click" (assets viewed in session, time spent, account rollup) within a contact’s profile in Oracle Eloqua and allows for the exploration of other contacts at the same account.

“LookBookHQ's ability to show the depth and quality of leads' engagement, coupled with the ability to see total engagement at account level over time, equips sales with a powerful extension to Oracle Eloqua Profiler,” says Stephen Streich, VP of product and engineering, LookBookHQ. “And this in turn provides better alignment between sales and marketing teams on buyer readiness and content performance.”

A new integration between Oracle Eloqua Profiler and Mintigo enables sales teams to learn, discover and engage their prospective buyers in personalized ways at scale. The new product seamlessly integrates Mintigo’s AI powered prospecting application, Sales Coach 360, with Oracle Eloqua Sales Tools.

“Prospecting is hard and the current CRM solutions do very little to enable sellers to engage their prospective buyers intelligently,” said Atul Kumar, chief product officer, Mintigo. “Understanding who is in-market to buy, why, when and how you should engage them is critical in achieving positive outcomes. This is what Oracle Eloqua Profiler and Mintigo’s Sales Coach 360 delivers to sales.”

Oracle Marketing Cloud is part of Oracle Customer Experience (CX) Cloud Suite, which empowers organizations to take a smarter approach to customer experience management and business transformation initiatives. By providing a trusted business platform that connects data, experiences and outcomes, Oracle CX Cloud Suite helps customers reduce IT complexity, deliver innovative customer experiences and achieve predictable and tangible business results.

For additional information about Oracle CX, follow @OracleCX on Twitter, LinkedIn and Facebook or visit SmarterCX.com.

Contact Info
Kimberly Guillon
Oracle
209.601.9152
kim.guillon@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Kimberly Guillon

  • 209.601.9152

20 Indexes

Jonathan Lewis - Tue, 2018-05-08 06:53

If your system had to do a lot of distributed queries there's a limit on indexes that might affect performance: when deriving an execution plan for a distributed query the optimizer will consider a maximum of twenty indexes on each remote table. If you have any tables with a ridiculous number of indexes (various 3rd party accounting and CRM systems spring to mind) and if you drop and recreate indexes on those tables in the wrong order then execution plans may change for the simple reason that the optimizer is considering a different subset of the available indexes.

Although the limit is stated in the manuals (a few lines into a section on managing statement transparency) there is no indication about which 20 indexes the optimizer is likely to choose – a couple of experiments, with tracing enabled and shared pool flushes, give a fairly strong indication that it’s the last 20 indexes created (or, to be more explicit, the ones with the 20 highest object_id values).

Here’s a little code to help demonstrate the point – first just the table and index creation


rem
rem	Script:		indexes_20.sql
rem	Author:		Jonathan Lewis
rem	Dated:		Apr 2008
rem
rem	Last tested 
rem		12.2.0.1
rem

create table t1
as
with generator as (
	select	--+ materialize
		rownum 	id
	from	all_objects 
	where	rownum <= 3000 -- > comment to avoid WordPress format issue
)
select
	mod(rownum,trunc(5000/1))	n01,
	mod(rownum,trunc(5000/2))	n02,
	mod(rownum,trunc(5000/3))	n03,
	mod(rownum,trunc(5000/4))	n04,
	mod(rownum,trunc(5000/5))	n05,
	mod(rownum,trunc(5000/6))	n06,
	mod(rownum,trunc(5000/7))	n07,
	mod(rownum,trunc(5000/8))	n08,
	mod(rownum,trunc(5000/9))	n09,
	mod(rownum,trunc(5000/10))	n10,
	mod(rownum,trunc(5000/11))	n11,
	mod(rownum,trunc(5000/12))	n12,
	mod(rownum,trunc(5000/13))	n13,
	mod(rownum,trunc(5000/14))	n14,
	mod(rownum,trunc(5000/15))	n15,
	mod(rownum,trunc(5000/16))	n16,
	mod(rownum,trunc(5000/17))	n17,
	mod(rownum,trunc(5000/18))	n18,
	mod(rownum,trunc(5000/19))	n19,
	mod(rownum,trunc(5000/20))	n20,
	mod(rownum,trunc(5000/21))	n21,
	mod(rownum,trunc(5000/22))	n22,
	mod(rownum,trunc(5000/23))	n23,
	mod(rownum,trunc(5000/24))	n24,
	rownum				id,
	rpad('x',40)			padding
from
	generator	v1,
	generator	v2
where
	rownum <= 1e5 -- > comment to avoid WordPress format issue
;


alter table t1 add constraint t1_pk primary key(id);

create table t2
as
with generator as (
	select	--+ materialize
		rownum 	id
	from	all_objects 
	where	rownum <= 3000 -- > comment to avoid WordPress format issue
)
select
	mod(rownum,trunc(5000/1))	n01,
	mod(rownum,trunc(5000/2))	n02,
	mod(rownum,trunc(5000/3))	n03,
	mod(rownum,trunc(5000/4))	n04,
	mod(rownum,trunc(5000/5))	n05,
	mod(rownum,trunc(5000/6))	n06,
	mod(rownum,trunc(5000/7))	n07,
	mod(rownum,trunc(5000/8))	n08,
	mod(rownum,trunc(5000/9))	n09,
	mod(rownum,trunc(5000/10))	n10,
	mod(rownum,trunc(5000/11))	n11,
	mod(rownum,trunc(5000/12))	n12,
	mod(rownum,trunc(5000/13))	n13,
	mod(rownum,trunc(5000/14))	n14,
	mod(rownum,trunc(5000/15))	n15,
	mod(rownum,trunc(5000/16))	n16,
	mod(rownum,trunc(5000/17))	n17,
	mod(rownum,trunc(5000/18))	n18,
	mod(rownum,trunc(5000/19))	n19,
	mod(rownum,trunc(5000/20))	n20,
	mod(rownum,trunc(5000/21))	n21,
	mod(rownum,trunc(5000/22))	n22,
	mod(rownum,trunc(5000/23))	n23,
	mod(rownum,trunc(5000/24))	n24,
	rownum				id,
	rpad('x',40)			padding
from
	generator	v1,
	generator	v2
where
	rownum <= 1e5 -- > comment to avoid WordPress format issue
;

create index t2_a21 on t2(n21);
create index t2_a22 on t2(n22);
create index t2_a23 on t2(n23);
create index t2_a24 on t2(n24);

create index t2_z01 on t2(n01);
create index t2_z02 on t2(n02);
create index t2_z03 on t2(n03);
create index t2_z04 on t2(n04);
create index t2_z05 on t2(n05);
create index t2_z06 on t2(n06);
create index t2_z07 on t2(n07);
create index t2_z08 on t2(n08);
create index t2_z09 on t2(n09);
create index t2_z10 on t2(n10);

create index t2_i11 on t2(n11);
create index t2_i12 on t2(n12);
create index t2_i13 on t2(n13);
create index t2_i14 on t2(n14);
create index t2_i15 on t2(n15);
create index t2_i16 on t2(n16);
create index t2_i17 on t2(n17);
create index t2_i18 on t2(n18);
create index t2_i19 on t2(n19);
create index t2_i20 on t2(n20);

alter index t2_a21 rebuild;
alter index t2_a22 rebuild;
alter index t2_a23 rebuild;
alter index t2_a24 rebuild;
 

begin
        dbms_stats.gather_table_stats(
                ownname 	 => user,
		tabname		 =>'t1',
		method_opt 	 => 'for all columns size 1',
		cascade		 => true
	);

	dbms_stats.gather_table_stats(
		ownname		 => user,
		tabname		 =>'t2',
		method_opt 	 => 'for all columns size 1',
		cascade		 => true
	);

end;
/

I’m going to use a loopback database link to join “local” table t1 to “remote” table t2 on all 24 of the nXX columns. I’ve created indexes on all the columns, messing around with index names, order of creation, and rebuilding, to cover possible selection criteria such as alphabetical order, ordering by data_object_id (rather than object_id), even ordering by name of indexed columns(!).

Now the code to run a test:


define m_target=orcl@loopback

alter session set events '10053 trace name context forever';
set serveroutput off

select
	t1.id,
	t2.id,
	t2.padding
from
	t1			t1,
	t2@&m_target		t2
where
	t1.id = 99
and	t2.n01 = t1.n01
and	t2.n02 = t1.n02
and	t2.n03 = t1.n03
and	t2.n04 = t1.n04
and	t2.n05 = t1.n05
and	t2.n06 = t1.n06
and	t2.n07 = t1.n07
and	t2.n08 = t1.n08
and	t2.n09 = t1.n09
and	t2.n10 = t1.n10
/*			*/
and	t2.n11 = t1.n11
and	t2.n12 = t1.n12
and	t2.n13 = t1.n13
and	t2.n14 = t1.n14
and	t2.n15 = t1.n15
and	t2.n16 = t1.n16
and	t2.n17 = t1.n17
and	t2.n18 = t1.n18
and	t2.n19 = t1.n19
and	t2.n20 = t1.n20
/*			*/
and	t2.n21 = t1.n21
and	t2.n22 = t1.n22
and	t2.n23 = t1.n23
and	t2.n24 = t1.n24
;

select * from table(dbms_xplan.display_cursor(null,null,'outline'));

alter session set events '10053 trace name context off';

I’ve used a substitution variable for the name of the database link – it’s a convenience I have with all my distributed tests, a list of possible defines at the top of the script depending on which database I happen to be using at the time – then enabled the optimizer (10053) trace, set serveroutput off so that I can pull the execution plan from memory most easily, then executed the query.

Here’s the execution plan – including the Remote section and Outline.


-------------------------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     | Inst   |IN-OUT|
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |       |   270 (100)|          |        |      |
|   1 |  NESTED LOOPS      |      |     1 |   243 |   270   (6)| 00:00:01 |        |      |
|*  2 |   TABLE ACCESS FULL| T1   |     1 |   101 |   268   (6)| 00:00:01 |        |      |
|   3 |   REMOTE           | T2   |     1 |   142 |     2   (0)| 00:00:01 | ORCL@~ | R->S |
-------------------------------------------------------------------------------------------


Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('12.2.0.1')
      DB_VERSION('12.2.0.1')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$1")
      FULL(@"SEL$1" "T1"@"SEL$1")
      FULL(@"SEL$1" "T2"@"SEL$1")
      LEADING(@"SEL$1" "T1"@"SEL$1" "T2"@"SEL$1")
      USE_NL(@"SEL$1" "T2"@"SEL$1")
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("T1"."ID"=99)

Remote SQL Information (identified by operation id):
----------------------------------------------------
   3 - SELECT "N01","N02","N03","N04","N05","N06","N07","N08","N09","N10","N11","N1
       2","N13","N14","N15","N16","N17","N18","N19","N20","N21","N22","N23","N24","ID","PA
       DDING" FROM "T2" "T2" WHERE "N01"=:1 AND "N02"=:2 AND "N03"=:3 AND "N04"=:4 AND
       "N05"=:5 AND "N06"=:6 AND "N07"=:7 AND "N08"=:8 AND "N09"=:9 AND "N10"=:10 AND
       "N11"=:11 AND "N12"=:12 AND "N13"=:13 AND "N14"=:14 AND "N15"=:15 AND "N16"=:16
       AND "N17"=:17 AND "N18"=:18 AND "N19"=:19 AND "N20"=:20 AND "N21"=:21 AND
       "N22"=:22 AND "N23"=:23 AND "N24"=:24 (accessing 'ORCL@LOOPBACK' )

There’s a little oddity with the plan – specifically in the Outline: there’s a “full(t2)” hint which is clearly inappropriate and isn’t consistent with the cost of 2 for the REMOTE operation reported in the body of the plan. Fortunately the SQL forwarded to the “remote” database doesn’t include this hint and (you’ll have to take my word for it) used an indexed access path into the table.

Where, though, is the indication that Oracle considered only 20 indexes? It’s in the 10053 trace file under the “Base Statistical Information” section in the subsection headed “Index Stats”:


Index Stats::
  Index: 0  Col#: 20    (NOT ANALYZED)
  LVLS: 1  #LB: 204  #DK: 250  LB/K: 1.00  DB/K: 400.00  CLUF: 2002.00  NRW: 0.00 SSZ: 0.00 LGR: 0.00 CBK: 0.00 GQL: 0.00 CHR: 0.00 KQDFLG: 0 BSZ: 0
  KKEISFLG: 0 
  Index: 0  Col#: 19    (NOT ANALYZED)
  LVLS: 1  #LB: 204  #DK: 263  LB/K: 1.00  DB/K: 380.00  CLUF: 2002.00  NRW: 0.00 SSZ: 0.00 LGR: 0.00 CBK: 0.00 GQL: 0.00 CHR: 0.00 KQDFLG: 0 BSZ: 0
  KKEISFLG: 0 
  Index: 0  Col#: 18    (NOT ANALYZED)
  LVLS: 1  #LB: 205  #DK: 277  LB/K: 1.00  DB/K: 361.00  CLUF: 2002.00  NRW: 0.00 SSZ: 0.00 LGR: 0.00 CBK: 0.00 GQL: 0.00 CHR: 0.00 KQDFLG: 0 BSZ: 0
  KKEISFLG: 0 
  Index: 0  Col#: 17    (NOT ANALYZED)
  LVLS: 1  #LB: 205  #DK: 294  LB/K: 1.00  DB/K: 340.00  CLUF: 2002.00  NRW: 0.00 SSZ: 0.00 LGR: 0.00 CBK: 0.00 GQL: 0.00 CHR: 0.00 KQDFLG: 0 BSZ: 0
  KKEISFLG: 0 
  Index: 0  Col#: 16    (NOT ANALYZED)
  LVLS: 1  #LB: 205  #DK: 312  LB/K: 1.00  DB/K: 320.00  CLUF: 2002.00  NRW: 0.00 SSZ: 0.00 LGR: 0.00 CBK: 0.00 GQL: 0.00 CHR: 0.00 KQDFLG: 0 BSZ: 0
  KKEISFLG: 0 
  Index: 0  Col#: 15    (NOT ANALYZED)
  LVLS: 1  #LB: 205  #DK: 333  LB/K: 1.00  DB/K: 300.00  CLUF: 2002.00  NRW: 0.00 SSZ: 0.00 LGR: 0.00 CBK: 0.00 GQL: 0.00 CHR: 0.00 KQDFLG: 0 BSZ: 0
  KKEISFLG: 0 
  Index: 0  Col#: 14    (NOT ANALYZED)
  LVLS: 1  #LB: 206  #DK: 357  LB/K: 1.00  DB/K: 280.00  CLUF: 2002.00  NRW: 0.00 SSZ: 0.00 LGR: 0.00 CBK: 0.00 GQL: 0.00 CHR: 0.00 KQDFLG: 0 BSZ: 0
  KKEISFLG: 0 
  Index: 0  Col#: 13    (NOT ANALYZED)
  LVLS: 1  #LB: 206  #DK: 384  LB/K: 1.00  DB/K: 260.00  CLUF: 2002.00  NRW: 0.00 SSZ: 0.00 LGR: 0.00 CBK: 0.00 GQL: 0.00 CHR: 0.00 KQDFLG: 0 BSZ: 0
  KKEISFLG: 0 
  Index: 0  Col#: 12    (NOT ANALYZED)
  LVLS: 1  #LB: 206  #DK: 416  LB/K: 1.00  DB/K: 240.00  CLUF: 2002.00  NRW: 0.00 SSZ: 0.00 LGR: 0.00 CBK: 0.00 GQL: 0.00 CHR: 0.00 KQDFLG: 0 BSZ: 0
  KKEISFLG: 0 
  Index: 0  Col#: 11    (NOT ANALYZED)
  LVLS: 1  #LB: 206  #DK: 454  LB/K: 1.00  DB/K: 220.00  CLUF: 2002.00  NRW: 0.00 SSZ: 0.00 LGR: 0.00 CBK: 0.00 GQL: 0.00 CHR: 0.00 KQDFLG: 0 BSZ: 0
  KKEISFLG: 0 
  Index: 0  Col#: 10    (NOT ANALYZED)
  LVLS: 1  #LB: 207  #DK: 500  LB/K: 1.00  DB/K: 200.00  CLUF: 2002.00  NRW: 0.00 SSZ: 0.00 LGR: 0.00 CBK: 0.00 GQL: 0.00 CHR: 0.00 KQDFLG: 0 BSZ: 0
  KKEISFLG: 0 
  Index: 0  Col#: 9    (NOT ANALYZED)
  LVLS: 1  #LB: 207  #DK: 555  LB/K: 1.00  DB/K: 180.00  CLUF: 2002.00  NRW: 0.00 SSZ: 0.00 LGR: 0.00 CBK: 0.00 GQL: 0.00 CHR: 0.00 KQDFLG: 0 BSZ: 0
  KKEISFLG: 0 
  Index: 0  Col#: 8    (NOT ANALYZED)
  LVLS: 1  #LB: 207  #DK: 625  LB/K: 1.00  DB/K: 160.00  CLUF: 2002.00  NRW: 0.00 SSZ: 0.00 LGR: 0.00 CBK: 0.00 GQL: 0.00 CHR: 0.00 KQDFLG: 0 BSZ: 0
  KKEISFLG: 0 
  Index: 0  Col#: 7    (NOT ANALYZED)
  LVLS: 1  #LB: 208  #DK: 714  LB/K: 1.00  DB/K: 140.00  CLUF: 2002.00  NRW: 0.00 SSZ: 0.00 LGR: 0.00 CBK: 0.00 GQL: 0.00 CHR: 0.00 KQDFLG: 0 BSZ: 0
  KKEISFLG: 0 
  Index: 0  Col#: 6    (NOT ANALYZED)
  LVLS: 1  #LB: 208  #DK: 833  LB/K: 1.00  DB/K: 120.00  CLUF: 2002.00  NRW: 0.00 SSZ: 0.00 LGR: 0.00 CBK: 0.00 GQL: 0.00 CHR: 0.00 KQDFLG: 0 BSZ: 0
  KKEISFLG: 0 
  Index: 0  Col#: 5    (NOT ANALYZED)
  LVLS: 1  #LB: 208  #DK: 1000  LB/K: 1.00  DB/K: 100.00  CLUF: 2002.00  NRW: 0.00 SSZ: 0.00 LGR: 0.00 CBK: 0.00 GQL: 0.00 CHR: 0.00 KQDFLG: 0 BSZ: 0
  KKEISFLG: 0 
  Index: 0  Col#: 4    (NOT ANALYZED)
  LVLS: 1  #LB: 208  #DK: 1250  LB/K: 1.00  DB/K: 80.00  CLUF: 2002.00  NRW: 0.00 SSZ: 0.00 LGR: 0.00 CBK: 0.00 GQL: 0.00 CHR: 0.00 KQDFLG: 0 BSZ: 0
  KKEISFLG: 0 
  Index: 0  Col#: 3    (NOT ANALYZED)
  LVLS: 1  #LB: 209  #DK: 1666  LB/K: 1.00  DB/K: 60.00  CLUF: 2002.00  NRW: 0.00 SSZ: 0.00 LGR: 0.00 CBK: 0.00 GQL: 0.00 CHR: 0.00 KQDFLG: 0 BSZ: 0
  KKEISFLG: 0 
  Index: 0  Col#: 2    (NOT ANALYZED)
  LVLS: 1  #LB: 209  #DK: 2500  LB/K: 1.00  DB/K: 40.00  CLUF: 2002.00  NRW: 0.00 SSZ: 0.00 LGR: 0.00 CBK: 0.00 GQL: 0.00 CHR: 0.00 KQDFLG: 0 BSZ: 0
  KKEISFLG: 0 
  Index: 0  Col#: 1    (NOT ANALYZED)
  LVLS: 1  #LB: 209  #DK: 5000  LB/K: 1.00  DB/K: 20.00  CLUF: 2002.00  NRW: 0.00 SSZ: 0.00 LGR: 0.00 CBK: 0.00 GQL: 0.00 CHR: 0.00 KQDFLG: 0 BSZ: 0
  KKEISFLG: 0 

We have 20 indexes listed, and while they’re all called “Index 0” (and reported as “Not Analyzed”) we can see from their column definitions that they are (in reverse order) the indexes on columns n01 through to n20 – i.e. the last 20 indexes created. The optimizer has created its plan based only on its knowledge of these indexes.

We might ask whether this matters or not – after all when the remote SQL gets to the remote database the remote optimizer is going to (re-)optimize it anyway and do the best it can with it, so at run-time Oracle could still end up using remote indexes that the local optimizer didn’t know about. So let’s get nasty and give the local optimizer a problem:


create index t2_id on t2(id);

select
        t1.id,
        t2.id,
        t2.padding
from
        t1                      t1,
        t2@&m_target            t2
where
        t1.id = 99
and     t2.n01 = t1.n01
;

I’ve created one more index on t2, which means the local optimizer is going to “forget” about the index that was the previous 20th index on the most recently created list for t2. That’s the index on (n01), which would have been a very good index for this query. If this query were to run locally the optimizer would do a nested loop from t1 to t2 using the index on (n01) – but the optimizer no longer knows about that index, so we get the following plan:


-------------------------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     | Inst   |IN-OUT|
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |       |   538 (100)|          |        |      |
|*  1 |  HASH JOIN         |      |    20 |  1140 |   538   (7)| 00:00:01 |        |      |
|*  2 |   TABLE ACCESS FULL| T1   |     1 |     9 |   268   (6)| 00:00:01 |        |      |
|   3 |   REMOTE           | T2   |   100K|  4687K|   268   (6)| 00:00:01 | ORCL@~ | R->S |
-------------------------------------------------------------------------------------------

Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('12.2.0.1')
      DB_VERSION('12.2.0.1')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$1")
      FULL(@"SEL$1" "T1"@"SEL$1")
      FULL(@"SEL$1" "T2"@"SEL$1")
      LEADING(@"SEL$1" "T1"@"SEL$1" "T2"@"SEL$1")
      USE_HASH(@"SEL$1" "T2"@"SEL$1")
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T2"."N01"="T1"."N01")
   2 - filter("T1"."ID"=99)

Remote SQL Information (identified by operation id):
----------------------------------------------------
   3 - SELECT "N01","ID","PADDING" FROM "T2" "T2" (accessing 'ORCL@LOOPBACK' )

Oracle is going to do a hash join and apply the join predicate late. Although the remote optimizer can sometimes rescue us from a mistake made by the local optimizer and use indexes that the local optimizer doesn’t know about, there are times when the remote SQL generated by the local optimizer is so rigidly associated with the expected plan that there’s no way the remote optimizer can workaround the assumptions made by the local optimizer.

So when you create (or drop and recreate) an index, it’s just possible that a distributed plan will have to change because the local optimizer is no longer aware of an index that exists at the remote site.

tl;dr

Be very cautious about dropping and recreating indexes if the table in question

  1. has more than 20 indexes
  2. and is used at the remote end of a distributed execution plan

The optimizer will consider only 20 of the indexes on the table, choosing the ones with the highest object_ids. If you drop and recreate an index then it gets a new (highest) object_id and a plan may change because the index that Oracle was previously using is no longer in the top 20.
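A quick way to see which indexes fall inside that window is to list the 20 indexes on the table with the highest object_id values. This query is not from the post, just a rough sanity check you could run on the remote database (using the T2 demo table from above):

-- the 20 indexes on T2 with the highest object_id values - the ones the
-- local optimizer of a distributed query is likely to consider
select  i.index_name, o.object_id
from    user_indexes  i
join    user_objects  o
on      o.object_name = i.index_name
and     o.object_type = 'INDEX'
where   i.table_name  = 'T2'
order by
        o.object_id desc
fetch first 20 rows only;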

Oracle 18c Security utl_file_dir and schema no authentication

Pete Finnigan - Tue, 2018-05-08 06:26
I have managed to build an 18c database this weekend to test and learn on. I have not had much time to look into 18c yet but I will do over the coming days and weeks. The new features....[Read More]

Posted by Pete On 07/05/18 At 09:10 PM

Categories: Security Blogs

18c Scalable Sequences Part III (Too Much Rope)

Richard Foote - Tue, 2018-05-08 02:06
I previously looked in Part I and Part II how Scalable Sequences officially released in 18c can reduce index contention issues, by automatically assigning a 6 digit prefix to the sequence value based on the instance ID and session ID of the session. We need to be careful and consider this 6 digit prefix if […]
Categories: DBA Blogs

All About Oracle Autonomous Data Warehouse Cloud

Gerger Consulting - Tue, 2018-05-08 00:06
Oracle has just released its Autonomous Data Warehouse Cloud Service (ADW). There is no one better to tell you all about it than Senior Principal Product Manager Yasin Baskan, who has been leading the project.


Attend the free webinar by ADW Senior Principal Product Manager Yasin Baskan on May 22nd and learn all about the Autonomous DWH Cloud from the people who created it.


About the Webinar

Oracle’s new cloud service, Autonomous Data Warehouse Cloud, is now generally available. This is a fully-managed service that automates many day-to-day operations for DBAs and DWH developers.

We will talk about the functionality Autonomous Data Warehouse provides and go into details about the technical aspects of the service.

You will be able to learn how you can provision new databases, load data, and run queries. We will walk through the automated functionality like database configuration, security, backup and recovery, and performance. This session will provide you the information you need to start trying out the service.

Sign up for the webinar.

About the Presenter

Yasin has been with Oracle for nine years in different roles such as pre-sales and product management. Before working for Oracle, he had been an Oracle DBA in the financial industry starting with Oracle Database version 7.3. He has 20 years of IT and Oracle Database experience. Since 2014 he has been part of the data warehousing product management team at Oracle HQ.

In his current role, he is leading the Autonomous Data Warehouse Cloud as the Senior Principal Product Manager.

Sign up for the webinar.
Categories: Development

EBS 12 Certified with Safari 11 and macOS High Sierra 10.13

Steven Chan - Mon, 2018-05-07 17:03

Oracle E-Business Suite Release 12 (12.1.3, 12.2.4 or higher) is now certified with Safari 11 on macOS High Sierra 10.13 with the following desktop configuration:

  • macOS High Sierra version 10.13.3 or higher
  • Safari version 11 (11.0.3 or higher)
  • Oracle JRE 8 plugin (1.8.0_171 or higher)

Users should review all relevant information along with other specific patching requirements and known limitations posted here:


Categories: APPS Blogs

A New Oracle Autonomous Visual Builder Cloud Service - Visual and Coding Combined

OTN TechBlog - Mon, 2018-05-07 14:39

We are happy to announce the availability of Oracle Autonomous Visual Builder Cloud Service (VBCS) - Oracle's visual low-code development platform for JavaScript based applications with built-in autonomous capabilities.

Over the past couple of years, the visual development approach of VBCS has made it a very attractive solution for citizen developers, who leveraged the no-code nature of the platform to build their custom applications.

Many professional developers also expressed interest in the visual development experience they saw, but they were looking for additional capabilities.

Specifically, developers were asking for direct access to the code that the visual tools create, so they can change it and enhance it with their own custom code to achieve richer behaviors.

With the new VBCS version we are addressing these demands by adding direct access to manipulate the code, while keeping the low-code characteristics of VBCS.

Visual and Code Based Development Combined

Just like in previous versions, constructing the UI is done through a visual WYSIWYG layout editor. Existing VBCS users will notice that they now have access to a much richer set of UI components in the component palette. In fact, they now have access to all of the components offered by Oracle JET (Oracle's open-source JavaScript Extension Toolkit). In addition, you can add more components to the palette using the Web Components standard-based Oracle JET composite component architecture (CCA).

The thing to note about the visual editor is the new "Code" button at the top right: clicking this button gives professional developers direct access to the HTML code that makes up the page layout. They'll be happy to discover that the code is pure HTML/JavaScript/CSS, which lets them leverage their existing expertise to further enhance and customize it. Developers can directly manipulate that code through the smart code editor, leveraging features such as code insight, syntax highlighting, doc access, and reformatting, directly in their browser.

The visual development approach is not limited to page layouts. We extend it also to the way you can define business logic. Defining the flow of your logic is done through our new action flow editor. With a collection of operations that you can define in a declarative way, and the ability to invoke your specific JavaScript code for unique functionality.

Now that developers have direct access to the code, we also added integration with Git, leveraging the private Git repositories provided through Oracle Developer Cloud Service (DevCS). Teams can now leverage the full set of Agile methodology capabilities of DevCS when working on VBCS applications, including issue tracking, version management, agile planning and code review processes.

Mobile and Web Development Unified

With the new version of VBCS we further integrated the development experience across both web browser-based and on-device mobile applications. 

In the same project you can create both types of applications, leveraging the same development approach, application architecture, UI components, and access to custom business objects and external REST services.

Once you are done developing your mobile application, we'll package it for you as an on-device mobile app that you install, test, and run on your devices - leveraging the native look and feel provided by Oracle JET for the various mobile platforms.

Standard-Based Data Openness

With the new version you can now hook up VBCS to any REST data source with a few button clicks, leveraging a declarative approach to consuming external REST source in your application. VBCS is able to parse standard Swagger based service descriptors for easy consumption. Even if you don't have a detailed structure description for a service, the declarative dialog in VBCS makes it easy to define the access to any service, including security settings, header and URL parameters, and more. VBCS is smart enough to parse the structure returned from the service and create variables that will allow you to access the data in your UI with ease.

Let's not forget that VBCS also lets you define your own custom reusable business services. VBCS will create the database objects to store the information in these objects, and will provide you with a powerful secure set of REST services to allow you to access these objects from both your VBCS and external applications.

Visual Builder Cloud Service Goes Autonomous

Today’s Visual Builder Cloud Service release also has built-in autonomous capabilities to automate and eliminate repetitive tasks so you can instead focus on app design and development.

Configuring and provisioning your service is as easy as a single button click. All you need to do is tell us the name you want for your server, and with a click of a button everything is configured for you. You don't need to install and configure your underlying platform - the service automatically provisions a database, an app hosting server, and your full development platform for you.

One click install

The new autonomous VBCS eliminates any manual tasks for the maintenance of your development and deployment platforms. Once your service is provisioned we'll take care of things like patching, updates, and backups for you.

Furthermore, autonomous VBCS automatically maintains your mobile app publishing infrastructure. You just need to click a button and we'll publish your mobile app as iOS or Android packages, and host your web app on our scalable backend services that host your data and your applications.

But Wait There is More

There are many other new features you'll find in the new version of Oracle Visual Builder Cloud Service. Whether you are a seasoned JavaScript expert looking to accelerate your delivery, a developer taking your first steps in the wild world of JavaScript development, or a citizen developer looking to build your business application - Visual Builder has something for you.

So take it for a spin - we are sure you are going to enjoy the experience.

For more information and to get your free trial visit us at http://cloud.oracle.com/visual-builder

 

 

Oracle Dev Moto Tour 2018

OTN TechBlog - Mon, 2018-05-07 14:00
 "Four wheels move the body. Two wheels move the soul."
 
The 2018 Developers Motorcycle Tour will start their engines on May 8th, rolling through Japan and Europe to visit User Groups, Java Day Tokyo and Code events. Join Stephen Chin, Sebastian Daschner, and other community luminaries to catch up on the latest technologies and products, as well as bikes, food, Sumo, football or anything fun. 
 
Streaming live from every location! Watch their sessions online at @OracleDevs and follow them for updates. For details about schedules, resources, videos, and more through May and June 2018, visit DevTours 
 
Japan Tour: May 2018
In May, the dev tour motorcycle team will travel to various events, including the Java Day Tokyo conference.  Meet Akihiro Nishikawa, Andres Almiray, David Buck, Edson Yanaga, Fernando Badapoulis, Ixchel Ruiz, Kirk Pepperdine, Matthew Gilliard, Sebastian Daschner, and Stephen Chin.
 
May 8, 2018 Kumamoto Kumamoto JUG
May 10, 2018 Fukuoka Fukuoka JUG
May 11, 2018 Okayama Okayama JUG
May 14, 2018 Osaka Osaka JUG
May 15, 2018 Nagoya Nagoya JUG
May 17, 2018 Tokyo Java Day Tokyo
May 18, 2018 Tokyo JOnsen
May 19, 2018 Tokyo JOnsen
May 20, 2018 Tokyo JOnsen
May 21, 2018 Sendai Sendai JUG
May 23, 2018 Sapporo JavaDo
May 26, 2018 Tokyo JJUG Event
 
The European Tour: June 2018
In June, the dev tour motorcycle team will travel to multiple European countries and cities to meet Java and Oracle developers. Depending on the city and the event, which will include the Code Berlin conference, you'll meet Fernando Badapoulis, Nikhil Nanivadekar, Sebastian Daschner, and Stephen Chin.
 
June 4, 2018 Zurich JUG Switzerland
June 5, 2018 Freiburg JUG Freiburg
June 6, 2018 Bodensee JUG Bodensee
June 7, 2018 Stuttgart JUG Stuttgart
June 11, 2018 Berlin JUG BB
June 12, 2018 Berlin Oracle Code Berlin
June 13, 2018 Hamburg JUG Hamburg
June 14, 2018 Hannover JUG Hannover
June 15, 2018 Münster JUG Münster
June 16, 2018 Köln / Cologne JUG Cologne
June 17, 2018 Munich JUG Munich
 

Build an Integrated Replicat using JSON

DBASolved - Mon, 2018-05-07 13:15

In a previous post, I showed you how to build an Integrated Extract (IE) and Distribution Path by using JSON and cURL. In this post, let’s look at how you can build an Integrated Replicat (IR) in the same manner.

To build a replicat using JSON, the JSON document is made up of the following 8 sections:

Config – Details for the associated parameter file
Source – Where the replicat should read transactions from
Credentials – What credentials in the credential stores should be used
Checkpoint – What checkpoint table is used by the replicat
Mode – What type of replicat will be built
Registration – Register the replicat with the database
Begin – At what timeframe the replicat should start
Status – If the replicat should be started or not

The resulting JSON document would look like the following:

{
  "config":[
    "Replicat REPTS",
    "UseridAlias TGGATE",
    "Map SOE.*, Target SOE.*;"
  ],
  "source":{
    "name":"bc"
  },
  "credentials":{
    "alias":"TGGATE"
  },
  "checkpoint":{
    "table":"ggate.checkpoints"
  },
  "mode":{
    "type":"integrated",
    "parallel": true
  },
  "registration": "standard",
  "begin":"now",
  "status":"stopped"
}

Now that you have a valid JSON document, the Integrated Replicat can be built with a cURL command as follows:

curl -X POST \
  http://localhost:17001/services/v2/replicats/REPTS \
  -H 'Cache-Control: no-cache' \
  -d '{
    "config":[
      "Replicat REPTS",
      "UseridAlias TGGATE",
      "Map SOE.*, Target SOE.*;"
    ],
    "source":{
      "name":"bc"
    },
    "credentials":{
      "alias":"TGGATE"
    },
    "checkpoint":{
      "table":"ggate.checkpoints"
    },
    "mode":{
      "type":"integrated",
      "parallel": true
    },
    "registration": "standard",
    "begin":"now",
    "status":"stopped"
  }'

Just like the Integrated Extract (IE) and Distribution Service, the Integrated Replicat (IR) is created in a stopped state. At this point, you can start the IR and validate what changes need to be made to ensure replication happens.
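The post does not show it, but once the replicat is created you could double-check its definition with a GET against the same endpoint. This assumes the Administration Service REST API mirrors the POST above; adjust host, port and credentials to your environment:

# retrieve the REPTS replicat definition/status after creation
curl -s -X GET http://localhost:17001/services/v2/replicats/REPTS \
  -H 'Cache-Control: no-cache'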

Enjoy!!!

Categories: DBA Blogs

Oracle Raises the Bar with New Free Platinum-Level Support Services for Fusion Cloud Applications

Oracle Press Releases - Mon, 2018-05-07 07:10
Press Release
Oracle Raises the Bar with New Free Platinum-Level Support Services for Fusion Cloud Applications

REDWOOD SHORES, Calif.—May 7, 2018

Oracle (NYSE: ORCL) CEO Mark Hurd today unveiled new customer support offerings designed to help customers get more value from Oracle Fusion SaaS applications, including Fusion ERP, EPM, HCM, Supply Chain, Manufacturing, Sales and Service. Hurd described Oracle’s new SaaS Support Services as a new standard in the SaaS industry, designed to ensure customer success.

“As the software industry moves to the cloud, our application support offerings need to become more agile and responsive,” said Hurd. “We need to provide our SaaS customers with everything they need for rapid low-cost implementations and a successful rollout to their users.  In response to those needs, we are announcing that our highest level service, Oracle Platinum-Level Support, will now be provided to all of our Fusion Oracle SaaS customers at no extra cost. In addition, we are also announcing a new set of Custom Support Services that can be flexibly tailored to the particular requirements of any individual customer.”

Oracle SaaS applications customers will be able to take advantage of the new SaaS Support Services at no extra cost. The comprehensive base-support package features 24/7 rapid response technical support, proactive technical monitoring, success planning and adoption guidance, and a new digital experience platform for on-demand education resources.

Oracle has also launched a set of Advanced Services, which enable SaaS customers to gain a deeper level of support and customized services based on their specific business requirements. These tailored offerings can be added as and where needed, enabling the customer to choose which services they buy instead of being charged for bundled services they may not want or need.

“We have a good relationship with Oracle and get a lot of value out of Oracle SaaS solutions,” said Robert Sukharev, IT manager for the Digital Product Team at AIG. “We have more than 100 users of Oracle tools and having a single location for relevant information and free education resources will allow us to use these tools in a more effective, productive and successful way.”

Large SaaS vendors currently upsell customers on premium support, charging for faster response times, dedicated support and rarely utilized services within premium support packages. In contrast, Oracle SaaS Support Services will be offered free of charge to Oracle SaaS customers including cloud Enterprise Resource Planning, Supply Chain, Human Capital Management, Marketing, Sales, Service and Commerce customers.

“The rapid adoption of SaaS has accelerated the evolution of support services away from technical support to support for the business,” said Elaina Stergiades, research manager at IDC.  “Most SaaS providers include only basic remedial support as part of the subscription, and additional support can add significant cost. Offerings like Platinum-Level Support from Oracle for SaaS, provided at no additional cost, can help ensure customer success in the cloud with expanded access to support for severity 1 issues, personalized training and adoption guidance for specific functional areas.”

“We have a shared interest in our customers’ success so we’re going above and beyond to ensure our customers have everything they need to succeed,” said Catherine Blackmore, group vice president of North America Customer Success at Oracle. “Oracle SaaS Support Services raise the bar for the SaaS industry and further reduce friction for companies moving to the cloud.”

Oracle SaaS Support Services will include:

  • 24/7 technical support: To help customers resolve issues faster, without adding to costs, the new SaaS Support Services will enable all customers to receive a technical support response typically in less than 15 minutes for severity one issues.
  • Dedicated implementation support: Customers will receive a dedicated, named engineer to provide guidance and hands-on support through the implementation process.
  • Proactive technical monitoring: Oracle’s technical team will take advantage of the latest machine learning technology to diagnose and resolve technical problems without the customer having to file a service request. In fact, the customer may not even realize there was a problem.
  • Success planning support: Oracle will work with customers on a technical and business execution strategy to help ensure the successful deployment and end-user adoption of Oracle SaaS applications.
  • Business process monitoring and guidance: Oracle will leverage data and process efficiency insights to save the customer time and money by analyzing customer use cases and providing recommendations for business process improvements.
  • Education on-demand: Oracle is investing in free education courses and the creation of a new on-demand knowledge center, which is expected to feature more than 2,000 new training guides, as well as guided learning starter packs to walk users through how to execute processes in the application. Customers can also attend QuickStart events on the cloud service of their choice.
  • Customer Success Portal: To help customers expertly adopt SaaS and achieve their business objectives, Oracle SaaS Support Services will include a new digital platform where customers can: find Oracle resources such as education, training, technical support and consulting; interact with relevant expert content to help achieve specific business goals; and design personalized training and education paths.

Oracle Advanced Services will include:

  • Additional tailored services: Extended services designed to provide customized dedicated support through the entire SaaS lifecycle.
  • Certified resources: Oracle certified-to-know experts have deep expertise, product and domain knowledge and extensive experience driving change so customers get the right knowledge and experience to drive business outcomes.
Contact Info
Jessica Moore
Oracle
650.506.3297
Jessica.moore@oracle.com
Evelyn Tam
Oracle
650.506.5936
evelyn.tam@oracle.com
Additional Information

For additional information on Oracle SaaS Support Services, visit oracle.com

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Jessica Moore

  • 650.506.3297

Evelyn Tam

  • 650.506.5936

Oracle Delivers Next Set of Autonomous Cloud Platform Services

Oracle Press Releases - Mon, 2018-05-07 07:05
Press Release
Oracle Delivers Next Set of Autonomous Cloud Platform Services
Expands scope of autonomous capabilities with built-in AI and machine learning in its application development, integration, and analytics services

Redwood Shores, Calif.—May 7, 2018

As part of its pledge to extend autonomous capabilities across its entire Cloud Platform, Oracle today announced the availability of Oracle Autonomous Analytics Cloud, Oracle Autonomous Integration Cloud, and Oracle Autonomous Visual Builder Cloud. With built-in advanced artificial intelligence (AI) and machine learning algorithms, these groundbreaking new PaaS services automate and eliminate key tasks to enable organizations to lower cost, reduce risk, accelerate innovation, and get predictive insights.

Earlier this year, Oracle made available the world’s first Autonomous Database for data warehouse workloads. The company is committed to adding self-driving, self-securing, and self-repairing capabilities to all of its PaaS services coupled with specific autonomous capabilities for each functional area. In addition to today’s news, Oracle plans to release more autonomous services later in calendar year 2018 focused on mobile and chatbots, data integration, Blockchain, security and management, and additional database workloads, including OLTP.

“Embedding AI and machine learning in these cloud services will help organizations innovate in revolutionary new ways,” said Amit Zavery, executive vice president of development, Oracle Cloud Platform. “These new cloud services are the latest in a series of steps from Oracle to incorporate industry-first autonomous capabilities that will enable customers to significantly reduce operational costs, increase productivity, and decrease risk.”

As organizations focus on delivering innovation fast, they want a secure set of comprehensive, integrated cloud services to build new applications and run their most demanding enterprise workloads. Only Oracle’s cloud services can automate key operational functions like tuning, patching, backups and upgrades while running, to deliver maximum performance, high availability, and in-demand security features.

Brake Parts Speeds Ahead with Oracle Autonomous Integration Cloud

Brake Parts, a global company that supplies the world’s top brake brands for cars, vans, SUVs, light trucks and heavy duty vehicles, has partnered with Oracle to build an intelligent sales operation. By automating its end-to-end customer change, RFQ, and new product introduction processes, Brake Parts was able to eliminate manual steps and increase customer responsiveness.

“Using Oracle Autonomous Integration Cloud with Oracle Sales Cloud, we were able to modernize our applications and automate customer-facing activities across departments—transforming our sales operations,” said Lisa Oliver, Sales Operations, Brake Parts. “Oracle’s autonomous PaaS services enable us to digitize many of our manual processes so we can decrease costs, increase business insights, and better innovate.”

New Oracle Autonomous Cloud Platform Services

Newly available autonomous services include Oracle Autonomous Analytics Cloud, Oracle Autonomous Integration Cloud, and Oracle Autonomous Visual Builder Cloud. These services help organizations easily build and deploy modern applications, as well as integrate and analyze critical organizational data. 

Oracle Autonomous Analytics Cloud

Oracle Autonomous Analytics Cloud combines machine learning, adaptive intelligence, and service automation to create an analytics platform that breaks down barriers between people, places, data, and systems, fundamentally changing the way people analyze, understand, and act on information. Learn more here.

  • Empowers business users to uncover more insights, quickly. Users can ask questions on their mobile devices, and natural language processing converts those questions into queries in the backend to deliver rich visualizations on their device. It uses machine learning to gain intelligence and proactively suggest insight on data the user might not even have asked for.
  • Reveals hidden patterns and performance drivers through predictive insights and automatic natural-language explanations powered by machine learning.
  • Provides predictive analytics on IoT data applying domain specific machine learning algorithms on large volumes of sensor data and historical patterns of failure.

Oracle Autonomous Integration Cloud

Oracle Autonomous Integration Cloud accelerates digital transformation by enabling business processes spanning multiple Oracle and non-Oracle SaaS and on-premises applications through a combination of machine learning, embedded best-practice guidance, and pre-built application integration and process automation. Learn more here.

  • Speeds up integrations in the complex process of mapping attributes of objects across two different applications by using crowd sourced data of all executed integrations, and machine learning to deliver visual recommendations of how to connect those objects.
  • Delivers intelligent dynamic adaptive case management by providing APIs to embed AI/machine learning frameworks, and suggest the next best action in an automated process flow.
  • Increases resiliency and performance by providing self-driving and self-tuning integrations to manage large workloads intelligently.
  • Enables Robotic Process Automation with AI/machine learning to automate the last mile of case management or process automation with systems that are not API enabled.

Oracle Autonomous Visual Builder Cloud

Oracle Autonomous Visual Builder Cloud helps to accelerate mobile and web application development and deployment by enabling business users and developers to build these applications with no coding. Learn more here.

  • Automates code generation using the latest industry-standard technologies with single click deployment enabling rapid application development even by line-of-business users.
  • Automates delivery of mobile applications across multiple platforms including iOS and Android.
  • Easily extensible and based on standard open-source technology (Oracle JET, Swagger).
 

Oracle Cloud Platform services all share foundational autonomous capabilities including:

  • Self-Driving to Lower Costs and Increase Productivity: Eliminate human labor to provision, secure, monitor, backup, recover and troubleshoot. Automatically upgrade and patch itself while running. Instantly grow and shrink compute or storage without downtime.
  • Self-Securing to Lower Risk: Protect from external attacks and malicious internal users. Automatically apply security updates while running to protect against cyberattacks, and automatically encrypt all data.
  • Self-Repairing for Higher Availability: Provide automated protection from all planned and unplanned downtime with up to 99.995 percent availability, resulting in less than 2.5 minutes of downtime per month including planned maintenance.
 

Organizations can try Oracle Autonomous Cloud Platform services, including Oracle Autonomous Data Warehouse Cloud by signing up for a Free Oracle Cloud Trial via http://cloud.oracle.com/tryit

Contact Info
Nicole Maloney
Oracle
+1.650.506.0806
nicole.maloney@oracle.com
Kristin Reeves
Blanc and Otus
+1.925.787.6744
kristin.reeves@blancandotus.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Nicole Maloney

  • +1.650.506.0806

Kristin Reeves

  • +1.925.787.6744

Organizations Worldwide Turn to Oracle Cloud to Fuel their Modernization Efforts

Oracle Press Releases - Mon, 2018-05-07 07:00
Press Release
Organizations Worldwide Turn to Oracle Cloud to Fuel their Modernization Efforts

Redwood Shores, Calif.—May 7, 2018

Companies and organizations around the globe are rapidly adopting Oracle Cloud solutions. Offering its customers a unique advantage, Oracle is the only provider to deliver a complete cloud portfolio, spanning Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS). Today, companies in 195 countries and territories are processing more than 55 billion transactions a day through the Oracle Cloud. Recent wins and expansions include: Alsea, Broadcom Limited, Exelon, Gonzaga University, Heineken Urban Polo, Providence St. Joseph Health, Sinclair Broadcast Group, and T-Mobile US Inc.

“Oracle Cloud is redefining how organizations modernize, innovate and compete,” said Thomas Kurian, president, product development, Oracle. “Regardless of where organizations want to start their cloud journeys, Oracle delivers complete and integrated cloud services that deliver faster innovation and business transformation. The momentum and adoption we have seen over the past few quarters is a testament to the value Oracle Cloud provides.”

Industry’s Most Complete, Integrated Cloud Platform

The combined power of Oracle’s Cloud IaaS and PaaS offerings enables customers to extend, enrich, and customize their application environments with unprecedented intelligence, security, speed and predictability. Driving the future of business, Oracle Cloud integrates its deep product knowledge, along with the latest emerging technologies, such as artificial intelligence (AI), machine learning (ML), blockchain, IoT and chatbots, into every layer of the stack to allow customers to drive innovation and scale.

Further transforming the market, Oracle today announced the general availability of new autonomous capabilities across its PaaS portfolio. With self-driving, self-securing and self-repairing capabilities for analytics, data integration, mobility, content management and more, customers can focus on accelerating business results by limiting human labor, human error and manual tuning. The availability of Oracle Autonomous Data Warehouse Cloud, the first service based on the revolutionary new Oracle Autonomous Database, was also recently announced.

Intelligent Business Applications

Oracle Cloud Applications help organizations around the world grow faster, differentiate from competitors and better serve their customers.

With Oracle Cloud Applications, organizations across all industries can deliver the experiences customers expect, attract and retain the talent to succeed, and achieve the performance the market demands. To further empower business users, Oracle has embedded AI capabilities across Oracle Cloud Applications for finance, human resources, supply chain, manufacturing, commerce, customer service, marketing and sales professionals.

Additional recent customer wins and expansions across Oracle IaaS, PaaS and SaaS include:

Atomitech Inc., Edith Cowan University, Future Robot, Barrick Gold Corporation, C’s NEXT, Colegio de Bachilleres, Contribute NV, DP World Ltd, Dubai Airport Freezone Authority, Emirates Flight Catering, Hytera Communications Ltd, Ildong Pharmaceutical Co., Ltd., Konyang University, Land O’Lakes, Inc., Leshan City Commercial Bank, McDermott, MCore Group, New Hope Liuhe Co. LTD, Promata, Stallion Oilfield, Ornua Co-Operative, RecVue, Saga d.o.o Beograd., TCI, TaskUs, University of Wisconsin Hospital and Clinics Authority, and Zeneral Heatpump Industry Co Ltd. 

“We needed a robust system to ensure business continuity in the event of a disaster,” said Punit Rastogi, AVP and head of IT, TCI. “We chose Oracle Cloud and saved both time and money in establishing our disaster recovery system. Oracle Cloud also eliminated the need for regular maintenance, patching, and upgrades of our systems.”

"We considered providing a service to monitor on-premises and public clouds as an added-value improvement for the IT monitoring service we deliver,” said Katsuhiro Kawasaki, managing director, Atomitech Inc. “By adopting Oracle Management Cloud, we were able to build a new business pillar with monitoring infrastructure and reduce the operational burden."

“Our focus is to be one of the world's most innovative free zones and provide services that exceed expectations in every way,” said Amna Lootah, assistant director general, Dubai Airport Freezone Authority (DAFZA). “Oracle Cloud helps us deliver superior customer service and enhance management's ability to proactively ensure continuous business improvement, anytime and anywhere.”

“It normally took our team at least one week to prepare executive reports,” said Arun Tewary, CIO, Emirates Flight Catering (member of Emirates Group). “Finally, with modern BI Interactive tools in the cloud, we can generate daily intelligence reports which will improve data accuracy, accessibility and our entire decision making process.”

"Oracle Management Cloud offers powerful log analytics functionality, enabling us to efficiently predict patterns and anomalies, and resolve performance issues for our online store,” said Jae Yoo Kim, information strategy team manager, Ildong Pharmaceutical Co., Ltd. “Oracle Cloud is the perfect solution to increase our productivity and business agility."

“Oracle Cloud Applications will support our vision of digital innovation in our business processes, agility in our deployment of solutions, and long-term value and scalability,” said Janice Newell, executive vice president, information services and CIO at Providence St. Joseph Health.

Contact Info
Christine Allen
Oracle
+1.603.743.4534
christine.allen@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Christine Allen

  • +1.603.743.4534

FBIs don’t exist

Jonathan Lewis - Mon, 2018-05-07 03:24

This is a reprint (of a reprint) of a note I wrote more than 11 years ago on my old website. I’ve decided to republish it on the blog simply because one day I’ll probably decide to stop paying for the website given how old all the material is and this article makes an important point about the need (at least some of the time) for accuracy in the words you use to describe things.

—————————————————————————-

There’s no such thing as a function-based index.

Well, okay, that’s what the manuals call them but it would be so much better if they were called “indexes with virtual columns” – because that’s what they are and that’s a name that would eliminate confusion.

To demonstrate what I mean, ask yourself this question: “Can the rule based optimizer use a function-based index ?”. The answer is ‘Yes’, as the following code fragment demonstrates:


rem
rem     Script:         fbi_rule.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jan 2005
rem

create table t1 as
select
         rownum                         id,
         dbms_random.value(0,500)       n1,
         rpad('x',10)                   small_vc,
         rpad('x',100)                  padding
from
         all_objects
where
         rownum <= 3000
;
 
create index t1_i1 on t1(id, trunc(n1));
 
set autotrace traceonly explain
 
select
         /*+ rule */
         small_vc
from
         t1
where    id = 55
and      trunc(n1) between 1 and 10
;


set autotrace off
 
Execution Plan
----------------------------------------------------------
0      SELECT STATEMENT Optimizer=HINT: RULE
1   0    TABLE ACCESS (BY INDEX ROWID) OF 'T1'
2   1      INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE)

Last time I asked an audience if the rule-based optimizer (RBO) could use a function-based index, most of them thought the answer was ‘No’. Even the Oracle manuals make the same mistake – for example in the 10g Release 2 Application Developers Guide p5-8, one of the restrictions on function-based indexes is “Only cost based optimization can use function-based indexes”.

If I had asked the audience “Can the rule-based optimizer use an index which includes a virtual column ?” I wonder how many of them would have paused for thought, then asked themselves what would happen if the index started with “ordinary” columns and the “function-based” bit was later on in the index.

The manuals should, of course, state: “The rule-based optimizer cannot take advantage of any virtual columns in an index, or of any columns that follow the first virtual column”. Given a correct name and a correct description of functionality you can then conclude that if the first column is a virtual column the rule-based optimizer won’t use the index.

I’m not suggesting, by the way, that you should be using the rule-based optimizer, or even that this specific example of functionality is going to be particularly beneficial to many people (RBO still uses the “trunc(n1)” as a filter predicate after reaching the table rather than as an access predicate – or even filter predicate – on the index); but it does demonstrate how easy it is for the wrong name, or terminology, to distract people from the truth.

And here’s another thought for Oracle Corporation. Since it seems to be easy to implement virtual columns (there is a hidden entry for each such column in the data dictionary, and the text of the function defining the column appears as the default value), why should they exist only in indexes? Why can’t we have virtual columns which aren’t indexed, so that we can collect statistics on a virtual column and give the optimizer some information about the data distribution of some commonly used expression that we don’t actually want to build an index on.

(Update Jan 2007 – this is likely to happen in 11g according to ‘sneak preview’ presentations made by Oracle at OW2006.)
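
For readers on 11g or later, a minimal sketch of what that eventually looked like, reusing the table from the example above (the virtual column name is mine, and you would normally follow it with a fresh gather of statistics):

alter table t1 add (
        trunc_n1        generated always as (trunc(n1)) virtual
);

begin
        dbms_stats.gather_table_stats(
                ownname         => user,
                tabname         => 'T1',
                method_opt      => 'for columns trunc_n1 size 254'
        );
end;
/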

P.S. There really are function-based indexes in Oracle. But Oracle Corp. calls them domain indexes (or co-operative indexes) and tells you that the things you build them with are operators, not functions … which actually makes them operator-based indexes!

—————————————————————————-

 Footnote (May 2018)

I’ve updated the reference to the 10g manuals (chapter 5 page 8) to include a URL, but the URL is for 11gR2 since the only 10g manual I could find online was the full pdf download.  It’s  interesting to note what restrictions on the use of “function-based” indexes are reported in this manual, and I’m not sure that all of them were true at the time, and I’m fairly sure that some of them must be false by now, which is why it’s always good to have test scripts that you can run as you upgrade.

There is an interesting variation over time for this example:

  • In 9.2.0.8 and 10.2.0.5 the predicate on trunc(n1) is a filter predicate on the table
  • In 11.1.0.7 the predicate trunc(n1) became an access predicate in the index
  • In 11.2.0.4 the optimizer (finally) declined to use the index under the rule hint (but introduced a strange side effect … more about that later)

Execution plan from 11.1.0.7


Execution Plan
----------------------------------------------------------
Plan hash value: 1429545322

---------------------------------------------
| Id  | Operation                   | Name  |
---------------------------------------------
|   0 | SELECT STATEMENT            |       |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |
|*  2 |   INDEX RANGE SCAN          | T1_I1 |
---------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("ID"=55 AND TRUNC("N1")>=1 AND TRUNC("N1")<=10)

Note
-----
   - rule based optimizer used (consider using cbo)

In passing – the change in the execution plan from 10g to 11.1 to 11.2 does mean that anyone still using the rule-based optimizer could find that an upgrade makes a difference to rule-based execution plans.

As well as ignoring the index, 11.2.0.4 did something else that was new. I happened to have a second index on the table defined as (n1, trunc(id)); this had no impact on the execution plan for all the previous versions of Oracle; apart from switching to a full tablescan, 11.2.0.4 also introduced an extra predicate:


PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 3617692013

----------------------------------
| Id  | Operation         | Name |
----------------------------------
|   0 | SELECT STATEMENT  |      |
|*  1 |  TABLE ACCESS FULL| T1   |
----------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(TRUNC("N1")<=10 AND TRUNC("N1")>=1 AND
              TRUNC("ID")=TRUNC(55) AND "ID"=55)

Note
-----
   - rule based optimizer used (consider using cbo)

Some piece of code somewhere must have been looking at the second “function-based index” – or, at least, it’s virtual column definition – to be able to generate that trunc(id) = trunc(55) predicate. This was a detail introduced in 11.2.0.2, affected by fix control 9263333: “generate transitive predicates for virtual column expressions”. It’s possible that a change like this could result in changes in execution plan due to the extra predicates – even under rule-based optimisation.
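
If you suspect that this particular fix is behind a plan change, it can be switched off at session level purely as a diagnostic test (a sketch only – not something to leave set in production):

alter session set "_fix_control" = '9263333:0';   -- disable "generate transitive predicates for virtual column expressions"
-- re-run the query and compare the plan
alter session set "_fix_control" = '9263333:1';   -- back to the default behaviour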

idql and its column output

Yann Neuhaus - Mon, 2018-05-07 03:23
idql and its column output

A few days ago, I was reading an interesting blog from distinguished colleague Clemens Bleile with the title “sqlplus and its column output” (link here https://blog.dbi-services.com/sqlplus-and-its-column-output/) and I said to myself: the lucky Oracle administrators and developers have sqlplus, a rather good, out-of-the-box command-line tool to talk to their databases. What equivalent tool do we have with Documentum? Well, we have mainly idql, which, to put it mildly, sucks. Unlike sqlplus, idql has no column formatting, no reporting, no variable substitution, no error trapping, actually almost nothing, not even command editing or command history (at least, under Unix/Linux). It just reads DQL statements, passes them to the content server and displays back the received answer. Pretty basic. However, in their defense, the Documentum creators gave away the source code of an ancient version (look for $DOCUMENTUM/share/sdk/example/code/idql.c), so it is relatively easy to enhance it the way you like.
Needless to say, having a nicely displayed output is not possible within idql. Whereas sqlplus’ column formatting allows one to control the column width, make it narrower and thus avoid those unending lines filled with spaces, in idql a query is displayed as is. For example, in sqlplus, “column mycol format A10” tells sqlplus to display the column mycol as an alphanumeric value in a field not larger than 10 characters; exceeding characters are wrapped around on the next line(s). However, if many columns are SELECTed, long result lines are unavoidable in both sqlplus and idql, and the solution proposed in Clemens’ blog can help with idql too since it applies to the terminal as a whole.
Hereafter though, I’d like to propose a few alternatives that don’t require another terminal software, although they may somewhat lack in interactivity. One of them uses the less command, another uses less + a named pipe and a third one a simple awk script to compact and reflow a query’s output. Here we go.
1. Use less command
If reading the output can be done separately from entering the commands into idql, “less -S” is pretty cool:

# run the select and output the result into a text file;
idql dmtest -Udmadmin -Pdmadmin -w100 <<EoQ > tmp_file
select * from dm_sysobject
go
quit
EoQ
# now use less -S to examine the result;
less -S tmp_file

Now, it is possible to scroll left and right and have a good look at the result.
Some columns, though, are so wide and filled with trailing blanks that it is quite distracting. This will be taken care of later in the last alternative.

2. A more interactive variant with less -S
It is possible to stay in idql while the output is being redirected into a named pipe which is read by “less -S” and displayed in a second terminal. To do this, follow the steps below:
a. Create a named pipe named idqlp:

mkfifo idqlp

Contrary to the usual anonymous pipes, named pipes have, well, a name, and are created like files in a filesystem. As expected, like their counterparts, they can be written to and read from.
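As a quick aside, the producer/consumer blocking behaviour that will matter later can be seen with any throwaway fifo (demo_pipe below is an arbitrary name):

mkfifo demo_pipe
echo hello > demo_pipe &      # the writer blocks until a reader opens the pipe
cat demo_pipe                 # prints "hello" and releases the writer
rm demo_pipe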
b. copy/paste the following command, it will create a pre-processor script for the less command:

cat - <<'EoScript' > lesspipe.sh
#! /bin/sh

# must define LESSOPEN environment variable to be used;
# export LESSOPEN="|~/lesspipe.sh %s"

case "$1" in
   idqlp)
      cat idqlp
      ;;
   *) exit 1
      ;;
esac
exit $?
EoScript

The preprocessor script will be invoked when less is launched, right before it. This is a nifty feature of less which allows to play tricks with binary files, e.g. decompressing them before viewing them (if they are compressed files). I guess less-ing a java class file could first invoke a decompiler and then pass the result to less. There is also a postprocessor for tasks to be performed after less exits, such as cleaning up the intermediate file created by the preprocessor. All this is very well presented in less’ man page.
c. make it executable:

chmod +x lesspipe.sh

d. copy/paste the following command in a terminal, it will create the consumer script that will continuously be reading from the named pipe idqlp:

cat - <<EoScript > readpipe.sh
#! /bin/bash

export LESSOPEN="|~/lesspipe.sh %s"

while [ true ]; do
   less -S idqlp
done
EoScript

e. make it executable:

chmod +x readpipe.sh

f. in the first terminal, run this script in the foreground:

./readpipe.sh

g. in the second terminal, run idql with a redirection into the named pipe idqlp:

idql dmtest -Udmadmin -Pdmadmin -w100 > idqlp

h. now, type your DQL statements with or without those endless lines:

execute show_sessions

go

select * from dm_sysobject
go

select * from dm_document
go

Result:


The DQL output of the second terminal (bottom) is displayed in the first terminal (top) where it can be browsed by less.
This trick works quite well as long as a few precautions are respected. As you know, a pipe blocks the producer when it gets full until the consumer starts reading it at the other end. Thus, idql is blocked as long as less has not finished reading its output; for short outputs, no special action is required, but for those long listings, in order to force less to reach the end of the output, type shift-g in the less window; give it enough time so the DQL statement completes its output, then ctrl-C. idql is then released and the output can be quietly navigated from within less in the first terminal. Once done, BEFORE entering any new command in idql, quit less (command q) so the next cycle begins. Now, the next command can be typed in idql. Failure to do this can hang the consumer in the first terminal and the commands below must be used to get it back on track:

ctrl-Z         # send the consumer into the background;
jobs -l        # identify the pid that's messed up;
kill -9 pid    # send it ad patres;
./readpipe.sh  # restart the consumer;

Or use the one-liner:

jobs -l | grep readpipe.sh | cut -d' ' -f3 | xargs kill -9

Sometimes, even the producer process must be restarted. If all this looks clumsy at first, once you get the habit of it, it becomes quite automatic.
This alternative is nice because it avoids cluttering the command window: the DQL commands are separated from their output and therefore stay visible in the second terminal. Moreover, as illustrated, error messages don’t show in the less-ed output.

3. The third alternative: compact the output
Even though those pesky long lines are now tamed, the issue of those extra-wide columns mostly filled with blanks remains, and this alternative is aimed at it.
Firstly, here is where we take our inspiration from: Oracle sqlplus. Consider the SQL query below:
(screenshot: the sqlplus query output, with a very wide USER_PASSWORD column)
— The USER_PASSWORD column is still too wide, let’s narrow it:
(screenshot: the same output after narrowing the column; the value now wraps within the column width)
See what happened here? Column USER_PASSWORD’s text has been reflowed inside the column’s width, not truncated.
OK, we want all these 3 things:
. compact the columns by removing trailing blanks;
. control the column widths by resizing them the way we like;
. introduce column wrapping if our width is too narrow;
Obviously, since we don’t have the source code of the current idql, we cannot enhance the way it displays the query results, so we will do this outside idql and in 2 steps: execute the query and capture its output, then process it.
The output processing is performed by the following awk script:

# compact_wwa.awk;
# Usage:
#    gawk -v maxw=nn -f compact_wwa.awk file
# or:
#    cmd | gawk -v maxw=nn -f compact_wwa.awk
# where:
#     maxw is the maximum column width; characters outside this limit are wrapped around in their own column;
# example:
#     gawk -v maxw=50 -f compact_wwa.awk tmp_file  | less -S
# C. Cervini, dbi-services.com
BEGIN {
   while (getline && !match($0, /^([0-9]+> )+/));
   header = substr($0, RLENGTH + 1)
   getline
   nbFields = NF
   fs[0] = 0; fw[0] = -1 # just so that fs[1] = 1, see below;
   headerLine = ""; sepLine = ""
   for (i = 1; i <= NF; i++) {
      fs[i] = fs[i - 1] + fw[i - 1] + 2
      fw[i] = length($i)
      sepLine = sepLine sprintf("%s  ", substr($0, fs[i], min(fw[i], maxw)))
   }
   printWithWA(header)
   printf("%s\n", sepLine)
}
{
   if (match($0, /^\([0-9]+ rows? affected\)/)) {
      print
      exit
   }
   printWithWA($0)
}
function printWithWA(S) {
   do {
      left_over = ""
      for (i = 1; i <= nbFields; i++) {
         Min = min(fw[i], maxw)
         printf("%s  ", substr(S, fs[i], Min))
         subS = substr(S, fs[i] + Min, fw[i] - Min)
         if (length(subS) > 0) {
            left_over = left_over sprintf("%-*s  ", fw[i], subS)
         }
         else
            left_over = left_over sprintf("%*s  ", fw[i], "")
      }
      printf "\n"
      gsub(/ +$/, "", left_over)
      S = left_over
   } while (left_over)
}
function min(x, y) {
   return(x <= y ? x : y)
}

Now, let’s put it to use:

idql dmtest -Udmadmin -Pdmadmin <<EoQ | gawk -v maxw=35 -f compact_wwa.awk | less -S
select r_object_id, user_name, user_os_name, user_address, user_group_name, user_privileges, owner_def_permit, world_def_permit, group_def_permit, default_folder, user_db_name, description,
acl_domain, acl_name, user_os_domain, home_docbase, user_state, client_capability, globally_managed, user_delegation, workflow_disabled, alias_set_id, user_source, user_ldap_dn, user_xprivileges,
failed_auth_attempt, user_admin, user_global_unique_id, user_login_name, user_login_domain, user_initials, USER_PASSWORD, user_web_page, first_failed_auth_utc_time, last_login_utc_time,
deactivated_utc_time, deactivated_ip_addr, root_log_dir
from
   dm_user
go
exit
EoQ

By the way, funny thing: here “select user_password” is not the same as “select USER_PASSWORD”. The first returns a sequence of asterisks while the second returns an ASCII representation of the encrypted password. The generated SQL explains why.
“select user_password” gets compiled into the SQL statement below:

select all '****************' as user_password from dm_user_sp dm_user

whereas “select USER_PASSWORD” is the real one:

select all dm_user.USER_PASSWORD from dm_user_sp dm_user

Unlike unquoted column names in Oracle SQL, attribute names in DQL are case-sensitive!
And here is the result:

(screenshot: the compacted, wrapped-around query output displayed through less -S)

The script takes one parameter, maxw, the maximum column width. If the columns have too many characters, they are wrapped around on the next line(s) until the whole column has been displayed.

4. The altogether

What if we want the above line compaction and column wrapping but interactively, like in alternative 2? Easy. Just edit the script readpipe.sh and change the line

cat idqlp

to

cat idqlp | gawk -v maxw=35 -f compact_wwa.awk

Said otherwise, we are preprocessing idql’s output through the awk filter before giving it to less.

A final alternative

We can achieve the same result with dmawk by writing a generic procedure that takes a DQL query to execute and a maximum column width, or a list of column names and widths (as if we had entered a sequence of “col mycol format …” in sqlplus), so everything is done on the fly, but, as they say, this is left as an exercise to the reader. Or, why not, maybe in a blog to come.
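
In the meantime, the pieces above can already be glued together in a small shell wrapper so that the query, the maximum width and the pager are handled in one call (a sketch only; the script name, docbase and credentials are mine, and it relies on compact_wwa.awk from alternative 3):

#!/bin/bash
# dql.sh -- hypothetical convenience wrapper, not part of the original tooling;
# usage: ./dql.sh "select * from dm_document" [max_column_width]
QUERY="$1"
MAXW="${2:-35}"
idql dmtest -Udmadmin -Pdmadmin <<EoQ | gawk -v maxw="$MAXW" -f compact_wwa.awk | less -S
$QUERY
go
exit
EoQ

A typical call would then be: ./dql.sh "select r_object_id, object_name from dm_document" 30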

 

The article idql and its column output appeared first on Blog dbi services.

Migrate On-premise DB to database Cloud service using Cloud DB backup module

Are you considering migrating your Oracle Database to Oracle Database on the Cloud? Are you planning to backup your Oracle Database to the Cloud? The Oracle Database Cloud DB backup module, allows...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Offline Relocation of a PDB using RMAN

Hemant K Chitale - Sun, 2018-05-06 11:17
I've published a new video on Offline Relocation of a PDB using RMAN in 12.2.

Categories: DBA Blogs

The top 3 reasons to attend malagAPEX!

Joel Kallman - Sun, 2018-05-06 10:05

The explosion and adoption of Oracle APEX continues across the globe, and clear evidence of this is the introduction of not one but two new all-APEX conferences.  At the end of May 2018 is the latest APEX conference, malagAPEX.  If you're looking for an opportunity to get connected with the growing APEX community, this is an excellent place to start.

Here are the top 3 reasons why you should consider attending malagAPEX:

  1. The collection of speakers they have assembled for this conference is extraordinary.  They are all highly-respected and well-known luminaries in the Oracle APEX and Oracle application development communities.  Many of them are Oracle ACEs, recognized experts by Oracle and community champions.  Appdev is what they do for a living, and they will share their real-world experience with you.
  2. The agenda and topics covered at this conference are cutting edge - cloud, Docker, machine learning, Oracle JET, REST and more.  And of course, there will be plenty of APEX too!
  3. The location is breathtaking.  For those who may not be familiar with Málaga (or if you're American!), it is located in southern Spain, in Costa del Sol (Coast of the Sun) at the northern side of the Mediterranean Sea.  It will be beautiful, sunny and hot.  After the winter many of us recently endured, it sounds idyllic.

As I recently posted on Twitter this past week, one of the best things about APEX is...the APEX community!  Others have told me this, and I believe it too.  A community member Denis Savenko recently blogged about his first time experience in the Oracle APEX community at a new conference, and he said "awesomeness was in the air - everybody was extremely positive and ready to share their thoughts when it came to a discussion of any sort of a problem. I met a lot of interesting people from different countries, improved my professional contacts list dramatically and also had a chance to speak to really important people."  I can't state it any better than that.  This is representative of the awesome Oracle APEX community.  If you're looking for an opportunity to get plugged in, consider malagAPEX.
