Feed aggregator

Import data from Flat File to two different Table using UTL_FILE.

Tom Kyte - 6 hours 5 min ago
Hi Please help this Question. Import data from Following Flat File to two different Table using UTL_FILE. a. EMP and b. DEPT Note --- 1. In Last Line NULL Employee Should not Entry into Table. 2. Deptno Should go to both the Table EMP a...
Categories: DBA Blogs

Repeating parent-nodes in hierarchical query

Tom Kyte - 6 hours 5 min ago
Hello AskTOM Team, with the schema as provided in the LiveSQL Link (which is simply the example EMP/DEPT schema), I ran the following query: select case when LEVEL = 1 then ENAME else rp...
Categories: DBA Blogs

Using Fast Offline Conversion to Enable Transparent Data Encryption in EBS

Steven Chan - Mon, 2017-11-20 13:07

We are pleased to announce a new capability that enables you to perform offline, in-place conversion of datafiles for use with Transparent Data Encryption (TDE). This Fast Offline Conversion feature is now available for use with Oracle E-Business Suite 12.1.3, and 12.2.2 and later 12.2.x releases.

What does this feature do?

Fast Offline Conversion converts existing clear data to TDE-encrypted tablespaces.

The encryption is transparent to the application, so code does not have to be rewritten and existing SQL statements will work unchanged. Any authorized database session can read the encrypted data: the encryption only applies to the database datafiles and backups.

This new process is now the recommended procedure for converting to TDE with minimal downtime and lowest complexity. It supersedes previous methods for converting to TDE.

How do I go about using this feature?

You enable Fast Offline Conversion by applying a patch to the 12.1.0.2 or 11.2.0.4 database underlying your EBS environment. The patch - which is available on request from Oracle Support - enables offline, in-place TDE conversion of datafiles.
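
As a hedged illustration, the conversion then takes this general form, using the offline tablespace encryption syntax that Oracle Database 12.2 introduced and that this patch makes available (the tablespace name is illustrative, a keystore must already be configured and open, and the My Oracle Support document referenced below has the authoritative steps):

SQL> alter tablespace APPS_TS_TX_DATA offline normal;
SQL> alter tablespace APPS_TS_TX_DATA encryption offline encrypt;
SQL> alter tablespace APPS_TS_TX_DATA online;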

Where are the detailed instructions?

Full steps for enabling Fast Offline Conversion are provided in a My Oracle Support knowledge document.


Categories: APPS Blogs

Enabling Fluid for Firefox on Linux for PeopleTools 8.54 and 8.55

Javier Delgado - Mon, 2017-11-20 08:09
At one of our customers we came across an issue where users connecting to PeopleSoft using Firefox on Ubuntu would be shown the classic home page instead of the Fluid landing page.

After some research, we found out that this happens in PeopleTools 8.54 and 8.55 due to a known issue. Document 2235517.1 in My Oracle Support indicates that the issue is resolved in PeopleTools 8.56.

So we started looking for workarounds, and finally found one: modifying the following file under the web server directory:

%PS_CFG_HOME%\webserv\%domain%\applications\peoplesoft\PORTAL.war\WEB-INF\psftdocs\%site%\browscap

In this file, we included the following changes:

(...)
browserPlatform=MACIPAD;
if mac
browserPlatform=MAC;

# customer - author - BEGIN
if (?=.*Linux)
browserPlatform=WIN8;
# customer - author - END
}

{# Form Factors
(...)
if (iPhone)|(iPad)|(iPod)
browserPlatformClass=ios;
if mac
browserPlatformClass=mac;
browserDeviceTypeClass=pc

# customer - author - BEGIN
if (?=.*Linux)
browserPlatformClass=win;
browserDeviceTypeClass=pc
# customer - author - END
}
if android
browserPlatformClass=android;
(...)

Once this was done, and after rebooting the web server, the issue was solved.

Note: I would like to thank Nicolás Zocco for his invaluable contribution in finding this workaround.

firewalld rules for Veritas Infoscale 7.3 with Oracle

Yann Neuhaus - Mon, 2017-11-20 06:30

You might wonder, but yes, Veritas is still alive and there are customers that use it and are very happy with it. Recently we upgraded a large cluster from Veritas 5/RHEL5 to Veritas InfoScale 7.3/RHEL7 and I must say that the migration was straightforward and very smooth (when I have time I'll write another post specific to the migration). At some point during this project the requirement to enable the firewall on the Linux hosts came up, so we needed to figure out all the ports and then set up the firewall rules for that. This is how we did it…

The first step was to create a new zone because we did not want to modify any of the default zones:

root@:/home/oracle/ [] firewall-cmd --permanent --new-zone=OracleVeritas
root@:/home/oracle/ [] firewall-cmd --reload
success
root@:/home/oracle/ [] firewall-cmd --get-zones
OracleVeritas block dmz drop external home internal public trusted work

The ports required for Veritas InfoScale are documented here. This is the set of ports we defined:

##### SSH
root@:/home/oracle/ [] firewall-cmd --zone=OracleVeritas --permanent --add-service=ssh
##### Veritas ports
root@:/home/oracle/ [] firewall-cmd --zone=OracleVeritas --permanent --add-port=4145/udp            # vxio
root@:/home/oracle/ [] firewall-cmd --zone=OracleVeritas --permanent --add-port=4145/tcp            # vxio
root@:/home/oracle/ [] firewall-cmd --zone=OracleVeritas --permanent --add-port=5634/tcp            # xprtld
root@:/home/oracle/ [] firewall-cmd --zone=OracleVeritas --permanent --add-port=8199/tcp            # vras
root@:/home/oracle/ [] firewall-cmd --zone=OracleVeritas --permanent --add-port=8989/tcp            # vxreserver
root@:/home/oracle/ [] firewall-cmd --zone=OracleVeritas --permanent --add-port=14141/tcp           # had
root@:/home/oracle/ [] firewall-cmd --zone=OracleVeritas --permanent --add-port=14144/tcp           # notifier
root@:/home/oracle/ [] firewall-cmd --zone=OracleVeritas --permanent --add-port=14144/udp           # notifier
root@:/home/oracle/ [] firewall-cmd --zone=OracleVeritas --permanent --add-port=14149/tcp           # vcsauthserver
root@:/home/oracle/ [] firewall-cmd --zone=OracleVeritas --permanent --add-port=14149/udp           # vcsauthserver
root@:/home/oracle/ [] firewall-cmd --zone=OracleVeritas --permanent --add-port=14150/tcp           # CmdServer
root@:/home/oracle/ [] firewall-cmd --zone=OracleVeritas --permanent --add-port=14155/tcp           # wac
root@:/home/oracle/ [] firewall-cmd --zone=OracleVeritas --permanent --add-port=14155/udp           # wac
root@:/home/oracle/ [] firewall-cmd --zone=OracleVeritas --permanent --add-port=14156/tcp           # steward
root@:/home/oracle/ [] firewall-cmd --zone=OracleVeritas --permanent --add-port=14156/udp           # steward
root@:/home/oracle/ [] firewall-cmd --zone=OracleVeritas --permanent --add-port=443/tcp             # Vxspserv
root@:/home/oracle/ [] firewall-cmd --zone=OracleVeritas --permanent --add-port=49152-65535/tcp     # vxio
root@:/home/oracle/ [] firewall-cmd --zone=OracleVeritas --permanent --add-port=49152-65535/udp     # vxio
##### Oracle ports
root@:/home/oracle/ [] firewall-cmd --zone=OracleVeritas --permanent --add-port=1521/tcp            # listener
root@:/home/oracle/ [] firewall-cmd --zone=OracleVeritas --permanent --add-port=3872/tcp            # cloud control agent

Because we wanted the firewall only on the public network, but not on the interconnect, we changed the interfaces for the zone:

root@:/home/oracle/ [] firewall-cmd --zone=OracleVeritas --change-interface=bond0
root@:/home/oracle/ [] firewall-cmd --zone=OracleVeritas --change-interface=eth0
root@:/home/oracle/ [] firewall-cmd --zone=OracleVeritas --change-interface=eth2

One additional step to make this active is to add the zone to the interface configuration (this is done automatically if the interfaces are under the control of NetworkManager):

root@:/home/oracle/ [] echo "ZONE=OracleVeritas" >> /etc/sysconfig/network-scripts/ifcfg-eth0
root@:/home/oracle/ [] echo "ZONE=OracleVeritas" >> /etc/sysconfig/network-scripts/ifcfg-eth2
root@:/home/oracle/ [] echo "ZONE=OracleVeritas" >> /etc/sysconfig/network-scripts/ifcfg-bond0

Restart the firewall service:

root@:/home/oracle/ [] systemctl restart firewalld

… and it should be active:

root@:/home/postgres/ [] firewall-cmd --get-active-zones
OracleVeritas
  interfaces: eth0 eth2 bond0
public
  interfaces: eth1 eth3

root@:/home/oracle/ [] firewall-cmd --zone=OracleVeritas --list-all
OracleVeritas (active)
  target: default
  icmp-block-inversion: no
  interfaces: bond0 eth0 eth2
  sources: 
  services: 
  ports: 4145/udp 4145/tcp 5634/tcp 8199/tcp 8989/tcp 14141/tcp 14144/tcp 14144/udp 14149/tcp 14149/udp 14150/tcp 14155/tcp 14155/udp 14156/tcp 14156/udp 443/tcp 49152-65535/tcp 49152-65535/udp 1521/tcp 3872/tcp
  protocols: 
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 

Just for completeness: You can also directly check the configuration file for the zone:

root@:/home/oracle/ [] cat /etc/firewalld/zones/OracleVeritas.xml
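
The generated XML simply lists the services and ports added above. Abbreviated, it looks roughly like this (a sketch, not the complete file):

<?xml version="1.0" encoding="utf-8"?>
<zone>
  <service name="ssh"/>
  <port port="4145" protocol="udp"/>
  <port port="4145" protocol="tcp"/>
  <port port="5634" protocol="tcp"/>
  ...
  <port port="49152-65535" protocol="tcp"/>
  <port port="49152-65535" protocol="udp"/>
  <port port="1521" protocol="tcp"/>
  <port port="3872" protocol="tcp"/>
</zone>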

Hope this helps …


The post firewalld rules for Veritas Infoscale 7.3 with Oracle appeared first on Blog dbi services.

nVision Performance Tuning: 8. Interval Partitioning and Statistics Maintenance of Tree Selector Tables

David Kurtz - Mon, 2017-11-20 05:55
This blog post is part of a series that discusses how to get optimal performance from PeopleSoft nVision reporting as used in General Ledger.

The decision to use interval partitioning on the tree selector tables came from the need to have accurate statistics for the optimizer when parsing nVision queries.  It is not possible to introduce hints into nVision SQL. The dynamic nature of the code means it is not viable to consider any of the forms of database plan stability across the whole application (although it may be possible to use SQL Profiles in limited cases). Therefore, as far as possible, the optimizer has to choose the best plan on its own. Without accurate optimizer statistics, I have found that the optimizer will usually not choose to use a Bloom filter.
If the selector tables are not partitioned, then each table will usually contain rows for many different selectors. Even with perfectly up to date statistics, including a histogram on SELECTOR_NUM, and extended statistics on SELECTOR_NUM and RANGE_FROM_nn, I have found that Oracle miscosts the join on RANGE_FROM_nn and the attribute on the ledger table.
I propose that the tree selector tables should be interval partitioned such that each selector goes into its own partition.
CREATE TABLE PSYPSTREESELECT10 
(SELECTOR_NUM INTEGER NOT NULL,
TREE_NODE_NUM INTEGER NOT NULL,
RANGE_FROM_10 VARCHAR2(10) NOT NULL,
RANGE_TO_10 VARCHAR2(10) NOT NULL)
PARTITION BY RANGE (selector_num) INTERVAL (1)
(PARTITION VALUES LESS THAN(2))
TABLESPACE "PTWORK"
STORAGE(INITIAL 128K NEXT 128K)
/
INSERT INTO PSYPSTREESELECT10
( SELECTOR_NUM, TREE_NODE_NUM, RANGE_FROM_10, RANGE_TO_10)
SELECT SELECTOR_NUM, TREE_NODE_NUM, RANGE_FROM_10, RANGE_TO_10
FROM PSTREESELECT10
/
DROP TABLE PSTREESELECT10
/
ALTER TABLE PSYPSTREESELECT10 RENAME TO PSTREESELECT10
/
  • nVision queries will reference a single selector with a literal value, and therefore Oracle will eliminate all but that single partition at parse time and will use the statistics on that partition to determine how to join it to other tables.
  • Statistics only have to be maintained at partition level, and not at table level. 
  • Now that there is only a single selector number in any one partition, there is no need for extended statistics. 
  • Dynamic selectors are needed to get equality joins between selectors and ledger tables, and hence to make use of the Bloom filter; this means that statistics on the selector tables will inevitably be out of date. The PL/SQL triggers and package that log the selector usage are also used to maintain statistics on the partition. 
  • Partitions do not have to be created in advance. They will be created automatically by Oracle as they are required by the application. 
Compound Triggers on Tree Selector Tables
There is a pair of compound DML triggers on each tree selector table, one for insert and one for delete.
  • The after row section captures the current selector number. The one for insert also counts the number of rows and tracks the minimum and maximum values of the RANGE_FROM_nn and RANGE_TO_nn columns. 
  • The after statement section updates the selector log. The insert trigger directly updates the statistics on the partition, including the minimum and maximum values of the range columns.
    • It is not possible to collect statistics in a trigger in the conventional manner because dbms_stats includes an implicit commit. If dbms_stats were called within an autonomous transaction, it could not see the uncommitted insert into the tree selector that fired the trigger. Hence the trigger calls the XX_NVISION_SELECTORS package, which uses dbms_stats.set_table_stats and dbms_stats.set_column_stats to set values directly (see the sketch after this list). 
    • The trigger then submits a job, via dbms_job, that will collect statistics on the partition in the conventional way. The job number is recorded on the selector log. The job will be picked up by the job queue when the insert commits. However, there can be a short lag between scheduling the job and it running, so the first query in the nVision report can be parsed before the fresh statistics are available. 
    • The procedure that directly sets the statistics has to make some sensible assumptions about the data. These mostly lead the optimizer to produce good execution plans. However, testing has shown that performance can be better with conventionally collected statistics. Therefore, the trigger both directly sets statistics and submits the database job to collect the statistics.
    • It is important that table level statistics are not maintained by either technique as this would lead to locking between sessions. Locking during partition statistics maintenance will not occur as no two sessions populate the same selector number, and each selector is in a different partition. A table statistics preference for granularity is set to PARTITION on each partitioned tree selector table. 
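
A minimal sketch of that direct statistics maintenance follows. The procedure name, parameters and the submitted job string are illustrative; the real XX_NVISION_SELECTORS package also maintains column statistics and the selector log.

CREATE OR REPLACE PROCEDURE set_selector_stats
(p_table_name   VARCHAR2 -- e.g. 'PSTREESELECT10'
,p_selector_num INTEGER
,p_num_rows     NUMBER
) AS
  l_partname user_tab_partitions.partition_name%TYPE;
  l_job      BINARY_INTEGER;
BEGIN
  -- Interval partition names are system generated (SYS_Pnnnn), so find the
  -- partition whose high value is selector_num+1.  HIGH_VALUE is a LONG and
  -- must be fetched into PL/SQL before it can be converted to a number.
  FOR p IN (SELECT partition_name, high_value
            FROM   user_tab_partitions
            WHERE  table_name = p_table_name) LOOP
    IF TO_NUMBER(p.high_value) = p_selector_num + 1 THEN
      l_partname := p.partition_name;
    END IF;
  END LOOP;
  -- set the partition statistics directly, avoiding the implicit commit of a gather
  dbms_stats.set_table_stats(ownname => user, tabname => p_table_name,
                             partname => l_partname, numrows => p_num_rows);
  -- submit a job to gather partition statistics conventionally;
  -- it will only run once the inserting transaction commits
  dbms_job.submit(l_job,
    'dbms_stats.gather_table_stats(user,'''||p_table_name
    ||''',partname=>'''||l_partname
    ||''',granularity=>''PARTITION'');');
END;
/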
The combination of dynamic selectors, single-value joins, interval partitioning of the selector tables, and logging triggers driving timely statistics maintenance on the partitions delivers execution plans that perform well and make effective use of engineered system features.
However, there are two problems that then have to be worked around. 
Library Cache Contention 
Some data warehouse systems can need new partitions in tables daily or even hourly. If partitions were not created in a timely fashion, the application would either break because the partition was missing, or performance would degrade as data accumulated in a single partition. Oracle intended interval partitions to free the DBA from the need to actively manage such partitioning on a day-to-day basis by creating them automatically as the data was inserted. 
However, on a busy nVision system, this solution could create thousands of new selectors in a single day, and therefore thousands of new partitions. This is certainly not how Oracle intended interval partitioning to be used.  I freely admit that I am abusing the feature.
If you have multiple concurrent nVision reports running, using dynamic selectors, you will have multiple database sessions concurrently inserting rows into the tree selector tables each with a different selector number, and therefore each creating new partitions mostly into the same tables.
The recursive code that creates the new partitions and maintains the data dictionary acquires a lock on the object handle in the library cache to prevent other sessions from changing it at the same time.  As the number of concurrent nVision reports increases, you will start to see nVision sessions waiting on the library cache lock event during the insert into the tree selector table while the new partition is being created. Perversely, as the performance of the nVision queries improves (as you refine tree selector settings) you may find this contention increases. 
The workaround to this is to create multiple database schemas, each with copies of the partitioned tree selector tables (similarly interval partitioned) and the PSTREESELCTL table (to manage static selectors in those schemas). Synonyms will be required for all other tables referenced by nVision queries. 
Then a trigger on the process scheduler request table PSPRCSRQST will arbitrarily set the current schema of the nVision batch processes to one of those schemas. The nVision reports still connect and run with the privileges of the Owner ID (usually SYSADM), but the tables are resolved from the current schema. 
I have used a hash function to distribute nVision processes between schemas. I suggest the number of schemas should be a power of 2 (i.e. 2, 4, 8, etc.).
CREATE OR REPLACE TRIGGER sysadm.nvision_current_schema
BEFORE UPDATE OF runstatus ON sysadm.psprcsrqst
FOR EACH ROW
WHEN (new.runstatus IN('7') AND new.prcsname = 'RPTBOOK' AND new.prcstype like 'nVision%')
BEGIN
  EXECUTE IMMEDIATE 'ALTER SESSION SET current_schema = NVEXEC'
    ||LTRIM(TO_CHAR(dbms_utility.get_hash_value(:new.prcsinstance,1,8),'00'));
EXCEPTION
  WHEN OTHERS THEN NULL; --exception deliberately coded to suppress all exceptions
END;
/
Thus different nVision reports use different tree selector tables in different schemas rather than trying to create partitions in the same tree selector table, thus avoiding the library cache locking.
Limitation on the Maximum Number of Partitions
In Oracle, it is not possible to have more than 1048575 (1024K - 1) partitions in a table. That applies to all forms of partitioning.
The tree selector tables are interval partitioned on selector number with an interval of 1 starting with 1. So the highest value of SELECTOR_NUM that they can store is 1048575.
INSERT INTO pstreeselect05 VALUES(1048576,0,' ',' ')
*
ERROR at line 1:
ORA-14300: partitioning key maps to a partition outside maximum permitted number of partitions
New selector numbers are allocated sequentially from PSTREESELNUM. Left unattended, the selector numbers used by nVision will increase until they eventually hit this limit, and then nVision and ad-hoc queries that use the tree-exists operator will start to fail with this error.
Therefore, a procedure RESET_SELECTOR_NUM has been added to the PL/SQL package to reset the selector number allocation table PSTREESELNUM back to 0, delete any tree selector entries for which there is no logging entry, and then run the regular selector PURGE procedure in the same package that will drop unwanted interval partitions.

Recommendation: XX_NVISION_SELECTORS.RESET_SELECTOR_NUM should be scheduled to run sufficiently frequently to prevent the selector number from reaching the maximum.
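
For example, with dbms_scheduler (a sketch: the job name and weekly schedule are illustrative; choose a frequency that matches how quickly your system consumes selector numbers):

BEGIN
  dbms_scheduler.create_job(
    job_name        => 'RESET_NVISION_SELECTOR_NUM',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN sysadm.xx_nvision_selectors.reset_selector_num; END;',
    repeat_interval => 'FREQ=WEEKLY;BYDAY=SUN;BYHOUR=3',
    enabled         => TRUE);
END;
/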

What causes a materialized view to get invalidated

Tom Kyte - Mon, 2017-11-20 05:46
Hello, I have a materialized view whose definition looks like this: CREATE MATERIALIZED VIEW <owner>.<materialized view name> (<column list>) TABLESPACE <tablespace name> PCTUSED 0 PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE (...
Categories: DBA Blogs

Distributed Option in Oracle 7

Tom Kyte - Mon, 2017-11-20 05:46
Hi Tom! Is the Oracle Distributed Option required for accessing remote databases? Also, is there some restriction on database version, i.e. should the version be the same on all nodes?
Categories: DBA Blogs

Is it an index, a table or what?

Yann Neuhaus - Sun, 2017-11-19 10:54

A recent tweet from Kevin Closson pointed out that in PostgreSQL it might be confusing whether something is an index or a table. Why is it like that? Let's have a look and start by rebuilding the example from Kevin:

To get into the same situation Kevin described, we need something like this:

postgres=# create table base4(custid int, custname varchar(50));
CREATE TABLE
postgres=# create index base4_idx on base4(custid);
CREATE INDEX

Assuming we forgot that we created such an index, and come back later and try to create it again, we get exactly the same behavior:

postgres=# create index base4_idx on base4(custid);
ERROR:  relation "base4_idx" already exists
postgres=# drop table base4_idx;
ERROR:  "base4_idx" is not a table
HINT:  Use DROP INDEX to remove an index.
postgres=# 

The keyword here is “relation”. In PostgreSQL a “relation” does not necessarily mean a table. What you need to know is that PostgreSQL stores everything that looks like a table/relation (e.g. has columns) in the pg_class catalog table. When we check our relations there:

postgres=# select relname from pg_class where relname in ('base4','base4_idx');
  relname  
-----------
 base4
 base4_idx
(2 rows)

… we can see that both the table and the index are somehow treated as a relation. The difference is here:

postgres=# \! cat a.sql
select a.relname 
     , b.typname
  from pg_class a
     , pg_type b 
 where a.relname in ('base4','base4_idx')
   and a.reltype = b.oid;
postgres=# \i a.sql
 relname | typname 
---------+---------
 base4   | base4
(1 row)

Indexes do not have an entry in pg_type, but tables do. What is even more interesting is that the “base4” table is a type itself. This means that for every table you create, a composite type is created as well that describes the structure of the table. You can even link back to pg_class:

postgres=# select typname,typrelid from pg_type where typname = 'base4';
 typname | typrelid 
---------+----------
 base4   |    32901
(1 row)

postgres=# select relname from pg_class where oid = 32901;
 relname 
---------
 base4
(1 row)

When you want to know what type of relation something is, the easiest way is to ask like this:

postgres=# select relname,relkind from pg_class where relname in ('base4','base4_idx');
  relname  | relkind 
-----------+---------
 base4     | r
 base4_idx | i
(2 rows)

… where:

  • r = ordinary table
  • i = index
  • S = sequence
  • t = TOAST table
  • m = materialized view
  • c = composite type
  • f = foreign table
  • p = partitioned table

Of course there are also catalog tables for tables and indexes, so you can also double check there.
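For an index, for example, pg_index records which table it belongs to; casting the OIDs to regclass makes the output readable:

postgres=# select indexrelid::regclass as index_name, indrelid::regclass as table_name
postgres-#   from pg_index
postgres-#  where indexrelid = 'base4_idx'::regclass;
 index_name | table_name 
------------+------------
 base4_idx  | base4
(1 row)

Knowing all this, the message is pretty clear: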

postgres=# create index base4_idx on base4(custid);
ERROR:  relation "base4_idx" already exists
postgres=# drop relation base4_idx;
ERROR:  syntax error at or near "relation"
LINE 1: drop relation base4_idx;
             ^
postgres=# drop table base4_idx;
ERROR:  "base4_idx" is not a table
HINT:  Use DROP INDEX to remove an index.
postgres=# 

PostgreSQL finally is telling you that “base4_idx” is an index and not a table, which is fine. Of course you could think that PostgreSQL should do that on its own, but it is also true that when you want to drop something, you should be sure about what you really want to drop.


The post Is it an index, a table or what? appeared first on Blog dbi services.

How to Setup Node.js and Oracle JET on Oracle Linux

Andrejus Baranovski - Sun, 2017-11-19 08:35
What if you want to develop Oracle JET on Oracle Linux? Certainly this is possible - both Node.js and Oracle JET run on Oracle Linux or any other Linux distribution. If you follow the Oracle JET Setup Guide, you will see Node.js listed as a prerequisite. But it may not be exactly straightforward to install Node.js on Oracle Linux. Below I will guide you through the steps.

Run this command to install the development tools needed to build native add-ons:

yum install -y gcc-c++ make


Enable Node.js yum repository:

curl -sL https://rpm.nodesource.com/setup_8.x | sudo -E bash -


Install Node.js:

sudo yum install nodejs


Verify that node and npm were installed successfully, for example by printing their versions:
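
node -v
npm -v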


Next you can follow the steps described in the Oracle JET Setup Guide; it is all straightforward. Install Oracle JET:

sudo npm -g install @oracle/ojet-cli


Create a new JET application with the OJET utility:

sudo ojet create jetwinapp --template=navdrawer


Run the application with the OJET utility:

sudo ojet serve


The JET application now runs on Oracle Linux.

BEFORE Triggers Fired Multiple Times

Tom Kyte - Sat, 2017-11-18 17:06
Dear Tom, I have a question about triggers execution: according to documentation at http://download-uk.oracle.com/docs/cd/B19306_01/appdev.102/b14251/adfns_triggers.htm#sthref1169 "BEFORE Triggers Fired Multiple Times If an UPDATE or DELETE ...
Categories: DBA Blogs

Behavior of dba_hist_sgastat metrics 'free memory' and 'SQL' for pool 'shared pool' has changed with 12.1.0.2

Tom Kyte - Sat, 2017-11-18 17:06
We are upgrading from 11.2.0.4 to 12.1.0.2 and we are seeing a new behavior for the "free memory" in the shared pool. We are looking at AWR in dba_hist_sgastat. While in 11g it goes up and down, in 12c it is non-decreasing, over several days at least...
Categories: DBA Blogs

Docker-CE: initial configuration

Dietrich Schroff - Sat, 2017-11-18 15:43
After installing Docker on an Ubuntu server, I was keen to see what kind of processes and files are there.
# ps x|grep docker
 5852 ?        Ssl    0:05 /usr/bin/dockerd -H fd://
 5867 ?        Ssl    0:04 docker-containerd --config /var/run/docker/containerd/containerd.toml

So the initial configuration is not stored in /etc.

The configuration file looks like this:
# cat /var/run/docker/containerd/containerd.toml
root = "/var/lib/docker/containerd/daemon"
state = "/var/run/docker/containerd/daemon"
subreaper = false
oom_score = -500

[grpc]
  address = "/var/run/docker/containerd/docker-containerd.sock"
  uid = 0
  gid = 0

[debug]
  address = "/var/run/docker/containerd/docker-containerd-debug.sock"
  uid = 0
  gid = 0
  level = "info"

[metrics]
  address = ""

[cgroup]
  path = ""

[plugins]
  [plugins.linux]
    shim = "docker-containerd-shim"
    runtime = "docker-runc"
    runtime_root = "/var/lib/docker/runc"
    no_shim = false
    shim_debug = false
    shim_no_newns = false

The directory /var/run/docker/containerd/daemon seems to be important, so let's take a look:
# find /var/run/docker/containerd/daemon
/var/run/docker/containerd/daemon
/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux
/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby
But these entries are only directories...

So what about /var/lib? Here we go:
# find /var/lib/docker/ -maxdepth 1
/var/lib/docker/
/var/lib/docker/overlay2
/var/lib/docker/swarm
/var/lib/docker/image
/var/lib/docker/tmp
/var/lib/docker/runtimes
/var/lib/docker/trust
/var/lib/docker/plugins
/var/lib/docker/containerd
/var/lib/docker/containers
/var/lib/docker/builder
/var/lib/docker/volumes
/var/lib/docker/network

Within the containers directory you can find information about the containers which run / ran on your Docker system:

# ls -l
insgesamt 12
drwx------ 4 root root 4096 Nov 16 22:12 0c7567bb965449f5f2f3dfadfc38f0226bdb42cc6d46daa4641042090e0405c7
drwx------ 4 root root 4096 Nov 16 22:57 28b7f70147aabc94bd174ce7ad7f5d48ed5610ffaa21733d3549ee6e998ee7c3
drwx------ 4 root root 4096 Nov 16 21:59 2db1c3ed1dafc85369536c382e02b6a23e5d134d2ba3d56a738f3441fb624b04

The first characters of the directory names match the container IDs:
# docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                  PORTS               NAMES
28b7f70147aa        hello-world         "/hello"            2 days ago          Exited (0) 2 days ago                       trusting_bartik
0c7567bb9654        hello-world         "/hello"            2 days ago          Exited (0) 2 days ago                       festive_wozniak
2db1c3ed1daf        hello-world         "/hello"            2 days ago          Exited (0) 2 days ago                       epic_tesla
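
To confirm the mapping, docker inspect can print the full ID for a container, which matches one of the directory names above:

# docker inspect -f '{{.Id}}' 28b7f70147aa
28b7f70147aabc94bd174ce7ad7f5d48ed5610ffaa21733d3549ee6e998ee7c3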


Online Videos with Lucas Jellema–Live recording of Talks, Interviews and Stuff

Amis Blog - Sat, 2017-11-18 11:58

An overview of some of my recent recordings:

expected soon:

November 2017 – Oracle Developer Community Podcast – What's Hot? Tech Trends That Made a Real Difference in 2017 (with Chris Richardson, Frank Munz, Pratik Patel, Lonneke Dikmans, Bob Rhubart and Lucas Jellema) – https://blogs.oracle.com/developers/podcast-tech-trends-that-made-a-real-difference-in-2017

November 2nd – Oracle Developer Community Two Minute Tech Tip – No Excuses: Get Hands-On Experience With New Technologies – https://www.youtube.com/watch?v=NrfrWMq0m9Y


October 3rd – Oracle OpenWorld DevLive – Interview with Bob Rhubart (Oracle Developer Community) on Kafka Streams, Java Cloud, PaaS Integration – https://www.youtube.com/watch?v=L_mhNCT2nao


October, Oracle Code/JavaOne San Francisco – Real Time UI with Apache Kafka Streaming Analytics of Fast Data and Server Push – https://www.youtube.com/watch?v=izTuO3IUBBY 


August 23rd – APACOUC (Asia Pacific Oracle User Council) Webinar Tour 2017 –  Modern DevOps across Technologies, On Premises and Clouds – https://www.youtube.com/watch?v=q8-wvvod85U

August 14th – APACOUC (Asia Pacific Oracle User Council) Webinar Tour 2017 – The Oracle Application Container Cloud as the Microservices Platform – https://youtu.be/LkMomfG6rv4

July 7th – APACOUC (Asia Pacific Oracle User Council) Webinar Tour 2017 – The Art of Intelligence – A Practical Introduction Machine Learning – https://youtu.be/XmqQhDsJnhY

June – Oracle Code Brussels – DevLive: Get on the (Event) Bus! with Lucas Jellema – https://www.youtube.com/watch?v=4raJRNFRJFk 

April 20th – Oracle Code London – Event Bus as Backbone for Decoupled Microservice Choreography – https://www.youtube.com/watch?v=dRd-QggXqiA


April 20th – Oracle Code London – DevLive: Lucas Jellema on Decoupled Microservices with Event Bus – https://www.youtube.com/watch?v=T0gZhzzu5lg

Older Resources

October 2015 – 2 Minute Tech Tip The Evolution of Flashback in Oracle Database – https://www.youtube.com/watch?v=WOcsYtX69N8

January 2015 – Interviewing Simone Geib (Oracle SOA Suite Product Manager) – https://www.youtube.com/watch?v=MrtpAW9aOHQ 

September 2014 – 2 Minute Tech Tip – Vagrant, Puppet, Docker, and Packer – https://www.youtube.com/watch?v=36ZmfLMFPJI

October 2013 – Interview with Bob Rhubart on SOA, Agile, DevOps, and Transformation – https://www.youtube.com/watch?v=rtiwGqmzmWo


March 2013 – On User Experience – with Bob Rhubart & Jeremy Ashley – https://www.youtube.com/watch?v=8Jm_cVCoQ3o


The post Online Videos with Lucas Jellema–Live recording of Talks, Interviews and Stuff appeared first on AMIS Oracle and Java Blog.

Run Oracle Database in Docker using prebaked image from Oracle Container Registry–a two minute guide

Amis Blog - Sat, 2017-11-18 05:38

This article will show how to run an Oracle Database on a Docker host using the prebaked images on Oracle Container Registry. It is my expectation that it takes very little manual effort to run the full 12.2.0.1 Oracle Enterprise Edition database – just pull and run the Docker image. Once it is running, I get the usual Docker benefits such as clean environment management, linking from other containers, quick stop and start, running scripts inside the container, etc.

The minimum requirements for the container are 8GB of disk space and 2GB of memory. There is a slim alternative that requires fewer resources: the slim (12.2.0.1-slim) version of EE does not have support for Analytics, Oracle R, Oracle Label Security, Oracle Text, Oracle Application Express and Oracle Database Vault. I am not sure yet how much that shaves off the resource requirements.

My recent article Quick introduction to Oracle Container Registry–running one of Oracle’s prebaked images explained the first steps for getting started with Oracle Container Registry, including how to sign up and accept terms and conditions for individual images.

Once that is out of the way, we can get going with running the database.

The steps are:

  1. start docker
  2. login to Oracle Container Registry
  3. pull image for Oracle Database – I will use the enterprise edition database image in this article
  4. run a docker container based on that image
  5. start interacting with the database, for example from SQLcl or SQL Developer.

In terms of work, it will take less than two minutes of your time. The time before the database is running is mainly determined by the time it takes to download the image. After that, running the container takes just a few dozens of seconds.

The Oracle Database images are published on the website for the Container Registry.


Copy the docker pull command in the upper right hand corner to the clipboard. It is also worth remembering the docker run command for running the database image.

Note that this webpage contains more useful information:

  • how to run SQL scripts from within the container
  • how to expose the database port outside the container
  • how to specify SID (default ORCLCDB), PDB (default is ORCLPDB1), DOMAIN (default is localdomain) and allocated MEMORY (default is 2 GB)
  • how to change SYS password (default is Oradoc_db1)
  • how to make use of a volume external to the database container for storing data files, redo logs, audit logs, alert logs and trace files
  • how to run a database server image against an existing set of database files
Let’s run a database

After starting Docker (on my laptop I am using the Docker Quick Start Terminal in the Docker Toolbox), log in to the container registry using your Oracle account.


Then pull the database image, using the command

docker pull container-registry.oracle.com/database/enterprise

07:09 Start Pull
10:28 Start Extracting
10:30 Image is available, ready to run containers from it

The download took over three and a half hours. I was doing stuff over that time – so no time lost.

Once the pull was finished, the image was added to the local cache of Docker images. I can now run the database.

docker run -d -it --name ORA12201_1 -P container-registry.oracle.com/database/enterprise:12.2.0.1

The value ORA12201_1 is the self-picked name for the container.


Here -P indicates that the ports can be chosen by docker. The mapped port can be discovered by executing

docker port ORA12201_1

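The output shows which host port each exposed container port was mapped to; based on the port used later in this article it was along these lines (a reconstruction, as the original screenshots are not reproduced):

1521/tcp -> 0.0.0.0:32769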

In a few minutes (I am not sure exactly how long it took) the container status is reported as healthy:


Once the container status is healthy, the database server can be connected to by executing sqlplus from within the container:

docker exec -it ORA12201_1 bash -c "source /home/oracle/.bashrc; sqlplus /nolog"


In addition to connecting to the database from within the container, we can also treat the container running the database as a black box that exposes the database's internal port 1521 at port 32769. Any tool capable of communicating with a database can then be used in the regular way, provided we also have the IP address of the Docker Host if the connection is not made from that machine itself.


Creating a database connection in SQL Developer is done like this:


Using SYS/Oradoc_db1 as the credentials, the Docker Host IP address for the hostname and the port mapped by Docker to port 1521 in the container, 32769 in this case. The Service Name is composed of the PDB name and the domain name:  ORCLPDB1.localdomain.

A sample query confirms the connection works. Connecting with SQLcl is similar:

sql sys/Oradoc_db1@192.168.99.100:32769/ORCLPDB1.localdomain as sysdba


To stop the container – and the database:

docker stop 62eb

It takes a few seconds to stop cleanly.


Restarting takes about 1 minute before the database is up and running again.


Note: with this basic approach, all database files are created in the container’s file system. And are not available elsewhere nor will they survive the removal of the container. A better way of handling these files is through the mounting of a host folder for storing files or through a Docker Volume.
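
For example (a sketch only: /ORCL is the data volume location documented for this image, while the host path and container name are illustrative):

docker run -d -it --name ORA12201_2 -P -v /oradata/ORA12201_2:/ORCL container-registry.oracle.com/database/enterprise:12.2.0.1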

Note: when running on Windows using Docker Toolbox, this may be convenient for increasing the size of memory and disk of the default VM: https://github.com/crops/docker-win-mac-docs/wiki/Windows-Instructions-(Docker-Toolbox)

The post Run Oracle Database in Docker using prebaked image from Oracle Container Registry–a two minute guide appeared first on AMIS Oracle and Java Blog.

Garbled display while running FMW installer on Linux

Amardeep Sidhu - Sat, 2017-11-18 04:56

A colleague faced this while running the FMW installer on a Linux machine. The installer window came up garbled (the screenshot is not reproduced here).

This thread gave a clue that it could have something to do with fonts, so I checked what font-related packages were installed.

[root@someserver ~]# rpm -aq |grep -i font
stix-fonts-1.1.0-5.el7.noarch
xorg-x11-font-utils-7.5-20.el7.x86_64
xorg-x11-fonts-cyrillic-7.5-9.el7.noarch
xorg-x11-fonts-ISO8859-1-75dpi-7.5-9.el7.noarch
xorg-x11-fonts-ISO8859-9-100dpi-7.5-9.el7.noarch
xorg-x11-fonts-ISO8859-9-75dpi-7.5-9.el7.noarch
libXfont-1.5.2-1.el7.x86_64
xorg-x11-fonts-ISO8859-14-100dpi-7.5-9.el7.noarch
xorg-x11-fonts-ISO8859-1-100dpi-7.5-9.el7.noarch
xorg-x11-fonts-75dpi-7.5-9.el7.noarch
xorg-x11-fonts-ISO8859-2-100dpi-7.5-9.el7.noarch
libfontenc-1.1.3-3.el7.x86_64
xorg-x11-fonts-ethiopic-7.5-9.el7.noarch
xorg-x11-fonts-100dpi-7.5-9.el7.noarch
xorg-x11-fonts-misc-7.5-9.el7.noarch
fontpackages-filesystem-1.44-8.el7.noarch
fontconfig-2.10.95-11.el7.x86_64
xorg-x11-fonts-ISO8859-2-75dpi-7.5-9.el7.noarch
xorg-x11-fonts-ISO8859-14-75dpi-7.5-9.el7.noarch
xorg-x11-fonts-Type1-7.5-9.el7.noarch
xorg-x11-fonts-ISO8859-15-75dpi-7.5-9.el7.noarch
[root@someserver ~]#

stix-fonts looked suspicious to me, so I removed it with rpm -e stix-fonts.

That actually fixed the issue: after this the installer window displayed fine.


Categories: BI & Warehousing

root.sh fails with CRS-2101:The OLR was formatted using version 3

Amardeep Sidhu - Sat, 2017-11-18 04:33

I got this while trying to install 11.2.0.4 RAC on Red Hat Linux 7.2: root.sh fails with a message like

ohasd failed to start
Failed to start the Clusterware. Last 20 lines of the alert log follow:
2017-11-09 15:43:37.883:
[client(37246)]CRS-2101:The OLR was formatted using version 3.

This is bug 18370031; the patch needs to be applied before running root.sh, as outlined below.
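
In outline (a sketch only: the staging path is illustrative and the patch README is the authoritative source), the patch is applied to the Grid Infrastructure home with OPatch, as the grid software owner, after the software-only installation and before executing root.sh:

$ cd /stage/18370031
$ $ORACLE_HOME/OPatch/opatch apply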

Categories: BI & Warehousing

Unstructured vs. structured

Yann Neuhaus - Sat, 2017-11-18 01:13

The title of this blog post was: “Tracing DBMS_RCVMAN for reclaimable archivelogs” until I started to write the conclusion…

In a previous post I mentioned that there’s a bug with archivelog deletion policy when you want to mention both the ‘BACKED UP … TIMES TO …’ and ‘APPLIED’ or ‘SHIPPED’ as conditions for archived logs to be reclaimable. I opened a SR, they didn’t even try to reproduce it (and I can guarantee you can reproduce it in 2 minutes on any currently supported version) so I traced it myself to understand the bug and suggest the fix.

I traced DBMS_RCVMAN with Kernel Recovery Area (KRA) SQL tracing:

SQL> alter session set events 'trace[kra_sql] disk high, memory disable';
SQL> exec dbms_backup_restore.refreshAgedFiles;
SQL> alter session set events 'trace[kra_sql] off';

I knew refreshAgedFiles checks for reclaimable files in the FRA from an old bug where we had to run it manually on databases in mount state.

I compared the traces when changing the order of ‘APPLIED’ and ‘BACKED UP’ and found the following:

< *:KRA_SQL:kraq.c@1035:kraqgdbg(): DBGRCVMAN: setRedoLogDeletionPolicy with policy = TO BACKED UP 1 TIMES TO DISK APPLIED ON ALL STANDBY
---
> *:KRA_SQL:kraq.c@1035:kraqgdbg(): DBGRCVMAN: setRedoLogDeletionPolicy with policy = TO APPLIED ON ALL STANDBY BACKED UP 1 TIMES TO DISK
5340c5340
< *:KRA_SQL:kraq.c@1035:kraqgdbg(): DBGRCVMAN: policy = TO BACKED UP 1 TIMES TO DISK APPLIED ON ALL STANDBY
---
> *:KRA_SQL:kraq.c@1035:kraqgdbg(): DBGRCVMAN: policy = TO APPLIED ON ALL STANDBY BACKED UP 1 TIMES TO DISK
5343c5343
< *:KRA_SQL:kraq.c@1035:kraqgdbg(): DBGRCVMAN: EXITING setRedoLogDeletionPolicy with policy = TO BACKED UP 1 TIMES TO DISK APPLIED ON ALL STANDBY with alldest = 1
---
> *:KRA_SQL:kraq.c@1035:kraqgdbg(): DBGRCVMAN: EXITING setRedoLogDeletionPolicy with policy = TO APPLIED ON ALL STANDBY BACKED UP 1 TIMES TO DISK with alldest = 1
5350,5351c5350,5351
< *:KRA_SQL:kraq.c@1035:kraqgdbg(): DBGRCVMAN: parseBackedUpOption devtype=DISK
< *:KRA_SQL:kraq.c@1035:kraqgdbg(): DBGRCVMAN: parseBackedUpOption backed up conf - devtype=DISK , backups=1
---
> *:KRA_SQL:kraq.c@1035:kraqgdbg(): DBGRCVMAN: parseBackedUpOption devtype=DISK
> *:KRA_SQL:kraq.c@1035:kraqgdbg(): DBGRCVMAN: parseBackedUpOption backed up conf - devtype=DISK, backups=1
5363c5363
< *:KRA_SQL:kraq.c@1035:kraqgdbg(): DBGRCVMAN: EXITING getBackedUpAl with TRUE
---
> *:KRA_SQL:kraq.c@1035:kraqgdbg(): DBGRCVMAN: EXITING getBackedUpAl with key = 128 stamp = 958068130
5367c5367
< *:KRA_SQL:kraq.c@1035:kraqgdbg(): DBGRCVMAN: EXITING getBackedUpFiles with: no_data_found
---
> *:KRA_SQL:kraq.c@1035:kraqgdbg(): DBGRCVMAN: EXITING getBackedUpFiles

You see at the top the difference in the way I mentioned the deletion policy. You see at the bottom that the first one (starting with ‘BACKED UP’) didn’t find archivelogs being backed up (no_data_found). But the second one (starting with ‘APPLIED’) mentioned the sequence# 128.

But if you look carefully, you see another difference in the middle: the “devtype=DISK” has an additional space before the comma in the first case.

So I traced a bit further, including SQL_TRACE and I found that the deletion policy is just using some INSTR and SUBSTR parsing on the deletion policy text to find the policy, the backup times, and the device type. For sure, looking for backups with DEVICE_TYPE=’DISK ‘ instead of ‘DISK’ will not find anything and this is the reason for the bug: no archived logs backed up means no archived log reclaimable.

If you look closer at DBMS_RCVMAN you will find that the device type is extracted with SUBSTR(:1, 1, INSTR(:1, ‘ ‘)) when the device type is followed by a space, which is the reason of this additional space. The correct extraction should be SUBSTR(:1, 1, INSTR(:1, ‘ ‘)-1) and this is what I suggested on the SR.
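
The off-by-one is easy to demonstrate (the input string is illustrative of a policy text where the device type is followed by a space):

SQL> select substr('DISK APPLIED', 1, instr('DISK APPLIED', ' '))   bad
  2  ,      substr('DISK APPLIED', 1, instr('DISK APPLIED', ' ')-1) good
  3  from dual;

The first expression returns 'DISK ' with the trailing space; the second returns 'DISK'.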

So what?

Writing the conclusion made me change the title. Currently, a lot of people are advocating for unstructured data, because it is easy (which rhymes with 'lazy'): store information as it comes and postpone the parsing into a more structured data type until you need to process it. This seems to be how the RMAN configuration is stored: as the text we entered, parsed later with simple text functions such as INSTR(), SUBSTR(), and LIKE. But you can see how a little bug, such as reading one additional character, has big consequences. If you look at the archivelog deletion policy syntax, you have a 50% chance of running into this bug in a Data Guard configuration. The Recovery Area will fill up and your database will be blocked. The controlfile grows. Or you notice it before and run a 'delete archivelog' statement without knowing the reason, wasting space by removing some recovery files from local storage that could have been kept for longer. If the deletion policy were parsed immediately when entered, like SQL DDL or PL/SQL APIs are, the issue would have been detected a long time ago. Structure and strong typing are the way to build robust applications.


The post Unstructured vs. structured appeared first on Blog dbi services.

how SPM works with cursor_sharing=force?

Tom Kyte - Fri, 2017-11-17 22:46
Hello Tom The post https://blogs.oracle.com/optimizer/how-do-adaptive-cursor-sharing-and-sql-plan-management-interact explains the interaction between cursor sharing and SPM quite clear. But I met some unexpected results if I set the cursor_...
Categories: DBA Blogs

Whitespaces bug

Tom Kyte - Fri, 2017-11-17 22:46
I have noticed a weird bug in our Oracle database as we are moving from one database (Oracle 11.2.0.3.0) to another database (Oracle 11.2.4.0). Database servers are in different data center with different operating systems and support teams. So the v...
Categories: DBA Blogs

Pages

Subscribe to Oracle FAQ aggregator