Feed aggregator

GoldenGate 12.2.0.2 installation in a Grid Infrastructure 12cR2

Yann Neuhaus - Thu, 2018-01-04 08:55

This post demonstrates how to deploy a fully supported Oracle GoldenGate instance as a set of cluster resources.

We are going to use the Oracle Grid Infrastructure Agents, called XAG, to integrate the GoldenGate instance within our cluster as a cluster-managed resource. This tool is shipped by Oracle with the Grid Infrastructure binaries from release 12c onwards, but it is recommended to download and use the latest standalone version instead of the bundled one (available here).

XAG provides an interface, “agctl”, which enables us to interact with GoldenGate resources the same way “srvctl” does for Oracle database resources within the Oracle cluster. The tool supports various products that can be integrated into the Oracle cluster, such as Tomcat, MySQL and so on.

Here is what can be done with XAG:

[oracle@rac001 ~]$ agctl
Manages Apache Tomcat, Apache Webserver, Goldengate, JDE Enterprise Server, MySQL Server, Peoplesoft App Server, Peoplesoft Batch Server, Peoplesoft PIA Server, Siebel Gateway, Siebel Server, WebLogic Administration Server as Oracle Clusterware Resources

Usage: agctl <verb> <object> [<options>]
   verbs: add|check|config|disable|enable|modify|query|relocate|remove|start|status|stop
   objects: apache_tomcat|apache_webserver|goldengate|jde_enterprise_server|mysql_server|peoplesoft_app_server|peoplesoft_batch_server|peoplesoft_pia_server|siebel_gateway|siebel_server|weblogic_admin_server
For detailed help on each verb and object and its options use:
   agctl <verb> --help or
   agctl <verb> <object> --help

We are going to deploy this GoldenGate HA configuration on top of our existing 12.2 Grid Infrastructure. For this demo I chose to deploy the whole GoldenGate instance and its binaries to a dedicated ACFS mount point, to be sure my GoldenGate resources will have access to their data regardless of the node the GoldenGate instance is running on. It could also be DBFS or NFS.

Step 1 – Create an ACFS mount point called /acfsgg

[oracle@rac001 Disk1]$ /u01/app/12.2.0/grid/bin/asmcmd volcreate -G DGFRA -s 4G --width 1M --column 8 ACFSGG
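
# the ACFS volume device name used below (/dev/asm/acfsgg-215) can be retrieved with volinfo, for example:
[oracle@rac001 Disk1]$ /u01/app/12.2.0/grid/bin/asmcmd volinfo -G DGFRA ACFSGG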

[root@rac001 ~]# /sbin/mkfs -t acfs /dev/asm/acfsgg-215
[root@rac001 ~]# /u01/app/12.2.0/grid/bin/srvctl add filesystem -d /dev/asm/acfsgg-215 -m /acfsgg -u oracle -fstype ACFS -autostart ALWAYS
[root@rac001 ~]# /u01/app/12.2.0/grid/bin/srvctl start filesystem -d /dev/asm/acfsgg-215
[root@rac001 ~]# chown oracle:oinstall /acfsgg
[root@rac001 ~]# chmod 775 /acfsgg

[oracle@rac001 ~]$ ./execall "df -hT /acfsgg"
rac001: Filesystem          Type  Size  Used Avail Use% Mounted on
rac001: /dev/asm/acfsgg-215 acfs  4.0G   85M  4.0G   3% /acfsgg
rac002: Filesystem          Type  Size  Used Avail Use% Mounted on
rac002: /dev/asm/acfsgg-215 acfs  4.0G   85M  4.0G   3% /acfsgg

 

Step 2 – Install GoldenGate binaries in /acfsgg

cd /u01/install/
unzip 122022_fbo_ggs_Linux_x64_shiphome.zip
cd fbo_ggs_Linux_x64_shiphome/Disk1

[oracle@rac001 Disk1]$ ./runInstaller -silent -nowait -showProgress \
INSTALL_OPTION=ORA12c \
SOFTWARE_LOCATION=/acfsgg \
START_MANAGER=false \
MANAGER_PORT= \
DATABASE_LOCATION= \
INVENTORY_LOCATION=/u01/app/oraInventory \
UNIX_GROUP_NAME=oinstall

[...]
Finish Setup successful.
The installation of Oracle GoldenGate Core was successful.
Please check '/u01/app/oraInventory/logs/silentInstall2018-01-04_06-37-02AM.log' for more details.
Successfully Setup Software.

 

Step 3 – Install the latest version of XAG (recommended by Oracle) on all nodes

# creation of the XAG home on all nodes as root
[root@rac001 ~]# mkdir /u01/app/xag
[root@rac001 ~]# chown oracle. /u01/app/xag

# back as oracle
[oracle@rac001 ~]$ cd /u01/install/
[oracle@rac001 install]$ unzip xagpack.zip
[oracle@rac001 install]$ cd xag

[oracle@rac001 xag]$ export XAG_HOME=/u01/app/xag

[oracle@rac001 xag]$ ./xagsetup.sh --install --directory $XAG_HOME --all_nodes
Installing Oracle Grid Infrastructure Agents on: rac001
Installing Oracle Grid Infrastructure Agents on: rac002
Done.

[oracle@rac001 xag]$ /u01/app/12.2.0/grid/bin/agctl query releaseversion
The Oracle Grid Infrastructure Agents release version is 7.1.0

[oracle@rac001 xag]$ $XAG_HOME/bin/agctl query releaseversion
The Oracle Grid Infrastructure Agents release version is 8.1.0

 

Step 4 – Configure the GoldenGate instance and the Manager

[oracle@rac002 acfsgg]$ export ORACLE_HOME=/u01/app/oracle/product/12.2.0/dbhome_1
[oracle@rac002 acfsgg]$ export LD_LIBRARY_PATH=$ORACLE_HOME/lib
[oracle@rac002 acfsgg]$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 12.2.0.2.2 OGGCORE_12.2.0.2.0_PLATFORMS_170630.0419_FBO
Linux, x64, 64bit (optimized), Oracle 12c on Jun 30 2017 16:12:28
Operating system character set identified as UTF-8.

Copyright (C) 1995, 2017, Oracle and/or its affiliates. All rights reserved.


GGSCI (rac001) 1> CREATE SUBDIRS

Creating subdirectories under current directory /acfsgg

Parameter files                /acfsgg/dirprm: created
Report files                   /acfsgg/dirrpt: created
Checkpoint files               /acfsgg/dirchk: created
Process status files           /acfsgg/dirpcs: created
SQL script files               /acfsgg/dirsql: created
Database definitions files     /acfsgg/dirdef: created
Extract data files             /acfsgg/dirdat: created
Temporary files                /acfsgg/dirtmp: created
Credential store files         /acfsgg/dircrd: created
Masterkey wallet files         /acfsgg/dirwlt: created
Dump files                     /acfsgg/dirdmp: created

GGSCI (rac001) 2> edit params mgr

PORT 7809
AUTORESTART ER *, RETRIES 3, WAITMINUTES 3, RESETMINUTES 10
AUTOSTART ER *
PURGEOLDEXTRACTS ./dirdat/*, USECHECKPOINTS
LAGCRITICALMINUTES 5
LAGREPORTMINUTES 60
LAGINFOMINUTES 0
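
Once the instance is started by the cluster in step 5 below, the Manager status can be checked from ggsci as usual (just a quick sanity check; the output depends on your environment):

GGSCI (rac001) 3> INFO MGR
GGSCI (rac001) 4> INFO ALL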

 

Step 5 – Add the GoldenGate resource to the cluster

[root@rac001 ~]# export XAG_HOME=/u01/app/xag
[root@rac001 ~]# $XAG_HOME/bin/agctl add goldengate GGAPP01 --gg_home /acfsgg --instance_type source --oracle_home /u01/app/oracle/product/12.2.0/dbhome_1 --ip 192.168.179.15 --network 1 --user oracle --filesystems ora.dgfra.acfsgg.acfs
Calling POSIX::isdigit() is deprecated at /u01/app/xag/agcommon.pm line 809.

## all parameters in the above command are required except the filesystem specification

[oracle@rac001 ~]$ $XAG_HOME/bin/agctl start goldengate GGAPP01

[oracle@rac001 ~]$ $XAG_HOME/bin/agctl status goldengate GGAPP01
Goldengate  instance 'GGAPP01' is running on rac001

[oracle@rac001 ~]$ /u01/app/12.2.0/grid/bin/crsctl stat res -t
[...]
Cluster Resources
--------------------------------------------------------------------------------
[...]
xag.GGAPP01-vip.vip
      1        ONLINE  ONLINE       rac001                   STABLE
xag.GGAPP01.goldengate
      1        ONLINE  ONLINE       rac001                   STABLE
--------------------------------------------------------------------------------

At this stage we have an operational GoldenGate instance, fully managed by the cluster and ready to be configured for replication.

In case of failure of the node running the Manager, the cluster will restart the Manager process on the first remaining available node. As soon as the Manager starts, it will automatically restart all Extract and Replicat processes, as instructed in the Manager configuration file (AUTOSTART parameter). The restart of Extract and Replicat processes is also handled by the Manager and not by the cluster (at least with this release).
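
The failover behaviour can also be tested manually, without killing a node, simply by relocating the resource with agctl (a small sketch based on the relocate verb listed in the agctl usage above; the target node name obviously depends on your environment):

[oracle@rac001 ~]$ $XAG_HOME/bin/agctl config goldengate GGAPP01
[oracle@rac001 ~]$ $XAG_HOME/bin/agctl relocate goldengate GGAPP01 --node rac002
[oracle@rac001 ~]$ $XAG_HOME/bin/agctl status goldengate GGAPP01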

We can also add Extract and Replicat processes as cluster resources, but in that case they are only monitored by the cluster, which updates their states to ONLINE, OFFLINE, INTERMEDIATE or UNKNOWN depending on the scenario.
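
As an illustration only – the --monitor_extracts and --monitor_replicats options exist in recent XAG releases as far as I know, but please check them against your version before using them – registering processes for monitoring could look like this:

[oracle@rac001 ~]$ $XAG_HOME/bin/agctl modify goldengate GGAPP01 --monitor_extracts EXT1 --monitor_replicats REP1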

I hope it may help and please do not hesitate to contact us if you have any questions or require further information.

 

The post GoldenGate 12.2.0.2 installation in a Grid Infrastructure 12cR2 appeared first on Blog dbi services.

Docker-Swarm: How to run a mysql database...

Dietrich Schroff - Thu, 2018-01-04 06:44
After several tests with docker swarm (setting up a swarm, running with more than one master, running a webserver in a swarm) I am thinking about running a mysql database in a swarm.

If you are running a mysql database on a single docker host, you have to set up a docker volume, because otherwise the data inside the container is gone if you restart your database container. You can follow this tutorial, which says you have to run
docker run --name=mysql1 -d mysql/mysql-server:tag
but the persistent volume is missing there. If you try this one, you are on the right track:
docker run --name mysqldb --volumes-from mysql_data -v /var/lib/mysql:/var/lib/mysql -e MYSQL_USER=mysql -e MYSQL_PASSWORD=mysql -e MYSQL_DATABASE=sample -e MYSQL_ROOT_PASSWORD=supersecret -it -p 3306:3306 mysql
The problem is that you have to create a container, mysql_data, which contains a volume.
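
For completeness, such a data-only container could be created beforehand like this (just a sketch; the name mysql_data matches the --volumes-from reference above):

alpine:~# docker create -v /var/lib/mysql --name mysql_data mysql /bin/true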

What about running a mysql database just with a volume?

First idea is to create a docker volume:

alpine:~# docker volume create mysql_data
mysql_data
alpine:~# docker volume ls
DRIVER              VOLUME NAME
local               mysql_data

And then start the mysql database:

alpine:~# docker run --name mysqldb -v mysql_data:/var/lib/mysql -e MYSQL_USER=mysql -e MYSQL_PASSWORD=mysql -e MYSQL_DATABASE=sample -e MYSQL_ROOT_PASSWORD=supersecret -it -p 3306:3306 mysql

Initializing database
2017-12-10T12:25:38.890958Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2017-12-10T12:25:39.114910Z 0 [Warning] InnoDB: New log files created, LSN=45790
2017-12-10T12:25:39.192006Z 0 [Warning] InnoDB: Creating foreign key constraint system tables.
2017-12-10T12:25:39.269224Z 0 [Warning] No existing UUID has been found, so we assume that this is the first time that this server has been started. Generating a new UUID: 3b5276c6-dda5-11e7-847f-0242ac110002.
2017-12-10T12:25:39.293895Z 0 [Warning] Gtid table is not ready to be used. Table 'mysql.gtid_executed' cannot be opened.
2017-12-10T12:25:39.296705Z 1 [Warning] root@localhost is created with an empty password ! Please consider switching off the --initialize-insecure option.
2017-12-10T12:25:39.734384Z 1 [Warning] 'user' entry 'root@localhost' ignored in --skip-name-resolve mode.
2017-12-10T12:25:39.734823Z 1 [Warning] 'user' entry 'mysql.session@localhost' ignored in --skip-name-resolve mode.
2017-12-10T12:25:39.735089Z 1 [Warning] 'user' entry 'mysql.sys@localhost' ignored in --skip-name-resolve mode.
2017-12-10T12:25:39.735325Z 1 [Warning] 'db' entry 'performance_schema mysql.session@localhost' ignored in --skip-name-resolve mode.
2017-12-10T12:25:39.735530Z 1 [Warning] 'db' entry 'sys mysql.sys@localhost' ignored in --skip-name-resolve mode.
2017-12-10T12:25:39.735734Z 1 [Warning] 'proxies_priv' entry '@ root@localhost' ignored in --skip-name-resolve mode.
2017-12-10T12:25:39.735969Z 1 [Warning] 'tables_priv' entry 'user mysql.session@localhost' ignored in --skip-name-resolve mode.
2017-12-10T12:25:39.736171Z 1 [Warning] 'tables_priv' entry 'sys_config mysql.sys@localhost' ignored in --skip-name-resolve mode.
Database initialized
Initializing certificates
Generating a 2048 bit RSA private key
............+++
...........................................................................................................................................................................................................+++
unable to write 'random state'
writing new private key to 'ca-key.pem'
-----
Generating a 2048 bit RSA private key
.......................+++
.......+++
unable to write 'random state'
writing new private key to 'server-key.pem'
-----
Generating a 2048 bit RSA private key
.......................+++
.............+++
unable to write 'random state'
writing new private key to 'client-key.pem'
-----
Certificates initialized
MySQL init process in progress...
2017-12-10T12:25:42.948452Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2017-12-10T12:25:42.949739Z 0 [Note] mysqld (mysqld 5.7.20) starting as process 87 ...
2017-12-10T12:25:42.952799Z 0 [Note] InnoDB: PUNCH HOLE support available
2017-12-10T12:25:42.953188Z 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2017-12-10T12:25:42.953550Z 0 [Note] InnoDB: Uses event mutexes
2017-12-10T12:25:42.953828Z 0 [Note] InnoDB: GCC builtin __atomic_thread_fence() is used for memory barrier
2017-12-10T12:25:42.954120Z 0 [Note] InnoDB: Compressed tables use zlib 1.2.3
2017-12-10T12:25:42.954451Z 0 [Note] InnoDB: Using Linux native AIO
2017-12-10T12:25:42.954900Z 0 [Note] InnoDB: Number of pools: 1
2017-12-10T12:25:42.955173Z 0 [Note] InnoDB: Using CPU crc32 instructions
2017-12-10T12:25:42.956841Z 0 [Note] InnoDB: Initializing buffer pool, total size = 128M, instances = 1, chunk size = 128M
2017-12-10T12:25:42.965639Z 0 [Note] InnoDB: Completed initialization of buffer pool
2017-12-10T12:25:42.969568Z 0 [Note] InnoDB: If the mysqld execution user is authorized, page cleaner thread priority can be changed. See the man page of setpriority().
2017-12-10T12:25:42.982838Z 0 [Note] InnoDB: Highest supported file format is Barracuda.
2017-12-10T12:25:43.001695Z 0 [Note] InnoDB: Creating shared tablespace for temporary tables
2017-12-10T12:25:43.002427Z 0 [Note] InnoDB: Setting file './ibtmp1' size to 12 MB. Physically writing the file full; Please wait ...
2017-12-10T12:25:43.028474Z 0 [Note] InnoDB: File './ibtmp1' size is now 12 MB.
2017-12-10T12:25:43.029519Z 0 [Note] InnoDB: 96 redo rollback segment(s) found. 96 redo rollback segment(s) are active.
2017-12-10T12:25:43.029785Z 0 [Note] InnoDB: 32 non-redo rollback segment(s) are active.
2017-12-10T12:25:43.030165Z 0 [Note] InnoDB: Waiting for purge to start
2017-12-10T12:25:43.080873Z 0 [Note] InnoDB: 5.7.20 started; log sequence number 2565377
2017-12-10T12:25:43.085057Z 0 [Note] Plugin 'FEDERATED' is disabled.
2017-12-10T12:25:43.097783Z 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
2017-12-10T12:25:43.111653Z 0 [Note] Found ca.pem, server-cert.pem and server-key.pem in data directory. Trying to enable SSL support using them.
2017-12-10T12:25:43.112695Z 0 [Warning] CA certificate ca.pem is self signed.
2017-12-10T12:25:43.112387Z 0 [Note] InnoDB: Buffer pool(s) load completed at 171210 12:25:43
2017-12-10T12:25:43.122126Z 0 [Warning] 'user' entry 'root@localhost' ignored in --skip-name-resolve mode.
2017-12-10T12:25:43.122654Z 0 [Warning] 'user' entry 'mysql.session@localhost' ignored in --skip-name-resolve mode.
2017-12-10T12:25:43.122922Z 0 [Warning] 'user' entry 'mysql.sys@localhost' ignored in --skip-name-resolve mode.
2017-12-10T12:25:43.123172Z 0 [Warning] 'db' entry 'performance_schema mysql.session@localhost' ignored in --skip-name-resolve mode.
2017-12-10T12:25:43.123424Z 0 [Warning] 'db' entry 'sys mysql.sys@localhost' ignored in --skip-name-resolve mode.
2017-12-10T12:25:43.123655Z 0 [Warning] 'proxies_priv' entry '@ root@localhost' ignored in --skip-name-resolve mode.
2017-12-10T12:25:43.125278Z 0 [Warning] 'tables_priv' entry 'user mysql.session@localhost' ignored in --skip-name-resolve mode.
2017-12-10T12:25:43.125625Z 0 [Warning] 'tables_priv' entry 'sys_config mysql.sys@localhost' ignored in --skip-name-resolve mode.
2017-12-10T12:25:43.130811Z 0 [Note] Event Scheduler: Loaded 0 events
2017-12-10T12:25:43.131463Z 0 [Note] mysqld: ready for connections.
Version: '5.7.20'  socket: '/var/run/mysqld/mysqld.sock'  port: 0  MySQL Community Server (GPL)
2017-12-10T12:25:43.131789Z 0 [Note] Executing 'SELECT * FROM INFORMATION_SCHEMA.TABLES;' to get a list of tables using the deprecated partition engine. You may use the startup option '--disable-partition-engine-check' to skip this check.
2017-12-10T12:25:43.132034Z 0 [Note] Beginning of list of non-natively partitioned tables
2017-12-10T12:25:43.141674Z 0 [Note] End of list of non-natively partitioned tables
Warning: Unable to load '/usr/share/zoneinfo/iso3166.tab' as time zone. Skipping it.
Warning: Unable to load '/usr/share/zoneinfo/leap-seconds.list' as time zone. Skipping it.
Warning: Unable to load '/usr/share/zoneinfo/zone.tab' as time zone. Skipping it.
2017-12-10T12:25:45.325153Z 5 [Warning] 'user' entry 'root@localhost' ignored in --skip-name-resolve mode.
2017-12-10T12:25:45.326169Z 5 [Warning] 'user' entry 'mysql.session@localhost' ignored in --skip-name-resolve mode.
2017-12-10T12:25:45.326766Z 5 [Warning] 'user' entry 'mysql.sys@localhost' ignored in --skip-name-resolve mode.
2017-12-10T12:25:45.327430Z 5 [Warning] 'db' entry 'performance_schema mysql.session@localhost' ignored in --skip-name-resolve mode.
2017-12-10T12:25:45.328439Z 5 [Warning] 'db' entry 'sys mysql.sys@localhost' ignored in --skip-name-resolve mode.
2017-12-10T12:25:45.329236Z 5 [Warning] 'proxies_priv' entry '@ root@localhost' ignored in --skip-name-resolve mode.
2017-12-10T12:25:45.330139Z 5 [Warning] 'tables_priv' entry 'user mysql.session@localhost' ignored in --skip-name-resolve mode.
2017-12-10T12:25:45.330625Z 5 [Warning] 'tables_priv' entry 'sys_config mysql.sys@localhost' ignored in --skip-name-resolve mode.
mysql: [Warning] Using a password on the command line interface can be insecure.
mysql: [Warning] Using a password on the command line interface can be insecure.
mysql: [Warning] Using a password on the command line interface can be insecure.
mysql: [Warning] Using a password on the command line interface can be insecure.
2017-12-10T12:25:45.355944Z 9 [Warning] 'user' entry 'root@localhost' ignored in --skip-name-resolve mode.
2017-12-10T12:25:45.356363Z 9 [Warning] 'user' entry 'mysql.session@localhost' ignored in --skip-name-resolve mode.
2017-12-10T12:25:45.356743Z 9 [Warning] 'user' entry 'mysql.sys@localhost' ignored in --skip-name-resolve mode.
2017-12-10T12:25:45.357080Z 9 [Warning] 'db' entry 'performance_schema mysql.session@localhost' ignored in --skip-name-resolve mode.
2017-12-10T12:25:45.357407Z 9 [Warning] 'db' entry 'sys mysql.sys@localhost' ignored in --skip-name-resolve mode.
2017-12-10T12:25:45.358138Z 9 [Warning] 'proxies_priv' entry '@ root@localhost' ignored in --skip-name-resolve mode.
2017-12-10T12:25:45.358579Z 9 [Warning] 'tables_priv' entry 'user mysql.session@localhost' ignored in --skip-name-resolve mode.
2017-12-10T12:25:45.358825Z 9 [Warning] 'tables_priv' entry 'sys_config mysql.sys@localhost' ignored in --skip-name-resolve mode.

2017-12-10T12:25:45.360088Z 0 [Note] Giving 0 client threads a chance to die gracefully
2017-12-10T12:25:45.360365Z 0 [Note] Shutting down slave threads
2017-12-10T12:25:45.360562Z 0 [Note] Forcefully disconnecting 0 remaining clients
2017-12-10T12:25:45.360757Z 0 [Note] Event Scheduler: Purging the queue. 0 events
2017-12-10T12:25:45.360982Z 0 [Note] Binlog end
2017-12-10T12:25:45.361681Z 0 [Note] Shutting down plugin 'ngram'
2017-12-10T12:25:45.361897Z 0 [Note] Shutting down plugin 'BLACKHOLE'
2017-12-10T12:25:45.362098Z 0 [Note] Shutting down plugin 'partition'
2017-12-10T12:25:45.362285Z 0 [Note] Shutting down plugin 'ARCHIVE'
2017-12-10T12:25:45.362470Z 0 [Note] Shutting down plugin 'INNODB_SYS_VIRTUAL'
2017-12-10T12:25:45.362653Z 0 [Note] Shutting down plugin 'INNODB_SYS_DATAFILES'
2017-12-10T12:25:45.362844Z 0 [Note] Shutting down plugin 'INNODB_SYS_TABLESPACES'
2017-12-10T12:25:45.363019Z 0 [Note] Shutting down plugin 'INNODB_SYS_FOREIGN_COLS'
2017-12-10T12:25:45.363206Z 0 [Note] Shutting down plugin 'INNODB_SYS_FOREIGN'
2017-12-10T12:25:45.363455Z 0 [Note] Shutting down plugin 'INNODB_SYS_FIELDS'
2017-12-10T12:25:45.363740Z 0 [Note] Shutting down plugin 'INNODB_SYS_COLUMNS'
2017-12-10T12:25:45.364038Z 0 [Note] Shutting down plugin 'INNODB_SYS_INDEXES'
2017-12-10T12:25:45.364313Z 0 [Note] Shutting down plugin 'INNODB_SYS_TABLESTATS'
2017-12-10T12:25:45.364603Z 0 [Note] Shutting down plugin 'INNODB_SYS_TABLES'
2017-12-10T12:25:45.364886Z 0 [Note] Shutting down plugin 'INNODB_FT_INDEX_TABLE'
2017-12-10T12:25:45.365129Z 0 [Note] Shutting down plugin 'INNODB_FT_INDEX_CACHE'
2017-12-10T12:25:45.365333Z 0 [Note] Shutting down plugin 'INNODB_FT_CONFIG'
2017-12-10T12:25:45.365507Z 0 [Note] Shutting down plugin 'INNODB_FT_BEING_DELETED'
2017-12-10T12:25:45.365678Z 0 [Note] Shutting down plugin 'INNODB_FT_DELETED'
2017-12-10T12:25:45.365846Z 0 [Note] Shutting down plugin 'INNODB_FT_DEFAULT_STOPWORD'
2017-12-10T12:25:45.366026Z 0 [Note] Shutting down plugin 'INNODB_METRICS'
2017-12-10T12:25:45.366214Z 0 [Note] Shutting down plugin 'INNODB_TEMP_TABLE_INFO'
2017-12-10T12:25:45.366413Z 0 [Note] Shutting down plugin 'INNODB_BUFFER_POOL_STATS'
2017-12-10T12:25:45.366616Z 0 [Note] Shutting down plugin 'INNODB_BUFFER_PAGE_LRU'
2017-12-10T12:25:45.366799Z 0 [Note] Shutting down plugin 'INNODB_BUFFER_PAGE'
2017-12-10T12:25:45.366976Z 0 [Note] Shutting down plugin 'INNODB_CMP_PER_INDEX_RESET'
2017-12-10T12:25:45.367142Z 0 [Note] Shutting down plugin 'INNODB_CMP_PER_INDEX'
2017-12-10T12:25:45.367305Z 0 [Note] Shutting down plugin 'INNODB_CMPMEM_RESET'
2017-12-10T12:25:45.367467Z 0 [Note] Shutting down plugin 'INNODB_CMPMEM'
2017-12-10T12:25:45.367628Z 0 [Note] Shutting down plugin 'INNODB_CMP_RESET'
2017-12-10T12:25:45.367796Z 0 [Note] Shutting down plugin 'INNODB_CMP'
2017-12-10T12:25:45.367978Z 0 [Note] Shutting down plugin 'INNODB_LOCK_WAITS'
2017-12-10T12:25:45.368141Z 0 [Note] Shutting down plugin 'INNODB_LOCKS'
2017-12-10T12:25:45.368307Z 0 [Note] Shutting down plugin 'INNODB_TRX'
2017-12-10T12:25:45.368469Z 0 [Note] Shutting down plugin 'InnoDB'
2017-12-10T12:25:45.368926Z 0 [Note] InnoDB: FTS optimize thread exiting.
2017-12-10T12:25:45.369212Z 0 [Note] InnoDB: Starting shutdown...
2017-12-10T12:25:45.469927Z 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool
2017-12-10T12:25:45.471353Z 0 [Note] InnoDB: Buffer pool(s) dump completed at 171210 12:25:45
2017-12-10T12:25:47.118138Z 0 [Note] InnoDB: Shutdown completed; log sequence number 12169663
2017-12-10T12:25:47.125477Z 0 [Note] InnoDB: Removed temporary tablespace data file: "ibtmp1"
2017-12-10T12:25:47.126721Z 0 [Note] Shutting down plugin 'MRG_MYISAM'
2017-12-10T12:25:47.127496Z 0 [Note] Shutting down plugin 'MyISAM'
2017-12-10T12:25:47.128253Z 0 [Note] Shutting down plugin 'CSV'
2017-12-10T12:25:47.128958Z 0 [Note] Shutting down plugin 'MEMORY'
2017-12-10T12:25:47.129665Z 0 [Note] Shutting down plugin 'PERFORMANCE_SCHEMA'
2017-12-10T12:25:47.130679Z 0 [Note] Shutting down plugin 'sha256_password'
2017-12-10T12:25:47.131807Z 0 [Note] Shutting down plugin 'mysql_native_password'
2017-12-10T12:25:47.132756Z 0 [Note] Shutting down plugin 'binlog'
2017-12-10T12:25:47.135910Z 0 [Note] mysqld: Shutdown complete


MySQL init process done. Ready for start up.

2017-12-10T12:25:47.379637Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2017-12-10T12:25:47.381062Z 0 [Note] mysqld (mysqld 5.7.20) starting as process 1 ...
2017-12-10T12:25:47.383987Z 0 [Note] InnoDB: PUNCH HOLE support available
2017-12-10T12:25:47.384373Z 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2017-12-10T12:25:47.384612Z 0 [Note] InnoDB: Uses event mutexes
2017-12-10T12:25:47.384828Z 0 [Note] InnoDB: GCC builtin __atomic_thread_fence() is used for memory barrier
2017-12-10T12:25:47.385024Z 0 [Note] InnoDB: Compressed tables use zlib 1.2.3
2017-12-10T12:25:47.385229Z 0 [Note] InnoDB: Using Linux native AIO
2017-12-10T12:25:47.385591Z 0 [Note] InnoDB: Number of pools: 1
2017-12-10T12:25:47.385887Z 0 [Note] InnoDB: Using CPU crc32 instructions
2017-12-10T12:25:47.387049Z 0 [Note] InnoDB: Initializing buffer pool, total size = 128M, instances = 1, chunk size = 128M
2017-12-10T12:25:47.393069Z 0 [Note] InnoDB: Completed initialization of buffer pool
2017-12-10T12:25:47.395207Z 0 [Note] InnoDB: If the mysqld execution user is authorized, page cleaner thread priority can be changed. See the man page of setpriority().
2017-12-10T12:25:47.406539Z 0 [Note] InnoDB: Highest supported file format is Barracuda.
2017-12-10T12:25:47.417533Z 0 [Note] InnoDB: Creating shared tablespace for temporary tables
2017-12-10T12:25:47.418058Z 0 [Note] InnoDB: Setting file './ibtmp1' size to 12 MB. Physically writing the file full; Please wait ...
2017-12-10T12:25:47.433202Z 0 [Note] InnoDB: File './ibtmp1' size is now 12 MB.
2017-12-10T12:25:47.434200Z 0 [Note] InnoDB: 96 redo rollback segment(s) found. 96 redo rollback segment(s) are active.
2017-12-10T12:25:47.434456Z 0 [Note] InnoDB: 32 non-redo rollback segment(s) are active.
2017-12-10T12:25:47.434855Z 0 [Note] InnoDB: Waiting for purge to start
2017-12-10T12:25:47.485468Z 0 [Note] InnoDB: 5.7.20 started; log sequence number 12169663
2017-12-10T12:25:47.487084Z 0 [Note] Plugin 'FEDERATED' is disabled.
2017-12-10T12:25:47.500417Z 0 [Note] Found ca.pem, server-cert.pem and server-key.pem in data directory. Trying to enable SSL support using them.
2017-12-10T12:25:47.502089Z 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
2017-12-10T12:25:47.511981Z 0 [Warning] CA certificate ca.pem is self signed.
2017-12-10T12:25:47.516700Z 0 [Note] Server hostname (bind-address): '*'; port: 3306
2017-12-10T12:25:47.517640Z 0 [Note] IPv6 is available.
2017-12-10T12:25:47.518010Z 0 [Note]   - '::' resolves to '::';
2017-12-10T12:25:47.518467Z 0 [Note] Server socket created on IP: '::'.
2017-12-10T12:25:47.523590Z 0 [Warning] 'user' entry 'root@localhost' ignored in --skip-name-resolve mode.
2017-12-10T12:25:47.524079Z 0 [Warning] 'user' entry 'mysql.session@localhost' ignored in --skip-name-resolve mode.
2017-12-10T12:25:47.524350Z 0 [Warning] 'user' entry 'mysql.sys@localhost' ignored in --skip-name-resolve mode.
2017-12-10T12:25:47.524623Z 0 [Warning] 'db' entry 'performance_schema mysql.session@localhost' ignored in --skip-name-resolve mode.
2017-12-10T12:25:47.524854Z 0 [Warning] 'db' entry 'sys mysql.sys@localhost' ignored in --skip-name-resolve mode.
2017-12-10T12:25:47.525118Z 0 [Warning] 'proxies_priv' entry '@ root@localhost' ignored in --skip-name-resolve mode.
2017-12-10T12:25:47.527817Z 0 [Warning] 'tables_priv' entry 'user mysql.session@localhost' ignored in --skip-name-resolve mode.
2017-12-10T12:25:47.528270Z 0 [Note] InnoDB: Buffer pool(s) load completed at 171210 12:25:47
2017-12-10T12:25:47.528542Z 0 [Warning] 'tables_priv' entry 'sys_config mysql.sys@localhost' ignored in --skip-name-resolve mode.
2017-12-10T12:25:47.532739Z 0 [Note] Event Scheduler: Loaded 0 events
2017-12-10T12:25:47.533402Z 0 [Note] mysqld: ready for connections.
Version: '5.7.20'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  MySQL Community Server (GPL)
2017-12-10T12:25:47.533772Z 0 [Note] Executing 'SELECT * FROM INFORMATION_SCHEMA.TABLES;' to get a list of tables using the deprecated partition engine. You may use the startup option '--disable-partition-engine-check' to skip this check.
2017-12-10T12:25:47.534079Z 0 [Note] Beginning of list of non-natively partitioned tables
2017-12-10T12:25:47.541968Z 0 [Note] End of list of non-natively partitioned tables
and everything works as expected:
alpine:~# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
a6d0d908f034        mysql               "docker-entrypoint..."   3 minutes ago       Up 3 minutes        0.0.0.0:3306->3306/tcp   mysqldb
alpine:~# docker exec -it a6d0d908f034  mysql -u root -psupersecret
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 4
Server version: 5.7.20 MySQL Community Server (GPL)

Copyright (c) 2000, 2017, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> exit
Bye

If I create the following:
mysql> create database mydb;
Query OK, 1 row affected (0.00 sec)
mysql> use mydb
Database changed
mysql> create table mytable ( id int);
Query OK, 0 rows affected (0.03 sec)

And inside the volume:
alpine:/# docker volume inspect mysql_data
[
    {
        "CreatedAt": "2017-12-10T12:30:53Z",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/mysql_data/_data",
        "Name": "mysql_data",
        "Options": {},
        "Scope": "local"
    }
]
alpine:/# ls -l /var/lib/docker/volumes/mysql_data/_data/mydb/
total 112
-rw-r-----    1 999      ping            65 Dec 10 12:30 db.opt
-rw-r-----    1 999      ping          8556 Dec 10 12:31 mytable.frm
-rw-r-----    1 999      ping         98304 Dec 10 12:31 mytable.ibd

But after stopping the mysql database, I got the following on the next start:

alpine:~# docker run --name mysqldb  -v mysql_data:/var/lib/mysql -e MYSQL_USER=mysql -e MYSQL_PASSWORD=mysql -e MYSQL_DATABASE=sample -e MYSQL_ROOT_PASSWORD=supersecret -it -p 3306:3306 mysql
docker: Error response from daemon: Conflict. The container name "/mysqldb" is already in use by container "a6d0d908f03466181682666affb390650179c322f2848d32f2c72fab828f980c". You have to remove (or rename) that container to be able to reuse that name.
See 'docker run --help'.

OK. I have to change the "--name" (--> mysqldb2):
alpine:/# docker exec -it mysqldb2  mysql -u root -psupersecret
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.7.20 MySQL Community Server (GPL)

Copyright (c) 2000, 2017, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases
    -> ;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mydb               |
| mysql              |
| performance_schema |
| sample             |
| sys                |
+--------------------+
6 rows in set (0.01 sec)

Yippie! The data is still there!

Shortcomings of this approach:
  • Each time you start the mysql database on a new host, you have to run "docker volume create..."
    If you create a container with this volume (see here or there) you can skip this "docker volume create".
  • Neither approach will work with docker swarm. The data is local to a node and not shared. One idea: use an NFS mount point to make the volume available on all nodes (see the sketch below).
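
One way to follow up on that NFS idea is to back a named volume with an NFS export, so that every node mounts the same data (a sketch only, assuming an NFS server reachable as nfsserver that exports /export/mysql_data; I have not tested this here):

docker volume create --driver local --opt type=nfs --opt o=addr=nfsserver,rw --opt device=:/export/mysql_data mysql_data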

Hey!! My Plans For 2018 What’s Yours?

Online Apps DBA - Thu, 2018-01-04 05:40

Hi, it's already 4 days into 2018 and I can't wait to hear your plans for 2018 and share mine, but before that, I wish you a very Happy New Year from me and from my entire team at K21Academy. Today's blog is going to be slightly different than my usual weekly blog (going forward, every first Thursday, I'll share behind […]

The post Hey!! My Plans For 2018 What’s Yours? appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Promoting a simple Oracle JET input text component to a Visual Country Selection component

Amis Blog - Thu, 2018-01-04 00:49

To select a country (or any other geographical entity), we can offer our end users a dropdown list, a list of values in a popup window, an auto suggest input text component or a more visual, map based approach. In this article, I describe the latter: Open a modal popup that presents a map of the world and allows the user to inspect and select a country. Note: this article is a logical extrapolation from previous articles on OpenLayers and Oracle JET, specifically this article.

This article demonstrates how we take a simple input text component and extend it with a background image (a little globe) that the user can click.

 


When the user clicks the icon, a popup opens and shows the world map. If the input component contains a valid country name, this country is selected in the map. The user can click on any country and in doing so, selects the country.

 


The name of the country is written back to the input text component (Mali in this example).


The main challenges discussed in this article are embedding the OpenLayers map in an Oracle JET Popup component, and extending a regular input text component with a clickable icon that opens that popup.

Starting from the situation in the previous article (the third bullet above), this article describes how to create an input text component with a clickable icon that opens a popup window showing the map.

The final code is found here on GitHub: https://github.com/lucasjellema/jet-and-openlayers.

Embed the OpenLayers map in an Oracle JET Popup Component

This step is a simple one. I used the entry in the JET Cookbook on Popup Component: http://www.oracle.com/webfolder/technetwork/jet/jetCookbook.html?component=popup&demo=modal. I copied the styles used in this example to the file src/css/app.css:

.demo-popup {
    width: 80vw;  
    height: 80vh; 
    display: none;
  } 
  .demo-popup-body {
    width: 75vw;  
    height: 75vh; 
    display: flex; 
    flex-direction: column;
    align-items: stretch;
  } 
  .demo-popup-header {
    align-self: flex-start;
    margin-bottom: 10px;
  } 
  .demo-popup-content {
    align-self: stretch;
    overflow: auto;
    flex-basis: 60vh;
  } 
  .demo-popup-footer {
    align-self: flex-end;
    margin-top: 10px; 
  } 

Next I added the popup component to the mapArea.html file, with the DIV that acts as the map container inside it:

<div id="popupWrapper">
    <oj-popup class="demo-popup" id="countrySelectionPopup" tail="none" position.my.horizontal="center" position.my.vertical="bottom" position.at.horizontal="center"
        position.at.vertical="bottom" position.of="window" position.offset.y="-10" modality="modal" data-bind="event:{'ojAnimateStart': startAnimationListener}">
        <div class="demo-popup-body">
            <div class="demo-popup-header">
                <h5>Select a country by clicking on it</h5>
            </div>
            <div class="demo-popup-content">
                <div id="countryInfo"></div>
                <div id="mapContainer" class="map"></div>
            </div>
            <div class="demo-popup-footer">
                <oj-button id="btnClose" data-bind="click: function()
                            {
                              var popup = document.querySelector('#countrySelectionPopup');
                              popup.close();
                            }">
                    Close
                </oj-button>
            </div>
        </div>
    </oj-popup>
</div>

I added the function startAnimationListener to the ViewModel in mapArea.js:


            self.startAnimationListener = function (data, event) {
                var ui = event.detail;
                if (!$(event.target).is("#countrySelectionPopup"))
                    return;

                if ("open" === ui.action) {
                    event.preventDefault();
                    var options = { "direction": "top" };
                    oj.AnimationUtils.slideIn(ui.element, options).then(ui.endCallback);
                    // if the map has not yet been initialized, then do the initialization now (this is the case the first time the popup opens)
                    if (!self.map) initMap();
                }
                else if ("close" === ui.action) {
                    event.preventDefault();
                    ui.endCallback();
                }
            }

The initialization of the map is taken care of in the open stage of this startAnimationListener – the first time the popup is opened.

 

Embed a clickable icon in an input text component to allow the end user to open the popup with a mouse click

It is important that the user has a convenient way of opening the country selection popup. Just like the date and time input components provide a little clickable icon, inline in the input component, I want the country selector to consist of a regular JET input text component with an inline icon that the user can click on to bring up the popup.


The HTML code to make this happen is added to mapArea.html:

    <div id="countryInput">
        <oj-label for="country-input">Country</oj-label>
        <oj-input-text id="country-input" value="{{selectedCountry}}" on-value-changed="[[countryChangedListener]]"
        ></oj-input-text>
        <img id="countrySelectorOpen" src="css/images/icon-world.png" class='iconbtnintext' title ="Click to open World Map in Popup" data-bind="click: openPopup" /> 
    </div>

Note how the click event on the image is bound to the Knockout ViewModel’s openPopup function using the data-bind notation.

The CSS style iconbtnintext takes care of the positioning of the icon. This style is added to app.css:

.iconbtnintext {	
    position:absolute;
    cursor:pointer;
    width:22px;
    height:23px;
    margin-left:-24px;
    padding-top:4px;
}

Finally, the icon to be used – icon-world.png – is copied to src/css/images.


 

 

References

JET Cookbook on Popup Component: http://www.oracle.com/webfolder/technetwork/jet/jetCookbook.html?component=popup&demo=modal

The post Promoting a simple Oracle JET input text component to a Visual Country Selection component appeared first on AMIS Oracle and Java Blog.

Windows 10 Update Killed Essbase On My Laptop!

Tim Tow - Thu, 2018-01-04 00:32

Like many Essbase consultants and developers, I run Essbase server on my Windows 10 laptop. It was a lengthy ‘Creator’s Update’ Windows update and, once it was complete, Essbase was dead on my machine. So, what did I do? First, I didn’t panic; we pilots have a way of not panicking when things don’t go as planned. We have several people internally who have had this happen to them over the past several months, and we fixed it each time, so there was nothing to worry about.

The root cause was that my OPMN service, which runs Essbase, was gone. That is also what had happened on the other machines that experienced this issue in the past, so I went to talk with one of our resident infrastructure gurus, Jay Zuercher. I remembered there was a command I could run to recreate the service; Jay had the command filed away somewhere and, within a couple of minutes, he sent it to me:

SC CREATE "OracleProcessManager_epmsystem1" binPath="C:\oracle\middleware\epmsystem11r1\opmn\bin\opmn.exe -S -I c:\oracle\middleware\user_projects\epmsystem1"
I ran this command – as an administrator – and then went into Services to set the service to start automatically and to start the service running. That did not, however, result in Essbase coming back to life. Next, I looked at the Essbase logs and noted several issues having to do with security. Initially, I thought it might have been due to an issue with Shared Services, but then I remembered the fairly common Essbase issue regarding a corrupted essbase.sec file. I don’t know if the corruption was related to the Windows update, but the timing sure was suspect. I replaced the essbase.sec file with a backup copy and I was back in business.
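
For reference, setting the service to start automatically and starting it can also be done from an elevated command prompt instead of the Services console (assuming the service name created above):

SC CONFIG "OracleProcessManager_epmsystem1" start= auto
SC START "OracleProcessManager_epmsystem1"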

Hopefully this doesn’t happen to you when you update Windows but, if it does, perhaps this blog post will make your recovery quick and painless.
Categories: BI & Warehousing

Experiencing update statistics on a big table with circuitous ways

Yann Neuhaus - Wed, 2018-01-03 14:56

This is my first blog post of this new year, and my first in a while by the way. Indeed, last year I put all my energy into realigning my Linux skills with the new Microsoft strategy that opens SQL Server to the open source world. But at the same time, I did a lot of interesting work at customer sites and I decided to write about one of them to begin this new year 2018.


In this blog post, I will highlight what is, in my opinion, a distinctive approach to optimizing an update statistics operation for one particular, big table. I already had to manage such maintenance tasks in one of my previous jobs as a DBA, and I continue to learn about the topic, but now from a different position. As a consultant, I usually try to provide my customer with the best approach regarding both the issue and the context. In reality, from my experience, the latter is much more important than you may think, and sometimes we have to consider other ways to get the expected outcome. I think this is not necessarily a bad thing, because following a different (and not necessarily the best) path may reveal other interesting options we may consider for our final recipe.

Let’s go back to my customer case and set the context: one big database (1.7TB) on SQL Server 2014 SP2 and an update statistics operation that is part of a global database maintenance strategy and takes a very long time (roughly 30 hours in the best-case scenario). We identified that the biggest part of the execution time is related to one big, non-partitioned table (let’s say dbo.bigtable) with the following figures: 148 million rows / 222GB in size / 119 columns / 94 statistics / 14 indexes. Regarding the last two figures, the majority of the statistics were in fact auto-generated by the SQL Server engine over time and from different application release campaigns. Furthermore, it is worth mentioning that the DBA had to use a customized sample value (between 10 and 50 percent) to minimize the impact of the update statistics operation for this particular table, given the table size and the configured maintenance window timeframe.

 

My first and obvious approach

My first approach consisted of warning the database administrator about the number of statistics on this table (probably a lot of them are not in use anymore?) as well as the development team about the badly designed table. In addition, the number of rows in this table may also indicate that it contains a lot of archive data, and we could reduce the overall size by using archiving processes (why not the built-in partitioning feature and incremental statistics, since we’re running Enterprise Edition?). However, reviewing the model was not an option for my customer because it would require a lot of work, and the DBA wasn’t keen on the idea of archiving data (business constraints) or removing auto-generated statistics on this specific table. So, what I considered a good approach did not get adopted, and the human aspect was definitely a big part of it. In a nutshell, at this stage, a standby situation…

 

When high-performance storage comes to the rescue…

In parallel, my customer was considering replacing his old storage with a new one, and Pure Storage was in the loop. Pure Storage is one of the flash storage providers on the market, and the good news is that I was already in touch with @scrimi78 (Systems Engineer at Pure Storage) in Switzerland. During this project, we had a lot of interesting discussions and interactions about Pure Storage products, and I appreciated his availability to provide technical documentation and explanations. At the same time, they generously lent us a Pure Storage array to play with snapshot volumes that will be used for database refreshes between production and dev environments.

In the context of my customer, we were already aware of the poor performance capabilities of the old storage, and the replacement by a Pure Storage // M20 model was very beneficial for the database environments, as shown by the following figures:

I only include the IO-related wait statistics of the production environment over the last year; we can notice a significant drop in average wait time after moving the production database files to the new storage layout. Figures are by year and month.

Figure: IO-related wait statistics, by year and month

We noticed the same from the file IO statistics of the database concerned.

Figure: database file IO statistics

Very impressive, isn’t it? But what about our update statistics here? We naturally observed a significant drop in execution time, down to 6 hours (an 80% improvement), because generally speaking this is an IO-bound operation, and especially so in our case. The yellow columns represent operations we had to stop manually to avoid impacting the current application workload (> 35 hours of execution time). You may also notice that we changed the sample value to 100 percent after installing the Pure Storage // M20 model, compared to previous runs with a sample value of 50 percent.

Figure: update statistics execution times

 

Changing the initial way to update statistics for this table …

The point here is that we know we may now rely on the storage performance to update statistics efficiently, so why not push the limit of the storage by changing the way we run our update statistics operation – by default it runs sequentially, one statistic at a time. So, we decided to write a custom script to carry out the update operation in parallel to boost the overall execution performance. Since SQL Server 2014 SP1 CU6 we may benefit from improved support for parallel statistics updates by using trace flag 7471, which changes the locking behavior so that the SQL Server engine no longer acquires an X lock on the UPDSTATS resource of the table. The script consists of creating a pool of parallel SQL jobs where each job updates one particular statistic on a single table. I put it below if you want to use it, but in the meantime, let’s say it is also possible to go through an interesting and smart alternative solution based on Service Broker capabilities here. What is certain is that we will integrate one or the other version – with some adjustments – into our DMK management kit tool. Here is my proposal based on concurrent SQL jobs (feel free to comment):

SET NOCOUNT ON;

DECLARE 
	@database_name sysname = N'AdventureWorks2012', -- Target database
	@table_name sysname = N'bigTransactionHistory', -- Target table
	@schema_name sysname = N'dbo',                  -- Target schema
	@batch_upd_stats INT = 3,                       -- Number of simultaneous jobs
	@stat_sample INT = 0, 					        -- 0 = default sample rate value | 100 = FULLSCAN | Otherwise WITH SAMPLE @stat_sample PERCENT
	@debug TINYINT = 1								-- 0 = debug mode disabled | 1 = debug mode | 99 = verbose mode
	

-- @stats_sample table variable
-- Will contain statistics with a custom sampling rate value
-- Otherwise the default SQL Server sample rate value will be used
-- You may also use a user table for more flexibility that will
-- be referenced from this script
DECLARE @stats_sample TABLE
(
	column_name sysname,
	stats_name sysname,
	sample_stat tinyint
);

INSERT @stats_sample VALUES ('TransactionID', '_WA_Sys_00000001_4CC05EF3', 100),
						    ('ProductID', '_WA_Sys_00000002_4CC05EF3', 100);



-- working parameters
DECLARE
	@nb_jobs_running INT = 0,
	@count INT = 0,
	@i INT,
	@j INT,
	@max INT,
	@sqlcmd NVARCHAR(MAX) = N'',
	@job_name_pattern sysname,
	@start_date DATETIME2(0),
	@end_date DATETIME2(0),
	@stats_name sysname,
	@sample_stat int
	;


-- Creation of update stat jobs
IF OBJECT_ID('tempdb..#update_stats_tt', 'U') IS NOT NULL
	DROP TABLE #update_stats_tt;

SELECT 
	id = IDENTITY(INT, 1,1),
	s.name AS [schema_name],
	t.name AS table_name,
	st.name AS stats_name,
	sample_stat,
	'USE [msdb]
	
	DECLARE @jobId BINARY(16);
	
	EXEC  msdb.dbo.sp_add_job @job_name=N''UPDATE_STATS_' + @schema_name + '_' + @table_name + '_' + st.name + ''', 
			@enabled=1, 
			@notify_level_eventlog=0, 
			@notify_level_email=2, 
			@notify_level_netsend=2, 
			@notify_level_page=2, 
			@delete_level=0, 
			@category_name=N''[Uncategorized (Local)]'', 
			@owner_login_name=N''sa'', @job_id = @jobId OUTPUT
	--select @jobId
	
	EXEC msdb.dbo.sp_add_jobserver @job_name=N''UPDATE_STATS_' + @schema_name + '_' + @table_name + '_' + st.name + ''', @server_name = N''' + @@SERVERNAME + '''
	
	EXEC msdb.dbo.sp_add_jobstep @job_name=N''UPDATE_STATS_' + @schema_name + '_' + @table_name + '_' + st.name + ''', @step_name=N''UPDATE STATS'', 
		@step_id=1, 
		@cmdexec_success_code=0, 
		@on_success_action=1, 
		@on_fail_action=2, 
		@retry_attempts=0, 
		@retry_interval=0, 
		@os_run_priority=0, @subsystem=N''TSQL'', 
		@command=N''UPDATE STATISTICS ' + QUOTENAME(s.name) + '.' + QUOTENAME(t.name) + ' ' + st.name + CASE COALESCE(sample_stat, @stat_sample, 0)
																											WHEN 0 THEN ' '
																											WHEN 100 THEN ' WITH FULLSCAN'
																											ELSE ' WITH SAMPLE ' + CAST(COALESCE(sample_stat, @stat_sample, 0) AS VARCHAR(15)) + ' PERCENT'
																										END + ''', 
		@database_name=N''' + @database_name + ''', 
		@flags=0

	EXEC msdb.dbo.sp_update_job @job_name=N''UPDATE_STATS_' + @schema_name + '_' + @table_name + '_' + st.name + ''', 
			@enabled=1, 
			@start_step_id=1, 
			@notify_level_eventlog=0, 
			@notify_level_email=2, 
			@notify_level_netsend=2, 
			@notify_level_page=2, 
			@delete_level=0, 
			@description=N'''', 
			@category_name=N''[Uncategorized (Local)]'', 
			@owner_login_name=N''sa'', 
			@notify_email_operator_name=N'''', 
			@notify_netsend_operator_name=N'''', 
			@notify_page_operator_name=N''''
	' AS upd_stats_cmd
INTO  #update_stats_tt
FROM 
	sys.stats AS st
JOIN 
	sys.tables AS t ON st.object_id = t.object_id
JOIN
	sys.schemas AS s ON s.schema_id = t.schema_id
LEFT JOIN 
	@stats_sample AS ss ON ss.stats_name = st.name
WHERE 
	t.name =  @table_name
	AND s.name = @schema_name
ORDER BY 
	stats_id;

IF @debug = 99
	SELECT * FROM #update_stats_tt;

-- Enable trace flag 7471 to allow concurrent UPDATE STATISTICS on the same table
PRINT '--> Enable trace flag 7471 during update stats operation';

SET @sqlcmd = N'DBCC TRACEON(7471, -1);' + CHAR(13)
EXEC sp_executesql @sqlcmd;

PRINT '-----------------------------------';

SET @start_date = CURRENT_TIMESTAMP;

SET @max = (SELECT MAX(id) FROM #update_stats_tt);
SET @i = 0;

IF @debug = 99
	SELECT 'DEBUG -->  @max (nb stats) : ' + CAST(@max AS VARCHAR(15));

-- Entering to the loop ...
WHILE (@i <= @max OR @nb_jobs_running <> 0)
BEGIN

	SET @j = @i + 1;

	IF @debug = 99
	BEGIN
		SELECT 'DEBUG -->  @i : ' + CAST(@i AS VARCHAR(15));
		SELECT 'DEBUG --  @j = @i + 1 : ' + CAST(@j AS VARCHAR(15));
	END

	-- Computing number of update stats jobs to create
	-- regarding both the configured max of simultaneous jobs and the currently running jobs
	SET @count = @batch_upd_stats - @nb_jobs_running;

	IF @debug = 99
		SELECT 'DEBUG --  @count : ' + CAST(@count AS VARCHAR(15));

	-- Here we go ... creating update stats sql_jobs
	WHILE (@j <= @i + @count)
	BEGIN

		SET @sqlcmd = '';
		SET @stats_name = NULL;
		SET @sample_stat = NULL;
		SET @sqlcmd = NULL;

		SELECT 
			@stats_name = stats_name,
			@sample_stat = sample_stat,
			@sqlcmd = upd_stats_cmd + CHAR(13) + '---------------------' + CHAR(13)
		FROM 
			#update_stats_tt
		WHERE 
			id = @j;

		IF @debug = 99
		BEGIN
			SELECT 'DEBUG --  @j loop : ' + CAST(@j AS VARCHAR(15));
			SELECT @stats_name, @sample_stat
		END

		IF @debug = 1
			PRINT 'UPDATE STATISTICS ' + QUOTENAME(@schema_name) + '.' + QUOTENAME(@table_name) + ' ' + @stats_name + CASE COALESCE(@sample_stat, @stat_sample, 0)
																															WHEN 0 THEN ' '
																															WHEN 100 THEN ' WITH FULLSCAN'
																															ELSE ' WITH SAMPLE ' + CAST(COALESCE(@sample_stat, @stat_sample, 0) AS VARCHAR(15)) + ' PERCENT'
																														  END + '';
		IF @debug IN (0,1) 
		BEGIN
			PRINT '--> Create SQL job UPDATE_STATS_' + @schema_name + '_' + @table_name + '_' + @stats_name + '';
			
			EXEC sp_executesql @sqlcmd;
		END

		SET @j += 1;

	END

	-- We need to rewind by 1 to target the next stat to update
	SET @j -= 1;

	PRINT '-----------------------------------';

	-- Start all related update stats jobs 
	SET @sqlcmd = N'';

	SELECT @sqlcmd += 'EXEC msdb.dbo.sp_start_job @job_name = ''UPDATE_STATS_' + @schema_name + '_' + @table_name + '_' + stats_name + ''';' + CHAR(13)
	FROM #update_stats_tt
	WHERE id BETWEEN (@i + 1) AND (@i + @count);

	IF @debug = 1
		PRINT @sqlcmd;
	
	IF @debug IN (0,1)
	BEGIN
		PRINT '--> Starting UPDATE_STATS_' + @schema_name + '_' + @table_name + ' jobs';
		EXEC sp_executesql @sqlcmd;
		PRINT '-----------------------------------';
	END

	-- Waiting 10 seconds before checking running jobs
	WAITFOR DELAY '00:00:10';

	-- Construct the job name pattern for the next steps - check running jobs and remove terminated jobs
	SET @job_name_pattern = 'UPDATE_STATS_' + @schema_name + '_' + @table_name + '_';
	
	IF @debug = 99
		SELECT 'DEBUG - @job_name_pattern = ' + @job_name_pattern

	SELECT 
		@nb_jobs_running = COUNT(*)
	FROM 
		msdb.dbo.sysjobactivity ja (NOLOCK)
	LEFT JOIN 
		msdb.dbo.sysjobhistory jh (NOLOCK) ON ja.job_history_id = jh.instance_id
	JOIN 
		msdb.dbo.sysjobs j  (NOLOCK) ON ja.job_id = j.job_id 
	JOIN 
		msdb.dbo.sysjobsteps js  (NOLOCK) ON ja.job_id = js.job_id AND ISNULL(ja.last_executed_step_id,0)+1 = js.step_id
	WHERE 
		ja.session_id = (
							SELECT TOP 1 session_id 
							FROM msdb.dbo.syssessions  (NOLOCK) 
							ORDER BY agent_start_date DESC)
		AND start_execution_date is not null
		AND stop_execution_date is NULL
		AND j.name LIKE @job_name_pattern + '%';

	IF @debug = 99
		SELECT 'DEBUG --  @nb_jobs_running : ' + CAST(@nb_jobs_running AS VARCHAR(15));
	
	IF @nb_jobs_running = @batch_upd_stats
		PRINT '--> All SQL jobs are running. Waiting for 5s ...';

	-- Waiting until at least one job is terminated ...
	WHILE (@nb_jobs_running = @batch_upd_stats)
	BEGIN

		-- Count nb of running jobs only
		SELECT 
			@nb_jobs_running = COUNT(*)
		FROM 
			msdb.dbo.sysjobactivity ja (NOLOCK)
		LEFT JOIN 
			msdb.dbo.sysjobhistory jh (NOLOCK) ON ja.job_history_id = jh.instance_id
		JOIN 
			msdb.dbo.sysjobs j  (NOLOCK) ON ja.job_id = j.job_id 
		JOIN 
			msdb.dbo.sysjobsteps js  (NOLOCK) ON ja.job_id = js.job_id AND ISNULL(ja.last_executed_step_id,0)+1 = js.step_id
		WHERE ja.session_id = (
								SELECT TOP 1 session_id 
								FROM msdb.dbo.syssessions  (NOLOCK) 
								ORDER BY agent_start_date DESC)
			AND start_execution_date is not null
			AND stop_execution_date is NULL
			AND j.name LIKE @job_name_pattern + '%'

		WAITFOR DELAY '00:00:05';

	END

	PRINT '-----------------------------------';

	-- Delete terminated SQL jobs 
	SET @sqlcmd = '';

	SELECT 
		@sqlcmd += 'EXEC msdb.dbo.sp_delete_job  @job_name = ''' + j.name + ''';' + CHAR(13)
	FROM 
		msdb.dbo.sysjobactivity ja (NOLOCK)
	LEFT JOIN 
		msdb.dbo.sysjobhistory jh (NOLOCK) ON ja.job_history_id = jh.instance_id
	JOIN 
		msdb.dbo.sysjobs j  (NOLOCK) ON ja.job_id = j.job_id 
	WHERE 
		j.name LIKE @job_name_pattern + '%'
		AND start_execution_date is not null
		AND stop_execution_date is not null

	IF @debug = 1
		PRINT @sqlcmd;

	IF @debug IN (0,1)
	BEGIN
		PRINT '--> Removing terminated UPDATE_STATS_' + @schema_name + '_' + @table_name + '_XXX jobs'
		EXEC sp_executesql @sqlcmd;
		PRINT '-----------------------------------';
	END

	SET @i = @j;

	IF @debug = 99
		SELECT 'DEBUG --  @i = @j : ' + CAST(@j AS VARCHAR(15));
	
END

-- Disable trace flag 7471 (only enabled for the update stats maintenance)
PRINT '-----------------------------------';
PRINT '--> Disable trace flag 7471';
SET @sqlcmd = N'DBCC TRACEOFF(7471, -1);' + CHAR(13)
EXEC sp_executesql @sqlcmd;

SET @end_date = CURRENT_TIMESTAMP;

-- Display execution time in seconds
SELECT DATEDIFF(SECOND, @start_date, @end_date) AS duration_S;

 

We initially ran the test on a QA environment with 4 vCPUs and 22GB of RAM – pretty close to the production environment. We noticed that when we increased the number of parallel jobs beyond 3, we encountered RESOURCE_SEMAPHORE waits. This is because of the memory grants required for each update statistics command. Unfortunately, there was no way here to increase the amount of memory to push the limit further, but we still noticed an improvement factor of 1.5 on average (still with a sample of 100 percent).
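
If you want to confirm that the parallel jobs are queuing on memory grants, a quick look at the memory grant DMV while the jobs are running is enough (nothing specific to this script, just a standard check):

SELECT session_id, requested_memory_kb, granted_memory_kb, wait_time_ms
FROM sys.dm_exec_query_memory_grants
WHERE granted_memory_kb IS NULL;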

Figure: update statistics execution times with parallel jobs

At this point I asked myself whether we could rely only on the storage performance to update statistics. After all, we managed to reduce the execution time below the maximum maintenance window timeframe – fixed at 8 hours in our context.

Analyzing the data distribution further

Relying on the storage performance and the new SQL Server capabilities was a good thing for us, but I kept in mind that updating 94 statistics was probably not a smart idea, because I was convinced that a big part of them was pretty useless. There is no easy way to verify it, because we had a mix of stored procedures and ad-hoc statements from different applications that refer to this database (let’s say we also have super users who run queries directly from SSMS). So I decided to put the question differently: if we may not remove some auto-generated statistics, do we necessarily have to update all of them with FULLSCAN for this specific table? What about the data distribution of the columns involved in auto-generated statistics? In the context of dbo.bigTable, and given the number of rows, we may easily run into cardinality estimation issues if the data distribution is not correctly represented, especially in case of skewed data. Nevertheless, manually analyzing the histogram steps of each statistic may be cumbersome, and we decided to go through the stored procedures provided by Kimberly Tripp here. However, the version we got did not support analyzing columns not involved in an index, as mentioned below:

-- Considering for v2
--	, @slowanalyze	char(5) = 'FALSE'		
-- No index that exists with this column as the 
-- high-order element...analyze anyway. 
-- NOTE: This might require MULTIPLE table scans. 
-- I would NOT RECOMMEND THIS. I'm not even sure why I'm allowing this...

 

We had to update a little bit of code to fix it but don’t get me wrong, Kimberly’s recommendation still makes sense because in the background SQL Server orders the data of the concerned column to get a picture of the data distribution. Without an index on that column, analyzing the data distribution can be a very time- and resource-consuming task (including tempdb usage for sorting data), especially when the table becomes big, as illustrated by the following sample of code executed by SQL Server while updating statistics.

SELECT StatMan([SC0]) 
FROM 
(
	SELECT TOP 100 PERCENT [TransactionID] AS [SC0] 
	FROM [dbo].[bigTransactionHistory] WITH (READUNCOMMITTED)  
	ORDER BY [SC0] 
) AS _MS_UPDSTATS_TBL  OPTION (MAXDOP 16)

This was the case for my dbo.bigTable, and I had to cut the operation into small pieces spread over several maintenance windows.

So we added to the sp_SQLskills_AnalyzeColumnSkew stored procedure the creation of a temporary index on the concerned column before analyzing the data, roughly as sketched below. This compromise saved us a lot of time and was validated on the QA environment.
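A minimal sketch of the idea; the index name, the Col1 column and the parameter names other than @difference and @numofsteps are only placeholders here, so check the actual signature of the SQLskills procedure before reusing it:

-- Temporary supporting index so the skew analysis does not have to sort the whole table
CREATE NONCLUSTERED INDEX idx_tmp_skew_Col1 ON dbo.bigTable (Col1);

-- Run the skew analysis for that column (parameter names are placeholders)
EXEC sp_SQLskills_AnalyzeColumnSkew
    @tablename  = N'dbo.bigTable',
    @columnname = N'Col1',
    @difference = 10000,
    @numofsteps = 5;

-- Drop the temporary index afterwards
DROP INDEX idx_tmp_skew_Col1 ON dbo.bigTable;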

We tried different combinations of parameters and finally settled on the following ones (@difference = 10000, @numofsteps = 5), based on our history of query performance and cardinality estimation issues on this specific table. We got an interesting output, as shown below:

[Figure: column skew analysis output]

The number of columns with skewed data is low compared to those with uniformly distributed data. A [Partial skew] value means differences were detected only on the [Nb steps (factor)] criterion, which is 2.5 by default.

That was pretty good news because for columns with no skewed data we could consider updating them either with the default sampling rate used by SQL Server (a nonlinear algorithm under the control of the optimizer) or with a custom sampling rate value to make sure we are not scanning too much data. In my context, mixing parallel jobs and the default sampling rate for column statistics with no skewed data seems good enough (no query plan regression at this moment), but we will probably have to change that in the future. Anyway, we managed to reduce the execution time to one hour as shown below:

[Figure: update stats execution time]
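In practice this boils down to statements along these lines; the statistic names below are made up, just to illustrate the mix of sampling options:

-- Column statistic with no skewed data: let SQL Server pick its default sampling rate
UPDATE STATISTICS dbo.bigTable (_WA_Sys_00000005_0519C6AF);

-- ... or cap the amount of data read explicitly
UPDATE STATISTICS dbo.bigTable (_WA_Sys_00000005_0519C6AF) WITH SAMPLE 25 PERCENT;

-- Column statistic with skewed data: keep the full scan
UPDATE STATISTICS dbo.bigTable (_WA_Sys_00000008_0519C6AF) WITH FULLSCAN;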

For columns with skewed data we are still keeping the FULLSCAN option, and we plan to investigate filtered statistics to further enhance cardinality estimation in the future.

The bottom line of this story is that I would probably never have gone through all the aforementioned options if the customer had accepted my first proposal (who knows?). Technically and humanly speaking it was a good learning experience. Obviously, you might think it was not the best or the simplest approach, and you would be right. Indeed, there are drawbacks here, such as the overhead and complexity of writing custom scripts and maintaining individual statistics over time. But from my experience in the consulting world, things are rarely black or white, and we have to work with plenty of variations in each customer’s context to achieve not necessarily what we consider the best, but a satisfactory outcome for the business.

Happy new year 2018!

The post Experiencing update statistics on a big table with circuitous ways appeared first on Blog dbi services.

Oracle Database 18c is NOT an Autonomous Database!

Tim Hall - Wed, 2018-01-03 04:01

RANT WARNING!

From the get-go Oracle has been talking about the Autonomous Database as a cloud service based on Oracle Database 18c, but I can’t remember them once saying Oracle Database 18c is an Autonomous Database. There is a reason for that. It’s because it isn’t. It’s not meant to be. It’s basically a big patchset on what we already have. We currently have 12.2.0.1 and Oracle 18c is 12.2.0.2, with a different name because of the new yearly release cycle.

I am not disappointed by this, but I am disappointed at how lazy the Oracle blogosphere has been in reporting this. I keep reading posts where people mention that 18c is autonomous and how 18c will be the death of DBAs. Come on people, we can do better than this. I know what’s going to happen. Oracle 18c will be released and when it doesn’t contain all the autonomous goodness people will lose their minds and say, “I told you it wouldn’t work!”

Let me say this loud and clear!

  • Autonomous Database : It’s a cloud service that happens to use the Oracle 18c database.
  • Oracle Database 18c : It’s not autonomous!

This is not the first time I’ve said this. I mentioned it here when I wrote about the Autonomous Database announcement. I even mentioned using the cloud service here when I did the hands-on lab at OOW17.

Can we please have a little integrity and stop this junk reporting, or have we entered the post-truth, fake-news world too? If you’ve written a blog post describing Oracle 18c as autonomous, please go back and correct it because you are making yourself look foolish and misleading people.

Sorry for the rant, but I felt it had to be said!

Cheers

Tim…

Update: At least one person seems not to have understood this post. I am not saying the Autonomous Database Cloud Service isn’t autonomous. I’ve used it briefly and from what I can see it seems pretty cool. This post is totally about the misreporting of the Oracle Database 18c product by bloggers (and some news outlets), which is not and doesn’t claim to be autonomous.

Update 2: So now I’m getting people asking me if I’m sure about this. As I said, people are being misled by rubbish reporting. Come on people, fix your stuff! If you don’t believe me, read what Maria Colgan said when she retweeted my post here.

Update 3: Some extra bullet points above to make things even more explicit.

  • If you buy an Autonomous Database Cloud Service on Oracle Public Cloud or Cloud@Customer, that’s a service that specifically includes the words “Autonomous Database” in the name of the service, you are getting an Autonomous Database.
  • If you buy regular 18c DBaaS on Oracle Public Cloud or Cloud@Customer you are not getting an Autonomous Database.
  • If you install 18c yourself on any cloud provider, including Oracle Public Cloud or Cloud@Customer, you are not getting an Autonomous Database.
  • If you install 18c yourself on-prem you are not getting an Autonomous Database.
Oracle Database 18c is NOT an Autonomous Database! was first posted on January 3, 2018 at 11:01 am.

DBMCLI Utility

Syed Jaffar - Wed, 2018-01-03 03:45
Exadata comes with two types of servers, cell (storage) and compute (db). It's important for Database Machine Administrators (DMAs) to manage and configure servers for various purposes. For cell server administration and server settings, we use the cellcli utility. With X6 and higher, the DBMCLI utility can be used to administer and configure server settings on compute nodes.

The DBMCLI utility is the command-line administration tool for configuring database servers and managing the server environment. Like cellcli, DBMCLI runs on each compute server to let you configure an individual database server. DBMCLI can be used to start and stop the server, to manage server configuration information, and to enable or disable the server. The utility is preconfigured when Oracle Exadata Database Machine is shipped.

Note: If you enabled Capacity on Demand (CoD) during the Exadata implementation, you can use this utility to increase the CPU count on demand.

To invoke the utility from the $ prompt, just execute the dbmcli command:

$ dbmcli [port_number] [-n] [-m] [-xml] [-v | -vv | -vvv] [-x] [-e command]

With the dbmcli utility, you can retrieve server configuration details and metric information, configure SMTP server details, start/stop the server, etc.




For more details, read the Oracle documentation:

https://docs.oracle.com/cd/E80920_01/DBMMN/exadata-dbmcli.htm#DBMMN22063

 

Oracle MOOC: Introduction to NodeJS Using Oracle Cloud (2018)

Oracle Application Container Cloud Service lets you deploy Java SE, Node.js, PHP, Python, Ruby and Go applications to the Oracle Cloud. You can also Java EE web...

We share our skills to maximize your revenue!
Categories: DBA Blogs

No More Content Chaos – Fishbowl Helps Manufacturer Integrate Oracle WebCenter with Agility for Enterprise Asset Management and Multichannel Delivery

This post is a recap of a recent Oracle WebCenter Content digital asset management project we completed for a manufacturing customer. They have thousands of products, and each has its own set of images, illustrations, videos, and manuals. Over 20,000 different assets like these support multiple processes and are distributed across a variety of mediums.

The Business Problem: Disparate Content Storage

The company was storing some of its assets within Oracle Universal Content Management (UCM). Others, however, were stored in employee laptops, file servers, and various other locations. The assets were also stored with multiple naming conventions. These factors made it difficult to find relevant assets for products and repurpose them across channels and teams, as well as collaborate on the assets internally or with partners. Employees involved in asset creation and management described it as “content chaos”.

Illustration 1: Content Creation Lifecycle
The company’s view on how content will be created and consumed. This includes the various audiences for the content illustrating the content types necessary (electronic and print) – technical specifications, price and availability, manuals, etc.

This disparate content storage architecture and inconsistent file naming also meant that there was limited visibility into when and where purchased assets like image libraries were being used. This made it difficult to share such investments throughout the company.

Selecting Oracle WebCenter Content for Asset Storage and Management

With these issues identified, the company set out to develop a strategy to implement a permissions-based, centralized document and digital asset management system.  They had been using Oracle UCM for some time, but their initial implementation was not architected in a way to best suit the needs to search, access, use, and share assets across the company.

The company had also recently deployed the Agility product information management (PIM) system. Agility is used to manage the full lifecycle of the company’s products, including where and how they can be marketed and sold, as well as what they should cost. The company wanted to integrate Agility with Oracle UCM, so that they could relate products in Agility with associated assets in UCM. This would make it easier to support commerce channels, including the company’s website, print catalogs, and the future desire to enable the sales team to access the company’s full product line “binder” from their mobile devices. The company needed the content management system to be able to output multiple rendition types including videos and PDFs.

For these reasons, the company decided to upgrade Oracle UCM to WebCenter Content 12c. An added benefit of 12c was that it provided the ability to integrate with third-party audio and video transcoding systems. This enabled the company to leverage FFmpeg software, which records, converts, and streams audio and video data. This video rendition engine is an open source product and therefore free, which helped keep implementation costs down as no additional licenses needed to be purchased.

The company partnered with Fishbowl Solutions to perform the upgrade to WebCenter 12c and integrate the product with Agility. Fishbowl had engaged with this company previously to scope out the future use of Oracle UCM (WebCenter), as well as design how its digital asset management component could be architected to achieve the following:

  • Simplified Image Access and Control
    • Provide an easy-to-use system that minimizes non-value add tasks. Users should have simple input mechanisms, assets should be transformed transparently to the user, and end up distributed to the right channel based on product name and metadata values.
  • Simple Scalable Solution for More Effective Collateral Control Across All Brands
    • Simplified solution architecture that is scalable to the needs of the company’s brands and shared functions and clarifies/enforces the defined “fit for purpose” solutions.
  • Image and Referential Content are Appropriately Managed throughout the Lifecycle
    • Knowledge workers want a solution that manages data according to organizational policy and standards within the systems they use on a day-to-day basis and does not require significant additional effort.
Oracle WebCenter Content 12c Implementation and Agility Integration

The company created a “Portfolio Initiative Roadmap” identifying these 9 key initiatives required to build out their Enterprise Digital Asset Management Solution and Program:

  1. Establish Foundation
  2. Marketing and Creative Services Enablement
  3. Sales Enablement
  4. Training and Certification Enablement
  5. Engineering Enablement
  6. Archive and Access AP Images To/From Oracle WebCenter Content
  7. Creative Services Enablement
  8. Ecommerce Enablement
  9. Evolve Foundation

Fishbowl worked with the company to deliver roadmap item number one: Establish Foundation. With this initiative Fishbowl needed to first upgrade Oracle UCM to Oracle WebCenter 12c. This included migrating the creative assets into the new instance. In parallel with the content migration and upgrade, the Fishbowl team designed an integration between Agility and Oracle WebCenter. This integration would enable Agility’s product taxonomy and data model to be associated with WebCenter’s metadata fields. This essentially would enable metadata between the systems to be shared and synced, making it easier to standardize on how assets were named.

The Fishbowl team did an outstanding job researching, planning, troubleshooting and creating the migration schedule for the Oracle Universal Content Management to WebCenter Content 12c upgrade. We did encounter one issue the night of the release, but the team developed a resolution and was ready to attempt the upgrade within 3 days.  I had the utmost confidence their plan was solid and we would attempt the upgrade mid-week.  The next attempt went very smoothly and users were in WebCenter Content the next morning.

Manager, Digital Strategy, Manufacturing Company

This integration would also provide renditions of the assets stored in Oracle WebCenter to display in Agility. For example, photographs and images of the company’s products are rendered via Oracle WebCenter’s digital asset management component to produce various formats – high and low resolution, 1200 x 1200, etc. Since Agility would be used by many departments at the company, including marketing, creative services, sales, and engineering; it was important that various formats of the assets could be easily found. This would help accelerate the execution of campaigns through web and email channels, as well as when print-ready images were needed to create or update product installation manuals and catalogs that were stored as PDFs. Additionally, associating the assets with the products in Agility would enable them to be paired so that they could be part of the product’s lifecycle – when products were discontinued, associated assets could be removed from the system. The following graphic illustrates the Oracle WebCenter and Agility integration:

Illustration 2: WebCenter Content and Agility Integration

Fishbowl designed a filter to capture the IDs of content items checked into the PIM Asset profile and add them to the Send To PIM table. A system event was then added in WebCenter which runs about every 5 minutes; it checks the assets in the Send To PIM table and adds them to the PIM table. Once an asset is added to the PIM table, an API call is made to the PIM system to pull the assets from that table and add them to the appropriate product with any associated relations (i.e. the high-resolution image for a specific product). After the asset is added into the PIM system, an API call is made back to WebCenter with the updated path of the asset.

Results

This company first invested in Oracle WebCenter about five years ago. Although the system was being used to store some technical and business documents, the feeling amongst senior leadership was that the return on that investment was minimal. It hadn’t reached widespread adoption, and it was viewed as a system that wasn’t streamlining business processes and therefore wasn’t saving the company money.

This company, like most WebCenter Content customers, wasn’t fully aware of all that the system had to offer. Oracle WebCenter Content’s capabilities had to be reimagined, and in this case it was the Manager of Digital Strategy at the company that found an ideal use case. He was aware of the “content chaos” that the marketing and creative services teams were experiencing, which was part of the reason campaigns for specific channels took a long time to launch, as well as how the assets for such campaigns couldn’t be repurposed easily.

With the implementation of Oracle WebCenter Content 12c, and its integration with Agility, that has all changed. WebCenter has been identified as the system to store and manage assets, and as those assets are checked in they are automatically linked to products in Agility. This means that employees across divisions and geographic locations can easily locate the acceptable product assets for use in marketing, sales, engineering, training, or installation purposes. Outdated assets can easily be removed from the system, either manually or based on a content lifecycle schedule. Furthermore, having a standardized repository for the assets will better ensure they are reused across divisions and geographic locations, and no longer does the unknown location of assets impede internal or external collaboration. Additionally, the open-source FFmpeg video rendition engine didn’t require the purchase of additional licenses, reducing future license and maintenance costs. Overall, the WebCenter and Agility-based digital asset management system has provided the foundation to effectively and efficiently control and deliver assets to the company’s targeted commerce channels.

The Fishbowl consultants on this project have all proved to be a very valuable part of my digital content strategy. I am very happy with the level of support I have received and amazed by how quickly they are able to provide conceptual ideas, working POC’s and final deliverables. They have been instrumental in developing and implementing the integration between Oracle WebCenter’s digital asset management component and our new Agility PIM application. I view them as members of my team, and they were all key in the successful implementation of our digital content management system.

Manager, Digital Strategy, Manufacturing Company


The post No More Content Chaos – Fishbowl Helps Manufacturer Integrate Oracle WebCenter with Agility for Enterprise Asset Management and Multichannel Delivery appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

Using an OpenLayers map to select countries in an Oracle JET application

Amis Blog - Tue, 2018-01-02 07:03

In a recent article I discussed how the open source geo, GIS and map library OpenLayers can be used in an Oracle JET application. That article shows how countries selected in a standard Oracle JET Checkbox Set component are represented on a world map. In this article, we take this one step further – or actually several steps. By adding interactivity to the map, we allow users to select a country on the world map and we notify JET components of this selection. The synchronization works both ways: if the user types the name of a country in a JET input text component, this country is highlighted on the world map. Note that the map has full zooming and panning capabilities.

[Animated GIF: selecting a country on the map, synchronized with the JET input-text component]

 

The Gif demonstrates how the map is first shown with France highlighted. Next, the user types Chile into the input-text component and when that change is complete, the map is synchronized: Chile is highlighted. The user then hovers over Morocco – and the informational DIV element shows that fact. No selection is made yet. Then the user mouse clicks on Morocco. The country is selected on the map and the JET input-text component is synchronized with the name of the country. Subsequently, the same is done with India: hover and then select. Note: the map can easily support multi-country selection; I had to turn off that option explicitly (default behavior is to allow it).

The challenges I faced when implementing this functionality:

  • add vector layer with countries (features)
  • highlight country when mouse is hovering over it
  • add select interaction to allow user to select a country
  • communicate country selection event in map to “regular” JET component
  • synchronize map with country name typed into JET component

The steps (assuming that the steps in this article are performed first):

  1. Create mapArea.html with input-text, informational DIV and map container (DIV)
  2. Create mapArea.js for the mapArea module
  3. Add a div with data bind for mapArea module to index.html
  4. Download a file in GEOJSON format with annotated geo-json geometries for the countries of the world
  5. Initialize the map with two layers – raster OSM world map and vector country shapes
  6. Add overlay to highlight countries that are hovered over
  7. Add Select Interaction – to allow selection of a country – applying a bold style to the selected country
  8. Update JET component from country selection
  9. Set country selection on map based on value [change] in JET component

And here is the code used to implement this: https://github.com/lucasjellema/jet-and-openlayers .

 

Create mapArea.html with input-text, informational DIV and map container (DIV)
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/openlayers/4.6.4/ol-debug.css" />
<h2>Select Country on Map</h2>
<div id="componentDemoContent" style="width: 1px; min-width: 100%;">
    <div id="div1">

        <oj-label for="text-input">Country</oj-label>
        <oj-input-text id="text-input" value="{{selectedCountry}}" on-value-changed="[[countryChangedListener]]"></oj-input-text>
    </div>
</div>
<div id="info"></div>
<div id="map2" class="map"></div>
Create ViewModel mapArea.js for the mapArea module

 

define(
    ['ojs/ojcore', 'knockout', 'jquery', 'ol', 'ojs/ojknockout', 'ojs/ojinputtext', 'ojs/ojlabel'],
    function (oj, ko, $, ol) {
        'use strict';
        function MapAreaViewModel() {
            var self = this;

            self.selectedCountry = ko.observable("France");
            self.countryChangedListener = function(event) {
            }
...
      }
        return new MapAreaViewModel();
    }
);
Add a DIV with data bind for mapArea module to index.html
    ...</header>
    <div role="main" class="oj-web-applayout-max-width oj-web-applayout-content">
    <div data-bind="ojModule:'mapArea'" /> 
    </div>
    <footer class="oj-web-applayout-footer" role="contentinfo">
    ...
Download a file in GEOJSON format with annotated geo-json geometries for the countries of the world

I downloaded a GEOJSON file with country data from GitHub: https://github.com/johan/world.geo.json and placed the file in the directory src\js\viewModels of my JET application:

[Screenshot: countries.geo.json placed in src/js/viewModels]

 

Initialize the map with two layers – raster OSM world map and vector country shapes
        function MapAreaViewModel() {
            var self = this;

            self.selectedCountry = ko.observable("France");
            self.countryChangedListener = function(event) {
                // self.selectInteraction.getFeatures().clear();
                // self.setSelectedCountry(self.selectedCountry())                
            }


            $(document).ready
                (
                // when the document is fully loaded and the DOM has been initialized
                // then instantiate the map
                function () {
                    initMap();
                })


            function initMap() {
                var style = new ol.style.Style({
                    fill: new ol.style.Fill({
                        color: 'rgba(255, 255, 255, 0.6)'
                    }),
                    stroke: new ol.style.Stroke({
                        color: '#319FD3',
                        width: 1
                    }),
                    text: new ol.style.Text()
                });

                self.countriesVector = new ol.source.Vector({
                    url: 'js/viewModels/countries.geo.json',
                    format: new ol.format.GeoJSON()
                });
               self.map2 = new ol.Map({
                    layers: [
                        new ol.layer.Vector({
                            id: "countries",
                            renderMode: 'image',
                            source: self.countriesVector,
                            style: function (feature) {
                                style.getText().setText(feature.get('name'));
                                return style;
                            }
                        })
                        , new ol.layer.Tile({
                            id: "world",
                            source: new ol.source.OSM()
                        })
                    ],
                    target: 'map2',
                    view: new ol.View({
                        center: [0, 0],
                        zoom: 2
                    })
                });
         }//initMap

 

Add overlay to highlight countries that are hovered over

Note: this code is added to the initMap function:

                // layer to hold (and highlight) currently selected feature(s) 
                var featureOverlay = new ol.layer.Vector({
                    source: new ol.source.Vector(),
                    map: self.map2,
                    style: new ol.style.Style({
                        stroke: new ol.style.Stroke({
                            color: '#f00',
                            width: 1
                        }),
                        fill: new ol.style.Fill({
                            color: 'rgba(255,0,0,0.1)'
                        })
                    })
                });

                var highlight;
                var displayFeatureInfo = function (pixel) {

                    var feature = self.map2.forEachFeatureAtPixel(pixel, function (feature) {
                        return feature;
                    });

                    var info = document.getElementById('info');
                    if (feature) {
                        info.innerHTML = feature.getId() + ': ' + feature.get('name');
                    } else {
                        info.innerHTML = '&nbsp;';
                    }

                    if (feature !== highlight) {
                        if (highlight) {
                            featureOverlay.getSource().removeFeature(highlight);
                        }
                        if (feature) {
                            featureOverlay.getSource().addFeature(feature);
                        }
                        highlight = feature;
                    }

                };

                self.map2.on('pointermove', function (evt) {
                    if (evt.dragging) {
                        return;
                    }
                    var pixel = self.map2.getEventPixel(evt.originalEvent);
                    displayFeatureInfo(pixel);
                });

 

Add Select Interaction – to allow selection of a country – applying a bold style to the selected country

This code is based on this example: http://openlayers.org/en/latest/examples/select-features.html .

                // define the style to apply to selected countries
                var selectCountryStyle = new ol.style.Style({
                    stroke: new ol.style.Stroke({
                        color: '#ff0000',
                        width: 2
                    })
                    , fill: new ol.style.Fill({
                        color: 'red'
                    })
                });
                self.selectInteraction = new ol.interaction.Select({
                    condition: ol.events.condition.singleClick,
                    toggleCondition: ol.events.condition.shiftKeyOnly,
                    layers: function (layer) {
                        return layer.get('id') == 'countries';
                    },
                    style: selectCountryStyle

                });
                // add an event handler to the interaction
                self.selectInteraction.on('select', function (e) {
                    //to ensure only a single country can be selected at any given time
                    // find the most recently selected feature, clear the set of selected features and add the selected the feature (as the only one)
                    var f = self.selectInteraction.getFeatures()
                    var selectedFeature = f.getArray()[f.getLength() - 1]
                    self.selectInteraction.getFeatures().clear();
                    self.selectInteraction.getFeatures().push(selectedFeature);
                });

and just after the declaration of self.map2:

...
    self.map2.getInteractions().extend([self.selectInteraction]);

Update JET component from country selection

Add to the end of the select event handler of the selectInteraction:

                    var selectedCountry = { "code": selectedFeature.id_, "name": selectedFeature.values_.name };
                    // set name of selected country on Knock Out Observable
                   self.selectedCountry(selectedCountry.name);

Create a function to programmatically select a country on the map:

                self.setSelectedCountry = function (country) {
                    //programmatic selection of a feature
                    var countryFeatures = self.countriesVector.getFeatures();
                    var c = self.countriesVector.getFeatures().filter(function (feature) { return feature.values_.name == country });
                    self.selectInteraction.getFeatures().push(c[0]);
                }
Set country selection on map based on value [change]in JET component

Implement the self.countryChangedListener that is referred to by the input-text component in the mapArea.html file:

            self.countryChangedListener = function(event) {
                self.selectInteraction.getFeatures().clear();
                self.setSelectedCountry(self.selectedCountry())                
            }

Create the following listener (for the end of loading the GeoJSON data in the countriesVector); when loading is ready, the current country value in the selectedCountry observable backing the input-text component is used to select the initial country:

                var listenerKey = self.countriesVector.on('change', function (e) {
                    if (self.countriesVector.getState() == 'ready') {
                        console.log("loading done");
                        // and unregister the "change" listener 
                        ol.Observable.unByKey(listenerKey);
                        self.setSelectedCountry(self.selectedCountry())
                    }
                });

 

References

GitHub Repo with the code (JET Application) : https://github.com/lucasjellema/jet-and-openlayers .

Countries GeoJSON file – https://github.com/johan/world.geo.json

Open Layers Example of Select Interaction – http://openlayers.org/en/latest/examples/select-features.html

Open Layers API – Vector: http://openlayers.org/en/latest/apidoc/ol.source.Vector.html

Event Listener for OpenLayers Vector with GEOJSON source – https://gis.stackexchange.com/questions/123149/layer-loadstart-loadend-events-in-openlayers-3/123302#123302

Animated Gif maker – http://gifmaker.me/

OpenLayers 3 : Beginner’s Guideby Thomas Gratier; Erik Hazzard; Paul Spencer, Published by Packt Publishing, 2015

OpenLayers Book – Handling Selection Events –  http://openlayersbook.github.io/ch11-creating-web-map-apps/example-08.html

The post Using an OpenLayers map to select countries in an Oracle JET application appeared first on AMIS Oracle and Java Blog.

Wrong Cell Server name on X6-2 Elastic Rack - Bug 25317550

Syed Jaffar - Tue, 2018-01-02 05:55
Two X6-2 Elastic full-capacity Exadata systems were deployed recently. Due to the following bug, the cell names were not properly updated with the client-provided names after executing applyElasticConfig.sh.

Bug 25317550 : OEDA FAILS TO SET CELL NAME RESULTING IN GRID DISK NAMES NOT HAVING RIGHT SUFFIX

Though this doesn't impact operations, it will certainly create confusion when multiple Exadata systems are deployed in the same data center, because the default cell, cell disk and grid disk names are not unique across machines.

Note: It's highly recommended to validate the cell names after executing applyElasticConfig.sh and before running onecommand. If you encounter a similar problem, simply change the cell name with alter cell name=[correctname] and proceed with the onecommand execution to avoid the bug.

The default names look like this:

# dcli -g cell_group -l root 'cellcli -e list cell attributes name'
                        celadm1: ru02
                        celadm2: ru04
                        celadm3: ru06
                        celadm4: ru08
                        celadm5: ru10
                        celadm6: ru12



Changing the cell name to reflect the cell disk, grid disk names, you need to follow the below procedure:

The procedure below must be performed on each cell separately and sequentially (to avoid full downtime):

1) Change the cell name:
    cellcli> alter cell name=celadm5

 
2) Confirm griddisks can be taken offline.

    cellcli> list griddisk attributes name, ASMDeactivationOutcome, ASMModeStatus
            ASMDeactivationOutcome - Should be YES for all griddisks


3) Inactivate griddisk on that cell
    cellcli> alter griddisk all inactive

 
            Observation - if any votedisks are on this storage server, they will relocate to the surviving storage servers.


4) Change the cell disk names
            alter celldisk CD_00_ru10 name=CD_00_celadm5;
            alter celldisk CD_01_ru10 name=CD_01_celadm5;
            alter celldisk CD_02_ru10 name=CD_02_celadm5;
            alter celldisk CD_03_ru10 name=CD_03_celadm5;
            alter celldisk CD_04_ru10 name=CD_04_celadm5;
            alter celldisk CD_05_ru10 name=CD_05_celadm5;
            alter celldisk CD_06_ru10 name=CD_06_celadm5;
            alter celldisk CD_07_ru10 name=CD_07_celadm5;
            alter celldisk CD_08_ru10 name=CD_08_celadm5;
            alter celldisk CD_09_ru10 name=CD_09_celadm5;
            alter celldisk CD_10_ru10 name=CD_10_celadm5;
            alter celldisk CD_11_ru10 name=CD_11_celadm5;

5) Change the griddisk names using the examples below (do it for all grid disks: DATAC1, DBFS_DG and RECOC1)
            alter GRIDDISK DATAC1_CD_00_ru10  name=DATAC1_CD_00_celadm5;
            alter GRIDDISK DBFS_DG_CD_02_ru10 name=DBFS_DG_CD_02_celadm5;
            alter GRIDDISK RECOC1_CD_11_ru10  name=RECOC1_CD_11_celadm5;

6) Activate griddisk on that cell
            cellcli> ALTER GRIDDISK ALL ACTIVE;
       
        There are some important points to be noted after activating the griddisks.

      a) ASM disk path and name
       * The griddisk name change is automatically reflected in the ASM disk path.
       * The ASM disk (logical) name still refers to the old name.
      b) failgroup
       * The failgroup name is not changed yet and still uses the old name.

7) Changing the ASM logical name and failgroup name.

    * This can be achieved by dropping the ASM disk and adding it back with the correct name. The failgroup name gets changed automatically when we add
      the ASM disk back with the correct name.
    * ASMCA is the most convenient tool to drop and add back ASM disks with a 250+ rebalancing power limit.

    a) Drop the ASM disks and observations.
        * We need to make sure the ASM disks can be dropped:
             cellcli> list griddisk attributes name, ASMDeactivationOutcome, ASMModeStatus
                ASMDeactivationOutcome - should be YES for all griddisks
        * Drop the ASM disks using asmca or alter diskgroup


        We can see the ASM disk state changing to DROPPING and an ongoing rebalance operation.

        * ASM rebalance operation.
            We can watch the ongoing ASM rebalance operation using the commands below and change the power to finish it faster.

                sqlplus / as sysasm

                sql> select * from v$asm_operation;
                sql> alter diskgroup DATAC1 rebalance power 256;

        * Once the rebalance operation has completed, we can see the ASM disk state changed to NORMAL, the name becomes empty and the failgroup also shows the correct name.
       
    b) Add back the ASM disks and observations.

Adding the disks back can also be done using asmca or alter diskgroup commands, roughly as sketched below.
We need to make sure we add them back with the correct name; in this case DATAC1_CD_00_ru10 should be added back as DATAC1_CD_00_celadm5.

We can watch the ongoing ASM rebalance operation using the commands below and change the power to finish it faster.

        sqlplus / as sysasm
        sql> select * from v$asm_operation;
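A rough sketch of the drop/add for a single disk, using the DATAC1 diskgroup and the celadm5 names from above; the 'o/...' discovery path is environment specific and only shown as an example:

        sql> -- drop the ASM disk that still carries the old logical name
        sql> alter diskgroup DATAC1 drop disk DATAC1_CD_00_RU10 rebalance power 256;

        sql> -- monitor the rebalance until it completes
        sql> select group_number, operation, state, power, est_minutes from v$asm_operation;

        sql> -- add the grid disk back under its corrected name (adjust the o/<cell>/... path to your environment)
        sql> alter diskgroup DATAC1 add disk 'o/192.168.10.5/DATAC1_CD_00_celadm5' name DATAC1_CD_00_celadm5 rebalance power 256;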
   
8) Remaining cells

We can continue the same operation for the remaining cells, and the entire operation can be completed without any downtime at the database level.
Once we have completed it, we can see that all votedisks have also been relocated or renamed with the new names.


References:
Bug 25317550 : OEDA FAILS TO SET CELL NAME RESULTING IN GRID DISK NAMES NOT HAVING RIGHT SUFFIX

I appreciate and thank my team member Khalid Kizhakkethil for doing this wonderful job and preparing the documentation.


 



New OA Framework 12.2.5 Update 18 Now Available

Steven Chan - Tue, 2018-01-02 05:33

Web-based content in Oracle E-Business Suite Release 12 runs on the Oracle Application Framework (also known as OA Framework, OAF, or FWK) user interface libraries and infrastructure. Since the initial release of Oracle E-Business Suite Release 12.2 in 2013, we have released a number of cumulative updates to Oracle Application Framework to fix performance, security, and stability issues.

These updates are provided in cumulative Release Update Packs, and cumulative Bundle Patches that can be applied on top of the Release Update Packs. In this context, cumulative means that the latest RUP or Bundle Patch contains everything released earlier.

The latest OAF update for Oracle E-Business Suite Release 12.2.5 is now available:

Where is this update documented?

Instructions for installing this OAF Release Update Pack are in the following My Oracle Support knowledge document:

Who should apply this patch?

All Oracle E-Business Suite Release 12.2.5 users should apply this patch.  Future OAF patches for EBS Release 12.2.5 will require this patch as a prerequisite. 

What's new in this update?

This bundle patch is cumulative: it includes (TBS) fixes in total, including all fixes released in previous EBS Release 12.2.5 bundle patches.

This latest bundle patch includes fixes for following bugs/issues:

  • Tip type is not displayed in 'Screen Reader Optimized' accessibility mode.
  • Null Pointer Exception is thrown with a specific Configurator Developer flow that uses the query bean.

Related Articles

Categories: APPS Blogs

Quick SQL: from Packaged App to built-in feature in Oracle APEX 5.2

Dimitri Gielis - Tue, 2018-01-02 04:26
I've blogged about Quick SQL a few times already, as I noticed that not many developers knew about it.

In Oracle APEX 5.1 you can install Quick SQL by going to the Packaged Apps section and install it from there:



I really love Quick SQL as it allows me to build my data model very fast, but also shows me the structure of the tables in a very efficient way. That is why I created a script that can reverse engineer existing tables into the Quick SQL format.
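To give you an idea, here is a tiny example of my own (only an approximation, the exact DDL depends on your Quick SQL settings): the shorthand in the comment generates DDL along these lines.

-- Quick SQL shorthand typed in the left pane:
--   departments
--     name
--     employees
--       name
--       hiredate date
--
-- Generated DDL (approximately):
create table departments (
    id    number generated by default on null as identity
          constraint departments_id_pk primary key,
    name  varchar2(255)
);

create table employees (
    id             number generated by default on null as identity
                   constraint employees_id_pk primary key,
    department_id  number
                   constraint employees_department_id_fk
                   references departments on delete cascade,
    name           varchar2(255),
    hiredate       date
);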

From Oracle APEX 5.2 onwards you won't find Quick SQL in the packaged app section anymore... but no worries, it's not gone, it's now built into the APEX framework itself :)

Go to SQL Workshop - SQL Scripts:


Hit the Quick SQL button:


Here you have Quick SQL :)


You can run your script after you save, straight from this interface.

Note: the screenshots are taken from Oracle APEX 5.2 Early Adopter, so things might change in the final release of APEX 5.2.
Categories: Development

Defaults

Jonathan Lewis - Tue, 2018-01-02 02:43

Following on from a Twitter reference and an update to an old posting about a side effect of constraints on the work done inserting data, I decided to have a closer look at the more general picture of default values and inserts. Here’s a script that I’ve tested against 11.2.0.4, 12.1.0.2, and 12.2.0.1 (original install, no patches applied in all cases):


rem
rem     Script:         defaults_cost.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Dec 2017
rem

create table t1 (
        column1  varchar2(10),
        column2  varchar2(10),
        column3  varchar2(10),
        column4  varchar2(10),
        column32 varchar2(32)   default 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx',
        column33 varchar2(33)   default 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx',
        virtual1      generated always as (
                column4 || column3 || column2 || column1
        ) virtual
)
segment creation immediate
;

execute dbms_output.put_line(dbms_stats.create_extended_stats(user,'t1','(column1 || column2 || column3 || column4)'))
execute dbms_output.put_line(dbms_stats.create_extended_stats(user,'t1','(column1,column2,column3)'))

create or replace function plsqlfunction_with_30char_name(
        i_in varchar2
)
return varchar
deterministic
is
begin
        return initcap(i_in);
end;
/

create index t1_i1 on t1(substr(plsqlfunction_with_30char_name(column1),1,10));

When you create a function-based index you get a hidden, virtual column supporting the index expression; when you create extended stats (of either type) you get a hidden virtual column holding the extension definition; when you create any type of virtual column, including a “real” virtual column, you get a data dictionary entry holding the column name and the expression definition: all these options use the “data_default” column from user_tab_cols to display the defining information – as we can see when we run the following query:


select  column_name, data_default
from    user_tab_cols
where   table_name = 'T1'
order by
         column_id
;

COLUMN_NAME                      DATA_DEFAULT
-------------------------------- --------------------------------------------------------------------------------
COLUMN1
COLUMN2
COLUMN3
COLUMN4
COLUMN32                         'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
COLUMN33                         'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
VIRTUAL1                         "COLUMN4"||"COLUMN3"||"COLUMN2"||"COLUMN1"
SYS_STUDAN97VB_XDKRTR_NPFAB80P   "COLUMN1"||"COLUMN2"||"COLUMN3"||"COLUMN4"
SYS_STUTCBJ8OFFSY1D9ZXRYZ0W3W#   SYS_OP_COMBINED_HASH("COLUMN1","COLUMN2","COLUMN3")
SYS_NC00010$                     SUBSTR("TEST_USER"."PLSQLFUNCTION_WITH_30CHAR_NAME"("COLUMN1"),1,10)

Apart from the special cases I’ve just listed, you’ll also see the “default values” I specified for column32 and column33 – you’ll notice that I’ve supplied a 30 character string as the default for column32, and a 31 character string as the default for column33 – this is a convenience that means the used space in the data_default (which is a long column) corresponds to the name of the column once you include the single quotes in their character count.
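If you want to check the lengths Oracle has recorded for these defaults and expressions, something along these lines will do:

select  column_name, default_length
from    user_tab_cols
where   table_name = 'T1'
order by
         column_id
;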

Having set my data up I’m going to emulate a bad application that uses lots of literal string SQL and leaves Oracle to fill in the default values (and, of course, derive the various virtual values it might need).


alter session set events '10046 trace name context forever, level 4';

begin
        for i in 1..10 loop
                execute immediate '
                        insert into t1 (column1, column2, column3, column4)
                        values( ' || i || ', ' || i || ', ' || i || ', ' || i || ')'
                ;
                commit;
        end loop;
end;
/

alter session set events '10046 trace name context off';

This code generates 10 strings that populate column1 through to column4 only. But you’ll notice the call to enable SQL tracing – and here’s the interesting bit of the output from applying tkprof to the trace file:


  SQL ID: 47r1y8yn34jmj Plan Hash: 2191121161

select default$
from
 col$ where rowid=:1

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse       50      0.00       0.00          0          0          0           0
Execute     50      0.00       0.00          0          0          0           0
Fetch       50      0.00       0.00          0        100          0          50
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      150      0.00       0.00          0        100          0          50

The summary is the same for all three versions of Oracle I tested*** – we’ve queried for a default value from col$ 5 times for each row we insert! (Technically that’s 5 times for each insert statement we’ve had to (hard-)parse; this anomaly wouldn’t appear if we have been using a bind-variable method and reusing the insert statement.) *** There is one difference in 12.2 – the number of parse calls reported for this statement was 1 rather than 50 but, judging by the various cursor cache hit stats, that may be due to a change in accounting rather than a change in workload.

Check the table definition: there are two “real defaults” and 4 expressions due to the various virtual columns – so why 5 calls per insert and not 6? The answer lies in the length of the actual value involved – if the text that appears in the (long) data_default column is 32 characters or shorter it will be stored in the dictionary cache (rowcache), but only one of our 6 values is that short, so Oracle looks up the other five on each insert (hard parse).

This is a little strange on two counts: first, given the enormous memories that have been available for years and the extreme usefulness of virtual columns and extended stats it seems something of an oversight to limit the memory available to the cache that holds such critical definitions; secondly, we don’t need to evaluate the virtual columns (unless they are indexed) or extended stats on inserts so why is Oracle querying the definitions anyway ? [Possible answer: perhaps Oracle is using generic code that allows for column check constraints – which might exist on users’ virtual columns – and hasn’t catered for bypassing system-generated virtual columns.]

A key point to remember before you start worrying too much about the impact of the execution count for this query against col$ is that it’s only relevant to “hard” parses – so you’re only likely to notice it if you have a large number of different “literal string” inserts that should be using bind variables; and that means you’ve probably got an important coding defect to address before you worry too much about the extra impact caused by this particular call. Nevertheless there are a couple of bug reports on MoS that have been raised against this query and, after writing this note, I did a quick Google search for the critical SQL_ID and found (among others) this production example from Franck Pachot.
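For comparison, something along the following lines, binding the values instead of concatenating them, hard parses the insert just once, so the extra col$ lookups happen only on that single hard parse rather than on every insert:

declare
        v_stmt varchar2(200) :=
                'insert into t1 (column1, column2, column3, column4) values(:1, :2, :3, :4)';
begin
        for i in 1..10 loop
                execute immediate v_stmt using i, i, i, i;
                commit;
        end loop;
end;
/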

 


Embedding OpenLayers in Oracle JET for Advanced Maps and GIS style User Interfaces

Amis Blog - Mon, 2018-01-01 10:49

Oracle JET is a toolkit for the creation of rich web applications. Many applications will have location-related aspects. Such applications can benefit from advanced map capabilities – for presenting data in maps, allowing users to interact with maps in order to formulate queries, navigate to relevant details or manipulate data. OpenLayers is one of the most prominent open source JavaScript libraries for working with maps in web applications. It provides an API for building rich web-based geographic applications similar to Google Maps and Bing Maps. One of the geographic data providers that OpenLayers works well with is Open Street Map (OSM) – also fully open source.

In this article, I will report on my first steps with OpenLayers and OSM integrated in Oracle JET. In a few simple steps, I will create the JET application illustrated below – a mix of a JET Checkbox Set where countries can be selected and an OpenLayers map that is manipulated from JavaScript to show (and hide) markers for the countries that are selected (and deselected).

[Animated GIF: selecting countries via the checkboxes and showing/hiding markers on the map]

This article should provide you with a starting point for working with OpenLayers in JET yourself. Source code for this article can be downloaded from GitHub: https://github.com/lucasjellema/jet-and-openlayers .

Steps:

  • create new JET application (for example with JET CLI)
  • download OpenLayers distribution and add to JET application’s folders
  • configure the JET application for the OpenLayers library
  • add a DIV as map container to the HTML file
  • add the JavaScript code to initialize and manipulate the map to the ViewModel JS file

In more detail:

1. Create a new Oracle JET application

Follow for example the steps described on the Oracle JET Web Pages: http://www.oracle.com/webfolder/technetwork/jet/globalGetStarted.html 

Use

ojet create projectname

to create the new JET application

2. Download OpenLayers Distribution and Add to the JET Application

Download the OpenLayers distribution – a zip-file with the combined JavaScript and CSS files for OpenLayers (4.x) from https://github.com/openlayers/openlayers/releases/ 

In the JET application’s js/libs directory, create a new directory openlayers and add the new library and any accompanying files to it.

[Screenshot: the openlayers library added under js/libs]

3. Configure the JET application for the OpenLayers library

In the js directory update the js/main-release-paths.json file to include the new library.

  "ol": "libs/openlayers/ol-debug",
  "olcss": "libs/openlayers/ol.css"
}

In your RequireJS bootstrap file, typically main.js, add a link to the new file in the path mapping section and include the new library in the require() definition.

  paths:
  //injector:mainReleasePaths
  {
    ...
    'ol': 'libs/openlayers/ol-debug'
  }
  //endinjector

In the same file add a Shim Configuration for OpenLayers

// Shim configurations for modules that do not expose AMD
  shim:
  {
    'jquery':
    {
      exports: ['jQuery', '$']
    }
    ,'ol':
    {
      exports: ['ol']
    }
  }
}

Finally, add module ‘ol’ to the call to require ad as parameter in the callback function (if you want to perform special initialization on this module):

require(['ojs/ojcore', 'knockout', 'appController','ol', 'ojs/ojknockout', 'ojs/ojbutton', 'ojs/ojtoolbar', 'ojs/ojmenu','ojs/ojmodule'],
  function (oj, ko, app, ol) { // this callback gets executed when all required modules are loaded
    ...

Now to actually include  map in a View in the JET application:

4. Add a DIV as map container to the HTML file

The View contains a DIV that will act as the container for the map. It also contains a Checkbox Set with checkboxes for five different countries. The checkbox set is data bound to the ViewModel; any change in selection status will trigger an event listener. Additionally, the currentCountries variable in the ViewModel is updated with any change by the user.

<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/openlayers/4.6.4/ol-debug.css" />
<h2>Workarea with Map - Select Countries</h2>
<div id="div1">
        <oj-checkboxset id="countriesCheckboxSetId" labelled-by="mainlabelid" class="oj-choice-direction-row" value="{{currentCountries}}"
                on-value-changed="[[selectionListener]]">
                <oj-option id="uruopt" value="uy">Uruguay</oj-option>
                <oj-option id="romopt" value="ro">Romania</oj-option>
                <oj-option id="moropt" value="ma">Morocco</oj-option>
                <oj-option id="spaopt" value="es">Spain</oj-option>
                <oj-option id="indopt" value="in">India</oj-option>
        </oj-checkboxset>
        <br/>
</div>
<div id="map2" class="map"></div>

5. Add JavaScript code to initialize and manipulate the map to the ViewModel JS file

Add OpenLayers dependency in workArea.js:

define(
    ['ojs/ojcore', 'knockout', 'jquery', 'ol', 'ojs/ojknockout', 'ojs/ojinputtext', 'ojs/ojbutton', 'ojs/ojlabel', 'ojs/ojcheckboxset'],
    function (oj, ko, $, ol) {
        'use strict';
        function WorkAreaViewModel() {
            var self = this;

The following code defines a countryMap – a collection of five elements (one for each of five countries) that hold longitude and latitude for each country, as well as a display name and country code (also the key in the map). Subsequently, an OpenLayers feature is created for each country and referenced from the countryMap element for later use.

            self.currentCountries = ko.observableArray([]);

            self.countryMap = {};
            self.countryMap['in'] = { "place_id": "177729185", "licence": "Data © OpenStreetMap contributors, ODbL 1.0. http://www.openstreetmap.org/copyright", "osm_type": "relation", "osm_id": "304716", "boundingbox": ["6.5546079", "35.6745457", "68.1113787", "97.395561"], "lat": "22.3511148", "lon": "78.6677428", "display_name": "India", "class": "boundary", "type": "administrative", "importance": 0.3133568788165, "icon": "http://nominatim.openstreetmap.org/images/mapicons/poi_boundary_administrative.p.20.png", "address": { "country": "India", "country_code": "in" } };
            self.countryMap['es'] = { "place_id": "179962651", "licence": "Data © OpenStreetMap contributors, ODbL 1.0. http://www.openstreetmap.org/copyright", "osm_type": "relation", "osm_id": "1311341", "boundingbox": ["27.4335426", "43.9933088", "-18.3936845", "4.5918885"], "lat": "40.0028028", "lon": "-4.003104", "display_name": "Spain", "class": "boundary", "type": "administrative", "importance": 0.22447060272487, "icon": "http://nominatim.openstreetmap.org/images/mapicons/poi_boundary_administrative.p.20.png", "address": { "country": "Spain", "country_code": "es" } };
            self.countryMap['ma'] = { "place_id": "217466685", "licence": "Data © OpenStreetMap contributors, ODbL 1.0. http://www.openstreetmap.org/copyright", "osm_type": "relation", "osm_id": "3630439", "boundingbox": ["21.3365321", "36.0505269", "-17.2551456", "-0.998429"], "lat": "31.1728192", "lon": "-7.3366043", "display_name": "Morocco", "class": "boundary", "type": "administrative", "importance": 0.19300832455819, "icon": "http://nominatim.openstreetmap.org/images/mapicons/poi_boundary_administrative.p.20.png", "address": { "country": "Morocco", "country_code": "ma" } }
            self.countryMap['ro'] = { "place_id": "177563889", "licence": "Data © OpenStreetMap contributors, ODbL 1.0. http://www.openstreetmap.org/copyright", "osm_type": "relation", "osm_id": "90689", "boundingbox": ["43.618682", "48.2653964", "20.2619773", "30.0454257"], "lat": "45.9852129", "lon": "24.6859225", "display_name": "Romania", "class": "boundary", "type": "administrative", "importance": 0.30982735099944, "icon": "http://nominatim.openstreetmap.org/images/mapicons/poi_boundary_administrative.p.20.png", "address": { "country": "Romania", "country_code": "ro" } };
            self.countryMap['uy'] = { "place_id": "179428864", "licence": "Data © OpenStreetMap contributors, ODbL 1.0. http://www.openstreetmap.org/copyright", "osm_type": "relation", "osm_id": "287072", "boundingbox": ["-35.7824481", "-30.0853962", "-58.4948438", "-53.0755833"], "lat": "-32.8755548", "lon": "-56.0201525", "display_name": "Uruguay", "class": "boundary", "type": "administrative", "importance": 0.18848351906936, "icon": "http://nominatim.openstreetmap.org/images/mapicons/poi_boundary_administrative.p.20.png", "address": { "country": "Uruguay", "country_code": "uy" } };

            for (const c in self.countryMap) {
                // create a feature for each country in the map, using the entry keyed by c
                var coordinates = ol.proj.transform([1 * self.countryMap[c].lon, 1 * self.countryMap[c].lat], 'EPSG:4326', 'EPSG:3857');
                var featurething = new ol.Feature({
                    name: self.countryMap[c].display_name,
                    geometry: new ol.geom.Point(coordinates)
                });
                self.countryMap[c].feature = featurething;
            }

Then add the code to initialize the Map itself – to be performed when the DOM is ready and the DIV target container is available:

            $(document).ready(
                // when the document is fully loaded and the DOM has been initialized
                // then instantiate the map
                function () {
                    initMap();
                });

            function initMap() {
                self.elem = document.getElementById("text-input");
                self.map = new ol.Map({
                    target: 'map2',
                    layers: [
                        new ol.layer.Tile({
                            source: new ol.source.OSM()
                        })
                    ],
                    view: new ol.View({
                        center: ol.proj.fromLonLat([-2, -5]),
                        zoom: 3
                    })
                });
            }

Also add the code for the selectionListener to be executed whenever countries are selected or deselected.
This code adds OpenLayers features for each of the currently selected countries. Next, construct a layer which contains these features and has a specific style (red circle with big X) associated with it. Finally, add this layer to the map – to have the features displayed in the web page.

            // triggered whenever a checkbox is selected or deselected
            self.selectionListener = function (event) {
                console.log("Country Selection Changed");

                var vectorSource = new ol.source.Vector({}); // to hold features for currently selected countries
                for (var i = 0; i < self.currentCountries().length; i++) {
                    // add the feature to the map for each currently selected country
                    vectorSource.addFeature(self.countryMap[self.currentCountries()[i]].feature);
                }//for

                var layers = self.map.getLayers();
                // remove the feature layer from the map if it already was added
                if (layers.getLength() > 1) {
                    self.map.removeLayer(layers.item(1));
                }
                //Create and add the vector layer with features to the map
                // define the style to apply to these features: bright red, circle with radius 10 and a X as (text) content
                var vector_layer = new ol.layer.Vector({
                    source: vectorSource
                    ,style: function(feature) {
                        var style = new ol.style.Style({
                            image: new ol.style.Circle({
                              radius: 10,
                              stroke: new ol.style.Stroke({
                                color: '#fff'
                              }),
                              fill: new ol.style.Fill({
                                //color: '#3399CC' // light blue
                                color: 'red' // bright red
                              })
                            }),
                            text: new ol.style.Text({
                              text: "X",
                              fill: new ol.style.Fill({
                                color: '#fff'
                              })
                            })
                          });
                          return style;
                        }
                });
                self.map.addLayer(vector_layer);

            }//selectionListener
        }

References

Source code in GitHub Repo: https://github.com/lucasjellema/jet-and-openlayers 

Blog article by Enno Schulte (Virtual7) on adding Socket.io as a third-party library to a JET 3.x application: http://www.virtual7.de/blog/2017/07/oracle-jet-3-add-third-party-libraries-example-socket-io/

Documentation on adding 3rd party libraries to JET 4.0: https://docs.oracle.com/middleware/jet410/jet/developer/GUID-EC40DF3C-57FB-4919-A066-73E573D66B67.htm#JETDG-GUID-EC40DF3C-57FB-4919-A066-73E573D66B67 

OJET Docs Checkbox Set – http://www.oracle.com/webfolder/technetwork/jet/jsdocs/oj.ojCheckboxset.html

The post Embedding OpenLayers in Oracle JET for Advanced Maps and GIS style User Interfaces appeared first on AMIS Oracle and Java Blog.

Happy New Year 2018

Senthil Rajendran - Sun, 2017-12-31 21:46

Happy New Year 2018

Data Access Layer vs Table APIs

Andrew Clarke - Sun, 2017-12-31 11:59
One of the underlying benefits of PL/SQL APIs is the enabling of data governance. Table owners can shield their tables behind a layer of PL/SQL. Other users have no access to the tables directly but only through stored procedures. This confers many benefits:
  • Calling programs code against a programmatic interface. This frees the table owner to change the table's structure whenever it's necessary without affecting its consumers.
  • Likewise the calling programs get access to the data they need without having to know the details of the table structure, such as technical keys.
  • The table owner can use code to enforce complicated business rules when data is changed.
  • The table owner can enforce sophisticated data access policies (especially for applications using Standard Edition without DBMS_RLS).
So naturally the question arises, is this the same as Table APIs?

Table APIs used to be a popular approach to encapsulating tables. The typical Table API comprised two packages per table; one package provided methods for inserting, updating and deleting records, and the other package provided query methods. The big attraction of Table APIs was that they could be 100% generated from the data dictionary - both Oracle Designer and Steven Feuerstein's QNXO library provided TAPI generators. And they felt like good practice because, y'know, access to the tables was shielded by a PL/SQL layer.
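To make the pattern concrete, here is a minimal, hypothetical sketch of what such a generated pair of packages might have looked like for a classic EMP table (the EMP table and all names here are assumptions for illustration; the real generators produced far more code):

-- hypothetical sketch of a generated Table API: single-record DML methods
CREATE OR REPLACE PACKAGE emp_tapi AS
  PROCEDURE ins (p_row   IN emp%ROWTYPE);
  PROCEDURE upd (p_row   IN emp%ROWTYPE);
  PROCEDURE del (p_empno IN emp.empno%TYPE);
END emp_tapi;
/

-- hypothetical sketch of the companion query package
CREATE OR REPLACE PACKAGE emp_qapi AS
  -- standard access paths: primary key, indexed columns
  FUNCTION get_by_pk   (p_empno  IN emp.empno%TYPE)  RETURN emp%ROWTYPE;
  FUNCTION get_by_dept (p_deptno IN emp.deptno%TYPE) RETURN SYS_REFCURSOR;
  -- catch-all dynamic method: the caller supplies a free-text WHERE clause
  FUNCTION get_where   (p_where  IN VARCHAR2)        RETURN SYS_REFCURSOR;
END emp_qapi;
/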

But there are several problems with Table APIs.

The first is that they entrench row-by-agonising-row processing. Table APIs have their roots in early versions of Oracle, so the DML methods only worked with a single record. Even after Oracle 8 introduced PL/SQL collection types, TAPI code in the wild tended to be RBAR: there seems to be something in the brain of the average programmer which predisposes them to prefer loops executing procedural code rather than set operations.

The second is that they prevent SQL joins. Individual records have to be selected from one table to provide keys for looking up records in a second table. Quite often this leads to loops within loops. So-called PL/SQL joins prevent the optimizer from choosing good access paths when handling larger amounts of data.
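Using the hypothetical packages sketched above (plus an assumed, analogous DEPT_QAPI for a DEPT table), the pattern looks like this, with the single SQL statement that the optimizer never gets to see shown below it:

-- hypothetical "PL/SQL join": loops of single-row lookups through the TAPI
DECLARE
  l_emp  emp%ROWTYPE;
  l_dept dept%ROWTYPE;
BEGIN
  FOR r IN (SELECT empno FROM emp) LOOP
    l_emp  := emp_qapi.get_by_pk(r.empno);        -- one call per employee
    l_dept := dept_qapi.get_by_pk(l_emp.deptno);  -- one more call per employee
    -- ... process l_emp / l_dept ...
  END LOOP;
END;
/

-- the set-based equivalent: one statement, one execution plan
SELECT e.empno, e.ename, d.dname
FROM   emp  e
JOIN   dept d ON d.deptno = e.deptno;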

The third issue is that it is pretty hard to generate methods for all conceivable access paths. Consequently the generated packages had a few standard access paths (primary key, indexed columns) and provided a dynamic SQL method which accepted a free-text WHERE clause. Besides opening the package to SQL injection, this also broke the Law of Demeter: in order to pass a dynamic WHERE clause the calling program needed to know the structure of the underlying table, which defeats the whole objective of encapsulation.
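The body of such a dynamic method typically boiled down to string concatenation. This fragment is a hypothetical sketch of the idea (part of a package body), not code from either generator:

-- hypothetical fragment of the get_where implementation: no binds, no validation,
-- so the caller must know the EMP columns and can inject arbitrary SQL
FUNCTION get_where (p_where IN VARCHAR2) RETURN SYS_REFCURSOR IS
  l_cur SYS_REFCURSOR;
BEGIN
  OPEN l_cur FOR 'SELECT * FROM emp WHERE ' || p_where;
  RETURN l_cur;
END get_where;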

Which leads on to the fourth, more philosophical problem with Table APIs: there is minimal abstraction. Each package is generated so it fits very closely to the structure of the table. If the table structure changes, we have to regenerate the TAPI packages: the fact that this can be done automatically is scant recompense for the tight coupling between the table and the API.

So although Table APIs could be mistaken for good practice, in actuality they provide no real benefit. The interface is 1:1 with the table structure so it has no advantage over granting privileges on the table. Combined with the impact of RBAR processing and PL/SQL joins on performance, the net effect of Table APIs is disastrous.

We cannot generate good Data Access APIs: we need to write them. This is because the APIs should be built around business functions rather than tables. The API packages granted to other users should comprise procedures for executing transactions. A Unit Of Work is likely to touch more than one table. These have to be written by domain experts who understand the data model and the business rules.
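By way of contrast, a hand-written data access API is organised around transactions rather than tables. A hypothetical sketch (the ORDER_API package, its procedures, tables and the role are all assumptions for illustration):

-- hypothetical business-function API: one procedure per unit of work,
-- each of which may touch several tables behind the scenes
CREATE OR REPLACE PACKAGE order_api AS
  PROCEDURE place_order (p_customer_id IN NUMBER,
                         p_product_id  IN NUMBER,
                         p_quantity    IN NUMBER,
                         p_order_id    OUT NUMBER);
  PROCEDURE cancel_order (p_order_id IN NUMBER,
                          p_reason   IN VARCHAR2);
END order_api;
/

-- consumers get EXECUTE on the package only; the underlying tables are never granted
GRANT EXECUTE ON order_api TO order_entry_role;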

Part of the Designing PL/SQL Programs series

Been There

Jonathan Lewis - Sun, 2017-12-31 04:51

It’s the end of the year and time for a retrospective of some sort so I was thinking of listing the top 10 most popular pages on my blog, but Connor McDonald beat me to it, so I decided to see if I could remember all the countries I’d visited since starting to work with the Oracle software, and here’s the list in alphabetical order:

Antigua
Australia
Austria
Belgium
Bosnia
Brunei
Bulgaria
Canada
China
Croatia
Czech Republic
Denmark
Dubai
Egypt
Estonia
Finland
France
Germany
Greece
Hungary
Iceland
India
Indonesia
Ireland
Israel
Italy
Japan
Latvia
Lithuania
Malaysia

A few of these were holidays rather than work, and I may have forgotten a couple, so if you’ve seen me in your country and it’s not on the list let me know.

The list can be a bit of a nuisance, I had to list “all the countries you’ve visited in the last 10 years” for both the US and Russian visas: the US form only allowed for 5 countries and the Russian one for 40; and the US expected me to list EVERY visit, with dates and city!

 


12c Multitenant internals: PDB_PLUG_IN_VIOLATIONS

Yann Neuhaus - Sat, 2017-12-30 13:54

In the previous post https://blog.dbi-services.com/12c-multitenant-internals-pdb-replay-ddl-for-common-users I’ve done some DDL on a common user to show how this is replayed later for PDBs that were not opened at that time. But what happens when one of those DDL statements fails on one PDB?

PDB$LASTREPLAY

In the last post, the C##USER1 common user was created and all pluggable databases (PDB1 with con_id=3 and PDB2 with con_id=4) were opened and synchronized:

SQL> select con_id,scnwrp,scnbas,ctime,name,auxname1,auxname2,opcode,replay#,sqlstmt from containers(pdb_sync$) where opcode=-1 and bitand(flags,8)!=8 order by con_id,replay#;
 
CON_ID SCNWRP SCNBAS CTIME NAME AUXNAME1 AUXNAME2 OPCODE REPLAY# SQLSTMT
------ ------ ------ ----- ---- -------- -------- ------ ------- -------
1 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 10
3 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 10
4 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 10

When REPLAY# in the PDB is equal to the CDB$ROOT one, this means that there are no additional statements to replicate on the PDB.

I have PDB1 opened read write and PDB2 in read only:

SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
------ -------- ---- ---- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ WRITE NO
4 PDB2 READ ONLY NO

For the demo my user’s default tablespace is SYSTEM:

SQL> select con_id,username,common,default_tablespace from cdb_users where username='C##USER1' order by 1;
 
CON_ID USERNAME COMMON DEFAULT_TABLESPACE
------ -------- ------ ------------------
1 C##USER1 YES SYSTEM
3 C##USER1 YES SYSTEM
4 C##USER1 YES SYSTEM

Failure in opened containers

I want to change the default tablespace for C##USER1 and I have a USERS tablespace in CDB$ROOT (but not in the PDBs):

SQL> alter user C##USER1 default tablespace USERS;
 
Error starting at line : 50 File @ common-users-pdb-sync.sql
In command -
alter user C##USER1 default tablespace USERS
Error report -
ORA-65048: error encountered when processing the current DDL statement in pluggable database PDB1
ORA-00959: tablespace 'USERS' does not exist

As we have seen in the last post, the DDL is executed on all containers that are opened read write. Here it is fine on CDB$ROOT but fails on PDB1.

Then I create the USERS tablespace in PDB1:

SQL> alter session set container=PDB1;
Session altered.
SQL> create tablespace USERS datafile '/u01/oradata/CDB2/PDB1/users.dbf' size 5M;
Tablespace USERS created.
SQL> alter session set container=CDB$ROOT;
Session altered.

And now, the statement is successful in CDB$ROOT, replicated on PDB1:

SQL> alter user C##USER1 default tablespace USERS;
User C##USER1 altered.

This is nice: the statement either succeeds in all containers or fails. When it is successful, the statements are recorded in PDB_SYNC$:

SQL> select con_id,scnwrp,scnbas,ctime,name,auxname1,auxname2,opcode,replay#,sqlstmt from containers(pdb_sync$) where con_id=1 and bitand(flags,8)!=8 order by con_id,replay#;
 
CON_ID SCNWRP SCNBAS CTIME NAME AUXNAME1 AUXNAME2 OPCODE REPLAY# SQLSTMT
------ ------ ------ ----- ---- -------- -------- ------ ------- -------
1 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 11
1 0 852610 26-jan-17 02:57:26 CTXSYS SYS 5 1 alter user CTXSYS account unlock identified by *^@
1 0 853177 26-jan-17 02:57:34 CTXSYS SYS 5 2 alter user CTXSYS password expire account lock^@
1 0 1405359 26-jan-17 03:31:31 SYSTEM SYS 5 3 alter user system account lock password expire^@
1 0 1408693 23-dec-17 11:34:43 SYS SYS 5 4 alter user sys account unlock identified by *^@
1 0 1408703 23-dec-17 11:34:43 SYSTEM SYS 5 5 alter user system account unlock identified by *^@
1 0 1466615 29-dec-17 09:26:56 C##PROFILE1 SYS 7 6 create profile C##PROFILE1 limit inactive_account_time 15 container=all^@
1 0 1466641 29-dec-17 09:26:57 C##ROLE1 SYS 3 7 create role C##ROLE1 container=all^@
1 0 1466748 29-dec-17 09:26:58 C##USER1 SYS 1 8 create user C##USER1 identified by * container=all^@
1 0 1466812 29-dec-17 09:26:59 C##USER1 SYS 5 9 alter user C##USER1 profile C##PROFILE1^@
1 0 1466853 29-dec-17 09:26:59 C##USER1 C##ROLE1 SYS 10 10 grant C##ROLE1 to C##USER1 container=all^@
1 0 1467010 29-dec-17 09:27:01 C##USER1 SYS 5 11 alter user C##USER1 default tablespace USERS^@

Failure in replay at open for closed containers

But PDB2 is not synchronized because it was not opened read write:

SQL> select con_id,scnwrp,scnbas,ctime,name,auxname1,auxname2,opcode,replay#,sqlstmt from containers(pdb_sync$) where con_id>1 and bitand(flags,8)!=8 order by con_id,replay#;
 
CON_ID SCNWRP SCNBAS CTIME NAME AUXNAME1 AUXNAME2 OPCODE REPLAY# SQLSTMT
------ ------ ------ ----- ---- -------- -------- ------ ------- -------
3 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 11
4 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 10

But I don’t have a USERS tablespace in PDB2, so the replay will fail:

SQL> alter pluggable database PDB2 close;
Pluggable database PDB2 altered.
 
SQL> alter pluggable database PDB2 open;
ORA-24344: success with compilation error
Pluggable database PDB2 altered.

This is a warning only. The SQLcl feedback is a bit misleading, mentioning a compilation error because this is where we used to have warnings, but the SQL*Plus message is clearer:

SQL> alter pluggable database PDB2 open;
Warning: PDB altered with errors.

PDB2 cannot be left closed, because you need to create the tablespace there. But it cannot be opened to everyone, because it is not in sync with CDB$ROOT. So what happens is that the PDB is opened in restricted mode:

SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
------ -------- ---- ---- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ WRITE NO
4 PDB2 READ WRITE YES

Note that if you look at PDB_SYNC$ in the PDB at that time, REPLAY# appears to have already been increased to 11, but you also see rows for the statement that still has to be run. You have to connect to the PDB because containers() does not query restricted-session containers:

SQL> alter session set container=PDB2;
Session altered.
 
SQL> select con_id,scnwrp,scnbas,ctime,name,auxname1,auxname2,opcode,replay#,sqlstmt from containers(pdb_sync$) where con_id>1 and bitand(flags,8)!=8 order by con_id,replay#;
 
CON_ID SCNWRP SCNBAS CTIME NAME AUXNAME1 AUXNAME2 OPCODE REPLAY# SQLSTMT
------ ------ ------ ----- ---- -------- -------- ------ ------- -------
4 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 11
4 0 1469022 29-dec-17 09:27:02 C##USER1 SYS 5 11 alter user C##USER1 default tablespace USERS^@
 
SQL> alter session set container=CDB$ROOT;
Session altered.

Actually, the attempt to sync has inserted the statements and pushed the last replay indicator. Now the PDB has all the information it needs to do a sync without going back to CDB$ROOT. The DDL was not replayed, but has been stored locally. When the sync succeeds, the statements will be removed from the local PDB_SYNC$, leaving only the LASTREPLAY indicator.

PDB_PLUG_IN_VIOLATIONS

More info about the warning is stored in PDB_ALERT$, which you query through PDB_PLUG_IN_VIOLATIONS (the strange name is reminiscent of the TRANSPORT_SET_VIOLATIONS view used by DBMS_TTS):

SQL> select name, cause, type, status,action,message,time from pdb_plug_in_violations;
 
NAME CAUSE TYPE STATUS ACTION MESSAGE TIME
---- ----- ---- ------ ------ ------- -------------------------------
PDB2 Sync Failure ERROR PENDING Sync PDB failed with ORA-959 during 'alter user C##USER1 default tablespace USERS' 29-DEC-17 09.27.03.266780000 PM

Here you have the statement that failed and the error number, but no recommended ACTION. However, ORA-959 is “tablespace ‘%s’ does not exist” which gives a clue about the problem encountered.
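Before fixing anything, a quick sanity check from inside PDB2 (as in the next step, after ALTER SESSION SET CONTAINER=PDB2) confirms that the tablespace really is missing. A hedged example using the standard dictionary view, whose output obviously depends on your configuration:

SQL> select tablespace_name from dba_tablespaces where tablespace_name='USERS';

In this scenario the query returns no rows, which matches the ORA-959 recorded in PDB_PLUG_IN_VIOLATIONS.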

As the PDB is open, a DBA (with the RESTRICTED SESSION privilege) can add the tablespace:

SQL> alter session set container=PDB2;
Session altered.
 
SQL> create tablespace USERS datafile '/u01/oradata/CDB2/PDB2/users.dbf' size 5M;
Tablespace USERS created.

But you cannot simply disable restricted session:

SQL> alter system disable restricted session;
 
Error starting at line : 74 File @ common-users-pdb-sync.sql
In command -
alter system disable restricted session
Error report -
ORA-65144: ALTER SYSTEM DISABLE RESTRICTED SESSION is not permitted
65144. 00000 - "ALTER SYSTEM DISABLE RESTRICTED SESSION is not permitted"
*Cause: An attempt was made to disable a restricted session while an unresolved error existed in PDB_PLUG_IN_VIOLATIONS.
*Action: Resolve all of the errors before trying to disable a restricted session.

One solution is to close and open the PDB to get the DDL replay:
SQL> alter session set container=CDB$ROOT;
Session altered.
SQL> alter pluggable database PDB2 close;
Pluggable database PDB2 altered.
SQL> alter pluggable database PDB2 open;
Pluggable database PDB2 altered.

The other solution is to call DBMS_PDB.SYNC_PDB and disable restricted mode:

SQL> exec dbms_pdb.sync_pdb; commit;
PL/SQL procedure successfully completed.
 
SQL> alter system disable restricted session;
System DISABLE altered.

In both cases, there is no warning and no restricted mode:

SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
------ -------- ---- ---- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ WRITE NO
4 PDB2 READ WRITE NO

and the PDB_PLUG_IN_VIOLATIONS is updated to flag the issue as resolved:

SQL> select name, cause, type, status,action,message,time from pdb_plug_in_violations;
 
NAME CAUSE TYPE STATUS ACTION MESSAGE TIME
---- ----- ---- ------ ------ ------- -------------------------------
PDB2 Sync Failure ERROR RESOLVED Sync PDB failed with ORA-959 during 'alter user C##USER1 default tablespace USERS' 29-DEC-17 09.27.04.093659000 PM

At that time, the local PDB_SYNC$ table in PDB2 contains only the PDB$LASTREPLAY row, with the same value as in the CDB$ROOT table. The rows with the statements have been deleted once the DDL has been successfully replayed:

SQL> select con_id,scnwrp,scnbas,ctime,name,auxname1,auxname2,opcode,replay#,sqlstmt from containers(pdb_sync$) where con_id>1 and bitand(flags,8)!=8 order by con_id,replay#;
 
CON_ID SCNWRP SCNBAS CTIME NAME AUXNAME1 AUXNAME2 OPCODE REPLAY# SQLSTMT
------ ------ ------ ----- ---- -------- -------- ------ ------- -------
4 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 11

So what?

The mechanism is simple: record what is done in CDB$ROOT, replicate it in the PDBs when possible (opened read-write), try to replay it there, and mark the last replay step. For containers that were not writeable, at open time the DDL is replicated on the PDBs that lag behind CDB$ROOT and the replay step is updated. Then the DDL is replayed. When successful, the statement is removed from the replicated DDL. When it fails, you get a warning, a message in PDB_PLUG_IN_VIOLATIONS, and the PDB is opened in restricted session mode to let you solve the problem.
If you can fix the issue so that the DDL to be replayed is successful, then you can just sync and disable restricted session, or simply close and re-open the PDB.
If you can’t fix it, I suppose you need to hack the statements in the local PDB_SYNC$, with Oracle Support agreement of course, and make sure that you arrive at a state which is consistent with the other containers, especially CDB$ROOT.

 

The post 12c Multitenant internals: PDB_PLUG_IN_VIOLATIONS appeared first on Blog dbi services.
