Feed aggregator

Oracle OpenWorld 2018 : My Sessions

Tim Hall - Mon, 2018-09-10 03:52

Oracle OpenWorld 2018 and Oracle Code One 2018 are just around the corner. The session catalogs have been around for a while, but they now include the date, time and location of the sessions.

This year I have three individual sessions, one group session and at least one shift at the Oracle Developer Community demo grounds. If you are interested, here are my session details.

Monday

Conference: Oracle Code One
Session Type: Developer Session
Session ID: DEV6244
Session Title: Analytic Functions: A Developer’s Best Friend
Room: Moscone West – Room 2003
Date: 10/22/18
Start Time: 09:00:00 AM
End Time: 09:45:00 AM

Tuesday

Conference: Oracle Code One
Session Type: Developer Session
Session ID: DEV6243
Session Title: Cool New Features for Developers in Oracle Database 18c and Oracle Database 12c
Room: Moscone West – Room 2003
Date: 10/23/18
Start Time: 11:30:00 AM
End Time: 12:15:00 PM

Conference: Oracle OpenWorld
Session Type: Tips and Tricks Session
Session ID: TIP1989
Session Title: Even the ACEs make Mistakes – What did they learn?
Room: Moscone West – Room 3001
Date: 10/23/18
Start Time: 03:45:00 PM
End Time: 04:30:00 PM

Wednesday

Blockchain Beer Demo
Start Time: 02:00:00 PM
End Time: 04:00:00 PM

Thursday

Conference: Oracle OpenWorld, Oracle Code One
Session Type: Business Use Case Session
Session ID: BUS1224
Session Title: DBA Does Docker
Room: Moscone West – Room 3018
Date: 10/25/18
Start Time: 12:00:00 PM
End Time: 12:45:00 PM

I’ll be there a few days early, as I have the Oracle Developer Champion and Oracle ACE Director briefings beforehand.

See you there!

Cheers

Tim…


Convert function not working with CLOB datatype giving garbage value

Tom Kyte - Sun, 2018-09-09 17:46
We have some data of type lob and varchar2 which we need to export to another application which is CSV supported i.e. it reads data in form of CSV only. Its legacy application so we need to export data from oracle database in CSV format and need to s...
Categories: DBA Blogs

Partitioning -- 4 : Row Movement

Hemant K Chitale - Sun, 2018-09-09 10:06
Do you expect Primary Keys to be updatable ?  Some argue that Primary Key values should be immutable.  The argument is that a Primary Key should not be modified.

What about Partition Keys ?  Would you allow a Partition Key to be updated ?

Let me take the SALES_DATA table again :

SQL> desc sales_data
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 SALE_ID                                   NOT NULL NUMBER
 SALE_DATE                                          DATE
 INVOICE_NUMBER                                     VARCHAR2(21)
 CUSTOMER_ID                                        NUMBER
 PRODUCT_ID                                         NUMBER
 SALE_VALUE                                         NUMBER

SQL> insert into sales_data
2 values (sales_data_seq.nextval,
3 to_date('09-SEP-2019','DD-MON-YYYY'),
4 'INV320001X',
5 45,
6 52,
7 10000)
8 /

1 row created.

SQL> commit;

Commit complete.

SQL>


After the INSERT, I realise that the year in the SALE_DATE is wrong -- it is 2019 instead of 2018.  I need to update the row to set the year to 2018.
(Since the SALES_DATA table is partitioned to have a separate Partition for each year, this row has gone into the P_2019 Partition).
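For context, the SALES_DATA table (created earlier in this series) is Range Partitioned on SALE_DATE, with one Partition per year. A minimal sketch of such a definition -- the exact original DDL and storage clauses are not shown in this post -- would be :

create table sales_data
(sale_id         number not null,
 sale_date       date,
 invoice_number  varchar2(21),
 customer_id     number,
 product_id      number,
 sale_value      number)
partition by range (sale_date)
(partition p_2018 values less than (to_date('01-JAN-2019','DD-MON-YYYY')),
 partition p_2019 values less than (to_date('01-JAN-2020','DD-MON-YYYY')),
 partition p_maxvalue values less than (maxvalue))
/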

SQL> select * from sales_data
2 where invoice_number='INV320001X' and customer_id=45;

   SALE_ID SALE_DATE INVOICE_NUMBER        CUSTOMER_ID PRODUCT_ID SALE_VALUE
---------- --------- --------------------- ----------- ---------- ----------
    320001 09-SEP-19 INV320001X                     45         52      10000

SQL> select * from sales_data partition (P_2019);

   SALE_ID SALE_DATE INVOICE_NUMBER        CUSTOMER_ID PRODUCT_ID SALE_VALUE
---------- --------- --------------------- ----------- ---------- ----------
    320001 09-SEP-19 INV320001X                     45         52      10000

SQL>
SQL> update sales_data
2 set sale_date = to_date('09-SEP-2018','DD-MON-YYYY')
3 where sale_id=320001
4 /
update sales_data
*
ERROR at line 1:
ORA-14402: updating partition key column would cause a partition change


SQL>


I encounter an error.  Oracle does not like updating a Partition Key value such that the row would have to move to a different Partition -- here, from the P_2019 Partition to the P_2018 Partition.

How would I allow updates that result in a row moving to a different Partition ?

SQL> alter table sales_data enable row movement;

Table altered.

SQL> update sales_data
2 set sale_date = to_date('09-SEP-2018','DD-MON-YYYY')
3 where sale_id=320001
4 /

1 row updated.

SQL> commit;

Commit complete.

SQL> select * from sales_data partition (P_2019);

no rows selected

SQL> select * from sales_data partition (P_2018)
2 where sale_id=320001
3 /

   SALE_ID SALE_DATE INVOICE_NUMBER        CUSTOMER_ID PRODUCT_ID SALE_VALUE
---------- --------- --------------------- ----------- ---------- ----------
    320001 09-SEP-18 INV320001X                     45         52      10000

SQL>


The ALTER TABLE ... ENABLE ROW MOVEMENT is a DDL command (it needs to be issued only once to allow any number of subsequent updates to the table's rows) that allows a row to move from one Partition to another Partition.  In this case, the row moved from P_2019 to P_2018.

Moving rows from one Partition to another Partition is expensive.  Each row moved in such a manner results in
(a) marking deletion of the row from the original Partition
(b) physically inserting the *entire* row (irrespective of the length of the row) into the new Partition -- not just the SALE_DATE value but every column has to be written into a block in the new Partition
(c) updating *every* index (Global or Local) on the Table

That is why it is not a good design to have frequently updated Partition Keys resulting in a row moving from one Partition to another.  You may have to reconsider the Partitioning definition or data and transaction flow in the application.

(Do you know where else ENABLE ROW MOVEMENT is required ?  There are other cases, not related to Partitioning, where you may have to ENABLE ROW MOVEMENT for a table.  By default when you CREATE a Table, ROW MOVEMENT is not enabled unless you explicitly enable it).
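Hint : two common cases, unrelated to Partitioning, are segment shrink and Flashback Table -- both fail (with ORA-10636 and ORA-08189 respectively) unless Row Movement has been enabled :

SQL> alter table sales_data shrink space;
SQL> flashback table sales_data to timestamp (systimestamp - interval '15' minute);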



Categories: DBA Blogs

Oracle 18c: Cluster With Oracle ASM Filter Driver

Yann Neuhaus - Sat, 2018-09-08 17:32

During the installation of Oracle Grid Infrastructure, you can optionally enable automated installation and configuration of Oracle ASM Filter Driver for your system with the Configure ASM Filter Driver check box on the Create ASM Disk Group wizard page. When you enable the Configure ASM Filter Driver box, an automated process for Oracle ASMFD is launched during Oracle Grid Infrastructure installation.

If Oracle ASMLIB exists on your Linux system, then deinstall Oracle ASMLIB before installing Oracle Grid Infrastructure, so that you can choose to install and configure Oracle ASMFD during an Oracle Grid Infrastructure installation.
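A quick sketch of that check and removal on Oracle Linux (package names may differ on your distribution):

[root@rac18ca ~]# rpm -qa | grep oracleasm
[root@rac18ca ~]# oracleasm exit
[root@rac18ca ~]# rpm -e oracleasmlib oracleasm-support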
In this blog, I install a two-node cluster of Oracle 18c using Oracle ASMFD. Below are the disks we will use.

[root@rac18ca ~]# ls -l /dev/sd[d-f]
brw-rw----. 1 root disk 8, 48 Sep  8 22:09 /dev/sdd
brw-rw----. 1 root disk 8, 64 Sep  8 22:09 /dev/sde
brw-rw----. 1 root disk 8, 80 Sep  8 22:09 /dev/sdf
[root@rac18ca ~]#

[root@rac18cb ~]# ls -l /dev/sd[d-f]
brw-rw----. 1 root disk 8, 48 Sep  8 22:46 /dev/sdd
brw-rw----. 1 root disk 8, 64 Sep  8 22:46 /dev/sde
brw-rw----. 1 root disk 8, 80 Sep  8 22:46 /dev/sdf
[root@rac18cb ~]#

We assume that all prerequisites are done (public IP, private IP, SCAN, shared disks, …). Also, we will not show every screenshot.
The first step is to unzip the Oracle software into the ORACLE_HOME for the Grid Infrastructure.

unzip -d /u01/app/grid/18.0.0.0 LINUX.X64_180000_grid_home.zip

Next, we use the ASMCMD afd_label command to provision the disk devices for use with Oracle ASM Filter Driver, as follows.

[root@rac18ca ~]# export ORACLE_HOME=/u01/app/oracle/18.0.0.0/grid
[root@rac18ca ~]# export ORACLE_BASE=/tmp                                       
[root@rac18ca ~]# /u01/app/oracle/18.0.0.0/grid/bin/asmcmd afd_label VOTOCR /dev/sde --init
[root@rac18ca ~]# /u01/app/oracle/18.0.0.0/grid/bin/asmcmd afd_label DATA /dev/sdd --init
[root@rac18ca ~]# /u01/app/oracle/18.0.0.0/grid/bin/asmcmd afd_label DIVERS /dev/sdf --init
[root@rac18ca ~]#

Then we can use the ASMCMD afd_lslbl command to verify that the devices have been marked for use with Oracle ASMFD.

[root@rac18ca network-scripts]# /u01/app/oracle/18.0.0.0/grid/bin/asmcmd afd_lslbl /dev/sde
--------------------------------------------------------------------------------
Label                     Duplicate  Path
================================================================================
VOTOCR                                /dev/sde
[root@rac18ca network-scripts]# /u01/app/oracle/18.0.0.0/grid/bin/asmcmd afd_lslbl /dev/sdd
--------------------------------------------------------------------------------
Label                     Duplicate  Path
================================================================================
DATA                                  /dev/sdd
[root@rac18ca network-scripts]# /u01/app/oracle/18.0.0.0/grid/bin/asmcmd afd_lslbl /dev/sdf
--------------------------------------------------------------------------------
Label                     Duplicate  Path
================================================================================
DIVERS                                /dev/sdf
[root@rac18ca network-scripts]#

Now that disks are initialized for ASMFD, we can start the installation.

[oracle@rac18ca grid]$ ./gridSetup.sh

We will not show all the pictures.


In the next window, we can choose the disks for the OCR and Voting files. We will also check Configure Oracle ASM Filter Driver.


We then continue the installation. We will have to run the orainstRoot.sh and root.sh scripts; these steps are not shown here.
At the end of the installation, we can verify the status of the cluster:

[oracle@rac18cb ~]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [18.0.0.0.0]

[oracle@rac18ca ~]$ crsctl status resource -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       rac18ca                  STABLE
               ONLINE  ONLINE       rac18cb                  STABLE
ora.DG_DATA.dg
               ONLINE  ONLINE       rac18ca                  STABLE
               ONLINE  ONLINE       rac18cb                  STABLE
ora.DG_VOTOCR.dg
               ONLINE  ONLINE       rac18ca                  STABLE
               ONLINE  ONLINE       rac18cb                  STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac18ca                  STABLE
               ONLINE  ONLINE       rac18cb                  STABLE
ora.net1.network
               ONLINE  ONLINE       rac18ca                  STABLE
               ONLINE  ONLINE       rac18cb                  STABLE
ora.ons
               ONLINE  ONLINE       rac18ca                  STABLE
               ONLINE  ONLINE       rac18cb                  STABLE
ora.proxy_advm
               ONLINE  ONLINE       rac18ca                  STABLE
               ONLINE  ONLINE       rac18cb                  STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac18ca                  STABLE
ora.MGMTLSNR
      1        OFFLINE OFFLINE                               STABLE
ora.asm
      1        ONLINE  ONLINE       rac18ca                  Started,STABLE
      2        ONLINE  ONLINE       rac18cb                  Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       rac18ca                  STABLE
ora.mgmtdb
      1        OFFLINE OFFLINE                               STABLE
ora.qosmserver
      1        ONLINE  ONLINE       rac18ca                  STABLE
ora.rac18ca.vip
      1        ONLINE  ONLINE       rac18ca                  STABLE
ora.rac18cb.vip
      1        ONLINE  ONLINE       rac18cb                  STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       rac18ca                  STABLE
--------------------------------------------------------------------------------
[oracle@rac18ca ~]$

We can also check that ASMFD is enabled.

[oracle@rac18ca ~]$ asmcmd afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
DATA                        ENABLED   /dev/sdd
DIVERS                      ENABLED   /dev/sdf
VOTOCR                      ENABLED   /dev/sde
[oracle@rac18ca ~]$


[oracle@rac18ca ~]$ asmcmd dsget
parameter:/dev/sd*, AFD:*
profile:/dev/sd*,AFD:*
[oracle@rac18ca ~]$

[oracle@rac18ca ~]$ asmcmd lsdsk
Path
AFD:DATA
AFD:DIVERS
AFD:VOTOCR
[oracle@rac18ca ~]$

Conclusion
In this blog we have seen how to install a cluster using ASMFD.

 

This article, Oracle 18c: Cluster With Oracle ASM Filter Driver, first appeared on Blog dbi services.

Partitioning -- 3d : Partial Indexing (in 11g)

Hemant K Chitale - Sat, 2018-09-08 10:24
Oracle 12c has introduced a new feature called "Partial Index" whereby selective partitions of a Table are indexed.  This is useful, for example, where you have a large historical table and you know that older Partitions are infrequently accessed and no longer need to be indexed.  For such tables, you can afford to "lose" the index for these older Partitions.
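(For reference, the 12c syntax looks like this -- shown for comparison only; the indexed column here is a hypothetical choice, as the index definitions are not listed in this post :

SQL> alter table sales_data modify partition P_2016 indexing off;
SQL> create index sales_data_ndx on sales_data (customer_id) local indexing partial;

With INDEXING PARTIAL, index segments are only built for Table Partitions that have INDEXING ON.)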

How would you do this in 11.2 ?

Let me go back to the SALES_DATA table with data from 2016 to 2018 populated.  This is the status of the index partition segments :

SQL> l
1 select segment_name, partition_name, bytes/1024
2 from user_segments
3 where segment_type = 'INDEX PARTITION'
4 and segment_name in
5 (select index_name
6 from user_indexes
7 where table_name = 'SALES_DATA')
8* order by 1,2
SQL> /

SEGMENT_NAME                   PARTITION_NA BYTES/1024
------------------------------ ------------ ----------
SALES_DATA_LCL_NDX_1           P_2016             3072
SALES_DATA_LCL_NDX_1           P_2017             3072
SALES_DATA_LCL_NDX_1           P_2018             3072
SALES_DATA_LCL_NDX_2           P_2016               64
SALES_DATA_LCL_NDX_2           P_2017               64
SALES_DATA_LCL_NDX_2           P_2018               64

6 rows selected.

SQL>


So, if I now want to "unindex" the year 2016 partition (P_2016) of the SALES_DATA table, I can :

SQL> show parameter deferred_segment_creation

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
deferred_segment_creation            boolean     TRUE

SQL> alter index SALES_DATA_LCL_NDX_1 modify partition P_2016 unusable;

Index altered.

SQL> alter index SALES_DATA_LCL_NDX_2 modify partition P_2016 unusable;

Index altered.

SQL>
SQL> l
1 select segment_name, partition_name, bytes/1024
2 from user_segments
3 where segment_type = 'INDEX PARTITION'
4 and segment_name in
5 (select index_name
6 from user_indexes
7 where table_name = 'SALES_DATA')
8* order by 1,2
SQL> /

SEGMENT_NAME                   PARTITION_NA BYTES/1024
------------------------------ ------------ ----------
SALES_DATA_LCL_NDX_1           P_2017             3072
SALES_DATA_LCL_NDX_1           P_2018             3072
SALES_DATA_LCL_NDX_2           P_2017               64
SALES_DATA_LCL_NDX_2           P_2018               64

SQL>
SQL> select count(*) from sales_data partition (P_2016);

  COUNT(*)
----------
    100000

SQL>



You will notice that although the P_2016 Partition in the Table has data, the corresponding Index Partitions no longer have segments -- no space is allocated to them (although the logical definitions of the indexes exist).  This is possible with the "deferred_segment_creation" parameter set to TRUE in 11g.

In fact, you will notice that although the table has Partitions for 2019 and 2020 and MAXVALUE, corresponding Index Partition Segments do not exist (because no data has been inserted into those Table Partitions yet) !

SQL> select partition_name           
2 from user_tab_partitions
3 where table_name = 'SALES_DATA'
4 order by partition_position
5 /

PARTITION_NA
------------
P_2016
P_2017
P_2018
P_2019
P_2020
P_MAXVALUE

6 rows selected.

SQL>
SQL> select index_name, partition_name, status
2 from user_ind_partitions
3 where index_name in
4 (select index_name
5 from user_indexes
6 where table_name = 'SALES_DATA')
7 order by index_name, partition_position
8 /

INDEX_NAME                     PARTITION_NA STATUS
------------------------------ ------------ --------
SALES_DATA_LCL_NDX_1           P_2016       UNUSABLE
SALES_DATA_LCL_NDX_1           P_2017       USABLE
SALES_DATA_LCL_NDX_1           P_2018       USABLE
SALES_DATA_LCL_NDX_1           P_2019       USABLE
SALES_DATA_LCL_NDX_1           P_2020       USABLE
SALES_DATA_LCL_NDX_1           P_MAXVALUE   USABLE
SALES_DATA_LCL_NDX_2           P_2016       UNUSABLE
SALES_DATA_LCL_NDX_2           P_2017       USABLE
SALES_DATA_LCL_NDX_2           P_2018       USABLE
SALES_DATA_LCL_NDX_2           P_2019       USABLE
SALES_DATA_LCL_NDX_2           P_2020       USABLE
SALES_DATA_LCL_NDX_2           P_MAXVALUE   USABLE

12 rows selected.

SQL>


This behaviour is a consequence of "deferred_segment_creation".
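If the 2016 data later needs to be indexed again, the segments can be re-created simply by rebuilding the Index Partitions :

SQL> alter index SALES_DATA_LCL_NDX_1 rebuild partition P_2016;
SQL> alter index SALES_DATA_LCL_NDX_2 rebuild partition P_2016;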

Note : If a Partitioned Index is a Unique / Primary Key Index, do NOT attempt to set an Index Partition to UNUSABLE.  UNUSABLE status would prevent INSERTs into the table.



Categories: DBA Blogs

Documentum – Silent Install – xPlore IndexAgent

Yann Neuhaus - Sat, 2018-09-08 08:00

In previous blogs, we installed in silent mode the Documentum binaries (CS), a docbroker (+ licence(s) if needed), several repositories (here and here), D2, and finally the xPlore binaries & Dsearch. This blog will be the last one of this series on Documentum silent installation, and it will show how to install an xPlore IndexAgent for the existing docbase/repository created previously.

So let’s start, as always, with the preparation of the properties file:

[xplore@full_text_server_01 ~]$ vi /tmp/xplore_install/FT_IA_Installation.properties
[xplore@full_text_server_01 ~]$ cat /tmp/xplore_install/FT_IA_Installation.properties
### Silent installation response file for Indexagent
INSTALLER_UI=silent
KEEP_TEMP_FILE=true

### Installation parameters
common.installLocation=/opt/xPlore
COMMON.DCTM_USER_DIR_WITH_FORWARD_SLASH=/opt/xPlore
common.64bits=true
COMMON.JAVA64_HOME=/opt/xPlore/java64/JAVA_LINK

### Configuration mode
indexagent.configMode.create=1
indexagent.configMode.upgrade=0
indexagent.configMode.delete=0
indexagent.configMode.create.migration=0

### Other configurations
indexagent.ess.host=full_text_server_01.dbi-services.com
indexagent.ess.port=9300

indexagent.name=Indexagent_Docbase1
indexagent.FQDN=full_text_server_01.dbi-services.com
indexagent.instance.port=9200
indexagent.instance.password=ind3x4g3ntAdm1nP4ssw0rd

indexagent.docbase.name=Docbase1
indexagent.docbase.user=dmadmin
indexagent.docbase.password=dm4dm1nP4ssw0rd

indexagent.connectionBroker.host=content_server_01.dbi-services.com
indexagent.connectionBroker.port=1489

indexagent.globalRegistryRepository.name=gr_docbase
indexagent.globalRegistryRepository.user=dm_bof_registry
indexagent.globalRegistryRepository.password=dm_b0f_reg1s7ryP4ssw0rd

indexagent.storage.name=default
indexagent.local_content_area=/opt/xPlore/wildfly9.0.1/server/DctmServer_Indexagent_Docbase1/data/Indexagent_Docbase1/export

common.installOwner.password=ind3x4g3ntAdm1nP4ssw0rd

[xplore@full_text_server_01 ~]$

 

A short description of these properties:

  • INSTALLER_UI: The mode to use for the installation, here it is obviously silent
  • KEEP_TEMP_FILE: Whether or not you want to keep the temporary files created by the installer. These files are generated under the /tmp folder. I usually keep them because I want to be able to check them if something went wrong
  • common.installLocation: The path you installed xPlore on. I put here /opt/xPlore but you can use whatever you want
  • COMMON.DCTM_USER_DIR_WITH_FORWARD_SLASH: Same value as “common.installLocation” on Linux; on Windows, you need to replace double backslashes with forward slashes
  • common.64bits: Whether the Java mentioned below is 32- or 64-bit
  • COMMON.JAVA64_HOME: The path of the JAVA_HOME that has been installed with the binaries. If you installed xPlore under /opt/xPlore, then this value should be: /opt/xPlore/java64/JAVA_LINK
  • indexagent.configMode.create: Whether or not you want to install an IndexAgent (binary value)
  • indexagent.configMode.upgrade: Whether or not you want to upgrade an IndexAgent (binary value)
  • indexagent.configMode.delete: Whether or not you want to delete an IndexAgent (binary value)
  • indexagent.configMode.create.migration: This isn’t used anymore in recent installer versions, but I still don’t know what its purpose was before… In any case, set this to 0 ;)
  • indexagent.ess.host: The Fully Qualified Domain Name of the primary Dsearch this new IndexAgent will be linked to
  • indexagent.ess.port: The port that the primary Dsearch is using
  • indexagent.name: The name of the IndexAgent to be installed. The default name is usually Indexagent_<docbase_name>
  • indexagent.FQDN: The Fully Qualified Domain Name of the current host the IndexAgent is being installed on
  • indexagent.instance.port: The port that the IndexAgent is/will be using (HTTP)
  • indexagent.instance.password: The password to be used for the new IndexAgent JBoss admin
  • indexagent.docbase.name: The name of the docbase/repository that this IndexAgent is being installed for
  • indexagent.docbase.user: The name of an account on the target docbase/repository that has the needed permissions to configure the objects (updating the dm_server_config, dm_ftindex_agent_config, and so on)
  • indexagent.docbase.password: The password of the above-mentioned account
  • indexagent.connectionBroker.host: The Fully Qualified Domain Name of the target docbroker/connection broker that is aware of the “indexagent.docbase.name” docbase/repository. This will be used in the dfc.properties
  • indexagent.connectionBroker.port: The port of the target docbroker/connection broker that is aware of the “indexagent.docbase.name” docbase/repository. This will be used in the dfc.properties
  • indexagent.globalRegistryRepository.name: The name of the GR repository
  • indexagent.globalRegistryRepository.user: The name of the BOF Registry account created on the CS inside the GR repository. This is usually something like “dm_bof_registry”
  • indexagent.globalRegistryRepository.password: The password used by the BOF Registry account
  • indexagent.storage.name: The name of the storage location to be created. The default one is “default”. If you intend to create new collections, you might want to give it a more meaningful name
  • indexagent.local_content_area: The path to be used to store the content temporarily on the file system. The value I used above is the default one but you can put it wherever you want. If you are using a multi-node, this path needs to be accessible from all nodes of the multi-node so you can put it under the “ess.data_dir” folder for example
  • common.installOwner.password: The password of the xPlore installation owner. I assume this is only used on Windows environments for the service setup, because on Linux I always set a dummy password and there is no issue

 

Once the properties file is ready, make sure that the Dsearch this IndexAgent is linked to is currently running (http(s)://<indexagent.ess.host>:<indexagent.ess.port>/dsearchadmin), and that the Global Registry repository (gr_docbase) as well as the target repository (Docbase1) are running. Then you can install the Documentum IndexAgent in silent mode using the following command:

[xplore@full_text_server_01 ~]$ /opt/xPlore/setup/indexagent/iaConfig.bin LAX_VM "/opt/xPlore/java64/JAVA_LINK/bin/java" -f /tmp/xplore_install/FT_IA_Installation.properties
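As a side note, a quick way to verify beforehand that the Dsearch answers on its admin URL is a simple HTTP check (a sketch, using the host and port from the properties file above; an HTTP 200 means the application is up):

[xplore@full_text_server_01 ~]$ curl -s -o /dev/null -w "%{http_code}\n" http://full_text_server_01.dbi-services.com:9300/dsearchadmin
200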

 

This now concludes the series about Documentum silent installation. Other components support silent installation as well, the Process Engine for example, but they usually require only a few parameters (or even none), which is why I am not including them here.

 

This article, Documentum – Silent Install – xPlore IndexAgent, first appeared on Blog dbi services.

[BLOG] Oracle Cloud Database Patching Issue: Precheck failed for patch PSU Update On Cloud

Online Apps DBA - Sat, 2018-09-08 06:05

Do you know how to handle errors like “Patch application – Precheck failed for patch PSU Update”? Visit: https://k21academy.com/clouddba37 & learn all about: ✔ Patching Types ✔ Patch Application Error ✔ Things Good to Know before patching a Database on Cloud […]

The post [BLOG] Oracle Cloud Database Patching Issue: Precheck failed for patch PSU Update On Cloud appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Differences in charts in IRs, IGs, and Charts

Tom Kyte - Sat, 2018-09-08 05:26
Hi, can you please explain what is the architectural reason for such a difference between charts in IR, IG and chart regions. For example UI, series ( it can be specified in IG chart and not in IR chart ), type of charts ( in IR we have only 4 char...
Categories: DBA Blogs

[BLOG] Oracle Cloud Infrastructure: Steps to Resize a Volume (Block/Boot)

Online Apps DBA - Fri, 2018-09-07 13:12

To learn how the latest Block Volume changes in Oracle Cloud Infrastructure provide a technology upgrade, visit: https://k21academy.com/oci15 and enhance your skills by learning: ✔ Block Volume Resizing ✔ Backup Restore Resizing ✔ Block Volume Clone Resizing & much more… […]

The post [BLOG] Oracle Cloud Infrastructure: Steps to Resize a Volume (Block/Boot) appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

How to use partial indexes for PK/UK keys

Tom Kyte - Fri, 2018-09-07 11:06
LiveSQL link: not working post link into LiveSQL link, but link works https://livesql.oracle.com/apex/livesql/s/g78ij8jxinq4409pp8dbz3owo Hello all, we have a BI solution (Oracle 12c + ODI11). We have a lot of partitioned tables (daily and mont...
Categories: DBA Blogs

[BLOG] BigData Hadoop: Apache Sqoop vs Apache Flume

Online Apps DBA - Fri, 2018-09-07 05:32

Apache Sqoop is a lifesaver in moving data from a data warehouse into the Hadoop environment, whereas for streaming logs into the Hadoop environment, Apache Flume is the best-suited service. Visit: https://k21academy.com/hadoop20 if you want to know more about Apache Sqoop and Apache Flume. […]

The post [BLOG] BigData Hadoop: Apache Sqoop vs Apache Flume appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

[VLOG] Oracle Fusion Middleware High Availability: Enterprise Deployment Overview [Part III]

Online Apps DBA - Fri, 2018-09-07 05:05

If you are working in an organization as an FMW Administrator, then you should know about Enterprise Deployment Architecture, as it considers various business service level agreements (SLAs) to apply high-availability best practices and security technologies to Oracle Fusion Middleware. Visit: https://k21academy.com/fmw20 to learn more about Enterprise Deployment Architecture. […]

The post [VLOG] Oracle Fusion Middleware High Availability: Enterprise Deployment Overview [Part III] appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Step by Step: Ansible Role To Setup Oracle ACFS On Multiple Nodes

Pakistan's First Oracle Blog - Thu, 2018-09-06 17:46
This post contains step-by-step instructions for creating an Ansible role, acfssetup, to set up Oracle ASM Cluster File System (ACFS) on multiple nodes of a cluster. It assumes that Grid Infrastructure 12.1.0.2.0 is already installed on the nodes and that ASM is working fine. It also assumes that Ansible is installed on a controller host, with SSH equivalency set up between the root and oracle users.




Step 1: Create directory structure for the role acfssetup:


$ cd /etc/ansible/roles
$ mkdir acfssetup
$ cd acfssetup
$ mkdir files handlers meta templates tasks vars

Step 2: Create the Tasks (/etc/ansible/roles/acfssetup/tasks/main.yml):

---
# Tasks for the acfssetup role. The original environment lines passed the bare
# ASM instance name; asmcmd/srvctl need ORACLE_SID and ORACLE_HOME, so the
# environment is expressed as a mapping here.

- name: Install ACFS/ADVM modules on the nodes
  become_user: "{{ superuser }}"
  environment:
    ORACLE_SID: "{{ asm_instance }}"
    ORACLE_HOME: "{{ gi_home_path }}"
  shell: "{{ gi_home_path }}/bin/acfsroot install"
  tags: acfs

- name: Start and enable the ACFS modules on the nodes
  become_user: "{{ superuser }}"
  environment:
    ORACLE_SID: "{{ asm_instance }}"
    ORACLE_HOME: "{{ gi_home_path }}"
  shell: "{{ gi_home_path }}/bin/acfsload start && {{ gi_home_path }}/bin/acfsroot enable"
  tags: acfs

- name: As oracle user, create an ASM volume for ACFS on the first node
  when: inventory_hostname == 'node1'
  become_user: "{{ gi_owner }}"
  environment:
    ORACLE_SID: "{{ asm_instance }}"
    ORACLE_HOME: "{{ gi_home_path }}"
  shell: "{{ gi_home_path }}/bin/asmcmd volcreate -G {{ asm_dg_name }} -s {{ acfs_vol_size }} {{ acfs_vol_name }}"
  tags: acfs

- name: As oracle user, capture the device name of the volume which was just created
  when: inventory_hostname == 'node1'
  become_user: "{{ gi_owner }}"
  environment:
    ORACLE_SID: "{{ asm_instance }}"
    ORACLE_HOME: "{{ gi_home_path }}"
  shell: "{{ gi_home_path }}/bin/asmcmd volinfo -G {{ asm_dg_name }} {{ acfs_vol_name }} | grep Device | sed 's/.*://'"
  register: device_name
  tags: acfs

- name: As oracle user, create the filesystem on the volume (needs to run only once)
  when: inventory_hostname == 'node1'
  become_user: "{{ gi_owner }}"
  shell: "/sbin/mkfs -t acfs {{ device_name.stdout | trim }}"
  tags: acfs

- name: As root, create an empty directory which will house the file system
  become_user: "{{ superuser }}"
  shell: "mkdir -p /{{ acfs_mount_name }}/{{ acfs_vol_name }}; chown root:{{ gi_group }} /{{ acfs_mount_name }}; chmod 770 /{{ acfs_mount_name }}; chown -R {{ gi_owner }}:{{ gi_group }} /{{ acfs_mount_name }}/{{ acfs_vol_name }}; chmod 775 /{{ acfs_mount_name }}/{{ acfs_vol_name }}"
  tags: acfs

- name: As root, set up the file system to be auto-mounted by clusterware (needs to run only once)
  when: inventory_hostname == 'node1'
  become_user: "{{ superuser }}"
  environment:
    ORACLE_SID: "{{ asm_instance }}"
    ORACLE_HOME: "{{ gi_home_path }}"
  shell: "{{ gi_home_path }}/bin/srvctl add volume -volume {{ acfs_vol_name }} -diskgroup {{ asm_dg_name }} -device {{ device_name.stdout | trim }}; {{ gi_home_path }}/bin/srvctl add filesystem -device {{ device_name.stdout | trim }} -path /{{ acfs_mount_name }}/{{ acfs_vol_name }} -diskgroup {{ asm_dg_name }} -user {{ gi_owner }} -fstype ACFS -description \"ACFS General Purpose Mount\""
  tags: acfs

Step 3: Create the Variables (/etc/ansible/roles/acfssetup/vars/main.yml):

ver: "12.1.0.2.0"
superuser: root
asm_instance: "+ASM"
asm_dg_name: DATA
acfs_vol_name: ACFSVOL1
acfs_vol_size: 10G
acfs_mount_name: acfsmounts
gi_owner: oracle
gi_group: oinstall
gi_base_path: "/u01/app/oracle"
gi_home_path: "{{ gi_base_path }}/product/{{ ver | regex_replace('^(.*)\\.(.*)\\.(.*)\\.(.*)$', '\\1.\\2.\\3') }}/grid"
gi_home_name: "OraGI{{ ver | regex_replace('^(.*)\\.(.*)\\.(.*)\\.(.*)$', '\\1\\2') }}"

Step 4: Configure Ansible host file (/etc/ansible/hosts)

node1 ansible_host=node1.foo.com
node2 ansible_host=node2.foo.com

Step 5: Create the skeleton Playbook (/etc/ansible/acfs.yml):

---
- hosts: all
  become: true
  roles:
    - acfssetup

Step 6: Run the playbook

$ ansible-playbook acfs.yml
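
As a quick verification afterwards (a sketch; the mount path is derived from the variables above), an ad-hoc command can confirm the file system is mounted on all nodes:

$ ansible all -m shell -a "df -h /acfsmounts/ACFSVOL1"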
Categories: DBA Blogs

How to sort in alphabetical order and get the 2nd ranked rows

Tom Kyte - Thu, 2018-09-06 17:06
Hi Tom, How to sort the data in alphabetical order and assign a rank to each row (the rank must be assigned in alphabetical order)? I have data like this, and for that I used to order by: A ADAMS-1 ALLEN-2 B B...
Categories: DBA Blogs

DBMS_AQ.DEQUEUE

Tom Kyte - Thu, 2018-09-06 17:06
Hi Tom, Can you please explain me difference between dequeue options dbms_aq.remove and dbms_aq.remove_nodata using in dbms_aq.dequeue. 2. Does dbms_aq.dequeue generate more redolog log that normal delete statement. Is it advisable to hav...
Categories: DBA Blogs

connecting database in network

Tom Kyte - Thu, 2018-09-06 17:06
Hi, I have installed oracle 11g and created database name XYZ in machine A Two systems are connected in same network which is reachable by ping ip address port is also open How B machine will access the DB XYZ on A machine ?? Thanks ...
Categories: DBA Blogs

Heavy swapping in database

Tom Kyte - Thu, 2018-09-06 17:06
Hi Oracle professionals, Please advise us and point us in the right direction. We have a 12c database on Red Hat Enterprise Linux Server release 6.10 with memory settings: SGA: 4928M PGA: 1250M OS memory: 8GB CPU qty: 2 Huge Pages disabled. I...
Categories: DBA Blogs

v$bh@oracle vs innodb-buffer-page@mysql

Tom Kyte - Thu, 2018-09-06 17:06
From https://docs.oracle.com/cd/E17952_01/mysql-5.6-en/innodb-information-schema-buffer-pool-tables.html warning that querying the INNODB_BUFFER_PAGE or INNODB_BUFFER_PAGE_LRU table can affect performance. From the source code i can find th...
Categories: DBA Blogs

STF GROUP, Owner of Brands STUDIO F and ELA, Implements Oracle Cloud to Optimize Inventory Planning Across Omnichannel Footprint in Latin America

Oracle Press Releases - Thu, 2018-09-06 07:00
Press Release
STF GROUP, Owner of Brands STUDIO F and ELA, Implements Oracle Cloud to Optimize Inventory Planning Across Omnichannel Footprint in Latin America Trending Colombian Fashion Retailer Introduces Science to Inform Inventory Mix and Levels across 317 Physical Stores in Colombia, Mexico, Panama and Chile

Redwood Shores, Calif.—Sep 6, 2018

Colombian Fashion Retailer STF GROUP has deployed Oracle Retail Planning and Optimization Cloud Service to support the growth of channels and operations by introducing best-of-breed, advanced planning solutions with supporting retail science to optimize inventory and financial plans. STF GROUP curates a unique fashion shopping experience of high-quality apparel, footwear, handbags and accessories through an online shop, boutiques and certified wholesalers that serve customers in Colombia, Costa Rica, Chile, Ecuador, Guatemala and Peru. After experiencing strong growth in recent years, STF GROUP looked to the cloud for a single source of performance, market and vendor data that will be critical to improving the accuracy and depth of inventory plans while maximizing margin potential and mitigating unnecessary markdowns.

With a footprint of 317 physical stores across Latin America, 4,500 direct employees, and more than 6 million pieces produced per year, STF GROUP understood the need to invest in cloud technology to support continued growth and provide better service for its customers.

“We knew we needed a modern solution that could scale and grow with our business to support our plans to close the year optimizing resources,” said Polo Marin, Operations Director, STF GROUP. “Oracle Retail Consulting helped us implement Oracle Retail Merchandise Financial Planning Cloud Service and Oracle Retail Assortment Planning Cloud Service.”

“According to Euromonitor International, the middle-class of Colombia is consolidating and changing its relationship with retail brands and expecting more personalized service and inventory,” said Carlos Victoria, Vice President of Latin America, Oracle Retail. “Oracle is poised to help retailers like STF GROUP empower their teams to take advantage of new digital opportunities by enabling omnichannel strategies and support alliances with international competitors.”

With Oracle Retail Planning and Optimization Cloud Service, STF GROUP can now take advantage of a proactive planning process that creates transparency of cost in the buying process and a consolidated view of key performance indications in one place. Merchandising planners can manage and organize their assortment plan in units with correct time to market and a structured process based on science that is key to evaluating what to replace first and keep on top of the latest fashion trends and consumer preferences.

“STF GROUP is a dynamic fashion retailer. Throughout the implementation process, we appreciated the importance of the user experience and a robust solution that met their needs to better serve a sophisticated consumer,” said Lou Frio, Vice President Consulting, Oracle Retail. “Our teams worked closely together to achieve a common goal, better customer service.”

 

STF Group implemented Oracle Retail Planning and Optimization Cloud Service in November 2017.

Contact Info
Matt Torres
Oracle
415-595-1584
matt.torres@oracle.com
About Oracle

The Oracle Cloud delivers hundreds of SaaS applications and enterprise-class PaaS and IaaS services to customers in more than 195 countries and territories while processing 55 billion transactions a day. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

About Oracle Retail

Oracle provides retailers with a complete, open, and integrated suite of best-of-breed business applications, cloud services, and hardware that are engineered to work together and empower commerce. Leading fashion, grocery, and specialty retailers use Oracle solutions to anticipate market changes, simplify operations and inspire authentic brand interactions. For more information, visit our website at www.oracle.com/retail.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Matt Torres

  • 415-595-1584

How Not to Collect Optimizer Statistics in an Application Engine Program

David Kurtz - Thu, 2018-09-06 05:24
I was told about a PeopleSoft customer experiencing an Oracle error when collecting statistics during an Application Engine program.
ORA-06533: Subscript beyond count 
ORA-06512: at "SYS.DBMS_STATS", line 36873
It is possibly a manifestation of a database bug. The workaround was not to use AUTO_SAMPLE_SIZE, so instead, this customer initially coded an explicit call to DBMS_STATS.GATHER_TABLE_STATS with a specific sample size.
This blog is not about the bug, but how to manage the workaround.
DO NOT TRY THIS AT HOME!

I think that there are a number of problems with this approach
  1. Using a fixed sample size rather than AUTO_SAMPLE_SIZE should only be considered as a temporary workaround. The new hash-based number-of-distinct-values (NDV) algorithm in Oracle 12c only works with AUTO_SAMPLE_SIZE, and it produces more accurate statistics and runs faster because it saves a large sort operation. Coding a fixed sample size into an Application Engine requires a managed change to be made, tested and released into production; then, when the underlying problem is resolved, the customisation needs to be removed by the same managed process.
  2. DBMS_STATS.GATHER_TABLE_STATS implies a commit. That can lead to problems that PeopleSoft avoids by only collecting statistics via the %UpdateStats macro and controlling when that macro does and does not execute.
Committing and Restart Checkpointing in Application Engine
If a restartable Application Engine program fails, it rolls back to the last commit point, and it can then be restarted from that point. Committing is managed by Application Engine at section and step levels, where the program state record is updated accordingly. If an error occurs in a step after the implicit commit in DBMS_STATS, the data in the application tables can differ from where the state record indicates the program can be restarted. The program may not restart, or it could conceivably execute but produce erroneous results.
Committing inside a do-while loop, including in any other Application Engine program called from inside the loop, is suppressed at Application Engine section/step level, and therefore the execution of the %UpdateStats macro is also suppressed. Otherwise, you could get rogue ORA-01555 Snapshot Too Old errors. Suppression of %UpdateStats is reported in the Application Engine step trace.
-- 19:07:37.199 .(AEMINITEST.MAIN.LOOP) (Do Select)
%Select(recname) SELECT recname FROM psrecdefn WHERE recname = 'JOB'
/
-- Buffers:
-- 1) JOB
-- 19:07:37.204 Iteration 1 of .(AEMINITEST.MAIN.LOOP) (Do Select) loop

-- 19:07:37.208 .(AEMINITEST.MAIN.LOOP) (SQL)
RECSTATS PS_JOB LOW
/
-- 19:07:37.211 UpdateStats ignored - COMMIT required

-- 19:07:37.212 .(AEMINITEST.MAIN.LOOP) (Do Fetch)
Even a previously uncommitted SQL step can lead to %UpdateStats being suppressed.
-- 19:07:35.205 .(AEMINITEST.MAIN.Step01) (SQL)
UPDATE PS_AETEMPTBLMGR SET AE_DEDICATED = 1 WHERE 0 = 1
/
-- Row(s) affected: 0

-- 19:07:35.213 .(AEMINITEST.MAIN.Step02) (PeopleCode)

-- 19:07:35.220 .(AEMINITEST.MAIN.US1) (SQL)
RECSTATS PS_JOB LOW
/
-- 19:07:35.227 UpdateStats ignored - COMMIT required
If you code DBMS_STATS.GATHER_TABLE_STATS explicitly, Application Engine will not recognise the step as having committed. In the following example, you can see that the %UpdateStats on the last step has been suppressed because Application Engine does not recognise that the update in the first step has been committed by the call to DBMS_STATS.
-- 19:12:06.690 .(AEMINITEST.MAIN.Step01) (SQL)
UPDATE PS_AETEMPTBLMGR SET AE_DEDICATED = 1 WHERE 0 = 1
/
-- Row(s) affected: 0
-- 19:12:06.696 .(AEMINITEST.MAIN.Step02) (PeopleCode)
-- 19:12:06.700 .(AEMINITEST.MAIN.GTS) (SQL)
%Execute(/) BEGIN
dbms_stats.gather_table_stats('SYSADM','PS_JOB',estimate_percent=>1); END; /
/
-- Row(s) affected: 1

-- 19:12:09.871 .(AEMINITEST.MAIN.US1) (SQL)
RECSTATS PS_JOB LOW
/
-- 19:12:09.877 UpdateStats ignored - COMMIT required
/
Perhaps the safest form of this workaround would be to have the step with DBMS_STATS, and the immediately preceding step, explicitly commit, as in the following example.  I have also made the program restartable.  Now restart data is checkpointed, and the %UpdateStats macro executes at step US1.
Restart Data CheckPointed
/
COMMIT
/

-- 19:20:24.792 .(AEMINITEST.MAIN.GTS) (SQL)
%Execute(/) BEGIN
dbms_stats.gather_table_stats('SYSADM','PS_JOB',estimate_percent=>1); END; /
/

-- Row(s) affected: 1
/
/
Restart Data CheckPointed
/
COMMIT
/

-- 19:20:29.153 .(AEMINITEST.MAIN.US1) (SQL)
RECSTATS PS_JOB LOW
/
-- Row(s) affected: 1
/
However, you have to consider the state of the application data after an error, and whether you wish to restart or cancel the Application Engine, because you can no longer roll back.
Doing the Right Thing
I recommend that:
  • You should only ever collect stats in Application Engine with the %UpdateStats macro that in turn executes the command in the DDL model.
  • From Oracle 11g both PeopleSoft statistics gathering DDL models should be the same and should ultimately call DBMS_STATS without any parameters other than the table name. The default value of ESTIMATE_PERCENT is AUTO_SAMPLE_SIZE.
There are two DDL models in PeopleSoft because %UpdateStats can be invoked with a second parameter to collect the statistics HIGH or LOW. This dates back to Oracle's ANALYZE command, which could either compute or estimate statistics (and other database platforms had similar options). Collecting optimizer statistics with ANALYZE has been deprecated for many years, but the command still has other valid uses.  It was superseded by DBMS_STATS in Oracle 8i (released in 1998).
Automatic sample size was introduced in Oracle 9i.  In Oracle 9i and 10g, it was usual to use automatic sample size in the high statistics gathering model and a small fixed sample size in the low model for use on very large tables.  The LOW parameter was specified on %Updatestats in Application Engine programs as necessary.
This approach became redundant from Oracle 11g with the introduction of table preferences. If you need to collect statistics with a specific rather than the automatic sample size or specify any other parameters, then a table preference should be defined. It applies whenever statistics are gathered on that table, during the statistics maintenance window, and any places where statistics are explicitly collected, rather than just a single call.
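For example, a preference to use a fixed 1% sample on a specific table might look like this (a sketch; the owner and table are the ones from the traces above, and the value is purely illustrative):
exec DBMS_STATS.SET_TABLE_PREFS(ownname=>'SYSADM', tabname=>'PS_JOB', pname=>'ESTIMATE_PERCENT', pvalue=>'1');
Thereafter, every statistics gathering on that table uses the preference until it is removed with DBMS_STATS.DELETE_TABLE_PREFS.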
From Oracle 12c, as the new NDV algorithm only works with AUTO_SAMPLE_SIZE, you should always use the default unless you have an overarching reason to the contrary, and then you should use a table preference. This approach does not require any application code change because the preference is an attribute of a table in the database.
I recommend using GFCPSSTATS package, calling it from the DDL model (see previous blog Managing Cost-Based Optimizer Statistics for PeopleSoft). The package also includes a mechanism to specify table preferences in metadata, so that they are automatically instantiated when a table is created or altered by Application Designer, and are applied to every instance of a temporary record.
gfcpsstats11.ps_stats(p_ownname=>[DBNAME], p_tabname=>[TBNAME]);
Alternatively, just call DBMS_STATS with no additional parameters (other than FORCE, in case you lock statistics on temporary tables) but then you must manage table preferences manually.
DBMS_STATS.GATHER_TABLE_STATS(ownname=>[DBNAME], tabname=>[TBNAME], force=>TRUE);
Further Reading
