Feed aggregator

Query for report generation in Oracle 11g

Tom Kyte - Sun, 2017-06-11 06:06
How to generate a month-wise, year-wise report for banking?
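
As a starting point, a minimal sketch of the usual GROUP BY approach; the table TRANSACTIONS and the columns TXN_DATE and AMOUNT are hypothetical stand-ins, since the question does not show the schema:

#!/bin/sh
# Hypothetical names throughout; adjust to the real banking schema.
sqlplus -s scott/tiger <<'EOF'
-- month-wise: group on the year and month extracted from the date column
select to_char(txn_date, 'YYYY') as txn_year,
       to_char(txn_date, 'MM')   as txn_month,
       count(*)                  as txn_count,
       sum(amount)               as txn_total
from   transactions
group by to_char(txn_date, 'YYYY'), to_char(txn_date, 'MM')
order by txn_year, txn_month;
-- year-wise: the same query with the month expression removed
EOF
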
Categories: DBA Blogs

How to solve an incident error?

Tom Kyte - Sun, 2017-06-11 06:06
My database suddenly went down with an incident error, showing the message that the client connection could not hand off. I looked at the alert log file and searched the internet as well, but I am not able to find information for the first argument mentioned in the or...
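
One hedged pointer, since the question mentions an incident: the adrci tool that ships with the database can list recorded incidents; the home path below is a hypothetical example.

# list the available ADR homes, then show incidents recorded under one of them
adrci exec="show homes"
adrci exec="set homepath diag/rdbms/orcl/orcl; show incident"
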
Categories: DBA Blogs

Unable to install Oracle 11g on Windows 10

Tom Kyte - Sun, 2017-06-11 06:06
Hello Oracle Experts, I am unable to install Oracle 11g on Windows 10 (64-bit). Error encountered: ORA-12631: Username retrieval failed. The database instance is created but the .DBF files are not created, including the control files. Please note...
Categories: DBA Blogs

Why can't I get the user role when I log on to the database?

Tom Kyte - Sun, 2017-06-11 06:06
Hello everyone. I'm trying to solve a little problem that I have every time I connect as another user on the database. First of all, let me explain what I want to do. When the user connects to the database I want to know his role and just show on o...
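
A minimal sketch of one way to do this, assuming the roles active in the current session are what is wanted; the credentials are placeholders:

#!/bin/sh
sqlplus -s app_user/app_pass <<'EOF'
-- SESSION_ROLES lists the roles currently enabled for whoever is logged on
select role from session_roles order by role;
EOF
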
Categories: DBA Blogs

Oracle 12C Database Running on 11G Grid/ASM

Tom Kyte - Sun, 2017-06-11 06:06
Hello all, I have a client who is currently running on an 11.2.0.3 DB and Grid/ASM. They want me to stand up a new database as 12.2.0.1 to export into while the current database is still running. I've gone over the documentation for upgrading but ...
Categories: DBA Blogs

12c: How to Un-Expire a Password for CDB Users

Michael Dinh - Sat, 2017-06-10 15:11

Use dbms_metadata.get_ddl to extract the user DDL and replace CREATE with ALTER.

oracle@medintdbl01 ~ $ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Sat Jun 10 15:31:13 2017

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> select username, expiry_date, account_status from dba_users where username like '%MONITOR%';

USERNAME
--------------------------------------------------------------------------------
EXPIRY_DA ACCOUNT_STATUS
--------- --------------------------------
C##MONITOR
10-JUN-17 EXPIRED

SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT

SQL> set long 1000000 longchunksize 32000 linesize 32000 pages 0 newpage none
SQL> set heading off tab off echo off define off sqlprefix off blockterminator off timing off verify off feedb off
SQL> set sqlblanklines on embedded on trimspool on  
SQL> select dbms_metadata.get_ddl('USER','C##MONITOR') from dual;

   CREATE USER "C##MONITOR" IDENTIFIED BY VALUES 'S:***'
      DEFAULT TABLESPACE "USERS"
      TEMPORARY TABLESPACE "TEMP"
      PASSWORD EXPIRE

SQL> ALTER USER "C##MONITOR" IDENTIFIED BY VALUES 'S:***';
SQL> select username, expiry_date, account_status from dba_users where username like '%MONITOR%';
C##MONITOR                                                                                       07-DEC-17 OPEN
SQL> 
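
A sketch of scripting the same trick for every expired user, assuming SYSDBA access in CDB$ROOT. It pulls only the IDENTIFIED BY VALUES clause out of the extracted DDL, because replaying the full DDL would also replay PASSWORD EXPIRE; review the generated statements before running them.

#!/bin/sh
sqlplus -s / as sysdba <<'EOF'
set long 1000000 linesize 32000 pages 0 heading off feedback off verify off
select 'ALTER USER "' || username || '" IDENTIFIED BY VALUES ' ||
       regexp_substr(dbms_metadata.get_ddl('USER', username),
                     'IDENTIFIED BY VALUES (''[^'']+'')', 1, 1, null, 1) || ';'
from   dba_users
where  account_status like 'EXPIRED%';
EOF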

Ubuntu: mouse jumps to trash icon / move to lower left corner / bottom left

Dietrich Schroff - Sat, 2017-06-10 14:02
There are many questions about jumping mouse cursors on Ubuntu.
And there are often answers like
  • clean your touchpad
  • your touchpad is broken
  • send your laptop back
But so many people suffer from this that it is not plausible that all of these touchpads are faulty.

One attempted fix is to install
gpointing-device-settings, but this does not really help.
Next thing:
add-apt-repository ppa:atareao/atareao
apt-get update
apt-get install touchpad-indicator
/opt/extras.ubuntu.com/touchpad-indicator/bin/touchpad-indicator
But this is not really a solution either. It does, however, let you disable your touchpad while typing, so that the window focus does not jump to the trash icon and your typing is not interrupted.

After searching I found the following discussion on a mailing list:
https://lists.freedesktop.org/archives/xorg-devel/2014-June/042790.html

And the following did the job:
apt-get install xserver-xorg-core
apt-get install xserver-xorg-input-libinput
apt-get remove --purge xserver-xorg-input-synaptics
and then a logoff and login into the X Window System...
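
One hedged way to check that the switch took effect after logging back in (the log path is the usual default; adjust if your distribution differs):

grep -i "Using input driver" /var/log/Xorg.0.log
# expect lines naming 'libinput' and none naming 'synaptics'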

GoldenGate Debugging

Michael Dinh - Fri, 2017-06-09 21:40

I was working on automating the collection of debug information to submit to Oracle Support and thought I'd share the implementation of what was requested.

OGG_GROUP_NAME is taken from ggsci "info all" (case sensitive):

./debug_gg.sh: line 3: 1: ---> USAGE: ./debug_gg.sh OGG_GROUP_NAME
./debug_gg.sh E_LAX

#!/bin/sh -e
DN=`dirname $0`
BN=`basename $0`
OGG_GROUP_NAME=${1:?"---> USAGE: $DN/$BN OGG_GROUP_NAME"}
set -x
ps -o pid,uname,cmd `pgrep -f "extract.*${OGG_GROUP_NAME}"`
pstack `pgrep -f "extract.*${OGG_GROUP_NAME}"`
pstack `pgrep -f "extract.*${OGG_GROUP_NAME}"`
pstack `pgrep -f "extract.*${OGG_GROUP_NAME}"`
set +x
$GG_HOME/ggsci << EOF
info ${OGG_GROUP_NAME} detail
send ${OGG_GROUP_NAME} status
sh sleep 2m
send ${OGG_GROUP_NAME} status
sh sleep 2m
send ${OGG_GROUP_NAME} status
send ${OGG_GROUP_NAME} report
exit
EOF
echo " "
ls -alrt $GG_HOME/dirrpt/${OGG_GROUP_NAME}*.rpt|tail -3
exit

Example output from pgrep and pstack

++ pgrep -f 'extract.*E_LAX'
+ ps -o pid,uname,cmd 40882
  PID USER     CMD
40882 ggsuser  /u01/gg/12.2.0/extract PARAMFILE /u01/gg/12.2.0/dirprm/e_lax.prm REPORTFILE /u01/gg/12.2.0/dirrpt/E_LAX.rpt PROCESSID E_LAX USESUBDIRS
++ pgrep -f 'extract.*E_LAX'
+ pstack 40882

Just realized this does not scale, since "extract" is hard-coded.
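
A sketch of one fix, assuming the process binary name (extract or replicat) is passed as a second argument; untested against the original environment:

#!/bin/sh -e
DN=`dirname $0`
BN=`basename $0`
OGG_GROUP_NAME=${1:?"---> USAGE: $DN/$BN OGG_GROUP_NAME [extract|replicat]"}
OGG_PROCESS=${2:-extract}
# same pgrep pattern as before, with the hard-coded binary name replaced
ps -o pid,uname,cmd `pgrep -f "${OGG_PROCESS}.*${OGG_GROUP_NAME}"`
pstack `pgrep -f "${OGG_PROCESS}.*${OGG_GROUP_NAME}"`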

Next step, learn how to read pstack output – YIKES!


Unable to run the job which is already created

Tom Kyte - Fri, 2017-06-09 17:06
Hi Tom, I need your help in fixing this. I have created a job which should call a shell script when triggered. But unfortunately I am getting the error "test_job must be a job". Below are the scripts which I have used to create the job. Let ...
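
A generic sketch of the usual shape, assuming the goal is a scheduler job that runs a shell script; the job name, credentials and script path are placeholders, not taken from the question:

#!/bin/sh
sqlplus -s scott/tiger <<'EOF'
begin
  dbms_scheduler.create_job(
    job_name   => 'TEST_JOB',
    job_type   => 'EXECUTABLE',
    job_action => '/home/oracle/scripts/myjob.sh',  -- hypothetical script
    enabled    => true);
  dbms_scheduler.run_job('TEST_JOB');
end;
/
EOF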
Categories: DBA Blogs

Query to get the count of records every two hours

Tom Kyte - Fri, 2017-06-09 17:06
I have a record creation timestamp in my table. I need to write a query to get the count of records every two hours between two specific dates, starting from 1st January 2017 till today. I have the following table USER USER_ID CREATION_D...
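
A sketch of the bucketing trick, under assumed names (USER_LOG and CREATION_DATE stand in for the real table): a day holds twelve two-hour buckets, so flooring (date - anchor) * 12 assigns each row to one.

#!/bin/sh
sqlplus -s scott/tiger <<'EOF'
select date '2017-01-01'
         + floor((creation_date - date '2017-01-01') * 12) / 12 as bucket_start,
       count(*)                                                 as cnt
from   user_log
where  creation_date >= date '2017-01-01'
and    creation_date <  sysdate
group by floor((creation_date - date '2017-01-01') * 12)
order by 1;
EOF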
Categories: DBA Blogs

PeopleTools 8.56 is Now Available

PeopleSoft Technology Blog - Fri, 2017-06-09 16:32

Great news from PeopleSoft!  We are happy to announce that PeopleTools 8.56 is now generally available for install and upgrade.  As is the case with all major PeopleTools releases, it's loaded with new features.  In this release you will find:

  • Ability to have a common look to the user interface in each application, regardless of the underlying Fluid or Classic technology
  • An open-source search technology with proven success that will be easier to install and manage
  • A lifecycle management process that lets changes be selected directly from analytics
  • Ability to create test scripts that use queries to provide test data
  • More platform support for the PeopleSoft Deployment Framework
  • And it’s easier than ever to lift and run your PeopleSoft applications in the Oracle Cloud

Of course there’s more.  Where can you go to get all the details on 8.56?  The first place to start is always the PeopleSoft Information Portal, www.peoplesoftinfo.com.  Once there, select the PeopleTools tab.  There’s a great informational video, about half an hour long, covering some of the new features.  For a bit more detail, check out the PeopleTools section in the Cumulative Feature Overview tool (also linked from the PeopleSoft Information Portal) or the PeopleBooks online help.

Even though this is the first major PeopleTools release since 8.55 in Dec. 2015, you've already seen some of the new features we've been working on.  Selective Adoption lets us release new Enterprise Components like Approval Page Composer, Form Builder and Page and Field Configurator in PUM Images.  We made Elasticsearch available in PeopleTools 8.55, and of course there’s PeopleSoft Cloud Manager, the application available on the Oracle Cloud Marketplace that helps you run PeopleSoft applications on the Oracle Cloud.  Information on all of these features can be found in the PeopleSoft Information Portal.

PeopleTools 8.56 is a major release for PeopleSoft that we are sure will be beneficial.  Remember to keep up to date with the PeopleTools direction by visiting the Planned Features and Enhancements page (https://support.oracle.com/epmos/faces/DocumentDisplay?id=1966243.2) on Oracle Support, and we look forward to hearing your ideas about how we can improve PeopleTools; post them on Idea Space (https://community.oracle.com/community/support/peoplesoft).

Bash: The most useless command (2)

Dietrich Schroff - Fri, 2017-06-09 14:58
The last posting about useless commands for bash discussed the command
rev. One nice comment pointed out that the rev command can be used for searching inside logfiles (thanks to phoxis.org).

As was said there, the command
yes looks even more useless:
NAME
       yes - output a string repeatedly until killed
SYNOPSIS
       yes [STRING]...
OK - it can be used in scripts to answer prompts. But which kind of script always needs the same answer? (And commands like "apt-get" support this via the option "-y".)
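
For what it is worth, one concrete use:

# feed a fixed answer to a command that prompts repeatedly and has no -y option
yes n | cp -i src/*.txt backup/    # answers "n" to every overwrite prompt
yes | head -5                      # or just generate repeated lines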

Any other suggestions about useless bash commands?

Install Apache Kafka on Linux

Yann Neuhaus - Fri, 2017-06-09 06:37


What is Apache Kafka ?

No, Kafka is not only the famous author (en.wikipedia.org/wiki/Franz_Kafka); it’s an open-source distributed pub-sub messaging system with powerful capabilities such as scalability and fault tolerance. It’s also a (near real-time) stream processing platform for streaming data sources. The design of Apache Kafka is strongly influenced by commit logs. Apache Kafka was originally developed by LinkedIn and was subsequently open sourced in early 2011.

The installation is pretty simple, but it needs to be done rigorously.

Binaries installation

    • Prerequisites
      Get a Linux server (I chose CentOS 7.3.1611); it can run on a small configuration (memory: 1 GB min.)
      Connect as a sudo user or root
    • Update your system and reboot
      [root@osboxes ~]# yum update
      Loaded plugins: fastestmirror, langpacks
      Loading mirror speeds from cached hostfile
       * base: mirror.switch.ch
       * epel: mirror.uni-trier.de
       * extras: mirror.switch.ch
       * updates: mirror.switch.ch
      No packages marked for update
  • Install the latest openjdk and set your environment
    [root@osboxes ~]# yum install java-1.8.0-openjdk
    Loaded plugins: fastestmirror, langpacks
    Loading mirror speeds from cached hostfile
     * base: mirror.switch.ch
     * epel: mirror.imt-systems.com
     * extras: mirror.switch.ch
     * updates: mirror.switch.ch
    Package 1:java-1.8.0-openjdk-1.8.0.131-3.b12.el7_3.x86_64 already installed and latest version
    Nothing to do
    
    #Check it:
    [root@osboxes ~]# java -version
    openjdk version "1.8.0_131"
    OpenJDK Runtime Environment (build 1.8.0_131-b12)
    OpenJDK 64-Bit Server VM (build 25.131-b12, mixed mode)
    
    
    #Update your bash_profile:
    export JAVA_HOME=/usr/lib/jvm/jre-1.8.0-openjdk
    export JRE_HOME=/usr/lib/jvm/jre
    # and source your profile:
    [root@osboxes ~]# . ./.bash_profile
    [root@osboxes ~]# echo $JAVA_HOME
    /usr/lib/jvm/jre-1.8.0-openjdk
    [root@osboxes ~]# echo $JRE_HOME
    /usr/lib/jvm/jre
    
  • The Confluent Platform is an open source platform that contains all the components you need
    to create a scalable data platform built around Apache Kafka.
    Confluent Open Source is freely downloadable.
    Install the public key from Confluent

    rpm --import http://packages.confluent.io/rpm/3.2/archive.key
  • Add the confluent.repo to your /etc/yum.repos.d with this content
    [Confluent.dist]
    name=Confluent repository (dist)
    baseurl=http://packages.confluent.io/rpm/3.2/7
    gpgcheck=1
    gpgkey=http://packages.confluent.io/rpm/3.2/archive.key
    enabled=1
    
    [Confluent]
    name=Confluent repository
    baseurl=http://packages.confluent.io/rpm/3.2
    gpgcheck=1
    gpgkey=http://packages.confluent.io/rpm/3.2/archive.key
    enabled=1
  • Clean your yum caches
    yum clean all
  • And finally install the open source version of Confluent
    yum install confluent-platform-oss-2.11
    Transaction Summary
    ============================================================================================================================================================================
    Install  1 Package (+11 Dependent packages)
    
    Total download size: 391 M
    Installed size: 446 M
    Is this ok [y/d/N]: y
    Downloading packages:
    (1/12): confluent-common-3.2.1-1.noarch.rpm                                                                                                          | 2.0 MB  00:00:06
    (2/12): confluent-camus-3.2.1-1.noarch.rpm                                                                                                           |  20 MB  00:00:28
    (3/12): confluent-kafka-connect-elasticsearch-3.2.1-1.noarch.rpm                                                                                     | 4.3 MB  00:00:06
    (4/12): confluent-kafka-2.11-0.10.2.1-1.noarch.rpm                                                                                                   |  38 MB  00:00:28
    (5/12): confluent-kafka-connect-jdbc-3.2.1-1.noarch.rpm                                                                                              | 6.0 MB  00:00:07
    (6/12): confluent-kafka-connect-hdfs-3.2.1-1.noarch.rpm                                                                                              |  91 MB  00:01:17
    (7/12): confluent-kafka-connect-s3-3.2.1-1.noarch.rpm                                                                                                |  92 MB  00:01:18
    (8/12): confluent-kafka-rest-3.2.1-1.noarch.rpm                                                                                                      |  16 MB  00:00:16
    (9/12): confluent-platform-oss-2.11-3.2.1-1.noarch.rpm                                                                                               | 6.7 kB  00:00:00
    (10/12): confluent-rest-utils-3.2.1-1.noarch.rpm                                                                                                     | 7.1 MB  00:00:06
    (11/12): confluent-schema-registry-3.2.1-1.noarch.rpm                                                                                                |  27 MB  00:00:23
    (12/12): confluent-kafka-connect-storage-common-3.2.1-1.noarch.rpm                                                                                   |  89 MB  00:01:08
    ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    Total                                                                                                                                       2.2 MB/s | 391 MB  00:03:00
    Running transaction check
    Running transaction test
    Transaction test succeeded
    Running transaction
      Installing : confluent-common-3.2.1-1.noarch                                                                                                                         1/12
      Installing : confluent-kafka-connect-storage-common-3.2.1-1.noarch                                                                                                   2/12
      Installing : confluent-rest-utils-3.2.1-1.noarch                                                                                                                     3/12
      Installing : confluent-kafka-rest-3.2.1-1.noarch                                                                                                                     4/12
      Installing : confluent-schema-registry-3.2.1-1.noarch                                                                                                                5/12
      Installing : confluent-kafka-connect-s3-3.2.1-1.noarch                                                                                                               6/12
      Installing : confluent-kafka-connect-elasticsearch-3.2.1-1.noarch                                                                                                    7/12
      Installing : confluent-kafka-connect-jdbc-3.2.1-1.noarch                                                                                                             8/12
      Installing : confluent-kafka-connect-hdfs-3.2.1-1.noarch                                                                                                             9/12
      Installing : confluent-kafka-2.11-0.10.2.1-1.noarch                                                                                                                 10/12
      Installing : confluent-camus-3.2.1-1.noarch                                                                                                                         11/12
      Installing : confluent-platform-oss-2.11-3.2.1-1.noarch                                                                                                             12/12
      Verifying  : confluent-kafka-connect-storage-common-3.2.1-1.noarch                                                                                                   1/12
      Verifying  : confluent-platform-oss-2.11-3.2.1-1.noarch                                                                                                              2/12
      Verifying  : confluent-rest-utils-3.2.1-1.noarch                                                                                                                     3/12
      Verifying  : confluent-kafka-connect-elasticsearch-3.2.1-1.noarch                                                                                                    4/12
      Verifying  : confluent-kafka-connect-s3-3.2.1-1.noarch                                                                                                               5/12
      Verifying  : confluent-kafka-rest-3.2.1-1.noarch                                                                                                                     6/12
      Verifying  : confluent-camus-3.2.1-1.noarch                                                                                                                          7/12
      Verifying  : confluent-kafka-connect-jdbc-3.2.1-1.noarch                                                                                                             8/12
      Verifying  : confluent-schema-registry-3.2.1-1.noarch                                                                                                                9/12
      Verifying  : confluent-kafka-2.11-0.10.2.1-1.noarch                                                                                                                 10/12
      Verifying  : confluent-kafka-connect-hdfs-3.2.1-1.noarch                                                                                                            11/12
      Verifying  : confluent-common-3.2.1-1.noarch                                                                                                                        12/12
    
    Installed:
      confluent-platform-oss-2.11.noarch 0:3.2.1-1
    
    Dependency Installed:
      confluent-camus.noarch 0:3.2.1-1                           confluent-common.noarch 0:3.2.1-1                           confluent-kafka-2.11.noarch 0:0.10.2.1-1
      confluent-kafka-connect-elasticsearch.noarch 0:3.2.1-1     confluent-kafka-connect-hdfs.noarch 0:3.2.1-1               confluent-kafka-connect-jdbc.noarch 0:3.2.1-1
      confluent-kafka-connect-s3.noarch 0:3.2.1-1                confluent-kafka-connect-storage-common.noarch 0:3.2.1-1     confluent-kafka-rest.noarch 0:3.2.1-1
      confluent-rest-utils.noarch 0:3.2.1-1                      confluent-schema-registry.noarch 0:3.2.1-1
    
    Complete!

Ok , the binaries are installed now. The next operation will be to configure and launch Zookeeper and Kafka itself !

    • First , take a look at the Zookeeper configuration :
      [root@osboxes kafka]# cat /etc/kafka/zookeeper.properties
      # Licensed to the Apache Software Foundation (ASF) under one or more
      # contributor license agreements.  See the NOTICE file distributed with
      # this work for additional information regarding copyright ownership.
      # The ASF licenses this file to You under the Apache License, Version 2.0
      # (the "License"); you may not use this file except in compliance with
      # the License.  You may obtain a copy of the License at
      #
      #    http://www.apache.org/licenses/LICENSE-2.0
      #
      # Unless required by applicable law or agreed to in writing, software
      # distributed under the License is distributed on an "AS IS" BASIS,
      # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
      # See the License for the specific language governing permissions and
      # limitations under the License.
      # the directory where the snapshot is stored.
      dataDir=/var/lib/zookeeper
      # the port at which the clients will connect
      clientPort=2181
      # disable the per-ip limit on the number of connections since this is a non-production config
      maxClientCnxns=0
    • Don’t change the configuration file (the default values are okay to start with)  and launch Zookeeper
      /usr/bin/zookeeper-server-start /etc/kafka/zookeeper.properties
      ...
      [2017-06-08 14:05:02,051] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
    • Keep the session with Zookeeper and open a new terminal for the Kafka part
      /usr/bin/kafka-server-start /etc/kafka/server.properties
      ...
      [2017-06-08 14:11:31,333] INFO Kafka version : 0.10.2.1-cp1 (org.apache.kafka.common.utils.AppInfoParser)
      [2017-06-08 14:11:31,334] INFO Kafka commitId : 80ff5014b9e74a45 (org.apache.kafka.common.utils.AppInfoParser)
      [2017-06-08 14:11:31,335] INFO [Kafka Server 0], started (kafka.server.KafkaServer)
      [2017-06-08 14:11:31,350] INFO Waiting 10062 ms for the monitored broker to finish starting up... (io.confluent.support.
      metrics.MetricsReporter)
      [2017-06-08 14:11:41,413] INFO Monitored broker is now ready (io.confluent.support.metrics.MetricsReporter)
      [2017-06-08 14:11:41,413] INFO Starting metrics collection from monitored broker... (io.confluent.support.metrics.Metric
      sReporter)
    • As with Zookeeper, leave the Kafka terminal open and launch a new session for the topic creation.

 Topic creation

  • Messages in Kafka are categorized into topics; a topic is like a database table or a directory in a file system.
    At first, we are going to create a new topic.

    [root@osboxes ~]# /usr/bin/kafka-topics --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic dbi
    Created topic "dbi".
  • Check if the topic has been effectively created
    [root@osboxes ~]# /usr/bin/kafka-topics --list --zookeeper localhost:2181
    dbi
  • Nice, we can now produce some messages using the topic “dbi”:

    [root@osboxes ~]# kafka-console-producer --broker-list localhost:9092 --topic dbi
    be passionate
    be successful
    be responsible
    be sharing
  • Open a new terminal and act like a consumer with the console
    /usr/bin/kafka-console-consumer --zookeeper localhost:2181 --topic dbi --from-beginning
    be passionate
    be successful
    be responsible
    be sharing
  • Et voilà! The messages produced with the producer now appear in the consumer window. You can type a new message in the producer console and it will display immediately in the other terminal. If you want to stop the consoles, press Ctrl-C.

Now the most difficult part remains: configuring Kafka with multiple producers and consumers within a complex broker topology.
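
A first small step in that direction, as a sketch: create a topic with several partitions (a replication factor above 1 needs more than one broker) and inspect how it is laid out.

/usr/bin/kafka-topics --create --zookeeper localhost:2181 --replication-factor 1 --partitions 3 --topic dbi-multi
/usr/bin/kafka-topics --describe --zookeeper localhost:2181 --topic dbi-multi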

(Figure: Kafka cluster architecture)

The post Install Apache Kafka on Linux appeared first on Blog dbi services.

12.2 Partitions

Jonathan Lewis - Fri, 2017-06-09 04:13

At the end of my presentation to the UKOUG Database SIG yesterday I summed up (most of) the points I’d made with a slide making the claim:

In 12.2 you can: Convert a simple table to partitioned with multi-column automatic list partitions, partially indexed, with read only segments, filtering out unwanted data, online in one operation.

 

Last night I decided I ought to demonstrate the claim – so here’s a little code, first creating a simple heap table:


rem
rem     Script:         122_features.sql
rem     Author:         Jonathan Lewis
rem     Dated:          June 2017
rem
rem     Last tested
rem             12.2.0.1
rem

create table t1(
        date_start      not null,
        date_end        not null,
        id              not null,
        client_id,
        resort_code,
        uk_flag,
        v1,
        padding,
        constraint t1_range_ck check ((date_end - date_start) in (7, 14, 21))
)
segment creation immediate
nologging
as
with generator as (
        select
                rownum id
        from dual
        connect by
                level <= 1e4
)
select
        trunc(sysdate,'yyyy') + 7 *  mod(rownum, 8)                                     date_start,
        trunc(sysdate,'yyyy') + 7 * (mod(rownum, 8) + trunc(dbms_random.value(1,4)))    date_end,
        rownum                                          id,
        trunc(dbms_random.value(1e5,2e5))               client_id,
        trunc(dbms_random.value(1e4,2e4))               resort_code,
        case when mod(rownum,275) = 0 then 1 end        uk_flag,
        lpad(rownum,10,'0')                             v1,
        lpad('x',100,'x')                               padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e5 -- > "GT" inserted to avoid WordPress formatting issue
;

create index t1_client_idx on t1(client_id);
create index t1_resort_idx on t1(resort_code);
create index t1_ukflag_idx on t1(uk_flag);

alter table t1 add constraint t1_pk primary key(id);

I’ve got a table which models a travel company that arranges holidays lasting one, two, or three weeks, all starting (for convenience) on the same day of the week. So I generate a start and end date for each row, making sure the start date is a multiple of seven days from a base date and the end date is 7, 14, or 21 days later. I’ve got a few indexes on the data, and a primary key constraint. There’s a special flag column on the table for holidays in the UK, which are a small percentage of the holidays booked.

Eventually, when the data gets too big, I decide that I want to partition this data, and the obvious partitioning idea that springs to mind is to partition it so that holidays with the same start date and duration are all in the same partition and each partition holds a single start/duration.

I’ve also decided that I’m going to make old data read-only, and since I’m not interested in the UK holidays once they’ve gone into history I’m going to get rid of some of them.

The index protecting the primary key will have to be global since it won’t contain the partition key; since the index on uk_flag covers a small amount of data I’m going to keep that global as well, but I want the other two indexes to be local – except that for the older data I’m not really interested in keeping the index on client_id.

And I don’t want to stop the application while I’m restructuring the data.

So here’s my one SQL statement:


alter table t1 modify 
partition by list (date_start, date_end) automatic (
        partition p11 values (to_date('01-Jan-2017'),to_date('08-Jan-2017')) indexing off read only,
        partition p12 values (to_date('01-Jan-2017'),to_date('15-Jan-2017')) indexing off read only,
        partition p13 values (to_date('01-Jan-2017'),to_date('22-Jan-2017')) indexing off read only,
        partition p21 values (to_date('08-Jan-2017'),to_date('15-Jan-2017')) indexing off read only,
        partition p22 values (to_date('08-Jan-2017'),to_date('22-Jan-2017')) indexing off read only,
        partition p23 values (to_date('08-Jan-2017'),to_date('29-Jan-2017')) indexing off read only,
        partition p31 values (to_date('15-Jan-2017'),to_date('22-Jan-2017')) indexing off read only,
        partition p32 values (to_date('15-Jan-2017'),to_date('29-Jan-2017')) indexing off read only,
        partition p33 values (to_date('15-Jan-2017'),to_date('05-Feb-2017')) indexing off read only
)
including rows where uk_flag is null or (date_start > to_date('01-feb-2017','dd-mon-yyyy'))
online
update indexes (
        t1_client_idx local indexing partial,
        t1_resort_idx local,
        t1_ukflag_idx indexing partial
)
;

Key Points
  • partition by list (date_start, date_end) — partitioned by a multi-column list
  • automatic — if data arrives for which there is no existing partition, a new one will be created
  • indexing off — some of my partitions (the pre-defined (oldest) ones) will be subject to partial indexing
  • read only — some of my partitions (the pre-defined (oldest) ones) will be made read only
  • including rows where — some of my rows will disappear during the copy [1]
  • online — Oracle will be journalling the data while I copy and will apply the journal at the end
  • update indexes – specify some details about indexes [2]
  • local — some of the rebuilt indexes will be local
  • indexing partial — some of the rebuilt indexes will not hold data (viz: for the partitions declared “indexing off”)

I’ve footnoted a couple of the entries:

[1] – the copy is done read-consistently, so data inserted while the copy takes place will still appear in the final table, even if it looks as if it should have failed the including rows clause.

[2] – indexes which include the partition key will automatically be created as local indexes (and you can declare them here as global, or globally partitioned, if you want to). The manual has an error on this point; it suggests that prefixed indexes will be created as local indexes but then defines “prefixed” to mean “contains the partition key” rather than the usual “starts with the partition key”.

Job done – except for the exhaustive tests that it’s been done correctly, the load test to see how it behaves when lots of new holidays are being booked and current ones being modified, and a little bit of clearing up of “surprise” partitions that shouldn’t be there and changing some of the automatically generated table partitions to be “indexing off” (if and when necessary).
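
A sketch of that last clean-up step, assuming SYS_P1528 is one of the automatically generated partitions that has aged out (both statements are 12.2 syntax; the credentials are placeholders):

#!/bin/sh
sqlplus -s test_user/test_pass <<'EOF'
alter table t1 modify partition SYS_P1528 indexing off;
alter table t1 modify partition SYS_P1528 read only;
EOF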

Here are a few queries – with results – showing the effects this one statement had:


select count(*) from t1;

/*
  COUNT(*)
----------
     99773

-- some rows (old UK) have disappeared from the original 100,000
*/


select
        index_name, partitioned, status, leaf_blocks, num_rows , indexing, orphaned_entries
from
        user_indexes
where   table_name = 'T1'
order by
        partitioned, index_name
;

/*
INDEX_NAME           PAR STATUS   LEAF_BLOCKS   NUM_ROWS INDEXIN ORP
-------------------- --- -------- ----------- ---------- ------- ---
T1_PK                NO  VALID            263      99773 FULL    NO
T1_UKFLAG_IDX        NO  VALID              1        136 PARTIAL NO
T1_CLIENT_IDX        YES N/A              149      62409 PARTIAL NO
T1_RESORT_IDX        YES N/A              239      99773 FULL    NO

-- Indexes: Local or global, full or partial.
*/

select
        segment_type, segment_name, count(*)
from
        user_segments
group by
        segment_type, segment_name
order by
        segment_type desc, segment_name
;

/*
SEGMENT_TYPE       SEGMENT_NAME                COUNT(*)
------------------ ------------------------- ----------
TABLE PARTITION    T1                                24
INDEX PARTITION    T1_CLIENT_IDX                     15
INDEX PARTITION    T1_RESORT_IDX                     24
INDEX              T1_PK                              1
INDEX              T1_UKFLAG_IDX                      1

-- One local index has fewer segments than the other
*/

set linesize 180
set trimspool on

column high_value format a85
break on index_name skip 1
set pagesize 200

select
        index_name, status, leaf_blocks, num_rows, partition_name, high_value
from
        user_ind_partitions
where
        index_name = 'T1_CLIENT_IDX'
--      index_name like 'T1%'
order by
        index_name, partition_position
;

/*
INDEX_NAME           STATUS   LEAF_BLOCKS   NUM_ROWS PARTITION_NAME         HIGH_VALUE
-------------------- -------- ----------- ---------- ---------------------- -------------------------------------------------------------------------------------
T1_CLIENT_IDX        UNUSABLE           0          0 P11                    ( TO_DATE(' 2017-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
                                                                            , TO_DATE(' 2017-01-08 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
                                                                             )

                     UNUSABLE           0          0 P12                    ( TO_DATE(' 2017-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
                                                                            , TO_DATE(' 2017-01-15 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
                                                                             )

                     UNUSABLE           0          0 P13                    ( TO_DATE(' 2017-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
                                                                            , TO_DATE(' 2017-01-22 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
                                                                             )

                     UNUSABLE           0          0 P21                    ( TO_DATE(' 2017-01-08 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
                                                                            , TO_DATE(' 2017-01-15 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
                                                                             )

                     UNUSABLE           0          0 P22                    ( TO_DATE(' 2017-01-08 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
                                                                            , TO_DATE(' 2017-01-22 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
                                                                             )

                     UNUSABLE           0          0 P23                    ( TO_DATE(' 2017-01-08 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
                                                                            , TO_DATE(' 2017-01-29 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
                                                                             )

                     UNUSABLE           0          0 P31                    ( TO_DATE(' 2017-01-15 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
                                                                            , TO_DATE(' 2017-01-22 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
                                                                             )

                     UNUSABLE           0          0 P32                    ( TO_DATE(' 2017-01-15 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
                                                                            , TO_DATE(' 2017-01-29 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
                                                                             )

                     UNUSABLE           0          0 P33                    ( TO_DATE(' 2017-01-15 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
                                                                            , TO_DATE(' 2017-02-05 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
                                                                             )

                     USABLE            10       4126 SYS_P1528              ( TO_DATE(' 2017-01-22 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                            , TO_DATE(' 2017-02-05 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                             )

                     USABLE            10       4198 SYS_P1529              ( TO_DATE(' 2017-01-29 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                            , TO_DATE(' 2017-02-19 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                             )

                     USABLE            10       4211 SYS_P1530              ( TO_DATE(' 2017-02-05 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                            , TO_DATE(' 2017-02-19 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                             )

                     USABLE            10       4214 SYS_P1531              ( TO_DATE(' 2017-02-12 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                            , TO_DATE(' 2017-02-26 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                             )

                     USABLE            10       4195 SYS_P1532              ( TO_DATE(' 2017-02-19 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                            , TO_DATE(' 2017-03-12 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                             )

                     USABLE            10       4113 SYS_P1533              ( TO_DATE(' 2017-01-22 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                            , TO_DATE(' 2017-01-29 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                             )

                     USABLE             9       4027 SYS_P1534              ( TO_DATE(' 2017-01-29 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                            , TO_DATE(' 2017-02-05 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                             )

                     USABLE            10       4217 SYS_P1535              ( TO_DATE(' 2017-02-12 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                            , TO_DATE(' 2017-02-19 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                             )

                     USABLE            10       4167 SYS_P1536              ( TO_DATE(' 2017-02-19 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                            , TO_DATE(' 2017-03-05 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                             )

                     USABLE            10       4230 SYS_P1537              ( TO_DATE(' 2017-01-29 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                            , TO_DATE(' 2017-02-12 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                             )

                     USABLE            10       4113 SYS_P1538              ( TO_DATE(' 2017-02-05 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                            , TO_DATE(' 2017-02-26 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                             )

                     USABLE            10       4069 SYS_P1539              ( TO_DATE(' 2017-02-12 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                            , TO_DATE(' 2017-03-05 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                             )

                     USABLE            10       4215 SYS_P1540              ( TO_DATE(' 2017-01-22 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                            , TO_DATE(' 2017-02-12 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                             )

                     USABLE            10       4138 SYS_P1541              ( TO_DATE(' 2017-02-19 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                            , TO_DATE(' 2017-02-26 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                             )

                     USABLE            10       4176 SYS_P1542              ( TO_DATE(' 2017-02-05 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                            , TO_DATE(' 2017-02-12 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                             )


*/

I’ve limited the index partition output to the index with partial indexing enabled to show that it’s the pre-defined partitions that are marked as unusable and, as you can infer from the segment summary, those unusable index partitions don’t have any segment space allocated to them.

Stress tests are left to the interested reader.


Is it Possible to Run Patch Wizard Offline?

Steven Chan - Fri, 2017-06-09 02:00

Patch Wizard helps you identify useful or critical missing patches for your EBS environment.

Many of the largest EBS customers run their environments where external network connectivity is extremely limited or non-existent.  The latter architectures are sometimes "air gapped" from external networks for maximum security.

Patch Wizard identifies missing patches by comparing your environment's patch levels to the Patch Information Bundle (a.k.a. "InfoBundle"), a master repository of the latest available patches. It is still possible to run Patch Wizard even if your environment is air-gapped.  You need to download the InfoBundle file, or the patches you wish to analyze, from Oracle's Support Portal using a system that has network access to Oracle Support.  Once you've downloaded those files, you can move them to your secure or isolated EBS environment manually.

For more details about running Patch Wizard in network-isolated EBS environments, see:

References

Related Articles

Categories: APPS Blogs

Balance sheet Basics

OracleApps Epicenter - Fri, 2017-06-09 00:47
To understand this, take an example going from the individual to the business. If you want to assess your own financial situation, there are three things you will consider: how much you own, how much you owe, and, from these two, your net worth. net worth = owns - owes, or owns = owes + net worth. So […]
Categories: APPS Blogs

Failed to CREATE tablespace or pfile on NFS (or NAS that was believed to be correctly configured) on Windows

Tom Kyte - Thu, 2017-06-08 22:46
Hi Team, We plan to deploy 2 Oracle labs whose datafiles reside on NFS (or NAS). And we met a strange situation when CREATING a pfile/datafile on NFS (or NAS) on Windows Server: 1) on Windows, we failed to CREATE a pfile (or tablespace) on NFS(...
Categories: DBA Blogs

ERROR Creating a VIEW that has a FUNCTION in a WITH CLAUSE

Tom Kyte - Thu, 2017-06-08 22:46
I am able to run a query that contains a FUNCTION inside of a WITH clause, which is a new 12c feature, but I get an error when I try to use the same SQL inside a VIEW. I tried using LiveSQL, but I cannot even get the SQL below to run, which does wor...
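
A standalone sketch of the feature in question, runnable in SQL*Plus (note the slash terminator that SQL*Plus wants for statements containing PL/SQL); the credentials are placeholders:

#!/bin/sh
sqlplus -s scott/tiger <<'EOF'
with
  function twice(p_n number) return number is
  begin
    return p_n * 2;
  end;
select twice(21) as answer from dual
/
EOF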
Categories: DBA Blogs

PipeLine function in Package

Tom Kyte - Thu, 2017-06-08 22:46
Hi Tom, Good Day! I have some doubts about pipelined functions. I created a pipelined function in a package. I need to declare types within the package to return from a pipelined function. Is it possible to create objects in a package to return from a pipelined function? Is ...
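
A minimal self-contained sketch, with illustrative names: the collection type lives in the package spec and the pipelined function returns it.

#!/bin/sh
sqlplus -s scott/tiger <<'EOF'
create or replace package demo_pkg as
  type t_num_tab is table of number;
  function gen(p_n number) return t_num_tab pipelined;
end demo_pkg;
/
create or replace package body demo_pkg as
  function gen(p_n number) return t_num_tab pipelined is
  begin
    for i in 1 .. p_n loop
      pipe row(i);
    end loop;
    return;
  end gen;
end demo_pkg;
/
select * from table(demo_pkg.gen(3));
EOF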
Categories: DBA Blogs

Compile *.fmb, *.mmb files with Unicode!

Tom Kyte - Thu, 2017-06-08 22:46
Hi Tom! I have been having an issue with Unicode when compiling *.fmb to *.fmx and *.mmb to *.mmx (OS: Linux). I set NSL_LANG=AMERICAN_AMERICA.UTF8; NLS_LENGTH_SEMANTICS=CHAR. When I execute the command below it is successful, but the Unicode fonts...
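
A speculative sketch only (paths, credentials and compiler options vary by Forms version): export NLS_LANG before invoking the batch compiler, once for the form and once for the menu.

#!/bin/sh
export NLS_LANG=AMERICAN_AMERICA.UTF8
frmcmp_batch module=myform.fmb userid=scott/tiger module_type=form compile_all=yes
frmcmp_batch module=mymenu.mmb userid=scott/tiger module_type=menu compile_all=yes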
Categories: DBA Blogs
