Feed aggregator

Testing new PostgreSQL features before alpha/beta/rc releases

Yann Neuhaus - Sun, 2017-05-07 04:32

A long time ago I blogged on how you can use the PostgreSQL development snapshots to test new PostgreSQL features before the alpha/beta/rc releases are officially available. Another way to do this is to use git to get the latest sources and build PostgreSQL from there. Everything that has been committed will be available to test. By the way: a great way to stay up to date is to subscribe to the pgsql-committers mailing list. You'll get a mail for each commit, and maybe one of them catches your attention.

To start you’ll obviously need git. For distributions using yum this is just a matter of:

postgres@pgbox:/home/postgres/ [pg960final] sudo yum install git

For systems using apt use:

postgres@pgbox:/home/postgres/ [pg960final] sudo apt-get install git

Depending on how you want to configure PostgreSQL you’ll need some development packages as well. For yum based systems this is a good starting point:

postgres@pgbox:/home/postgres/ [pg960final] sudo yum install -y gcc openldap-devel python-devel readline-devel redhat-lsb bison flex perl-ExtUtils-Embed zlib-devel crypto-utils openssl-devel pam-devel libxml2-devel libxslt-devel tcl tcl-devel openssh-clients bzip2 net-tools wget screen ksh unzip

For apt based systems you might want to start with this:

postgres@pgbox:/home/postgres/ [pg960final] sudo apt-get install libldap2-dev libpython-dev libreadline-dev libssl-dev bison flex libghc-zlib-dev libcrypto++-dev libxml2-dev libxslt1-dev tcl tclcl-dev bzip2 wget screen ksh libpam0g-dev libperl-dev make unzip libpam0g-dev tcl-dev python

Not all of those packages are required; they just reflect what we usually install before building PostgreSQL from source. Of course you should adjust this and remove packages that are not required for what you plan to do.
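
Most of these development packages only matter if you enable the corresponding features when you run configure later on. As a sketch, a more feature-complete configure call could look like this (all of these are standard PostgreSQL configure switches; run it from the source tree once it is cloned below, and pick only the options you need):

./configure --with-openssl --with-ldap --with-pam --with-python --with-perl --with-tcl --with-libxml --with-libxslt

Leaving a switch out simply builds PostgreSQL without that feature, which also removes the need for the matching development package.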

How do you then get the latest PostgreSQL sources? Quite easy: it is documented in the PostgreSQL wiki:

postgres@pgbox:/home/postgres/ [pg960final] mkdir IwantToTest
postgres@pgbox:/home/postgres/ [pg960final] cd IwantToTest/
postgres@pgbox:/home/postgres/IwantToTest/ [pg960final] git clone git://git.postgresql.org/git/postgresql.git

The result should look similar to this:

Cloning into 'postgresql'...
remote: Counting objects: 629074, done.
remote: Compressing objects: 100% (95148/95148), done.
remote: Total 629074 (delta 534080), reused 626282 (delta 531478)
Receiving objects: 100% (629074/629074), 184.31 MiB | 26.40 MiB/s, done.
Resolving deltas: 100% (534080/534080), done.

From now on you have the complete PostgreSQL sources locally available.

postgres@pgbox:/home/postgres/IwantToTest/ [pg960final] cd postgresql/; ls
aclocal.m4  config  configure  configure.in  contrib  COPYRIGHT  doc  GNUmakefile.in  HISTORY  Makefile  README  README.git  src
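
Since this is an ordinary git clone, refreshing the sources later on is just a matter of a pull. A minimal sketch, run from inside the cloned directory:

postgres@pgbox:/home/postgres/IwantToTest/postgresql/ [pg960final] git pull
postgres@pgbox:/home/postgres/IwantToTest/postgresql/ [pg960final] git log -1 --oneline

The first command fetches and merges whatever was committed upstream since your last pull, the second shows the commit you are currently at.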

Ready to test? Yes, but what? One possible way to start is asking git for what was committed recently:

postgres@pgbox:/home/postgres/IwantToTest/postgresql/ [pg960final] git log
commit 0de791ed760614991e7cb8a78fddd6874ea6919d
Author: Peter Eisentraut peter_e@gmx.net
Date:   Wed May 3 21:25:01 2017 -0400

    Fix cursor_to_xml in tableforest false mode
    
    It only produced <row> elements but no wrapping <table> element.
    
    By contrast, cursor_to_xmlschema produced a schema that is now correct
    but did not previously match the XML data produced by cursor_to_xml.
    
    In passing, also fix a minor misunderstanding about moving cursors in
    the tests related to this.
    
    Reported-by: filip@jirsak.org
    Based-on-patch-by: Thomas Munro thomas.munro@enterprisedb.com
...

Usually you can find a link to the discussion in the commit message, so you can read through the history of a specific commit. Another way is to read the development documentation or the upcoming release notes once they are available.
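
Git itself is usually all you need to dig into a single change. A small sketch, using the commit hash and a keyword taken from the log output above:

postgres@pgbox:/home/postgres/IwantToTest/postgresql/ [pg960final] git show 0de791ed760614991e7cb8a78fddd6874ea6919d
postgres@pgbox:/home/postgres/IwantToTest/postgresql/ [pg960final] git log --oneline --grep=cursor_to_xml

git show prints the full patch together with the commit message, and git log --grep searches the commit messages for a keyword.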

All you need to do then is to build PostgreSQL:

postgres@pgbox:/home/postgres/IwantToTest/postgresql/ [pg960final] ./configure
postgres@pgbox:/home/postgres/IwantToTest/postgresql/ [pg960final] make all
postgres@pgbox:/home/postgres/IwantToTest/postgresql/ [pg960final] sudo make install
postgres@pgbox:/home/postgres/IwantToTest/postgresql/ [pg960final] cd contrib
postgres@pgbox:/home/postgres/IwantToTest/postgresql/contrib/ [pg960final] make all
postgres@pgbox:/home/postgres/IwantToTest/postgresql/contrib/ [pg960final] sudo make install
postgres@pgbox:/home/postgres/IwantToTest/postgresql/contrib/ [pg960final] /usr/local/pgsql/bin/initdb -D /var/tmp/test
postgres@pgbox:/home/postgres/IwantToTest/postgresql/contrib/ [pg960final] /usr/local/pgsql/bin/pg_ctl -D /var/tmp/test start
postgres@pgbox:/home/postgres/IwantToTest/postgresql/contrib/ [pg960final] /usr/local/pgsql/bin/psql postgres
psql (10devel)
Type "help" for help.

pgbox/postgres MASTER (postgres@5432) # 
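
If you want some extra confidence in what you just built, the regression test suite is only one make target away. A quick sketch, run from the top of the source tree:

postgres@pgbox:/home/postgres/IwantToTest/postgresql/ [pg960final] make check
postgres@pgbox:/home/postgres/IwantToTest/postgresql/ [pg960final] make installcheck

make check spins up a temporary instance and runs the core regression tests against it, while make installcheck runs the same tests against the server you just started.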

Happy testing …

 

The article Testing new PostgreSQL features before alpha/beta/rc releases appeared first on Blog dbi services.

Consequences of stopping Oracle support

Amis Blog - Sun, 2017-05-07 03:40

When buying licenses for Oracle, the first year of support is mandatory. After that, a customer may decide to stop paying for the yearly technical support of the Oracle licenses. The consequences of that decision are not always clear to customers. Most OLSAs will contain the sentence: "If you decide not to purchase technical support, you may not update any unsupported program licenses with new versions of the program."

This is correct, but there is more to think about. This post covers the elements that should be considered when deciding to stop support.

Unsupported actions

The Technical Support Policy of Oracle clarifies a bit more of what actions a customer is not entitled to do when stopping the support:

Customers with unsupported programs are not entitled to download, receive, or apply updates, maintenance  releases, patches, telephone assistance, or any other technical support services for unsupported programs.

This means the software instantly becomes legacy, AND a substantial risk. The Oracle software will no longer be upgraded or patched, while the environment (OS, client software, middleware, other connected software) will be, with the possible effect that the application no longer works in the future.

Audit-ranking

Although Oracle claims that the Support, Account Management and LMS departments act more or less separately and will not share this kind of information, it is naive to assume that the decision to stop support for (part of) the Oracle licenses has no consequences for the customer's ranking on LMS's list of audit candidates.

 

Matching Service Levels

The support of the license to be stopped could be part of a so-called 'subset'. In that case the following rule applies according to the Support Policy:

You may desupport a subset of licenses in a license set only if you agree to terminate that subset of licenses.

The definition of a license subset is quite elaborate, but here are two examples:

Oracle Database Enterprise Edition with RAC, Diagnostic and Tuning Pack.

Weblogic Suite with SOA Suite

So stopping support for only the options is a 'Matching Service Levels' issue, which LMS will translate as non-compliance, with the chance that My Oracle Support will not be willing to help when you submit a Service Request.

 


Repricing

Support of Oracle software is related to CSI numbers, and there may be several CSI numbers in one contract. A customer may also have multiple contracts, all with their own negotiated discounts. The following line in the Support Policy is important when stopping support of a line item:

Pricing for support is based upon the level of support and the volume of licenses for which support is ordered. In the event that a subset of licenses on a single order is terminated or if the level of support is reduced, support for the remaining licenses on that license order will be priced at Oracle’s list price for support in effect at the time of termination or reduction minus the applicable standard discount.

This is 'Repricing', also called 'Pricing following Reduction'. The updated support renewal would then be recalculated at a less favourable discount, ending up with no savings: just less product on support for the same costs.
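
A purely hypothetical illustration (all numbers are invented just for the arithmetic): suppose an order of 100 licenses with a list support price of 1,000 per license and a negotiated discount of 60%, giving an annual support fee of 100 x 1,000 x 0.40 = 40,000. If 50 licenses are terminated, the remaining 50 are repriced at list price (50,000) minus only the applicable standard discount; if that standard discount is, say, 20%, the new fee is 50 x 1,000 x 0.80 = 40,000. Half the licenses, the same support bill.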

This mostly applies to terminating a license rather than to terminating support (although the latter is also a 'reduced level of support'), but it is important to know.

By the way, terminating a license within a CSI number, instead of stopping support, is in some cases not a reason for repricing, for example when there has been a reorganisation of contracts in the past.

Reinstatement

When a customer decides, for whatever reason, to reinstate the support, there will be a reinstatement fee.

The reinstatement fee is computed as follows:

a) if technical support lapsed, then the reinstatement fee is 150% of the last annual technical support fee you paid for the relevant program;

b) if you never acquired technical support for the relevant programs, then the reinstatement fee is 150% of the net technical support fee that would have been charged
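
To make that concrete with a purely hypothetical number: if the last annual technical support fee you paid was 10,000 and support has lapsed, the reinstatement fee alone is 150% x 10,000 = 15,000, charged in addition to the technical support fee for the new support period.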

Engineered Systems

Stopping support for a product line also has a peculiar effect on products running on engineered systems.

The lifecycle management of engineered systems is handled by so-called 'bundle patches'. These bundle patches contain storage firmware, BIOS updates, OS updates and... Oracle software patches.

So when stopping Oracle support you would still receive the database and middleware patches through the bundle patches, which is not allowed. And although it might be possible not to apply these patches, that would break the lifecycle management of the engineered system. I don't think this is advisable.

Prerequisites

The prerequisites for making such a decision:

  • An overview of all the Oracle contracts at your firm, which seems pretty obvious, but sometimes takes quite an effort.
  • An overview of what licences you are actually using, compared to what you are entitled to.
Recap

The OPEX (Operational or Operating Expenditures) can be decreased, in some cases substantially, but before jumping to action and conclusions, contact someone who understands the risks and is able to look further ahead into the future together with you.

Resources

Example OLSA: http://www.oracle.com/us/corporate/pricing/olsa-ire-v122304-070683.pdf

Oracle Software Technical Support Policies :  http://www.oracle.com/us/support/library/057419.pdf

The post Consequences of stopping Oracle support appeared first on AMIS Oracle and Java Blog.

VPD Policy Type and Binding

Tom Kyte - Sun, 2017-05-07 03:26
I have a couple of VPD issues that I'm trying to ensure I fully understand based on reading Chapter 7 of Oracle Database 12c Security. I've seen both examples for well respected Oracle authors. Regarding bind variable reference of the SYS_CONTEXT fun...
Categories: DBA Blogs

DBFS and XAG for Goldengate P2

Michael Dinh - Sat, 2017-05-06 11:26

In order to use agctl commands, we need to know the GoldenGate instance_name.

Unfortunately, agctl does not work the same way as srvctl, where it is possible to determine what is configured without already knowing the name.

ggsuser@hawk1 ~ $ $ORACLE_HOME/bin/srvctl config database
DBFS

ggsuser@hawk1 ~ $ $GRID_HOME/bin/agctl config goldengate
XAG-212: Instance '' is not yet registered.
ggsuser@hawk1 ~ $ 

How do we find out what the goldengate instance name is? IFF XAG is configured, then grep for it.

ggsuser@hawk1 ~ $ $GRID_HOME/bin/crsctl stat res -t|grep -A2 xag
xag.gg_xx-vip.vip
      1        ONLINE  ONLINE       hawk1                STABLE
xag.gg_xx.goldengate
      1        ONLINE  ONLINE       hawk1                STABLE
--------------------------------------------------------------------------------

ggsuser@hawk1 ~ $ $GRID_HOME/bin/agctl status goldengate gg_xx
Goldengate  instance 'gg_xx' is running on hawk1
ggsuser@hawk1 ~ $ 
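
If you want to script this lookup, the instance name can be pulled straight out of the resource name. A small sketch based on the crsctl output above (the awk split is my own addition, so verify it against your resource naming):

ggsuser@hawk1 ~ $ $GRID_HOME/bin/crsctl stat res -w "TYPE = xag.goldengate.type" | awk -F'[=.]' '/^NAME=/ {print $3}'
gg_xx

The filter keeps only the NAME= line of the GoldenGate resource and prints the middle part of xag.gg_xx.goldengate.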

Some other useful commands to gather configuration info.

ggsuser@hawk1 ~ $ $GRID_HOME/bin/crsctl stat res|grep xag
NAME=xag.gg_xx-vip.vip
NAME=xag.gg_xx.goldengate
TYPE=xag.goldengate.type
ggsuser@hawk1 ~ $ 

ggsuser@hawk1 ~ $ $GRID_HOME/bin/crsctl stat res|grep -i type|sort -u
TYPE=app.appvipx.type
TYPE=local_resource
TYPE=ora.asm.type
TYPE=ora.cluster_vip_net1.type
TYPE=ora.cvu.type
TYPE=ora.database.type
TYPE=ora.diskgroup.type
TYPE=ora.listener.type
TYPE=ora.mgmtdb.type
TYPE=ora.mgmtlsnr.type
TYPE=ora.network.type
TYPE=ora.oc4j.type
TYPE=ora.ons.type
TYPE=ora.scan_listener.type
TYPE=ora.scan_vip.type
TYPE=xag.goldengate.type

ggsuser@hawk1 ~ $ $GRID_HOME/bin/crsctl stat res -w "TYPE = xag.goldengate.type" -p
NAME=xag.gg_xx.goldengate
TYPE=xag.goldengate.type
ACL=owner:ggsuser:rwx,pgrp:dba:r-x,other::r--
ACTIONS=
ACTION_SCRIPT=%CRS_HOME%/bin/aggoldengateas
ACTION_TIMEOUT=60
ACTIVE_PLACEMENT=0
AGENT_FILENAME=%CRS_HOME%/bin/scriptagent
AUTO_START=restore
CARDINALITY=1
CHECK_INTERVAL=30
CHECK_TIMEOUT=0
CLEAN_TIMEOUT=60
CRITICAL_EXTRACTS=
CRITICAL_REPLICATS=
CRS_ATTRIBUTES=
DATABASES= (No DB dependencies - User Exits)
DATAGUARD_AUTOSTART=no
DB_SERVICES=
DEGREE=1
DELETE_TIMEOUT=60
DESCRIPTION="Oracle GoldenGate Clusterware Resource"
ENABLED=1
ENVIRONMENT_VARS=
FAILOVER_DELAY=0
FAILURE_INTERVAL=600
FAILURE_THRESHOLD=5
FILESYSTEMS=dbfs_mount
GG_HOME=/u03/gg/12.2.0
GG_INSTANCE_TYPE=target
HOSTING_MEMBERS=hawk1 hawk2
INSTANCE_FAILOVER=1
INTERMEDIATE_TIMEOUT=0
JAGENT_AUTOSTART=no
LOAD=1
LOGGING_LEVEL=1
MODIFY_TIMEOUT=60
MONITOR_EXTRACTS=
MONITOR_REPLICATS=
OFFLINE_CHECK_INTERVAL=0
ORACLE_CLIENT_HOME=
ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_1
PLACEMENT=restricted
RELOCATE_BY_DEPENDENCY=1
RESTART_ATTEMPTS=5
SCRIPT_TIMEOUT=60
SERVER_CATEGORY=
SERVER_POOLS=
START_CONCURRENCY=0
START_DEPENDENCIES=hard(xag.gg_xx-vip.vip,dbfs_mount) pullup(xag.gg_xx-vip.vip,dbfs_mount)
START_TIMEOUT=300
STOP_CONCURRENCY=0
STOP_DEPENDENCIES=hard(xag.gg_xx-vip.vip,intermediate:dbfs_mount)
STOP_TIMEOUT=300
UPTIME_THRESHOLD=10m
USER_WORKLOAD=no
USE_STICKINESS=0
VERSION=2
VIP_CREATED=1
VIP_NAME=xag.gg_xx-vip.vip
ggsuser@hawk1 ~ $ 

You might be thinking: if there are no database dependencies, then why is it referencing the Database Home?

ggsuser@hawk1 ::/u03/gg/12.2.0
$ ldd ggsci 
	linux-vdso.so.1 =>  (0x00007ffcaa8ff000)
	librt.so.1 => /lib64/librt.so.1 (0x00007f6a02c5b000)
	libdl.so.2 => /lib64/libdl.so.2 (0x00007f6a02a56000)
	libgglog.so => /u03/gg/12.2.0/./libgglog.so (0x00007f6a02630000)
	libggrepo.so => /u03/gg/12.2.0/./libggrepo.so (0x00007f6a023ba000)
	libdb-6.1.so => /u03/gg/12.2.0/./libdb-6.1.so (0x00007f6a01fd5000)
	libggperf.so => /u03/gg/12.2.0/./libggperf.so (0x00007f6a01da5000)
	libggparam.so => /u03/gg/12.2.0/./libggparam.so (0x00007f6a00c8d000)
	libicui18n.so.48 => /u03/gg/12.2.0/./libicui18n.so.48 (0x00007f6a0089d000)
	libicuuc.so.48 => /u03/gg/12.2.0/./libicuuc.so.48 (0x00007f6a0051c000)
	libicudata.so.48 => /u03/gg/12.2.0/./libicudata.so.48 (0x00007f69fed57000)
	libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f69feb3a000)
	libxerces-c.so.28 => /u03/gg/12.2.0/./libxerces-c.so.28 (0x00007f69fe574000)
	libantlr3c.so => /u03/gg/12.2.0/./libantlr3c.so (0x00007f69fe35b000)
	libnnz12.so => /u01/app/oracle/product/12.1.0/db_1/lib/libnnz12.so (0x00007f69fdc36000)
	libclntsh.so.12.1 => /u01/app/oracle/product/12.1.0/db_1/lib/libclntsh.so.12.1 (0x00007f69fabbf000)
	libons.so => /u01/app/oracle/product/12.1.0/db_1/lib/libons.so (0x00007f69fa97a000)
	libclntshcore.so.12.1 => /u01/app/oracle/product/12.1.0/db_1/lib/libclntshcore.so.12.1 (0x00007f69fa406000)
	libggnnzitp.so => /u03/gg/12.2.0/./libggnnzitp.so (0x00007f69f9922000)
	libm.so.6 => /lib64/libm.so.6 (0x00007f69f9620000)
	libc.so.6 => /lib64/libc.so.6 (0x00007f69f925e000)
	/lib64/ld-linux-x86-64.so.2 (0x00005624a8090000)
	libstdc++.so.6 => /lib64/libstdc++.so.6 (0x00007f69f8f56000)
	libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00007f69f8d3f000)
	libmql1.so => /u01/app/oracle/product/12.1.0/db_1/lib/libmql1.so (0x00007f69f8ac8000)
	libipc1.so => /u01/app/oracle/product/12.1.0/db_1/lib/libipc1.so (0x00007f69f8750000)
	libnsl.so.1 => /lib64/libnsl.so.1 (0x00007f69f8537000)
	libaio.so.1 => /lib64/libaio.so.1 (0x00007f69f8335000)

Wouldn't it be much better if the GoldenGate installation were self-contained, without having to download and install two components!


Error when trying to Import Schema Using IMP

Tom Kyte - Sat, 2017-05-06 09:06
Hi Tom, I am trying to import a schema that was exported using old export method (EXP), when importing the schema after some time i get below errors IMP-00019 row rejected due to ORACLE error 1400 IMP-0003 ORACLE error 1400 encountered OR...
Categories: DBA Blogs

Partitions in 11g

Tom Kyte - Sat, 2017-05-06 09:06
Hi team, Could you please help to get answer of One of my interview question - Consider a non-partition table x having date column with 1000 rows. How can we insert future rows i.e. from 1001 .. onward into partition (without modifying table st...
Categories: DBA Blogs

export error

Tom Kyte - Sat, 2017-05-06 09:06
Hi team, I have exported schema with expdp and import into to the development database all are fine but 5 tables are missing i checked on production with SELECT COUNT(*) FROM CPG_PROD.MDRT_20315$; -->>It shows 59 rows but when i try to export that...
Categories: DBA Blogs

wait event ' buffer busy wait' on sys.aud$

Tom Kyte - Sat, 2017-05-06 09:06
Hi, In our prod database we could see more buffer busy wait events on query "insert on sys.aud$" table? Can you explain why buffer busy wait occurred on sys.aud$ table and how to avoid ?
Categories: DBA Blogs

How to identify the total number of distinct blocks (LIO) read by a particular SQL? Is it possible at all?

Tom Kyte - Sat, 2017-05-06 09:06
Hi, There are various ways to easily identify the LIO for SQL execution as a primary measure for performance analysis. As many of the index and table blocks are usually read multiple times over and over again for SQL execution, is there a way t...
Categories: DBA Blogs

char vs varchar2 when end result is fixed format value

Tom Kyte - Sat, 2017-05-06 09:06
We have a temporary table with about 500 columns that is used to generate a fixed format file (.txt). If we use all char fields, we can just build the data as field1 || field2 || field3 ... field500 If we use varchar2 we have to rpad each fiel...
Categories: DBA Blogs

The Hello World of Machine Learning – with Python, Pandas, Jupyter doing Iris classification based on quintessential set of flower data

Amis Blog - Sat, 2017-05-06 01:58

Plenty of articles describe this hello world of Machine Learning. I will merely list some references and personal notes, primarily for my own convenience.

The objective is: get a first hands on exposure to machine learning – using a well known example (Iris classification) and using commonly used technology (Python). After this first step, a second step seems logical: doing the same thing with my own set of data.

Useful Resources:

Starting time: 6.55 AM

6.55 AM Download and install latest version of Oracle Virtual Box (5.1.22)

7.00 AM Download Fedora 64-bit ISO image (https://getfedora.org/en/workstation/download/)

7.21 AM Create Fedora VM and install Fedora Linux on it from ISO image (create users root/root and python/python); reboot, complete installation, run dnf update (updates worth 850 MB, 1348 upgrade actions – I regret this step), install Virtual Box Guest Addition (non trivial) using this article: https://fedoramagazine.org/install-fedora-virtualbox-guest/.

8.44 AM Save a Snapshot of the VM to retain its fresh, mint, new car smell  condition.

8.45 AM Install Python environment for Machine Learning (Python plus relevant libraries; possibly install Notebook server)

8.55 AM Save another snapshot of the VM in its current state

Now that the environment has been prepared, it is time for the real action, based on the second article in the list of resources.

10.05 AM start on machine learning notebook sample – working through Iris classification

10.15 AM done with sample; that was quick. And pretty impressive.

 

It seems the Anaconda distribution of Python may be valuable to use. I have downloaded and installed: https://www.continuum.io/downloads .

Note: to make the contents of a shared Host Directory available to all users

cd (go to home directory of current user)

mkdir share (in the home directory of the user)

sudo mount -t vboxsf Downloads ~/share/ (this makes the shared folder called Downloads on the VirtualBox host available as directory share in the guest (Fedora))
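
An alternative that avoids the manual mount (my own habit, not part of the original setup, and it assumes the shared folder was added with auto-mount enabled): add the user to the vboxsf group that the Guest Additions create, log out and back in, and the folder shows up under /media.

sudo usermod -aG vboxsf python (the python user created earlier)

ls /media/sf_Downloads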

Let's see about this thing with Jupyter Notebooks (formerly known as IPython). Installing the Jupyter notebook is discussed here: https://github.com/rasbt/python-machine-learning-book/blob/master/code/ch01/README.md . Since I installed Anaconda (4.3.1 for Python 3.6), I have the Jupyter app installed already.

With the following command, I download a number of notebooks:

git clone https://github.com/rhiever/Data-Analysis-and-Machine-Learning-Projects

Let’s try to run one.

cd /home/python/Data-Analysis-and-Machine-Learning-Projects/example-data-science-notebook

jupyter notebook 'Example Machine Learning Notebook.ipynb'

And the notebook opens in my browser.


I can run the notebook, walk through it step by step, edit the notebook’s contents and run the changed steps. Hey mum, I’m a Data Scientist!

Oh, it’s 11.55 AM right now.

 

Some further interesting reads to get going with Python, Pandas and Jupyter Notebooks, and with data:

The post The Hello World of Machine Learning – with Python, Pandas, Jupyter doing Iris classification based on quintessential set of flower data appeared first on AMIS Oracle and Java Blog.

DBFS and XAG for Goldengate P1

Michael Dinh - Fri, 2017-05-05 16:53

What's the difference between the two GoldenGate configurations below?

$ $GRID_HOME/bin/agctl config goldengate gg_xx

GoldenGate location is: /u03/gg/12.2.0
GoldenGate instance type is: target
Configured to run on Nodes: arrow1 arrow2
ORACLE_HOME location is: /u01/app/oracle/product/12.1.0/db_1
Databases needed: ora.emu1.db
File System resources needed: dbfs_mount
Extracts to monitor: 
Replicats to monitor: 
Critical extracts: 
Critical replicats: 
Autostart on DataGuard role transition to PRIMARY: no
Autostart JAgent: no

$ $GRID_HOME/bin/agctl config goldengate gg_xx

GoldenGate location is: /u03/gg/12.2.0
GoldenGate instance type is: target
Configured to run on Nodes: hawk1 hawk2
ORACLE_HOME location is: /u01/app/oracle/product/12.1.0/db_1
File System resources needed: dbfs_mount
Extracts to monitor: 
Replicats to monitor: 
Critical extracts: 
Critical replicats: 
Autostart on DataGuard role transition to PRIMARY: no
Autostart JAgent: no

Here are how they are added:

# $GRID_HOME/bin/agctl add goldengate gg_xx \
--instance_type target \
--oracle_home /u01/app/oracle/product/12.1.0/db_1 \
--nodes hawk1,hawk2 \
--network 1 --ip 10.10.10.101 \
--user ggsuser --group dba \
--filesystems dbfs_mount \
--gg_home /u03/gg/12.2.0 \
--databases ora.emu1.db

# $GRID_HOME/bin/agctl add goldengate gg_xx \
--instance_type target \
--oracle_home /u01/app/oracle/product/12.1.0/db_1 \
--nodes arrow1,arrow2 \
--network 1 --ip 10.10.10.201 \
--user ggsuser --group dba \
--filesystems dbfs_mount \
--gg_home /u03/gg/12.2.0 

One is actually a source from the database and the other is a target with User Exits.
But, but, but: instance_type is target for both.
That's right (meaning your observation is correct), hehe.
Bad implementation. Don't do that.

instance_type: OGG source or OGG target (source, target); dual is bi-directional.
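
If a registration like the one above really is a source, the instance_type should say so. I have not verified whether that attribute can be changed in place, so treat this purely as a sketch and check agctl's help output first; worst case, remove the resource and add it again with the correct options:

$ $XAG_HOME/bin/agctl modify goldengate gg_xx --instance_type source

or, if modify does not accept it:

$ $XAG_HOME/bin/agctl remove goldengate gg_xx

followed by a corrected agctl add goldengate.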

$XAG_HOME/bin/agctl add goldengate lax_ggate \
--gg_home /acfsmount/ggs112 \
--instance_type dual \
--nodes rac01,rac02 \
--vip_name lax-ggate1-vip \
--filesystems ora.dg_acfs.vg_acfs.acfs \
--databases ora.emu.db \
--oracle_home /u01/app/oracle/product/11.2.0.4/db_1 \
--monitor_extracts ELAX,PLAX_DEN \
--critical_extracts ELAX,PLAX_DEN \
--monitor_replicats RDEN_LAX \
--critical_replicats RDEN_LAX

Why is agctl called from $GRID_HOME in one case and from $XAG_HOME in the other?

$ $GRID_HOME/bin/agctl query releaseversion
The Oracle Grid Infrastructure Agents release version is 3.1.0

$ $GRID_HOME/bin/agctl query deployment
The Oracle Grid Infrastructure Agents deployment is bundled

agctl from GRID_HOME is bundled with the Grid Infrastructure installation, but it is an older version.
agctl from XAG_HOME is the standalone version, downloaded and installed from:

Oracle Grid Infrastructure Standalone Agents for Oracle Clusterware 11g Rel. 2, 12c Rel. 1 and 12c Rel. 2
http://www.oracle.com/technetwork/database/database-technologies/clusterware/downloads/xag-agents-downloads-3636484.html
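
Installing the standalone agents boils down to unzipping the downloaded kit and running its setup script. A rough sketch from memory (the zip name and the exact option spelling depend on the version, so check the README that ships with the download):

$ unzip xagpack.zip
$ cd xag
$ ./xagsetup.sh --install --directory /u01/app/xag --all_nodes

This deploys the agents, including agctl, into the given directory on all cluster nodes.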


deadlock question

Tom Kyte - Fri, 2017-05-05 14:46
Hi, Forgive my pool english,I try my best to make my question clear. I think deadlock only accured in a tansaction ,etc, session 1 session 2 start transaction ...
Categories: DBA Blogs

how to concate rows into select

Tom Kyte - Fri, 2017-05-05 14:46
Hi Tom, <code>create table test3 (name varchar2(10), sub varchar2(10), marks number ); insert into test3 values('joe','maths',90); insert into test3(sub,marks) values('social',80); insert into test3(sub,marks) values('science',70); ...
Categories: DBA Blogs

Show rows where the running total is zero and all preceding rows

Tom Kyte - Fri, 2017-05-05 14:46
Hi Team, Data Setup: <code>create table test_item as with t1 as ( select 1 category, 'abc' Item_name,100 item_volume,12.3 item_price,to_date('04-may-2017 12:22:33 AM','dd-mon-yyyy hh:mi:ss AM') time from dual union select 1,'abc' ,-100 ,1...
Categories: DBA Blogs

Tables not visible based on Role

Tom Kyte - Fri, 2017-05-05 14:46
Hi In my Project, we have created two users 1) admin 2) Report One Role: Reporting Role Reporting role is granted to Report user. Now all the tables are created using admin user and select access is granted to reporting Role. N...
Categories: DBA Blogs

Audit

Tom Kyte - Fri, 2017-05-05 14:46
Hi Team, I have a requirement to generate audit report on a parent-child table. We are capturing changes to the tables using trigger. But when I am using the audit tables to reconstruct the history I am getting phantom rows which never existed in...
Categories: DBA Blogs

List of “Dashboard” Startups and Products

Nilesh Jethwa - Fri, 2017-05-05 14:14

The idea of building a dashboard application has become synonymous with the idea of a developer thinking "let us build a to-do application."

This is evident from the sheer number of dashboard-related applications that developers and founders have submitted to Hacker News over the past several years.

The dashboards cover a wide spectrum: from personal dashboards, project management dashboards and network dashboards on one side, to full-blown social media dashboards on the other.

If you are looking to build the next dashboard application, or are just shopping around, it is very interesting to look at all the "Show HN" dashboard submissions.

Read more at http://www.infocaptor.com/dashboard/list-of-dashboard-startups

I'm On a New Blog Platform!

Shay Shmeltzer - Fri, 2017-05-05 12:15

This happens every several years: our blogging platform at Oracle is switching to a new environment, and my blog is one of those moving. In the next few days I'll be testing to see if the content migration did its magic and everything works.

If you run into any broken entries/links/samples please drop me a line or just comment on the specific blog entry, and I'll try to fix things.

Categories: Development

Golden Gate 12c and DIY Sequence Replication with PL/SQL

Amis Blog - Fri, 2017-05-05 10:58

Recently, while migrating AIX 11gR2 Databases to Oracle Linux 12cR1 on an ODA X5-2, our setup of Sequence Replication by Oracle Golden Gate appeared to be faulty. The target side sequences were not automatically incremented.

The problem came to light during the migration of the acceptance databases, and under some time pressure we decided to generate drop + create statements (start with = DBA_SEQUENCES.LAST_NUMBER + DBA_SEQUENCES.INCREMENT_BY) for all sequences in the Source, and to run these statements on the Target. Although this eventually produced the desired result, there were two side effects:

    With a total of 1270 sequences, the operation as a whole took more than an hour.
    Packages and triggers referencing these sequences became invalid.

Further research revealed that the GoldenGate sequence replication in Production suffered from the same problem, and I wondered if I could find a better solution now that I had a bit more time at hand. Well, I discovered that to set any desired sequence "currval" value, a one-time temporary adjustment of the increment and a subsequent call to the sequence "nextval" pseudo column is sufficient. What follows is the output of a quick test; check out what happens with "USER_SEQUENCES.LAST_NUMBER", and what it really means in combination with the cache.

Create a test sequence

CREATE SEQUENCE TEST_SEQ_01 
START WITH 10 
INCREMENT BY 1000 
MINVALUE 10 
CACHE 20 
NOCYCLE 
NOORDER;
 
-- the sequence returns no current value yet
SELECT TEST_SEQ_01.CURRVAL from dual;
  ORA-08002: sequence TEST_SEQ_01.CURRVAL is not yet defined in this session.

-- check out last_number... it equals nextval because the cache doesn't exist yet
SELECT MIN_VALUE, INCREMENT_BY, CACHE_SIZE, LAST_NUMBER 
FROM user_sequences 
WHERE sequence_name = 'TEST_SEQ_01'; 
  MIN_VALUE	INCREMENT_BY CACHE_SIZE	LAST_NUMBER
  10	      1000	       20	        10

-- generate the first number and create the cache
SELECT TEST_SEQ_01.NEXTVAL from dual;
  NEXTVAL
  10

-- last_number is updated as the highest possible number of the cache
SELECT MIN_VALUE, INCREMENT_BY, CACHE_SIZE, LAST_NUMBER 
FROM user_sequences 
WHERE sequence_name = 'TEST_SEQ_01';
  MIN_VALUE	INCREMENT_BY CACHE_SIZE LAST_NUMBER
  10	      1000	       20	        20010

-- and now a current value is returned
SELECT TEST_SEQ_01.CURRVAL from dual;
  CURRVAL
  10

Set the current sequence value = 20000 without recreating the sequence

-- adjust the increment
ALTER SEQUENCE TEST_SEQ_01 INCREMENT BY 19990;

-- last_number equals the sequence next value
-- the last "alter sequence" command must have flushed the cache
SELECT MIN_VALUE, INCREMENT_BY, CACHE_SIZE, LAST_NUMBER 
FROM user_sequences 
WHERE sequence_name = 'TEST_SEQ_01';
  MIN_VALUE	INCREMENT_BY CACHE_SIZE LAST_NUMBER
  10	      19990	       20	        20000

-- generate the next value and create a new cache
SELECT TEST_SEQ_01.NEXTVAL from dual
  NEXTVAL
  20000

-- last_number is updated as the highest possible number of the cache
SELECT MIN_VALUE, INCREMENT_BY, CACHE_SIZE, LAST_NUMBER 
FROM user_sequences 
WHERE sequence_name = 'TEST_SEQ_01';
  MIN_VALUE	INCREMENT_BY CACHE_SIZE LAST_NUMBER
  10	      19990	       20	        419800

-- the sequence has the desired current value
SELECT TEST_SEQ_01.CURRVAL from dual
  CURRVAL
  20000

Reset the increment

-- set the increment_by value back to original
ALTER SEQUENCE TEST_SEQ_01 INCREMENT BY 1000;

-- again, the cache is flushed and last_number equals the next value
SELECT MIN_VALUE, INCREMENT_BY, CACHE_SIZE, LAST_NUMBER 
FROM user_sequences 
WHERE sequence_name = 'TEST_SEQ_01';
  MIN_VALUE	INCREMENT_BY CACHE_SIZE LAST_NUMBER
  10	      1000	       20	        21000

-- generate the next value and create a new cache
SELECT TEST_SEQ_01.NEXTVAL from dual
  NEXTVAL
  21000

-- last_number is updated as the highest possible number of the cache 
SELECT MIN_VALUE, INCREMENT_BY, CACHE_SIZE, LAST_NUMBER 
FROM user_sequences 
WHERE sequence_name = 'TEST_SEQ_01';
  MIN_VALUE	INCREMENT_BY CACHE_SIZE LAST_NUMBER
  10	      1000	       20	        41000

-- the increment is back to 1000
SELECT TEST_SEQ_01.NEXTVAL from dual
  NEXTVAL
  22000

This test shows that “USER_SEQUENCES.LAST_NUMBER”:

  • Is identical to the sequence "nextval" directly after a "create sequence" or "alter sequence" command, because the cache does not exist yet after the first definition and gets flushed by an alter.
  • Is updated and saved to disk as the highest possible cache number after a call to "nextval".
  • Serves as a safeguard (i.e. after a crash) to ensure that sequence numbers do not conflict with numbers previously issued.

I decided to use "DBA_SEQUENCES.LAST_NUMBER" instead of the "currval" pseudo column to compare sequences in Source and Target. The reason is that "currval" is only (and by definition) the value returned by my session's last call to "nextval". If my session has not called "nextval" yet, "currval" is undefined. So I would have to call "nextval" on 1270 sequences in Source and also in Target before I could even start the comparison, while the last_numbers are already there to compare with. Also, this activity is unwanted during the short inactive-Source and inactive-Target migration stage and would take too much time. Last but not least, an exact match of sequence "currval" values is not really necessary: a guarantee of higher sequence "currval" values in Target compared to those in Source is quite enough.

The next short piece of code is what I eventually came up with and used in the Production migration. It took less than 3 minutes of processing time, did not render any Oracle object invalid, and contributed greatly to a very limited migration inactivity time.

    -- Code assumes:
    --   1. "nocycle" sequences with positive "increment_by" values
    --   2. identical number of sequences and sequence DDL in Source and Target Database 
    -- Grant 'alter any sequence' and 'select any sequence' to the owner
    -- Replace the database link and schema names with your own
    -- Run the code from Target
    declare
      v_ret PLS_INTEGER := 0;
      v_dummy VARCHAR2(100);
      v_ln number := 0;
      v_ib number := 0;
      v_cz number := 0;
      v_incr number := 0;
    begin
      for i in ( select sequence_owner  so
                      , sequence_name   sn
                      , last_number     ln
                      , increment_by    ib
                      , cache_size      cz 
                 from dba_sequences@<DBLINK_FROM_SOURCE2TARGET>
                 where sequence_owner in ('<SCHEMA01>','<SCHEMA02>','<SCHEMA03>','<SCHEMA04>') )
      loop
          select last_number
               , increment_by 
               , cache_size
            into v_ln
               , v_ib
               , v_cz 
          from dba_sequences
          where sequence_owner = i.so
            and sequence_name = i.sn;
    
    -- set the difference in last_numbers as increment if target.last_number < source.last_number
          if v_ln < i.ln then
            v_incr := i.ln - v_ln;
    -- set the cache as increment if last_numbers match  
          elsif v_ln = i.ln then
            v_incr := v_ib * v_cz;
          end if;
          
          if v_ln <= i.ln then    
            execute immediate 'alter sequence '||i.so||'.'||i.sn||' increment by '||v_incr;
            execute immediate 'select '||i.so||'.'||i.sn||'.nextval from dual' into v_dummy;
            execute immediate 'alter sequence '||i.so||'.'||i.sn||' increment by '||v_ib;
            v_ret := v_ret +1;
          end if;
      end loop;
      dbms_output.put_line('Nr. sequences adjusted: '||v_ret);
    end;
    /
    

The post Golden Gate 12c and DIY Sequence Replication with PL/SQL appeared first on AMIS Oracle and Java Blog.
