Feed aggregator

[Blog] Compartment In Oracle Cloud Infrastructure (OCI): Everything You Must Know

Online Apps DBA - Tue, 2018-11-13 01:26

The first thing you do for anything on Oracle Cloud Infrastructure (OCI) is pick a Compartment to host your resources (Compute, Storage, Network, Database, etc.). Compartments are a component of Identity & Access Management in OCI and a must-know for all DBAs, Apps DBAs, and Architects working on Cloud. Visit: https://k21academy.com/oci21 and learn all about… […]

The post [Blog] Compartment In Oracle Cloud Infrastructure (OCI): Everything You Must Know appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Strange behaviour with execute immediate

Tom Kyte - Mon, 2018-11-12 19:26
Hi, I had problems with SQLLive (500 response), therefore the examples are here. I have strange behaviour with execute immediate, where it behaves differently from within a PL/SQL procedure than it does when run standalone. Here is th...
Categories: DBA Blogs

Difference between named sequence and system auto-generated

Tom Kyte - Mon, 2018-11-12 19:26
Hello, guys. A new DB (12c) will have lots of tables with a sequence used as PK. What is the difference between a named sequence and a system auto-generated one in Oracle 12c? What would be the best approach?
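A minimal sketch of the two approaches the question contrasts (table and sequence names here are just illustrative):

```sql
-- Named sequence: you create, own and manage it yourself.
CREATE SEQUENCE orders_seq START WITH 1 INCREMENT BY 1;
CREATE TABLE orders_a (
  id   NUMBER DEFAULT orders_seq.NEXTVAL PRIMARY KEY,
  note VARCHAR2(30)
);

-- Identity column (new in 12c): Oracle creates a system-generated
-- sequence (named like ISEQ$$_...) behind the scenes and drops it
-- automatically when the table is dropped.
CREATE TABLE orders_b (
  id   NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  note VARCHAR2(30)
);
```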
Categories: DBA Blogs

Row Chaining

Tom Kyte - Mon, 2018-11-12 19:26
Hi Tom, What is row chaining/migration? What are the consequences of row chaining/migration? How can I find out whether it is present in my database or not? And if it is there, what is the solution to get rid of it? Thanks in advance, Howie
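Two standard ways to check for the condition the question asks about (a hedged sketch; `emp` is a placeholder table name, and the `CHAINED_ROWS` table is created by running `utlchain.sql` first):

```sql
-- 1) Per table: list the chained/migrated rows of one table.
ANALYZE TABLE emp LIST CHAINED ROWS INTO chained_rows;
SELECT COUNT(*) FROM chained_rows WHERE table_name = 'EMP';

-- 2) Instance-wide: this statistic increments every time a fetch
--    has to follow a row pointer to another block.
SELECT name, value
FROM   v$sysstat
WHERE  name = 'table fetch continued row';
```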
Categories: DBA Blogs

Run procedures or functions in parallel

Tom Kyte - Mon, 2018-11-12 19:26
Hello, I have a web app which uses procedures and functions inside packages most of the time. In some cases procedure and function execution takes a long time to return data (as SYS_REFCURSOR). The problem is that when other users execute other p...
Categories: DBA Blogs

Export Tables from Oracle-12c to Oracle-10g

Tom Kyte - Mon, 2018-11-12 19:26
Why is the following table not being exported from Oracle-12c to Oracle-10g? Table: <code>create table stock(ModID varchar(20) primary key, Name varchar(30), Type varchar(15) ,mQty number, cmpID number, price number, Warranty number);</code> ...
Categories: DBA Blogs

Counting specific days between two dates

Tom Kyte - Mon, 2018-11-12 19:26
Hi Tom, I have a case where I need to count specific days between two dates. For example, I have a table that contains contract startdate, enddate and a specific day like 15. 15 means every 15th day of the month. I need to count the specific dates. For exa...
Categories: DBA Blogs

User_dump_dest is inconsistent with the actual trace path

Tom Kyte - Mon, 2018-11-12 19:26
If my question is too simple or meaningless, you can ignore it. Why does my user_dump_dest parameter show a different path from the actual path? I run this example: <code>EODA@muphy>select c.value || '/' || d.instance_name || '_ora_' || a.spi...
Categories: DBA Blogs

copy partition table stats

Tom Kyte - Mon, 2018-11-12 19:26
Hi Team, as per a requirement from the application team, we need to copy table stats from one table to another table. Both source and destination tables are partitioned tables. Here is what we tested on a local system, with the steps below: 1. created dum...
Categories: DBA Blogs

Join like (1=1)

Tom Kyte - Mon, 2018-11-12 19:26
Hi All, I am sorry if this is pretty basic, but it is intriguing me a bit. I saw a join written like: Inner Join table B on (1=1). Why would a join be written like this, and under what scenario? Thanks in advance.
Categories: DBA Blogs

Conceptions of Fudo Myoo in Esoteric Buddhism

Greg Pavlik - Mon, 2018-11-12 17:14
Admittedly, this is an esoteric topic altogether - my own interest in understanding Fudo Myoo in Mahayana Buddhism has largely stemmed from an interest in Japanese art in the Edo wood-block tradition - but I thought this was a rather interesting exploration of esoteric Buddhism and, by implication, of currents in Japanese culture.

https://tricycle.org/magazine/evil-in-esoteric-japanese-buddhism/

On Education

Greg Pavlik - Mon, 2018-11-12 17:07
'We study to get diplomas and degrees and certifications, but imagine a life devoted to study for no other purpose than to be educated. Being educated is not the same as being informed or trained. Education is an "education", a drawing out of one's own genius, nature, and heart. The manifestation of one's essence, the unfolding of one's capacities, the revelation of one's heretofore hidden possibilities - these are the goals of study from the point of view of the person. From another side, study amplifies the speech and song of the world so that it's more palpably present.

Education in soul leads to the enchantment of the world and the attunement of self.'

Thomas Moore, 'Meditations'

AWS: Running a docker-image with ECS

Dietrich Schroff - Mon, 2018-11-12 15:09
After reading some parts of the AWS documentation, I decided to launch a docker image via ECS - or rather, I will try to launch nginx.

Go to Amazon ECS and click on "Task Definitions":

Then "Create new Task Definition" and then "FARGATE":

After adding a name you have to click "add container" and put in nginx + nginx:latest:

Then go back to "Task Definitions" and choose "Actions". If you select "Run Task", you will end up with this window:

"Cluster: None Available" - so next step is to add a FARGATE cluster:

Running a task definition will be a task in another posting ;-)
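The console steps above boil down to a task definition roughly like the following (a minimal sketch; the family name and the CPU/memory sizes are just example values, and depending on your setup AWS may additionally require an execution role for Fargate):

```json
{
  "family": "nginx-demo",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "nginx",
      "image": "nginx:latest",
      "portMappings": [{ "containerPort": 80, "protocol": "tcp" }]
    }
  ]
}
```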

Chart your SQL direct with Apache Zeppelin Notebook

Kubilay Çilkara - Mon, 2018-11-12 14:00
Do you want a notebook where you can write some SQL queries to a database and see the results either as a chart or table? 

As long as you can connect to the database with a username and have the JDBC driver, there is no need to transfer data into spreadsheets for analysis: just download Apache Zeppelin (or run it in Docker) and chart your SQL directly! 


I was impressed by the ability of the Apache Zeppelin notebook to chart SQL queries directly. To configure this open source tool and start charting your SQL queries, just point it at your database's JDBC endpoint and then start writing SQL in real time.

See below how simple this is, just provide your database credentials and you are ready to go.



Besides JDBC to any database (in my case I used a hosted Oracle cloud account), the notebook can also handle interpreters like angular, Cassandra, neo4j, Python, SAP and many others. 

You can download Apache Zeppelin and configure it on localhost, or you can run it in Docker like this:

 docker run -d -p 8080:8080 -p 8443:8443 -v $PWD/logs:/logs -v $PWD/notebook:/notebook  xemuliam/zeppelin 
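The JDBC interpreter setup amounts to a handful of properties in Zeppelin's interpreter settings (the values below are placeholders for an Oracle endpoint; you also add the driver jar under the interpreter's Dependencies):

```
default.url      jdbc:oracle:thin:@//dbhost:1521/mypdb
default.user     scott
default.password ****
default.driver   oracle.jdbc.driver.OracleDriver
```

After that, any notebook paragraph starting with `%jdbc` runs its SQL against that endpoint and can be rendered as a table or chart.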



Categories: DBA Blogs

EDB BART 2.2, parallel full backups without using pg_basebackup

Yann Neuhaus - Mon, 2018-11-12 13:45

Some days ago EnterpriseDB released the latest version of its backup and recovery tool for PostgreSQL and EDB Postgres Advanced Server; release notes here. The main new features, at least for me, are backups using multiple cores and parallel incremental restores. Besides that, BART no longer requires pg_basebackup for taking full backups of a PostgreSQL instance. The downside of that could be that you cannot easily restore the backups without using EDB BART. Let's see how all of that works.

I’ll not describe how BART can be installed, as that is just a “yum install …” from the EDB repositories. Once that is done, BART should report version 2.2.0:

[postgres@edbbart ~]$ bart --version
bart (EnterpriseDB) 2.2.0
BART22-REL-2_2_0-GA2-0-g60d64ca 
Tue Nov 6 08:32:55 UTC 2018 

The help output of BART already gives an idea about the latest changes:

[postgres@edbbart ~]$ bart backup --help
bart: backup and recovery tool

Usage:
 bart BACKUP [OPTION]...

Options:
  -h, --help           Show this help message and exit
  -s, --server         Name of the server or 'all' (full backups only) to specify all servers
  -F, --format=p|t     Backup output format (tar (default) or plain)
  -z, --gzip           Enables gzip compression of tar files
  -c, --compress-level Specifies the compression level (1 through 9, 9 being best compression)

  --backup-name        Specify a friendly name for the current backup
  --parent             Specify parent backup for incremental backup
  --thread-count       Specify number of worker thread(s) to take backup
  --check              Verify checksum of required mbm files
  --with-pg_basebackup Use pg_basebackup to take the backup, valid only for full backup
  --no-pg_basebackup   Don't use pg_basebackup to take the backup, valid only for full backup

Parallel options came in, as well as the possibility to avoid using pg_basebackup. Let's do a first test without specifying "--with-pg_basebackup" or "--no-pg_basebackup", just to see what the default is:

[postgres@edbbart ~]$ /usr/edb/bart/bin/bart init -s PG2 -o
INFO:  setting archive_command for server 'pg2'
WARNING: archive_command is set. server restart is required
[postgres@edbbart ~]$ bart backup -s PG2
INFO:  DebugTarget - getVar(checkDiskSpace.bytesAvailable)
INFO:  new backup identifier generated 1541853901964
INFO:  creating 2 harvester threads
/u90/pg2/1541853901964
/u90/pg2/1541853901964
INFO:  backup completed successfully
INFO:  
BART VERSION: 2.2.0
BACKUP DETAILS:
BACKUP STATUS: active
BACKUP IDENTIFIER: 1541853901964
BACKUP NAME: PG2_2018-11-10T13:45
BACKUP PARENT: none
BACKUP LOCATION: /u90/pg2/1541853901964
BACKUP SIZE: 81.08 MB
BACKUP FORMAT: tar
BACKUP TIMEZONE: Europe/Zurich
XLOG METHOD: stream
BACKUP CHECKSUM(s): 0
TABLESPACE(s): 0
START WAL LOCATION: 000000010000000000000005
STOP WAL LOCATION: 000000010000000000000006
BACKUP METHOD: pg_start_backup
BACKUP FROM: master
START TIME: 2018-11-01 15:39:25 CET
STOP TIME: 2018-11-01 15:39:28 CET
TOTAL DURATION: 3 sec(s)

We get two threads creating the backup, and that is because I requested it in my configuration:

[postgres@edbbart ~]$ cat /usr/edb/bart/etc/bart.cfg
[BART]              
bart_host = postgres@edbbart  
backup_path = /u90           
pg_basebackup_path = /usr/edb/as11/bin/pg_basebackup   
xlog_method = stream
retention_policy= 2 DAYS
logfile = /tmp/bart.log 
scanner_logfile = /tmp/scanner.log	   
thread_count = 2

[PG2]
backup_name = PG2_%year-%month-%dayT%hour:%minute
host = 192.168.22.60   
user = bart
port = 5433   
cluster_owner = postgres 
remote_host = postgres@192.168.22.60 
allow_incremental_backups = enabled
thread_count = 2      
description = PG1 backups    

The question now is whether pg_basebackup was used in the background or not (the answer is already known, though, as pg_basebackup has no parallel option):

[postgres@edbbart ~]$ cd /u90/pg2/1541853901964/
[postgres@edbbart 1541853901964]$ ls -l
total 83040
-rw-rw-r--. 1 postgres postgres      644 Nov 10 13:45 backupinfo
-rw-------. 1 postgres postgres      211 Nov 10 13:45 backup_label
drwxrwxr-x. 4 postgres postgres       37 Nov 10 13:45 base
-rw-rw-r--. 1 postgres postgres 26151424 Nov 10 13:45 base-1.tar
-rw-rw-r--. 1 postgres postgres 25309696 Nov 10 13:45 base-2.tar
-rw-rw-r--. 1 postgres postgres 33557504 Nov 10 13:45 base.tar

This is not what you get from a plain pg_basebackup. Could we restore it without using BART?

[postgres@edbbart aa]$ tar -axf /u90/pg2/1541853901964/base.tar 
[postgres@edbbart aa]$ ls -l
total 60
drwx------. 6 postgres postgres    54 Nov 10 13:45 base
-rw-------. 1 postgres postgres    33 Nov 10 13:45 current_logfiles
drwx------. 2 postgres postgres  4096 Nov 10 13:45 global
-rw-------. 1 postgres postgres    25 Nov 10 13:45 __payloadChecksum
drwx------. 2 postgres postgres     6 Nov 10 13:45 pg_commit_ts
drwx------. 2 postgres postgres     6 Nov 10 13:45 pg_dynshmem
-rw-------. 1 postgres postgres  4653 Nov 10 13:45 pg_hba.conf
-rw-------. 1 postgres postgres  1636 Nov 10 13:45 pg_ident.conf
drwx------. 2 postgres postgres    58 Nov 10 13:55 pg_log
drwx------. 4 postgres postgres    68 Nov 10 13:45 pg_logical
drwx------. 4 postgres postgres    36 Nov 10 13:45 pg_multixact
drwx------. 2 postgres postgres    18 Nov 10 13:55 pg_notify
drwx------. 2 postgres postgres     6 Nov 10 13:45 pg_replslot
drwx------. 2 postgres postgres     6 Nov 10 13:45 pg_serial
drwx------. 2 postgres postgres     6 Nov 10 13:45 pg_snapshots
drwx------. 2 postgres postgres     6 Nov 10 13:45 pg_stat
drwx------. 2 postgres postgres    88 Nov 10 13:55 pg_stat_tmp
drwx------. 2 postgres postgres    18 Nov 10 13:45 pg_subtrans
drwx------. 2 postgres postgres     6 Nov 10 13:45 pg_tblspc
drwx------. 2 postgres postgres     6 Nov 10 13:45 pg_twophase
-rw-------. 1 postgres postgres     3 Nov 10 13:45 PG_VERSION
drwx------. 2 postgres postgres    70 Nov 10 13:55 pg_wal
drwx------. 2 postgres postgres    18 Nov 10 13:55 pg_xact
-rw-------. 1 postgres postgres  1184 Nov 10 13:45 postgresql.auto.conf
-rw-------. 1 postgres postgres 27807 Nov 10 13:45 postgresql.conf

Looks like we can just start that, so let's try:

[postgres@edbbart aa]$ chmod 700 /var/tmp/aa
[postgres@edbbart aa]$ /usr/edb/as11/bin/pg_ctl -D /var/tmp/aa/ start
waiting for server to start....2018-11-10 13:57:29.027 CET - 1 - 14772 -  - @ LOG:  listening on IPv4 address "0.0.0.0", port 5433
2018-11-10 13:57:29.027 CET - 2 - 14772 -  - @ LOG:  listening on IPv6 address "::", port 5433
2018-11-10 13:57:29.034 CET - 3 - 14772 -  - @ LOG:  listening on Unix socket "/tmp/.s.PGSQL.5433"
2018-11-10 13:57:29.075 CET - 4 - 14772 -  - @ LOG:  redirecting log output to logging collector process
2018-11-10 13:57:29.075 CET - 5 - 14772 -  - @ HINT:  Future log output will appear in directory "pg_log".
 done
server started
[postgres@edbbart aa]$ /usr/edb/as11/bin/psql -p 5433
psql.bin (11.0.3, server 11.0.3)
Type "help" for help.
postgres=# \q
[postgres@edbbart aa]$ /usr/edb/as11/bin/pg_ctl -D /var/tmp/aa/ stop -m fast
waiting for server to shut down....... done
server stopped
[postgres@edbbart aa]$ 

Good news: even when pg_basebackup is not used there is no lock-in, because you can just combine the tar files and start up the instance.
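The "combine the tar files" step above can be sketched generically (toy data in a temp directory, not a real PGDATA): a split backup is just several tar archives that together form one directory tree, and restoring is nothing more than extracting them all into the same target.

```shell
#!/bin/sh
set -e
workdir=$(mktemp -d)
cd "$workdir"

# Simulate a "backup" split across two archives, like base-1.tar / base-2.tar.
mkdir -p src/base/1 src/global
echo "data-a" > src/base/1/file_a
echo "data-b" > src/global/file_b
tar -C src -cf base-1.tar base
tar -C src -cf base-2.tar global

# "Restore": extract every part into one directory.
mkdir restore
for t in base-*.tar; do tar -C restore -xf "$t"; done

ls restore   # base  global
```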

Having a closer look at what is inside the base*.tar files:

[postgres@edbbart ~]$ rm -rf /var/tmp/aa
[postgres@edbbart ~]$ mkdir /var/tmp/aa1
[postgres@edbbart ~]$ mkdir /var/tmp/aa2
[postgres@edbbart ~]$ cd /var/tmp/aa1
[postgres@edbbart aa1]$ tar -axf /u90/pg2/1541853901964/base-1.tar 
[postgres@edbbart aa1]$ cd ../aa2
[postgres@edbbart aa2]$ tar -axf /u90/pg2/1541853901964/base-2.tar 
[postgres@edbbart aa2]$ ls -la ../aa1
total 28
drwxrwxr-x. 15 postgres postgres 4096 Nov 10 14:09 .
drwxrwxrwt.  6 root     root      134 Nov 10 14:08 ..
drwxrwxr-x.  6 postgres postgres   54 Nov 10 14:09 base
-rw-------.  1 postgres postgres   33 Nov 10 13:45 current_logfiles
drwxrwxr-x.  2 postgres postgres 4096 Nov 10 14:09 global
-rw-------.  1 postgres postgres   25 Nov 10 13:45 __payloadChecksum
drwx------.  2 postgres postgres    6 Nov 10 13:45 pg_commit_ts
-rw-------.  1 postgres postgres 4653 Nov 10 13:45 pg_hba.conf
drwx------.  2 postgres postgres   32 Nov 10 13:45 pg_log
drwxrwxr-x.  4 postgres postgres   39 Nov 10 14:09 pg_logical
drwxrwxr-x.  4 postgres postgres   36 Nov 10 14:09 pg_multixact
drwx------.  2 postgres postgres    6 Nov 10 13:45 pg_notify
drwx------.  2 postgres postgres    6 Nov 10 13:45 pg_replslot
drwx------.  2 postgres postgres    6 Nov 10 13:45 pg_snapshots
drwx------.  2 postgres postgres   52 Nov 10 13:45 pg_stat_tmp
drwxrwxr-x.  2 postgres postgres   18 Nov 10 14:09 pg_subtrans
drwx------.  2 postgres postgres    6 Nov 10 13:45 pg_twophase
drwx------.  2 postgres postgres    6 Nov 10 13:45 pg_xact
-rw-------.  1 postgres postgres 1184 Nov 10 13:45 postgresql.auto.conf
[postgres@edbbart aa2]$ ls -la ../aa2
total 48
drwxrwxr-x. 16 postgres postgres  4096 Nov 10 14:09 .
drwxrwxrwt.  6 root     root       134 Nov 10 14:08 ..
drwx------.  6 postgres postgres    54 Nov 10 13:45 base
drwx------.  2 postgres postgres  4096 Nov 10 13:45 global
-rw-------.  1 postgres postgres    25 Nov 10 13:45 __payloadChecksum
drwx------.  2 postgres postgres     6 Nov 10 13:45 pg_dynshmem
-rw-------.  1 postgres postgres  1636 Nov 10 13:45 pg_ident.conf
drwxrwxr-x.  2 postgres postgres    32 Nov 10 14:09 pg_log
drwx------.  2 postgres postgres    35 Nov 10 13:45 pg_logical
drwx------.  4 postgres postgres    36 Nov 10 13:45 pg_multixact
drwxrwxr-x.  2 postgres postgres    18 Nov 10 14:09 pg_notify
drwx------.  2 postgres postgres     6 Nov 10 13:45 pg_serial
drwx------.  2 postgres postgres     6 Nov 10 13:45 pg_stat
drwxrwxr-x.  2 postgres postgres    42 Nov 10 14:09 pg_stat_tmp
drwx------.  2 postgres postgres     6 Nov 10 13:45 pg_subtrans
drwx------.  2 postgres postgres     6 Nov 10 13:45 pg_tblspc
-rw-------.  1 postgres postgres     3 Nov 10 13:45 PG_VERSION
drwx------.  2 postgres postgres     6 Nov 10 13:45 pg_wal
drwxrwxr-x.  2 postgres postgres    18 Nov 10 14:09 pg_xact
-rw-------.  1 postgres postgres 27807 Nov 10 13:45 postgresql.conf

The main benefit comes from splitting the data files of the databases ($PGDATA/base/[OIDs]) into separate tar files:

[postgres@edbbart aa2]$ ls -la ../aa2/base
total 36
drwx------.  6 postgres postgres   54 Nov 10 13:45 .
drwxrwxr-x. 16 postgres postgres 4096 Nov 10 14:09 ..
drwxrwxr-x.  2 postgres postgres 4096 Nov 10 14:09 1
drwxrwxr-x.  2 postgres postgres 4096 Nov 10 14:09 15709
drwx------.  2 postgres postgres 4096 Nov 10 13:45 15710
drwx------.  2 postgres postgres 4096 Nov 10 13:45 15711
[postgres@edbbart aa2]$ ls -la ../aa1/base
total 36
drwxrwxr-x.  6 postgres postgres   54 Nov 10 14:09 .
drwxrwxr-x. 15 postgres postgres 4096 Nov 10 14:09 ..
drwx------.  2 postgres postgres 4096 Nov 10 13:45 1
drwx------.  2 postgres postgres 4096 Nov 10 13:45 15709
drwxrwxr-x.  2 postgres postgres 4096 Nov 10 14:09 15710
drwxrwxr-x.  2 postgres postgres 4096 Nov 10 14:09 15711
[postgres@edbbart aa2]$ ls -la ../aa1/base/15711
total 5828
drwxrwxr-x. 2 postgres postgres    4096 Nov 10 14:09 .
drwxrwxr-x. 6 postgres postgres      54 Nov 10 14:09 ..
-rw-------. 1 postgres postgres   16384 Nov 10 13:45 112
-rw-------. 1 postgres postgres       0 Nov 10 13:45 1220
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 1222
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 1227
-rw-------. 1 postgres postgres  139264 Nov 10 13:45 1247
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 1247_vm
-rw-------. 1 postgres postgres   24576 Nov 10 13:45 1249_fsm
-rw-------. 1 postgres postgres 1196032 Nov 10 13:45 1255
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 1255_vm
-rw-------. 1 postgres postgres   24576 Nov 10 13:45 1259_fsm
-rw-------. 1 postgres postgres   57344 Nov 10 13:45 13795
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 13795_vm
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 13799
-rw-------. 1 postgres postgres   24576 Nov 10 13:45 13800_fsm
-rw-------. 1 postgres postgres       0 Nov 10 13:45 13802
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 13805
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 13805_vm
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 13809
-rw-------. 1 postgres postgres   24576 Nov 10 13:45 13810_fsm
-rw-------. 1 postgres postgres       0 Nov 10 13:45 13812
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 13815
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 13815_vm
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 13819
-rw-------. 1 postgres postgres   24576 Nov 10 13:45 13820_fsm
-rw-------. 1 postgres postgres       0 Nov 10 13:45 13822
-rw-------. 1 postgres postgres       0 Nov 10 13:45 13825
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 13829
-rw-------. 1 postgres postgres   24576 Nov 10 13:45 13956_fsm
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 13966
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 13966_vm
-rw-------. 1 postgres postgres       0 Nov 10 13:45 1417_vm
-rw-------. 1 postgres postgres       0 Nov 10 13:45 1418_vm
-rw-------. 1 postgres postgres       0 Nov 10 13:45 14539
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 14542
-rw-------. 1 postgres postgres       0 Nov 10 13:45 14547
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 14550
-rw-------. 1 postgres postgres       0 Nov 10 13:45 14555
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 15609
-rw-------. 1 postgres postgres       0 Nov 10 13:45 15614
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 15617
-rw-------. 1 postgres postgres       0 Nov 10 13:45 15622
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 15625
-rw-------. 1 postgres postgres       0 Nov 10 13:45 15629
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 15632
-rw-------. 1 postgres postgres       0 Nov 10 13:45 15636
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 15639
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 15644
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 15649
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 15654
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 15659
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 15664
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 175
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 2224
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 2224_vm
-rw-------. 1 postgres postgres   24576 Nov 10 13:45 2328_fsm
-rw-------. 1 postgres postgres       0 Nov 10 13:45 2336
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 2337
-rw-------. 1 postgres postgres   16384 Nov 10 13:45 2600
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 2600_vm
-rw-------. 1 postgres postgres   24576 Nov 10 13:45 2601_fsm
-rw-------. 1 postgres postgres   57344 Nov 10 13:45 2602
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 2602_vm
-rw-------. 1 postgres postgres   24576 Nov 10 13:45 2603_fsm
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 2604
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 2604_vm
-rw-------. 1 postgres postgres   24576 Nov 10 13:45 2605_fsm
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 2606
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 2606_vm
-rw-------. 1 postgres postgres   24576 Nov 10 13:45 2607_fsm
-rw-------. 1 postgres postgres  663552 Nov 10 13:45 2608
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 2608_vm
-rw-------. 1 postgres postgres   24576 Nov 10 13:45 2609_fsm
-rw-------. 1 postgres postgres   40960 Nov 10 13:45 2610
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 2610_vm
-rw-------. 1 postgres postgres       0 Nov 10 13:45 2611_vm
-rw-------. 1 postgres postgres   24576 Nov 10 13:45 2612_fsm
-rw-------. 1 postgres postgres       0 Nov 10 13:45 2613
-rw-------. 1 postgres postgres   49152 Nov 10 13:45 2615
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 2615_vm
-rw-------. 1 postgres postgres   24576 Nov 10 13:45 2616_fsm
-rw-------. 1 postgres postgres  122880 Nov 10 13:45 2617
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 2617_vm
-rw-------. 1 postgres postgres   24576 Nov 10 13:45 2618_fsm
-rw-------. 1 postgres postgres  163840 Nov 10 13:45 2619
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 2619_vm
-rw-------. 1 postgres postgres       0 Nov 10 13:45 2620_vm
-rw-------. 1 postgres postgres   16384 Nov 10 13:45 2651
-rw-------. 1 postgres postgres   40960 Nov 10 13:45 2653
-rw-------. 1 postgres postgres   32768 Nov 10 13:45 2655
-rw-------. 1 postgres postgres   16384 Nov 10 13:45 2657
-rw-------. 1 postgres postgres  139264 Nov 10 13:45 2659
-rw-------. 1 postgres postgres   16384 Nov 10 13:45 2661
-rw-------. 1 postgres postgres   65536 Nov 10 13:45 2663
-rw-------. 1 postgres postgres   16384 Nov 10 13:45 2665
-rw-------. 1 postgres postgres   16384 Nov 10 13:45 2667
-rw-------. 1 postgres postgres   16384 Nov 10 13:45 2669
-rw-------. 1 postgres postgres  524288 Nov 10 13:45 2673
-rw-------. 1 postgres postgres  196608 Nov 10 13:45 2675
-rw-------. 1 postgres postgres   16384 Nov 10 13:45 2679
-rw-------. 1 postgres postgres   16384 Nov 10 13:45 2681
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 2683
-rw-------. 1 postgres postgres   16384 Nov 10 13:45 2685
-rw-------. 1 postgres postgres   16384 Nov 10 13:45 2687
-rw-------. 1 postgres postgres   49152 Nov 10 13:45 2689
-rw-------. 1 postgres postgres  434176 Nov 10 13:45 2691
-rw-------. 1 postgres postgres   16384 Nov 10 13:45 2693
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 2699
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 2702
-rw-------. 1 postgres postgres   65536 Nov 10 13:45 2704
-rw-------. 1 postgres postgres   24576 Nov 10 13:45 2753_fsm
-rw-------. 1 postgres postgres   16384 Nov 10 13:45 2754
-rw-------. 1 postgres postgres   32768 Nov 10 13:45 2756
-rw-------. 1 postgres postgres       0 Nov 10 13:45 2830
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 2831
-rw-------. 1 postgres postgres       0 Nov 10 13:45 2832_vm
-rw-------. 1 postgres postgres       0 Nov 10 13:45 2834
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 2835
-rw-------. 1 postgres postgres   24576 Nov 10 13:45 2836_fsm
-rw-------. 1 postgres postgres   16384 Nov 10 13:45 2837
-rw-------. 1 postgres postgres   24576 Nov 10 13:45 2838_fsm
-rw-------. 1 postgres postgres   32768 Nov 10 13:45 2839
-rw-------. 1 postgres postgres   24576 Nov 10 13:45 2840_fsm
-rw-------. 1 postgres postgres   16384 Nov 10 13:45 2841
-rw-------. 1 postgres postgres       0 Nov 10 13:45 2995_vm
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 3079
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 3079_vm
-rw-------. 1 postgres postgres   16384 Nov 10 13:45 3081
-rw-------. 1 postgres postgres       0 Nov 10 13:45 3118
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 3119
-rw-------. 1 postgres postgres       0 Nov 10 13:45 3256
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 3257
-rw-------. 1 postgres postgres       0 Nov 10 13:45 3350
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 3351
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 3380
-rw-------. 1 postgres postgres       0 Nov 10 13:45 3381_vm
-rw-------. 1 postgres postgres   24576 Nov 10 13:45 3394_fsm
-rw-------. 1 postgres postgres   16384 Nov 10 13:45 3395
-rw-------. 1 postgres postgres       0 Nov 10 13:45 3439_vm
-rw-------. 1 postgres postgres   49152 Nov 10 13:45 3455
-rw-------. 1 postgres postgres   24576 Nov 10 13:45 3456_fsm
-rw-------. 1 postgres postgres       0 Nov 10 13:45 3466
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 3467
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 3501
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 3501_vm
-rw-------. 1 postgres postgres   16384 Nov 10 13:45 3503
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 3541
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 3541_vm
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 3574
-rw-------. 1 postgres postgres       0 Nov 10 13:45 3576
-rw-------. 1 postgres postgres       0 Nov 10 13:45 3596
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 3597
-rw-------. 1 postgres postgres       0 Nov 10 13:45 3598_vm
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 3600
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 3600_vm
-rw-------. 1 postgres postgres   24576 Nov 10 13:45 3601_fsm
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 3602
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 3602_vm
-rw-------. 1 postgres postgres   24576 Nov 10 13:45 3603_fsm
-rw-------. 1 postgres postgres   16384 Nov 10 13:45 3604
-rw-------. 1 postgres postgres   16384 Nov 10 13:45 3606
-rw-------. 1 postgres postgres   16384 Nov 10 13:45 3608
-rw-------. 1 postgres postgres   16384 Nov 10 13:45 3712
-rw-------. 1 postgres postgres   24576 Nov 10 13:45 3764_fsm
-rw-------. 1 postgres postgres   16384 Nov 10 13:45 3766
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 3997
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 4943
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 4951
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 4953
-rw-------. 1 postgres postgres   16384 Nov 10 13:45 5002
-rw-------. 1 postgres postgres   16384 Nov 10 13:45 548
-rw-------. 1 postgres postgres       0 Nov 10 13:45 6102
-rw-------. 1 postgres postgres       0 Nov 10 13:45 6104
-rw-------. 1 postgres postgres       0 Nov 10 13:45 6106
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 6110
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 6112
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 6117
-rw-------. 1 postgres postgres       0 Nov 10 13:45 7200_vm
-rw-------. 1 postgres postgres       0 Nov 10 13:45 826
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 827
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 8889
-rw-------. 1 postgres postgres   24576 Nov 10 13:45 8890_fsm
-rw-------. 1 postgres postgres   16384 Nov 10 13:45 8891
-rw-------. 1 postgres postgres   24576 Nov 10 13:45 8895_fsm
-rw-------. 1 postgres postgres       0 Nov 10 13:45 8896
-rw-------. 1 postgres postgres       0 Nov 10 13:45 8899
-rw-------. 1 postgres postgres       0 Nov 10 13:45 8900
-rw-------. 1 postgres postgres       0 Nov 10 13:45 9970
-rw-------. 1 postgres postgres       0 Nov 10 13:45 9971
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 9972
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 9972_vm
-rw-------. 1 postgres postgres   24576 Nov 10 13:45 9973_fsm
-rw-------. 1 postgres postgres   16384 Nov 10 13:45 9984
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 9986
-rw-------. 1 postgres postgres       0 Nov 10 13:45 9988
-rw-------. 1 postgres postgres   16384 Nov 10 13:45 9991
-rw-------. 1 postgres postgres   16384 Nov 10 13:45 9993
-rw-------. 1 postgres postgres   16384 Nov 10 13:45 9995
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 9997
-rw-------. 1 postgres postgres       3 Nov 10 13:45 PG_VERSION
[postgres@edbbart aa2]$ ls -la ../aa2/base/15711
total 6028
drwx------. 2 postgres postgres    4096 Nov 10 13:45 .
drwx------. 6 postgres postgres      54 Nov 10 13:45 ..
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 113
-rw-------. 1 postgres postgres       0 Nov 10 13:45 1221
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 1226
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 1228
-rw-------. 1 postgres postgres   24576 Nov 10 13:45 1247_fsm
-rw-------. 1 postgres postgres  761856 Nov 10 13:45 1249
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 1249_vm
-rw-------. 1 postgres postgres   24576 Nov 10 13:45 1255_fsm
-rw-------. 1 postgres postgres  139264 Nov 10 13:45 1259
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 1259_vm
-rw-------. 1 postgres postgres   24576 Nov 10 13:45 13795_fsm
-rw-------. 1 postgres postgres       0 Nov 10 13:45 13797
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 13800
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 13800_vm
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 13804
-rw-------. 1 postgres postgres   24576 Nov 10 13:45 13805_fsm
-rw-------. 1 postgres postgres       0 Nov 10 13:45 13807
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 13810
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 13810_vm
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 13814
-rw-------. 1 postgres postgres   24576 Nov 10 13:45 13815_fsm
-rw-------. 1 postgres postgres       0 Nov 10 13:45 13817
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 13820
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 13820_vm
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 13824
-rw-------. 1 postgres postgres       0 Nov 10 13:45 13827
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 13956
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 13956_vm
-rw-------. 1 postgres postgres   24576 Nov 10 13:45 13966_fsm
-rw-------. 1 postgres postgres       0 Nov 10 13:45 1417
-rw-------. 1 postgres postgres       0 Nov 10 13:45 1418
-rw-------. 1 postgres postgres       0 Nov 10 13:45 14536
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 14541
-rw-------. 1 postgres postgres       0 Nov 10 13:45 14544
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 14549
-rw-------. 1 postgres postgres       0 Nov 10 13:45 14552
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 14557
-rw-------. 1 postgres postgres       0 Nov 10 13:45 15611
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 15616
-rw-------. 1 postgres postgres       0 Nov 10 13:45 15619
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 15624
-rw-------. 1 postgres postgres       0 Nov 10 13:45 15626
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 15631
-rw-------. 1 postgres postgres       0 Nov 10 13:45 15633
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 15638
-rw-------. 1 postgres postgres       0 Nov 10 13:45 15641
-rw-------. 1 postgres postgres       0 Nov 10 13:45 15646
-rw-------. 1 postgres postgres       0 Nov 10 13:45 15651
-rw-------. 1 postgres postgres       0 Nov 10 13:45 15656
-rw-------. 1 postgres postgres       0 Nov 10 13:45 15661
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 174
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 2187
-rw-------. 1 postgres postgres   24576 Nov 10 13:45 2224_fsm
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 2328
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 2328_vm
-rw-------. 1 postgres postgres       0 Nov 10 13:45 2336_vm
-rw-------. 1 postgres postgres   16384 Nov 10 13:45 2579
-rw-------. 1 postgres postgres   24576 Nov 10 13:45 2600_fsm
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 2601
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 2601_vm
-rw-------. 1 postgres postgres   24576 Nov 10 13:45 2602_fsm
-rw-------. 1 postgres postgres   32768 Nov 10 13:45 2603
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 2603_vm
-rw-------. 1 postgres postgres   24576 Nov 10 13:45 2604_fsm
-rw-------. 1 postgres postgres   16384 Nov 10 13:45 2605
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 2605_vm
-rw-------. 1 postgres postgres   24576 Nov 10 13:45 2606_fsm
-rw-------. 1 postgres postgres   24576 Nov 10 13:45 2607
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 2607_vm
-rw-------. 1 postgres postgres   24576 Nov 10 13:45 2608_fsm
-rw-------. 1 postgres postgres  360448 Nov 10 13:45 2609
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 2609_vm
-rw-------. 1 postgres postgres   24576 Nov 10 13:45 2610_fsm
-rw-------. 1 postgres postgres       0 Nov 10 13:45 2611
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 2612
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 2612_vm
-rw-------. 1 postgres postgres       0 Nov 10 13:45 2613_vm
-rw-------. 1 postgres postgres   24576 Nov 10 13:45 2615_fsm
-rw-------. 1 postgres postgres   24576 Nov 10 13:45 2616
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 2616_vm
-rw-------. 1 postgres postgres   24576 Nov 10 13:45 2617_fsm
-rw-------. 1 postgres postgres  172032 Nov 10 13:45 2618
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 2618_vm
-rw-------. 1 postgres postgres   24576 Nov 10 13:45 2619_fsm
-rw-------. 1 postgres postgres       0 Nov 10 13:45 2620
-rw-------. 1 postgres postgres   16384 Nov 10 13:45 2650
-rw-------. 1 postgres postgres   16384 Nov 10 13:45 2652
-rw-------. 1 postgres postgres   40960 Nov 10 13:45 2654
-rw-------. 1 postgres postgres   16384 Nov 10 13:45 2656
-rw-------. 1 postgres postgres  204800 Nov 10 13:45 2658
-rw-------. 1 postgres postgres   16384 Nov 10 13:45 2660
-rw-------. 1 postgres postgres   32768 Nov 10 13:45 2662
-rw-------. 1 postgres postgres   16384 Nov 10 13:45 2664
-rw-------. 1 postgres postgres   16384 Nov 10 13:45 2666
-rw-------. 1 postgres postgres   16384 Nov 10 13:45 2668
-rw-------. 1 postgres postgres   16384 Nov 10 13:45 2670
-rw-------. 1 postgres postgres  557056 Nov 10 13:45 2674
-rw-------. 1 postgres postgres   16384 Nov 10 13:45 2678
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 2680
-rw-------. 1 postgres postgres   16384 Nov 10 13:45 2682
-rw-------. 1 postgres postgres   16384 Nov 10 13:45 2684
-rw-------. 1 postgres postgres   16384 Nov 10 13:45 2686
-rw-------. 1 postgres postgres   40960 Nov 10 13:45 2688
-rw-------. 1 postgres postgres  114688 Nov 10 13:45 2690
-rw-------. 1 postgres postgres   16384 Nov 10 13:45 2692
-rw-------. 1 postgres postgres   32768 Nov 10 13:45 2696
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 2701
-rw-------. 1 postgres postgres   40960 Nov 10 13:45 2703
-rw-------. 1 postgres postgres   16384 Nov 10 13:45 2753
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 2753_vm
-rw-------. 1 postgres postgres   16384 Nov 10 13:45 2755
-rw-------. 1 postgres postgres   32768 Nov 10 13:45 2757
-rw-------. 1 postgres postgres       0 Nov 10 13:45 2830_vm
-rw-------. 1 postgres postgres       0 Nov 10 13:45 2832
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 2833
-rw-------. 1 postgres postgres       0 Nov 10 13:45 2834_vm
-rw-------. 1 postgres postgres   32768 Nov 10 13:45 2836
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 2836_vm
-rw-------. 1 postgres postgres 1040384 Nov 10 13:45 2838
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 2838_vm
-rw-------. 1 postgres postgres   49152 Nov 10 13:45 2840
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 2840_vm
-rw-------. 1 postgres postgres       0 Nov 10 13:45 2995
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 2996
-rw-------. 1 postgres postgres   24576 Nov 10 13:45 3079_fsm
-rw-------. 1 postgres postgres   16384 Nov 10 13:45 3080
-rw-------. 1 postgres postgres   57344 Nov 10 13:45 3085
-rw-------. 1 postgres postgres       0 Nov 10 13:45 3118_vm
-rw-------. 1 postgres postgres   90112 Nov 10 13:45 3164
-rw-------. 1 postgres postgres       0 Nov 10 13:45 3256_vm
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 3258
-rw-------. 1 postgres postgres       0 Nov 10 13:45 3350_vm
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 3379
-rw-------. 1 postgres postgres       0 Nov 10 13:45 3381
-rw-------. 1 postgres postgres   16384 Nov 10 13:45 3394
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 3394_vm
-rw-------. 1 postgres postgres       0 Nov 10 13:45 3439
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 3440
-rw-------. 1 postgres postgres  507904 Nov 10 13:45 3456
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 3456_vm
-rw-------. 1 postgres postgres       0 Nov 10 13:45 3466_vm
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 3468
-rw-------. 1 postgres postgres   24576 Nov 10 13:45 3501_fsm
-rw-------. 1 postgres postgres   16384 Nov 10 13:45 3502
-rw-------. 1 postgres postgres   16384 Nov 10 13:45 3534
-rw-------. 1 postgres postgres   24576 Nov 10 13:45 3541_fsm
-rw-------. 1 postgres postgres   16384 Nov 10 13:45 3542
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 3575
-rw-------. 1 postgres postgres       0 Nov 10 13:45 3576_vm
-rw-------. 1 postgres postgres       0 Nov 10 13:45 3596_vm
-rw-------. 1 postgres postgres       0 Nov 10 13:45 3598
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 3599
-rw-------. 1 postgres postgres   24576 Nov 10 13:45 3600_fsm
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 3601
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 3601_vm
-rw-------. 1 postgres postgres   24576 Nov 10 13:45 3602_fsm
-rw-------. 1 postgres postgres   16384 Nov 10 13:45 3603
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 3603_vm
-rw-------. 1 postgres postgres   16384 Nov 10 13:45 3605
-rw-------. 1 postgres postgres   16384 Nov 10 13:45 3607
-rw-------. 1 postgres postgres   32768 Nov 10 13:45 3609
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 3764
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 3764_vm
-rw-------. 1 postgres postgres   16384 Nov 10 13:45 3767
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 4942
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 4950
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 4952
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 4954
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 5007
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 549
-rw-------. 1 postgres postgres       0 Nov 10 13:45 6102_vm
-rw-------. 1 postgres postgres       0 Nov 10 13:45 6104_vm
-rw-------. 1 postgres postgres       0 Nov 10 13:45 6106_vm
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 6111
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 6113
-rw-------. 1 postgres postgres       0 Nov 10 13:45 7200
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 7201
-rw-------. 1 postgres postgres       0 Nov 10 13:45 826_vm
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 828
-rw-------. 1 postgres postgres   49152 Nov 10 13:45 8890
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 8890_vm
-rw-------. 1 postgres postgres   32768 Nov 10 13:45 8895
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 8895_vm
-rw-------. 1 postgres postgres       0 Nov 10 13:45 8896_vm
-rw-------. 1 postgres postgres       0 Nov 10 13:45 8899_vm
-rw-------. 1 postgres postgres       0 Nov 10 13:45 8900_vm
-rw-------. 1 postgres postgres       0 Nov 10 13:45 9970_vm
-rw-------. 1 postgres postgres       0 Nov 10 13:45 9971_vm
-rw-------. 1 postgres postgres   24576 Nov 10 13:45 9972_fsm
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 9973
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 9973_vm
-rw-------. 1 postgres postgres   32768 Nov 10 13:45 9985
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 9987
-rw-------. 1 postgres postgres       0 Nov 10 13:45 9988_vm
-rw-------. 1 postgres postgres   16384 Nov 10 13:45 9992
-rw-------. 1 postgres postgres   16384 Nov 10 13:45 9994
-rw-------. 1 postgres postgres   16384 Nov 10 13:45 9996
-rw-------. 1 postgres postgres    8192 Nov 10 13:45 9998
-rw-------. 1 postgres postgres     512 Nov 10 13:45 pg_filenode.map

When you reduce the “thread_count” to 1 you will get a normal pg_basebackup:

[postgres@edbbart aa2]$ cat /usr/edb/bart/etc/bart.cfg
[BART]              
bart_host = postgres@edbbart  
backup_path = /u90           
pg_basebackup_path = /usr/edb/as11/bin/pg_basebackup   
xlog_method = stream
retention_policy= 2 DAYS
logfile = /tmp/bart.log 
scanner_logfile = /tmp/scanner.log	   
thread_count = 2

[PG2]
backup_name = PG2_%year-%month-%dayT%hour:%minute
host = 192.168.22.60   
user = bart
port = 5433   
cluster_owner = postgres 
remote_host = postgres@192.168.22.60 
allow_incremental_backups = enabled
thread_count = 1
description = PG2 backups 

[postgres@edbbart aa2]$ bart backup -s PG2
INFO:  DebugTarget - getVar(checkDiskSpace.bytesAvailable)
INFO:  creating full backup using pg_basebackup for server 'pg2'
INFO:  creating backup for server 'pg2'
INFO:  backup identifier: '1541856034356'
INFO:  backup completed successfully
INFO:  
BART VERSION: 2.2.0
BACKUP DETAILS:
BACKUP STATUS: active
BACKUP IDENTIFIER: 1541856034356
BACKUP NAME: PG2_2018-11-10T14:20
BACKUP PARENT: none
BACKUP LOCATION: /u90/pg2/1541856034356
BACKUP SIZE: 64.68 MB
BACKUP FORMAT: tar
BACKUP TIMEZONE: Europe/Zurich
XLOG METHOD: stream
BACKUP CHECKSUM(s): 0
TABLESPACE(s): 0
START WAL LOCATION: 000000010000000000000008
BACKUP METHOD: streamed
BACKUP FROM: master
START TIME: 2018-11-01 16:14:57 CET
STOP TIME: 2018-11-10 14:20:34 CET
TOTAL DURATION: 214.094 hour(s)
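
As a side note, the reported TOTAL DURATION is simply the difference between START TIME and STOP TIME (both in CET), which is easy to check with a few lines of Python:

```python
from datetime import datetime

# Timestamps as reported in the BACKUP DETAILS above (both CET)
start = datetime(2018, 11, 1, 16, 14, 57)
stop = datetime(2018, 11, 10, 14, 20, 34)

# Duration in hours; matches the reported 214.094 hour(s)
hours = (stop - start).total_seconds() / 3600
print(round(hours, 3))  # 214.094
```

So the surprising 214-hour figure for what is otherwise a quick backup is purely the gap between the two recorded timestamps.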

In the next post we’ll look at how we can do parallel incremental restores.

The post EDB BART 2.2, parallel full backups without using pg_basebackup appeared first on Blog dbi services.

Partitioning -- 8 : Reference Partitioning

Hemant K Chitale - Mon, 2018-11-12 08:43
Like Interval Partitioning, another enhancement in 11g is Reference Partitioning.

Reference Partitioning allows you to use a Referential Integrity Constraint to equi-partition a "Child" Table with a "Parent" Table.

Here is a quick demonstration :

SQL> l
1 create table orders
2 (order_id number primary key,
3 order_date date not null,
4 customer_id number)
5 partition by range (order_date)
6 (partition P_2017 values less than (to_date('01-JAN-2018','DD-MON-YYYY')),
7 partition P_2018 values less than (to_date('01-JAN-2019','DD-MON-YYYY'))
8* )
SQL> /

Table created.

SQL>
SQL> l
1 create table order_lines
2 (line_unique_id number primary key,
3 order_id number not null,
4 order_line_id number,
5 product_id number,
6 product_quantity number,
7 constraint order_lines_fk foreign key (order_id)
8 references orders(order_id)
9 )
10* partition by reference (order_lines_fk)
SQL> /

Table created.

SQL>
SQL> col high_value format a28 trunc
SQL> col table_name format a16
SQL> col partition_name format a8
SQL> select table_name, partition_name, high_value
2 from user_tab_partitions
3 where table_name in ('ORDERS','ORDER_LINES')
4 order by table_name, partition_position
5 /

TABLE_NAME PARTITIO HIGH_VALUE
---------------- -------- ----------------------------
ORDERS P_2017 TO_DATE(' 2018-01-01 00:00:0
ORDERS P_2018 TO_DATE(' 2019-01-01 00:00:0
ORDER_LINES P_2017
ORDER_LINES P_2018

SQL>


Notice the "automatically" created Partitions for the ORDER_LINES ("Child") Table that match those for the ORDERS ("Parent") Table.

.
.
.

Categories: DBA Blogs

Oracle Cloud Unveils New HPC Offerings to Support Mission Critical Workloads

Oracle Press Releases - Mon, 2018-11-12 08:00
Press Release
Oracle Cloud Unveils New HPC Offerings to Support Mission Critical Workloads Oracle now provides a complete set of solutions for any high performance computing workload, built to offer the best performance at the lowest cost in the cloud

Redwood Shores, Calif.—Nov 12, 2018

Oracle today announced availability of new bare metal Oracle Cloud Infrastructure compute instances to help enterprises run various performance sensitive workloads, such as artificial intelligence (AI) or engineering simulations, in the cloud. These new instances are part of Oracle’s new “Clustered Network,” which gives customers access to a low latency and high bandwidth RDMA network. Oracle now provides large organizations with a complete set of solutions for any high performance computing (HPC) workload, enabling businesses to capitalize on the benefits of modern cloud computing while enjoying performance comparable to on-premises compute clusters at a cost that makes sense for their businesses.

Countless enterprises have considered transitioning their legacy HPC workloads to the cloud for years, but were unable to do so due to the lack of high-performance cloud offerings, or economics that made it too expensive to run in the cloud. Organizations have struggled to find a cloud that can support new workload requirements while realizing cost efficiencies and flexibility that meet business goals and overall technology vision. With today’s news, organizations finally have a low-cost path to extend their on-premises HPC workloads to the cloud without sacrificing performance.

“HPC has been underserved in the cloud due to lack of high performance networking (RDMA) and unappealing price/performance. We’ve listened to our customers and over the last few years Oracle has focused on improving high performance bare-metal offerings, such as Clustered Networking, to provide on-premise customers with the options they need to extend their HPC workloads to the cloud,” said Vinay Kumar, vice president, Product Management & Strategy, Oracle Cloud Infrastructure. “Our growing collaboration with trusted vendors helps Oracle Cloud Infrastructure continue to expand and offer the best performance at the lowest cost for the workloads that customers really need to extend into the cloud.”

With Oracle’s Clustered Network offering at the data center level, Oracle has opened up an entirely new set of use-cases and workloads that enterprises can harness the power of cloud computing for, ranging from car crash simulations in automotive and DNA sequencing in healthcare to reservoir simulation for oil exploration. Organizations can now deploy additional use-cases previously out of their reach, such as using data from their Oracle Database and cutting-edge NVIDIA(R) Tesla(R) GPUs to run Neural Network AI training and instantly adding value to their data.

The new offering is powered by high-frequency Intel® Xeon® Scalable processors and Mellanox’s high performance network interface controllers. For more than 25 years, Oracle and Intel have worked closely to bring innovative, scalable, and secure enterprise-class solutions to its customers. This new addition completes Oracle’s comprehensive set of infrastructure solutions for the full range of both CPU- and NVIDIA GPU-based HPC workloads, providing customers with streamlined and lower cost access to specialized offerings in the cloud. Oracle Cloud Infrastructure now provides a complete range of HPC workloads, including the only bare metal Infrastructure-as-a-Service (IaaS) offering on the market with RDMA.

These new HPC instances with Clustered Networking are offered across Oracle regions in the US and EU at $0.075 per core hour, a 49 percent pay-as-you-go savings compared to other cloud providers on the market.

Additionally, these capabilities will also be expanded for Oracle’s recently announced support for NVIDIA HGX-2 platform on Oracle Cloud Infrastructure. Customers will be able to take advantage of clustered networking as part of the next generation of NVIDIA Tensor Core GPUs, opening up large scale GPU workloads such as AI and deep learning.

Industry Quotes

“HPC is critical for more use cases, complex workloads, and data-intensive computing than ever before. Access to HPC capabilities in the cloud enables users to do more by extending into the public cloud while providing new HPC and AI users a platform to develop and test new classes of HPC and AI algorithms,” said Lisa Davis, vice president of Intel’s Data Center Group and General Manager of Digital Transformation and Scale Solutions. “We are working with Oracle to enable leading HPC offerings that take advantage of the advanced performance, efficiency and scale delivered by Intel® Xeon® Scalable processors across HPC and AI workloads, all while benefiting from the agility and flexibility of Oracle’s next generation, enterprise-grade cloud offering.”

“As organizations look to ensure they stay ahead of the competition, they are looking for more efficient services to enable higher performing workloads. This requires fast data communication between CPUs, GPUs and storage, in the cloud,” said Michael Kagan, CTO, Mellanox Technologies. “Over the past 10 years we have provided advanced RDMA enabled networking solutions to Oracle for a variety of its products and are pleased to extend this to Oracle Cloud Infrastructure to help maximize performance and efficiency in the cloud.”

“With the massive explosion of data, you need larger clusters of GPUs to process and train the data being produced. HGX-2 on Oracle is the perfect fit for this problem,” said Ian Buck, vice president of Accelerated Computing, NVIDIA. “HGX-2 on Oracle combines cutting-edge V100 Tensor Core GPUs and NVSwitch along with ability to scale across multiple HGX-2s using Clustered Networking to solve the biggest computing challenges.”

Contact Info
Danielle Tarp
Oracle
+1.650.506.2905
danielle.tarp@oracle.com
Quentin Nolibois
Burson-Marsteller
+1.415.591.4097
quentin.nolibois@bm.com
About Oracle Cloud Infrastructure

Oracle Cloud Infrastructure is an enterprise Infrastructure as a Service (IaaS) platform. Companies of all sizes rely on Oracle Cloud to run enterprise and cloud native applications with mission-critical performance and core-to-edge security. By running both traditional and new workloads on a comprehensive cloud that includes compute, storage, networking, database, and containers, Oracle Cloud Infrastructure can dramatically increase operational efficiency and lower total cost of ownership. For more information, visit https://cloud.oracle.com/iaas

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Future Product Disclaimer

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, timing, and pricing of any features or functionality described for Oracle’s products may change and remains at the sole discretion of Oracle Corporation.

Talk to a Press Contact

Danielle Tarp

  • +1.650.506.2905

Quentin Nolibois

  • +1.415.591.4097

How Are My Users Connecting? Analyzing OAC and OBIEE entry points

Rittman Mead Consulting - Mon, 2018-11-12 05:18

Are you managing an OAC or OBIEE instance, with a nice and easy life because you feel everything is under control? Your users browse existing dashboards, create content via Analysis, Data Visualization or SmartView, and deliver data via Agents or download dashboard content to use in Excel. You feel safe since you designed your platform to provide aggregated data and to track every query via Usage Tracking.

How Are My Users Connecting? Analyzing OAC and OBIEE entry points

But one day you start noticing new BI tools appearing in your company that provide KPIs similar to the ones you are already exposing, and you start questioning where that data is coming from. Then you suddenly realize they are automagically sourcing data from your platform in ways you don't think you can control or manage.
Well, you're not alone. Let me show you how to monitor OAC/OBIEE connections via network sniffing and Usage Tracking in this new world of self-service BI platforms.

A Bit of History

How Are My Users Connecting? Analyzing OAC and OBIEE entry points

Anybody who has spent some time in the Analytics market will recognise the situation described in the image above from direct experience: multiple people having different views on how a KPI is calculated and, therefore, on its results. Back in the day, that problem was strictly related to the usage of Excel as a BI tool and the fact that everybody was directly accessing raw data to build up their own KPIs.

Centralised BI Solutions

The landscape started to change when centralised enterprise BI solutions (like OBIEE or, in more recent times, OAC) started appearing and being developed in the market. The key point of those solutions was to provide a unique source of truth for a certain set of KPIs across the organization.

However, the fact that those tools were centralised in the hands of the IT department meant, most of the time, a lack of agility for the business departments: every new KPI had to be well defined, understood, documented, implemented by IT, validated and delivered, in a process that could take months. Even when the development phase was optimised, via DevOps practices for example, time was still burned on the communication and coordination efforts necessary between business and IT teams.

Self Service BI Platforms

In order to solve the agility problem, in the last few years a new bottom-up approach has been suggested by the latest set of self-service analytics tools: a certain set of KPIs is developed locally, directly by the business department, and then, once a KPI has been validated and accepted, its definition and the related data model are certified to allow a broader audience to use it.

Oracle has historically been a leader on the Centralised BI platform space with OBIEE being the perfect tool for this kind of reporting. In recent years, Data Visualization closed the gap of the Self-Service Analytics, providing tools for data preparation, visualization and machine learning directly in the hands of Business Users. Oracle Analytics Cloud (OAC) combines in a unique tool both the traditional centralised BI as well as the self-service analytics providing the best option for each use case.

What we have seen at various customers is a proliferation of BI tools being acquired by different departments: most of the time a centralised BI tool is used side by side with one or more self-service tools, with little or no control over data source usage or KPI calculation.

The transition from the old-school centralised BI platform to the new bottom-up certified systems is not immediate and there is no automated solution for it. Moreover, centralised BI platforms are still key in most corporates, with big investments associated with them in order to get fully automated KPI management. A complete rewrite of well-working legacy BI solutions following the latest BI trends and tools is not doable or affordable in the short term, and is definitely not a priority for the business.

A Mix of The Two

So, how can we make the old and the new world coexist in a solution which is efficient and agile, and doesn't waste all the well-defined KPIs that are already produced? The solution that we are suggesting more and more is the reuse of the central BI solution as a curated data source for the self-service tools.

Just imagine the case where we have a very complex Churn Prediction formula, based on a series of fields in a star schema, that has already been validated and approved by the business. Instead of forcing a new user to rewrite the whole formula from the base tables, we could just offer, based on the centralised BI system, something like:

Select "Dim Account"."Account Code", "Fact Churn"."Churn Prediction" from "Churn"

There are various benefits to this:

  • No mistakes in formula recalculation
  • No prior knowledge of joining Condition, filtering, aggregation needed
  • Security system inheritance: if specific filters or security-sensitive fields were defined, those settings will still be valid.
  • No duplication of code, which would otherwise leave different people accessing various versions of the same KPIs.

Using the centralised BI system to query existing KPIs and mashing them up with new datasources is the optimal way of giving agility to the business while at the same time certifying the validity of the core KPIs.

OBIEE as a datasource

A lot of our customers have OBIEE as their centralised BI reporting tool and are now looking into expanding the BI footprint with a self-service tool. If the chosen tool is Oracle Data Visualization then all the hard work is already done: it natively interfaces with OBIEE's RPD, and all the Subject Areas are available together with the related security constraints, since the security system is shared.

But what if the self-service tool is not Oracle Data Visualization? How can you expose OBIEE's data to an external system? Well, there are three main ways:

The first one is by using web services: OAC (OBIEE) provides a set of SOAP web services that can be called via Python, for example, one of them being executeSQLQuery. After passing the SQL as a string, the results are returned in XML format. This is the method used, for example, by Rittman Mead Insights. SOAP web services, however, can't be queried directly by BI tools, which is why we created Unify to allow OBIEE connections from Tableau (which is now available for FREE!).
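
As a sketch of this web-service route (the session handling below is a hypothetical placeholder, and the v6 namespace and output format may differ between releases; check your instance's WSDL before relying on this), the executeSQLQuery envelope can be built with nothing more than the standard library:

```python
# Sketch: build the SOAP envelope for OBIEE's executeSQLQuery web service.
# The namespace version (v6) and output format are assumptions; verify them
# against your instance's WSDL.

SOAP_TEMPLATE = """<soapenv:Envelope
    xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
    xmlns:v6="urn://oracle.bi.webservices/v6">
  <soapenv:Body>
    <v6:executeSQLQuery>
      <v6:sql>{sql}</v6:sql>
      <v6:outputFormat>SAWRowsetData</v6:outputFormat>
      <v6:executionOptions>
        <v6:async>false</v6:async>
        <v6:maxRowsPerPage>1000</v6:maxRowsPerPage>
      </v6:executionOptions>
      <v6:sessionID>{session_id}</v6:sessionID>
    </v6:executeSQLQuery>
  </soapenv:Body>
</soapenv:Envelope>"""

def build_execute_sql_envelope(sql, session_id):
    """Return the XML body to POST to the analytics SOAP endpoint."""
    return SOAP_TEMPLATE.format(sql=sql, session_id=session_id)

# session_id would normally come from a prior SAWSessionService logon call
envelope = build_execute_sql_envelope(
    'SELECT "Dim Account"."Account Code" FROM "Churn"',
    "dummy-session-id",
)
print("executeSQLQuery" in envelope)  # True
```

The XML rowset coming back can then be parsed with any XML library before feeding it to your tool of choice.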
If you aren't using Tableau, a more generic connection method that is accessible by most BI tools is ODBC: OBIEE's BI Server (the component managing the RPD) can be exposed via ODBC by installing the Admin Tool drivers and creating an ODBC connection.
How Are My Users Connecting? Analyzing OAC and OBIEE entry points

Please note that the ODBC method is only available if the BIServer port is not blocked by firewalls. Once the port is open, the ODBC datasource can be queried by any tool having ODBC querying capabilities.

How Are My Users Connecting? Analyzing OAC and OBIEE entry points

The last method is, obviously, Smartview, which allows sourcing from pre-existing Analyses or creating new ones, with the option of refreshing the data on demand. Smartview is the perfect choice if your target analytical tool is one of the two supported: Excel or Powerpoint.

Good for all use-cases?

Are the above connection methods good in every situation?


The solutions described above work really well if you let OBIEE do its job: KPI calculations, aggregations, group-bys and joins or, in other terms, if your aim is to extract aggregated data. OBIEE is not a massive data exporting tool; if your plan is to export 100k rows (just a random number) every time, then you may need to rethink the solution, since you:

  • will experience poor performance, since you're adding a layer (OAC) between where the data resides (the DB) and yourself
  • will put the OBIEE environment under pressure, since it has to run the query and transform the result set into XML before pushing it to you

If that's the use case you're looking for then you should think about alternative solutions like sourcing the data directly from the database and possibly moving your security settings there.

How Can You Monitor Who is Connecting?

Let's face reality: these days everyone tries to make their work as easy as possible. Business analysts are tech-savvy, and configuration and connection options are just a Google search away. Stopping people from finding alternative solutions to accelerate their work is counterproductive: there will be tension since the analysts' work is slowed down, and the usage of the centralised BI platform will decline quickly as analysts simply move to other platforms that give them the required flexibility.

Blocking ports and access methods is not the correct way of providing a (BI) service that should be centrally controlled but used by the maximum number of people in an organization. Monitoring solutions should instead be created in order to:

  • Understand how users are interacting with the platform
  • Provide specific workarounds in cases where there is a misuse of the platform

But how can you monitor users' access? Well, you really have two options: network sniffing or usage tracking.

Network Sniffing

Let's take the example of ODBC connections directly to the BI Server (RPD). Those connections can be of three main types:

  • From/to the Presentation Service, in order to execute queries in the front-end (e.g. via an analysis) and to retrieve the data
  • From OBI administrators' Admin Tool, to modify OAC/OBIEE's metadata (this shouldn't happen in production systems)
  • From end users' ODBC connections, to query OAC/OBIEE data with other BI tools

In the first type of connection both the sender and the receiver (Presentation and BI Server) share the same IP (or IPs in the case of a cluster), while in the second and third types (the ones we are interested in) the IP address of the packet sender/receiver is different from the IP of the OBIEE server.
We can then simply use a Linux network analysis tool like tcpdump to check the traffic. With the following command we are able to listen on port 9516 (the BI Server one) and exclude all the traffic generated from the Presentation Server (IP 192.168.1.30):

sudo tcpdump  -i eth0 -ennA 'port 9516' | grep -v "IP 192.168.1.30" 

The following is a representation of the traffic

How Are My Users Connecting? Analyzing OAC and OBIEE entry points

We can clearly see the traffic passing between the user's machine (IP ending in 161) and the BI Server (IP ending in 30, port 56639).
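
To make this check repeatable, the client IPs can be extracted from the tcpdump output with a short script. This is just a sketch, assuming tcpdump's default 'IP src.port > dst.port:' line layout and the example server IP/port used above:

```python
import re

# Matches tcpdump lines like:
#   13:45:01.123456 IP 192.168.1.161.56639 > 192.168.1.30.9516: Flags [P.] ...
LINE_RE = re.compile(r"IP (\d+\.\d+\.\d+\.\d+)\.(\d+) > (\d+\.\d+\.\d+\.\d+)\.(\d+):")

def odbc_clients(lines, server_ip="192.168.1.30", server_port="9516"):
    """Return the set of client IPs sending packets to the BI Server port."""
    clients = set()
    for line in lines:
        m = LINE_RE.search(line)
        if m and m.group(3) == server_ip and m.group(4) == server_port:
            clients.add(m.group(1))
    return clients

sample = [
    "13:45:01.123456 IP 192.168.1.161.56639 > 192.168.1.30.9516: Flags [P.]",
    "13:45:01.123789 IP 192.168.1.30.9516 > 192.168.1.161.56639: Flags [.]",
]
print(odbc_clients(sample))  # {'192.168.1.161'}
```

Feeding it the live tcpdump stream gives a running list of machines opening direct BI Server connections.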
This is a first tracking effort and it already provides us with some information (like the user's IP address); however, it is limited to ODBC and doesn't tell us the username. Let's see now what we can get from Usage Tracking.

Usage Tracking

We wrote a lot about Usage Tracking, how to enhance it and how to use it, so I don't want to repeat that. A very basic description: it is a database table containing statistics on every query generated by OBIEE.
The "every query" bit is really important: the query doesn't have to be generated by the standard front-end (Analytics); a record is created even if it is coming from Smartview or from a direct ODBC access to the BI Server.

Looking into S_NQ_ACCT (the default table name) there is an interesting field named QUERY_SRC_CD that, per the Oracle documentation, contains:

The source of the request.

Checking the values for that table we can see:
[Image: distinct QUERY_SRC_CD values found in S_NQ_ACCT]
Analysing the above data in detail:

  • DashboardPrompt and ValuePrompt are related to displaying values in prompts
  • DisplayValueMap, Member Browser Display Values and Member Browser Path to Value seem related to items displayed when creating an analysis
  • Report is an analysis execution
  • SOAP identifies the web services
  • rawSQL is the usage of raw SQL (which shouldn't be permitted)

So SOAP identifies the web services, but what about the direct ODBC connections? They don't seem to be logged! Not quite: looking in more detail at a known dataset, we discovered that ODBC connections are marked with a NULL value in QUERY_SRC_CD, together with some other traffic.
Looking into the details of the NULL QUERY_SRC_CD transactions we can see two types of logs:

[Image: NULL QUERY_SRC_CD rows, showing SELECT and CALL query texts]

  • The ones starting with SELECT are proper queries sent via an ODBC call
  • The ones starting with CALL are requests from the Presentation Server to the BI Server
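A quick way to eyeball these two categories yourself is a query along the following lines; a sketch assuming the default Usage Tracking column names, so adjust them if your S_NQ_ACCT schema differs:

```sql
-- Split the NULL QUERY_SRC_CD traffic into direct ODBC queries
-- and internal Presentation Server calls, per user
SELECT USER_NAME,
       CASE
         WHEN QUERY_TEXT LIKE '{CALL%' THEN 'PRESENTATION SERVER CALL'
         ELSE 'DIRECT ODBC QUERY'
       END AS REQUEST_TYPE,
       COUNT(*) AS NR_OF_REQUESTS
FROM S_NQ_ACCT
WHERE QUERY_SRC_CD IS NULL
GROUP BY USER_NAME,
         CASE
           WHEN QUERY_TEXT LIKE '{CALL%' THEN 'PRESENTATION SERVER CALL'
           ELSE 'DIRECT ODBC QUERY'
         END;
```

Any user showing up under DIRECT ODBC QUERY is connecting to the BI Server from outside the standard front-end.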

Summarizing all the findings, the following query should give you the list of users accessing OBIEE via ODBC or SOAP, or using rawSQL.

SELECT DISTINCT 
  USER_NAME,
  NVL(QUERY_SRC_CD, 'RPD ODBC') SOURCE, 
  TRUNC(START_TS) TS
FROM S_NQ_ACCT 
WHERE
    (
     QUERY_SRC_CD IS NULL OR 
     UPPER(QUERY_SRC_CD) IN ('SOAP', 'RAWSQL')
    ) 
   AND QUERY_TEXT NOT LIKE '{CALL%'
ORDER BY 3 DESC;

You can, of course, do more than this, like analysing query volumes (the ROW_COUNT column) and the Subject Areas affected, in order to understand any potential misuse of the platform!
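As a sketch of that volume analysis — again assuming the default S_NQ_ACCT columns (ROW_COUNT, SUBJECT_AREA_NAME) — something like this surfaces the heaviest extracts per user and Subject Area:

```sql
-- Largest data extracts per user and Subject Area over the last 7 days
SELECT USER_NAME,
       SUBJECT_AREA_NAME,
       COUNT(*)       AS NR_OF_QUERIES,
       MAX(ROW_COUNT) AS MAX_ROWS_RETURNED,
       SUM(ROW_COUNT) AS TOTAL_ROWS_RETURNED
FROM S_NQ_ACCT
WHERE START_TS >= SYSDATE - 7
GROUP BY USER_NAME, SUBJECT_AREA_NAME
ORDER BY TOTAL_ROWS_RETURNED DESC;
```

Unexpectedly large TOTAL_ROWS_RETURNED figures are usually the first sign of OAC/OBIEE being used as a bulk data-export engine.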

Real Example

Let's see an example: I'll log in via ODBC and execute a query. For this I'm using RazorSQL, a SQL query tool, and OBIEE; exactly the same logs can be found in Oracle Analytics Cloud (OAC) once Usage Tracking is enabled. So, administrators, don't be afraid: your job is not going extinct right now.

Small note: Usage Tracking may be available only on the non-Autonomous version of Oracle Analytics Cloud, since some parts of the setup need command line access and server configuration changes which may not be available on the Autonomous version.

Setup

First a bit of setup: in order to connect to OAC all you need to do is download OBIEE's Administration Tool, install it and create an ODBC connection. After this we can open RazorSQL and create a connection.

[Image: RazorSQL Connections screen]

Then we need to specify our connection details by selecting Add Connection Profile, choosing OTHER as Connection Profile, then selecting ODBC as Connection Type and filling in the remaining properties. Please note that:

  • Datasource Name: Select the ODBC connection entry created with the Admin tool drivers
  • Login/Password: Enter the OAC/OBIEE credentials

[Image: RazorSQL connection profile settings]

Querying and Checking the Logs

Then it's time to connect. As expected, RazorSQL shows the list of Subject Areas as data sources; which ones appear depends on the security settings configured in WebLogic and the RPD.

[Image: list of Subject Areas shown in RazorSQL after connecting]

The login action is not visible in the Usage Tracking S_NQ_ACCT table; it should be logged in S_NQ_INITBLOCK if you have Init Blocks associated with the login. Let's start checking the data and see what's going to happen. First of all, let's explore which Tables and Columns are part of the Usage Tracking Subject Area by clicking on the + icon next to it.
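If you want to verify those logins, a query along these lines should work; this is a sketch assuming the default S_NQ_INITBLOCK layout (USER_NAME, BLOCK_NAME, START_TS), so check the column names in your own instance:

```sql
-- Recent Init Block executions; one or more rows per login,
-- depending on how many Init Blocks the RPD fires
SELECT USER_NAME,
       BLOCK_NAME,
       START_TS
FROM S_NQ_INITBLOCK
ORDER BY START_TS DESC;
```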

[Image: Usage Tracking Subject Area expanded into tables and columns]

The various Dims and Facts are exposed as tables by the ODBC driver. Now let's see if this action is logged in the database with the query:

SELECT USER_NAME, 
  QUERY_TEXT, 
  QUERY_SRC_CD, 
  START_TS, 
  END_TS, 
  ROW_COUNT 
FROM S_NQ_ACCT

[Image: S_NQ_ACCT rows logging the ODBC metadata calls]

We can clearly see that even checking the columns within the Measures table is logged as an ODBC call, with the QUERY_SRC_CD column NULL, as expected.
Now let's try to fire a proper SQL statement; we need to remember that the SQL we write has to be in the Logical SQL syntax. An example can be:

select `Topic`.`Repository Name` from `Usage Tracking`

Which in RazorSQL returns the row

[Image: query result returned in RazorSQL]

And in the database is logged as

[Image: the Logical SQL query logged in S_NQ_ACCT]

We can see the user who ran the query, the execution time (START_TS and END_TS), as well as the number of rows returned (ROW_COUNT).
We demonstrated that we now have all the info necessary to start tracking any misuse of OAC/OBIEE as a datasource via ODBC connections.

Automating the Tracking

The easiest solution to properly track this type of OBIEE usage is to have an Agent that, on a daily basis, reports on users accessing OAC/OBIEE via ODBC. This solution is very easy to implement since all the Usage Tracking tables are already part of the Repository. Creating an Agent that reports on Usage Tracking rows having a QUERY_SRC_CD of NULL, SOAP or rawSQL covers all the "non traditional" use-cases we have been talking about.

As mentioned above, sourcing aggregated data from OAC/OBIEE should be considered good practice since it provides the unique source of truth across the company. On the other side, exporting massive amounts of data should be avoided since end-user performance will be slow and there will be an impact on the OAC/OBIEE server. Thus setting an upper limit on the number of rows (e.g. ROW_COUNT > 100k) reported by the Agent could also mean identifying all the specific data-export cases that should drive an impact assessment and a possible solution redesign.
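The analysis backing such an Agent could be sketched like this; the 100,000-row threshold is just the example figure from above, so tune it to your platform:

```sql
-- Candidate rows for a daily "non-traditional access" Agent:
-- ODBC (NULL source), web services (SOAP), raw SQL, plus oversized extracts
SELECT USER_NAME,
       NVL(QUERY_SRC_CD, 'RPD ODBC') AS SOURCE,
       ROW_COUNT,
       START_TS
FROM S_NQ_ACCT
WHERE (QUERY_SRC_CD IS NULL
       OR UPPER(QUERY_SRC_CD) IN ('SOAP', 'RAWSQL')
       OR ROW_COUNT > 100000)
  AND QUERY_TEXT NOT LIKE '{CALL%'
  AND START_TS >= TRUNC(SYSDATE) - 1;
```

Wrapping this in an analysis and attaching it to a daily Agent gives administrators a ready-made "yesterday's non-standard access" report.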

Conclusion

Tools and options in the analytical market are exploding, and more and more we'll see companies using a number of different solutions for specific purposes. Centralised BI solutions, built over the years, provide the significant advantage of containing the unique source of truth across the company and should be preserved. Giving agility to analysts while keeping the centrality of well-defined and calculated KPIs is a challenge we'll face more and more often in the future.
OAC (or OBIEE on-premises) offers the duality of both Centralised and Self-Service Analytics methods, together with a variety of connection methods (web services, ODBC, Smartview), which makes it the perfect cornerstone of a company's analytical system.
Tracking down usage and discovering potential misuse of the platform is very easy, so inefficiencies can be addressed quickly to provide adequate agility and performance to all analytical business cases!

Categories: BI & Warehousing

Recovering from failed patch on virtualized ODA

Yann Neuhaus - Mon, 2018-11-12 05:17

When a patch fails on a virtualized Oracle Database Appliance (ODA), the ODA is often unusable because Linux and OAKD are patched to the new release while Grid Infrastructure is still on the old version. OAKD cannot be restarted in default mode because in this mode the active Grid Infrastructure version is checked, which will fail due to the old version. Grid Infrastructure cannot be started either, because OAKD controls access to the shared hardware on the ODA, and if OAKD is not running, the shared hardware cannot be accessed.

One way to resolve this problem is to reimage the ODA, which is time consuming and means that all databases and VMs have to be restored.

A workaround for this chicken-and-egg problem (I cannot guarantee that it is supported), as a last try before reimaging the ODA, could be to start OAKD in non-cluster mode. This poorly documented mode does not check the active Grid Infrastructure version but still gives access to the shared hardware. Additional VMs cannot be started because there is no master OAKD. In this mode, manual patching/upgrade of Grid Infrastructure is possible.

The non-cluster mode can be entered as follows (on every ODA node):


cp /opt/oracle/oak/install/oakdrun /opt/oracle/oak/install/oakdrun_orig
echo "non-cluster" > /opt/oracle/oak/install/oakdrun
cd /etc/init.d
./init.oak start

[root@xx init.d]# ps -ef | grep oakd
root 49697 49658 11 11:05 ? 00:00:02 /opt/oracle/oak/bin/oakd -non-cluster
root 50511 42821 0 11:05 pts/0 00:00:00 grep oakd

Now Grid Infrastructure patching or upgrade can be done.

If only an ODA_BASE VM exists and the timeframe for manual patching/upgrade is too short, another option is to start Grid Infrastructure on one ODA node and then start the services. Patching or reimaging has to be done in the next suitable timeframe.

Once Grid Infrastructure is running on the new version, OAKD can be started in default mode again:


echo "start" > /opt/oracle/oak/install/oakdrun
cd /etc/init.d
./init.oak start

[root@xx init.d]# ps -ef | grep oakd
root 30187 30117 13 10:18 ? 00:00:02 /opt/oracle/oak/bin/oakd foreground
root 31902 7569 0 10:18 pts/1 00:00:00 grep oakd

Perhaps manual patching/upgrade of other components has to be done afterwards.

After patching/upgrading, ODA has to be checked with:


oakcli show version -detail
oakcli validate -a

The article Recovering from failed patch on virtualized ODA first appeared on the dbi services Blog.

AWS: Networking - Virtual Private Cloud

Dietrich Schroff - Sun, 2018-11-11 14:06
After changing my AWS plans from Docker to Kubernetes, I decided to put the AWS services inside a VPC (Virtual Private Cloud).
With this decision my AWS services are not reachable from the internet - only my laptop can access them ;-)
Here are the official pictures from AWS:



Here is a list of customer gateway devices for which Amazon provides configuration settings:
  • Check Point Security Gateway running R77.10 (or later) software
  • Cisco ASA running Cisco ASA 8.2 (or later) software
  • Cisco IOS running Cisco IOS 12.4 (or later) software
  • Dell SonicWALL running SonicOS 5.9 (or later) software
  • Fortinet Fortigate 40+ Series running FortiOS 4.0 (or later) software
  • Juniper J-Series running JunOS 9.5 (or later) software
  • Juniper SRX running JunOS 11.0 (or later) software
  • Juniper SSG running ScreenOS 6.1, or 6.2 (or later) software
  • Juniper ISG running ScreenOS 6.1, or 6.2 (or later) software
  • Netgate pfSense running OS 2.2.5 (or later) software.
  • Palo Alto Networks PANOS 4.1.2 (or later) software
  • Yamaha RT107e, RTX1200, RTX1210, RTX1500, RTX3000 and SRT100 routers
  • Microsoft Windows Server 2008 R2 (or later) software
  • Microsoft Windows Server 2012 R2 (or later) software
  • Zyxel Zywall Series 4.20 (or later) software for statically routed VPN connections, or 4.30 (or later) software for dynamically routed VPN connections
The following requirements have to be met:
  • IKE Security Association (required to exchange the keys used to establish the IPsec security association)
  • IPsec Security Association (handles the tunnel's encryption, authentication, and so on)
  • Tunnel interface (receives traffic going to and from the tunnel)
  • (Optional) BGP peering (exchanges routes between the customer gateway and the virtual private gateway) for devices that use BGP
I do not own one of these devices, but I hope that the Linux laptop can be configured as a customer gateway with appropriate IPsec settings.
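To picture what such a setup looks like, here is a rough sketch of a single-tunnel strongSwan ipsec.conf for an AWS VPN connection. All IPs, subnets and proposal settings below are illustrative placeholders; the configuration file AWS generates for your connection is the authoritative source, including the matching pre-shared key for ipsec.secrets:

```
# /etc/ipsec.conf - sketch of one AWS VPN tunnel (values are placeholders)
conn Tunnel1
    auto=start
    left=%defaultroute
    leftid=203.0.113.10          # public IP of the customer gateway (laptop)
    right=198.51.100.20          # AWS virtual private gateway tunnel endpoint
    type=tunnel
    keyexchange=ikev1
    authby=secret
    ike=aes128-sha1-modp1024
    esp=aes128-sha1-modp1024
    ikelifetime=8h
    keylife=1h
    leftsubnet=192.168.178.0/24  # local network behind the laptop
    rightsubnet=10.0.0.0/16      # the VPC CIDR
```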

So let's configure the VPC at AWS:


And create a subnet for this VPC:



After that you have to add a virtual private gateway:




and attach it to your vpc:



You have to add a route from the VPC to your local network:


Then create a vpn connection:





Then download the configuration - and hurray: AWS provides a strongSwan configuration!
After I downloaded the file and followed the instructions provided there, I was able to connect, and the AWS dashboard showed that the connection is up:


and on my local machine:
root@zerberus:~/AWS# ipsec status
Security Associations (1 up, 0 connecting):
     Tunnel1[1]: ESTABLISHED 3 seconds ago, 192.168.178.60[XX.YY.YY.XX8]...34.246.243.178[34.246.243.178]
     Tunnel1{1}:  INSTALLED, TUNNEL, reqid 1, ESP in UDP SPIs: cb84b8e5_i 488e669b_o
     Tunnel1{1}:   0.0.0.0/0 === 0.0.0.0/0
