Feed aggregator

Hello world!

Mathias Magnusson - Sun, 2016-09-04 14:14

Welcome to mathiasmagnusson Sites. This is your first post. Edit or delete it, then start blogging!

Timecard Humor: A First-Timer’s Perspective on PaaS4SaaS Partner Enablement

Usable Apps - Sun, 2016-09-04 05:23

By Vikki Lira (@vklira), Oracle Applications User Experience

While it may not seem possible that humor could be found when discussing timecards and payroll, I’m here to say that it can be found, and dare I say it can be exciting at times, too.

As a new member of the Oracle Applications Cloud User Experience (OAUX) team, I recently had the opportunity to sit in on a two-day partner enablement workshop: Building Oracle ADF Simplified UIs with PaaS4SaaS. My first thought when I was invited was: “What does that even mean?” I came to discover that it was a hands-on design-and-build workshop facilitated by the OAUX team in collaboration with Oracle Partner Knex Technology. The purpose of the workshop was to enable PES Payroll to use the Oracle Applications Cloud UX Rapid Development Kit (RDK) to productively design and develop SaaS solutions for deployment to the Oracle Applications Cloud.

Ultan O'Broin explains the Oracle Cloud UX Goal of Increased Participation

Ultan O’Broin, fresh from Dublin, kicks off the Oracle Applications User Experience, Knex Technology, and PES Payroll collaborative workshop at Oracle HQ in the OAUX Participatory Design Room. (Photo by Misha Vaughan)

The OAUX team was led by Ultan O’Broin (@ultan), Senior Director, and Julian Orr (@Orr_UX), Principal User Experience Engineer. Ultan kicked off the day by reviewing the expectations of the workshop and giving an overview of the Oracle Cloud UX Design Strategy. The focus over the next two days would be to define the job-to-be-done and use the OAUX Simplified UI Rapid Development Kit (RDK) for Release 10 to enable PES Payroll to customize and extend the digital user experience of their payroll solution. The OAUX PaaS4SaaS RDK is a complete standalone kit that contains SaaS simplified code samples, design guidance, wireframing templates, and developer how-tos.

Oracle Cloud UX Design Patterns eBook

The OAUX Design Patterns eBook that contains the user experience design patterns we use to build the sleek, modern, simplified user interface.

Next up was Stephen Chow, Director of Software Development from PES Payroll, who gave an overview of PES, their business needs, and expectations. Larry Morris, VP of Product Development from PES, noted, “This is energizing to me. This is what we do.”

While I will confess there were many times during the day when the material became highly specialized and complex, Julian kept everyone on track until eventually a breakthrough was made and the group had an aha moment: they finally agreed on a workflow. This was a very, very big deal for digital disruption. Suddenly the team went from business as usual to a quantum breakthrough in their approach to simplicity and design. It was so impactful that some individuals (who shall remain nameless) actually started jumping up and down. It was really exciting and humorous because, after all, we were still talking about timecards.

According to Stephen Chow, the value of the workshop with the OAUX team and Knex Technology is that “it makes collaboration quicker, faster and you know when you have those aha moments, it’s really nice to celebrate with the team. It motivates everyone to keep moving forward and have more aha moments.”

On day 2, a fully sketched solution was agreed upon and presented to Jeremy Ashley (@jrwashley), Group Vice President, OAUX. Thanks to all the hard work done over the two days, combined with adherence to the OAUX design principles, the team produced a selection of user interfaces, highly rendered using Microsoft PowerPoint as a wireframing tool, that perfectly captured the essence of Oracle’s Cloud UX simplicity and PES’s business requirements. As Jeremy put it, “I look at it, I understand exactly what I need to do.” The design was clean, simple, and to the point.

Basheer Khan (@bkhan), Principal, Knex Technology, told us that by using the OAUX Simplified UI Rapid Development Kit (RDK) for Release 10 “it opens the doors for any new functionality the customer needs to add to what they already have and gives them a complete solution to run their business. It allows them to reach any aspect of Cloud applications and make the process more efficient. It’s just amazing!”

PES Payroll, Knex Technology, and the Oracle Cloud UX Team together.

John Flores and Stephen Chow (PES Payroll); Ultan (Oracle); Larry Morris (PES Payroll); Basheer Khan and CK Leow (Knex Technology); and Julian (Oracle) take a break from several hours of intensive design work to share some payroll humor and pose for a group photo. (Photo by Vikki Lira)

At the end of the workshop passion prevailed, with a dash of humor: a desire for a clear, accurate, and convenient payroll solution, combined with enthusiasm for clean, simple design. I look forward to participating in another workshop soon!

Want to find out more?

Adaptive Plans and cost of inactive branches

Yann Neuhaus - Sat, 2016-09-03 06:27

Here are the details behind an execution plan screenshot I tweeted recently because the numbers looked odd. It’s not a big problem, or maybe not a problem at all, just something surprising. I don’t like it when the numbers don’t match, so I try to reproduce the behaviour and get an explanation, just to be sure there is nothing hidden that I misunderstood.

Here is a similar test case joining two small tables DEMO1 and DEMO2 with specific stale statistics.
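The original setup is not listed here; the following is a minimal reconstruction on my side, sized to match the estimates in the plans below (200 rows for DEMO1, 100K rows for DEMO2, both tables empty at run time because the statistics are stale):

create table DEMO1 as select rownum n from xmltable('1 to 200');
create table DEMO2 as select rownum n, rpad('x',20,'x') pad from xmltable('1 to 100000');
create unique index DEMOPK on DEMO2(n);
exec dbms_stats.gather_table_stats(user,'DEMO1');
exec dbms_stats.gather_table_stats(user,'DEMO2');
-- empty the tables without refreshing the statistics, so the optimizer still sees 200 and 100K rows
delete from DEMO1;
delete from DEMO2;
commit;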

Hash Join

I start by forcing a full table scan to get a hash join:

select /*+ full(DEMO2) */ * from DEMO1 natural join DEMO2
Plan hash value: 3212315601
------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 130 (100)| 0 |00:00:00.01 | 3 | | | |
|* 1 | HASH JOIN | | 1 | 200 | 130 (1)| 0 |00:00:00.01 | 3 | 1696K| 1696K| 520K (0)|
| 2 | TABLE ACCESS FULL| DEMO1 | 1 | 200 | 3 (0)| 0 |00:00:00.01 | 3 | | | |
| 3 | TABLE ACCESS FULL| DEMO2 | 0 | 100K| 127 (1)| 0 |00:00:00.01 | 0 | | | |
------------------------------------------------------------------------------------------------------------------------------

The cost of the DEMO1 full table scan is 3 and the cost of the DEMO2 full table scan is 127. That’s a total of 130 (the cost of the hash join itself is negligible here).

Nested Loop

When forcing an index access, a nested loop will be used:

select /*+ index(DEMO2) */ * from DEMO1 natural join DEMO2
Plan hash value: 995663177
--------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers |
--------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 203 (100)| 0 |00:00:00.01 | 3 |
| 1 | NESTED LOOPS | | 1 | 200 | 203 (0)| 0 |00:00:00.01 | 3 |
| 2 | NESTED LOOPS | | 1 | 200 | 203 (0)| 0 |00:00:00.01 | 3 |
| 3 | TABLE ACCESS FULL | DEMO1 | 1 | 200 | 3 (0)| 0 |00:00:00.01 | 3 |
|* 4 | INDEX UNIQUE SCAN | DEMOPK | 0 | 1 | 0 (0)| 0 |00:00:00.01 | 0 |
| 5 | TABLE ACCESS BY INDEX ROWID| DEMO2 | 0 | 1 | 1 (0)| 0 |00:00:00.01 | 0 |
--------------------------------------------------------------------------------------------------------------

The cost of the index access is 1, and as it is expected to run 200 times, the total cost of the loops is 200. With the full table scan of DEMO1 (cost 3) the total is 203.

Adaptive plan

Here is an explain plan to see the initial plan with active and inactive branches:

SQL> explain plan for
2 select /*+ */ * from DEMO1 natural join DEMO2;
SQL> select * from table(dbms_xplan.display(format=>'adaptive'));
Plan hash value: 3212315601
------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 200 | 6400 | 130 (1)| 00:00:01 |
| * 1 | HASH JOIN | | 200 | 6400 | 130 (1)| 00:00:01 |
|- 2 | NESTED LOOPS | | 200 | 6400 | 130 (1)| 00:00:01 |
|- 3 | NESTED LOOPS | | | | | |
|- 4 | STATISTICS COLLECTOR | | | | | |
| 5 | TABLE ACCESS FULL | DEMO1 | 200 | 1000 | 3 (0)| 00:00:01 |
|- * 6 | INDEX UNIQUE SCAN | DEMOPK | | | | |
|- 7 | TABLE ACCESS BY INDEX ROWID| DEMO2 | 1 | 27 | 127 (1)| 00:00:01 |
| 8 | TABLE ACCESS FULL | DEMO2 | 100K| 2636K| 127 (1)| 00:00:01 |
------------------------------------------------------------------------------------------

The active branches (full table scan) have the correct cost: 127 + 3 = 130

However, that’s not the case with the inactive ones: there are no estimations for the ‘INDEX UNIQUE SCAN’, and it seems that the ‘TABLE ACCESS BY INDEX ROWID’ gets its cost from the full table scan (here 127).

It’s just an observation here. I have no explanation for it and no idea about the consequences, except the big surprise when you see the numbers. I guess that the cost of the inactive branches is meaningless. What is important is that the right cost has been used to determine the inflection point.

Because the index access has a cost of 1, the cost of the nested loop becomes higher than the full table scan (estimated at 127) when there are more than 127 loops. This is what we see in the 10053 trace:
SQL> host grep ^DP DEMO14_ora_19470_OPTIMIZER.trc
DP: Found point of inflection for NLJ vs. HJ: card = 127.34
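For reference, a 10053 optimizer trace like the one grepped above can be produced along these lines (a sketch only; the actual trace file name and location will differ on your system):

alter session set tracefile_identifier='OPTIMIZER';
alter session set events '10053 trace name context forever, level 1';
explain plan for select /*+ */ * from DEMO1 natural join DEMO2;
alter session set events '10053 trace name context off';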

Now, as I have no rows in the tables, the nested loop branch will be activated in place of the hash join. So if we display the plan once it is resolved, we will see the lines with an unexpected cost:

Plan hash value: 995663177
--------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers |
--------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 130 (100)| 0 |00:00:00.01 | 3 |
| 1 | NESTED LOOPS | | 1 | 200 | 130 (1)| 0 |00:00:00.01 | 3 |
| 2 | NESTED LOOPS | | 1 | | | 0 |00:00:00.01 | 3 |
| 3 | TABLE ACCESS FULL | DEMO1 | 1 | 200 | 3 (0)| 0 |00:00:00.01 | 3 |
|* 4 | INDEX UNIQUE SCAN | DEMOPK | 0 | | | 0 |00:00:00.01 | 0 |
| 5 | TABLE ACCESS BY INDEX ROWID| DEMO2 | 0 | 1 | 127 (1)| 0 |00:00:00.01 | 0 |
--------------------------------------------------------------------------------------------------------------

I think it’s harmless, just a bit misleading: 127 is not the cost of the index access, it’s the cost of the full table scan.
I had this surprise when trying to understand why the optimizer chose a full scan instead of an index access. That is probably the only reason why I look at the cost: I use hints to force the plan that I think is better, in order to understand where the optimizer thinks it is more expensive.

 

Cet article Adaptive Plans and cost of inactive branches est apparu en premier sur Blog dbi services.

Implicit cursor (SELECT INTO...) behavior

Tom Kyte - Fri, 2016-09-02 14:06
When using SELECT INTO... and have multiple rows returning by query, if the target variable declared strongly (e.g....%TYPE), it keeps the result of a fetch, if declared weakly (e.g....NUMBER), it keeps nothing for the same case. What is going? ...
Categories: DBA Blogs

String Pattern Search...

Tom Kyte - Fri, 2016-09-02 14:06
Hi, I am trying to define a regular expression pattern which satisfies the following criteria for string match: String Length: Maximum 13 characters Allowed Characters: Any digit (0-9) and ":" Condition: ":" can occur only once, digits can re...
Categories: DBA Blogs

Tricky Sequence value requirement thru Trigger

Tom Kyte - Fri, 2016-09-02 14:06
Hi Wonderful Team, I have an oracle sequence say s. I have table t ( id number , n number ); I need to insert the Id column with the sequence s such that the value is the same for the entire transaction thru a "Trigger" So this Id change...
Categories: DBA Blogs

Cannot create the method in customer type.Can anyone explain me the reason .Thanks

Tom Kyte - Fri, 2016-09-02 14:06
create type deposit_ty2 as object( depNo number, depCatagory ref depcatagoey_ty2, amount number, period number ) / create type deposit_ntty2 as table of deposit_ty2 / create type address_tyy as object( homeNo number, street char(14), ...
Categories: DBA Blogs

Materialized Views

Tom Kyte - Fri, 2016-09-02 14:06
Hi, I have a scenario as below. a) Table A with 10 columns of which 3 columns form a composite primary key b) A Materialzed view is created on top of the Table A with primary key enabled and fast refresh c) Materilized view log is also crea...
Categories: DBA Blogs

Avoid TRUNC and using between on sysdate

Tom Kyte - Fri, 2016-09-02 14:06
Hi , I have the below query where am using TRUNC on the column2 filed which is increasing the performance delay. so need to avoid the TRUNC on this check and need to use BETWEEN operator. Here SYSTEMDATE is the currentdateandtime. So ho...
Categories: DBA Blogs

Delete 50 percent data from a table with billions of records.

Tom Kyte - Fri, 2016-09-02 14:06
Hi team, I have a nightmare recently, it comes along with a poor table design of our customer's database: A table named T_PRODUCT_TEST_DATA which has more than 3.6 billion records. What's worse, neither is this table a partitioned table nor has a DA...
Categories: DBA Blogs

Pluggable Database not open automatically

Tom Kyte - Fri, 2016-09-02 14:06
Some Days before I have Install Oracle Database New version 12.1.0.2 with no error or warnings. Created 2 pluggable databases in the DB container. <code>SELECT name, open_mode from v$pdbs; NAME OPEN_MODE ------------...
Categories: DBA Blogs

Oracle Service Secrets: quiesce tactically

Pythian Group - Fri, 2016-09-02 10:18

In the last post of this series about Oracle net services, I talked about how services can help you identify performance issues faster and more easily by tagging connections with service names. Today I am introducing you to the idea of temporarily disabling connections during maintenance with the help of services.

During deployments, testing or reorganizations it might be necessary to prevent clients from connecting to the database while still allowing access for DBAs to do their work. Some methods to do this include temporarily locking application user accounts or putting the database in quiesce mode. But with services, you now also have a more tactical approach to this issue.

My example assumes a single instance with two services, DEMO_BATCH and DEMO_OLTP. Let’s assume that we need to temporarily disable the batch service, maybe just to reduce system load caused by those activities, or maybe because we are reorganizing the objects used by the batch processes.

To disable a service in a single instance we can either remove it from the SERVICE_NAMES instance parameter or use the DBMS_SERVICE package.
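Removing the service from SERVICE_NAMES would look roughly like this (a sketch; it assumes the parameter currently lists both services):

show parameter service_names
-- re-set the list without DEMO_BATCH so it is no longer registered with the listener
alter system set service_names='DEMO_OLTP' scope=both;

For the rest of this post I will use the DBMS_SERVICE route instead, starting from the list of currently active services: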

SELECT NAME FROM V$ACTIVE_SERVICES;

NAME
----------------------------------------------------------------
DEMO_BATCH
DEMO_OLTP
ORCLXDB
ORCL.PYTHIAN.COM
SYS$BACKGROUND
SYS$USERS

exec DBMS_SERVICE.STOP_SERVICE('DEMO_BATCH');

PL/SQL procedure successfully completed.

New sessions using the service name will receive an ORA-12514 error when trying to connect:

brbook:~ brost$ ./sqlcl/bin/sql brost/******@192.168.78.101:1521/DEMO_BATCH.PYTHIAN.COM

SQLcl: Release 4.2.0.16.175.1027 RC on Thu Aug 18 13:12:27 2016

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

  USER          = brost
  URL           = jdbc:oracle:thin:@192.168.78.101:1521/DEMO_BATCH.PYTHIAN.COM
  Error Message = Listener refused the connection with the following error:
ORA-12514, TNS:listener does not currently know of service requested in connect descriptor
Existing sessions are allowed to continue

Note that stopping will only affect new connections. Existing sessions that used the DEMO_BATCH service are allowed to continue until they disconnect or you kill them. This gives you the flexibility of a grace period where you just wait for existing sessions to finish their work and disconnect by themselves.

SELECT NAME FROM V$ACTIVE_SERVICES WHERE NAME = 'DEMO_BATCH';
no rows selected

SELECT SERVICE_NAME, USERNAME FROM V$SESSION WHERE SERVICE_NAME='DEMO_BATCH';

SERVICE_NAME         USERNAME
-------------------- ------------------------------
DEMO_BATCH           BROST
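If you don’t want to wait for those remaining sessions, a query like the following (just a sketch; the session identifiers obviously depend on your system) generates the KILL statements for whatever is still connected through the stopped service:

select 'alter system kill session '''||sid||','||serial#||''' immediate;' as kill_cmd
  from v$session
 where service_name = 'DEMO_BATCH';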
Grid Infrastructure has an option to force disconnects

If you are using Grid Infrastructure and manage services through srvctl, this behaviour is basically the same, but you get an extra “force” switch to also disconnect existing sessions while stopping a service.

[oracle@ractrial1 ~]$ srvctl stop service -db orcl42 -service racdemo_batch [-force]

[oracle@ractrial1 ~]$ srvctl stop service -h

Stops the service.

Usage: srvctl stop service -db <db_unique_name> [-service  "<service_name_list>"] [-serverpool <pool_name>] [-node <node_name> | -instance <inst_name>] [-pq] [-global_override] [-force [-noreplay]] [-eval] [-verbose]
    -db <db_unique_name>           Unique name for the database
    -service "<serv,...>"          Comma separated service names
    -serverpool <pool_name>        Server pool name
    -node <node_name>              Node name
    -instance <inst_name>          Instance name
    -pq                            To perform the action on parallel query service
    -global_override               Override value to operate on a global service.Ignored for a non-global service
    -force                         Disconnect all sessions during stop or relocate service operations
    -noreplay                      Disable session replay during disconnection
    -eval                          Evaluates the effects of event without making any changes to the system
    -verbose                       Verbose output
    -help                          Print usage
Conclusion

Creating extra services on a database allows you to stop and start them for maintenance, which is a convenient way to lock out only certain parts of an application while leaving user accounts unlocked so they can still connect via other services.
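And when the maintenance window closes, bringing the service back is simply the mirror image of stopping it (shown here as a sketch, reusing the names from above):

exec DBMS_SERVICE.START_SERVICE('DEMO_BATCH');

-- or, when the service is managed by Grid Infrastructure:
-- srvctl start service -db orcl42 -service racdemo_batch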

Categories: DBA Blogs

Initiate a local GIT repository in command line

Yann Neuhaus - Fri, 2016-09-02 10:08

GIT

The objective of this document is to describe how to manually start a development project from an existing GIT repository, using the command line.

Using the GIT protocol for software development empowers project team management. It is intended to ease source code management in terms of versioning, branching, and sharing between all team members.

    GIT platform Architecture

GIT is a distributed version control system, which means developers can share source code from their workstations with others without the need for any centralized repository. However, at dbi services we made the choice to deploy a centralized repository platform, first in order to avoid manual synchronization between all developers, then to benefit from a shared common project collaboration platform, like GitHub or GitLab.

Before being allowed to push to the centralized repository, a developer must first make sure he has the latest source code revision in the local repository on his workstation (pull). He can then commit his changes locally, correct merge conflicts if there are any, and finally push to the centralized platform.
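In command form, that everyday cycle looks roughly like this (a sketch; the remote name “origin” and the branch “master” are the ones used later in this post):

git pull origin master            # get the latest revision from the central repository
                                  # (resolve any merge conflicts reported here)
git add <changed files>           # stage your modifications
git commit -m "describe the change"
git push origin master            # share the commit with the team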

GIT Architecture

 

    Manual / Command line management

This section will demonstrate how to initiate developer’s local source code management with a remote GIT repository, (as well as from a collaboration platform like GitLab), using the command line.

These commands run out of the box on a Linux operating system.
Under Windows, you must install the “git-bash” application.

There are 2 cases for a project initialization:

–    Starting a project from your source code
–    Getting source code from a shared repository

First of all, a GIT repository has to be created on the GIT collaboration platform. Ask the GIT platform administrators to create the project.

Before starting, it is recommended to update your GIT personal information:

git config --global user.name user
git config --global user.email user@xxx.com

 

Check status of your GIT configuration:

git config --list

       

        Project initialization from local source code

First you must go to your project folder. It is recommended to have the “src” folder underneath.

GIT repository initialization:

git init

 

Create a “master” branch on your local and on the remote GIT repository.

For local branch creation, you will need to add and commit something (like a README.txt file):

git add README.txt
git commit -m "adding README.txt"
git branch
* master

 

For remote branch creation, you must first create the local branch, add the remote repository “origin”, then push to the shared repository:

git remote add origin http://<your git server>/<your repo>.git
git push origin master

“origin” is the name that points to the remote repository.
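You can list the remotes configured for your local repository at any time:

git remote -v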

 

        Project initialization getting source code from shared repository

Get source code from the repository:

git clone http://<your git server>/<your repo>.git <your destination folder>

 

Congratulations, you are now ready to use GIT with your new project!

 

Cet article Initiate a local GIT repository in command line est apparu en premier sur Blog dbi services.

Auditing in PostgreSQL

Yann Neuhaus - Fri, 2016-09-02 09:51

Today, especially in the pharma and banking sectors, sooner or later you will be faced with the requirement of auditing. Detailed requirements will vary, but usually at least tracking logons to the database is a must. Some companies need more information to pass their internal audits, such as: who created which objects, who fired which SQL against the database, who was given which permissions, and so on. In this post we’ll look at what PostgreSQL can offer here.

PostgreSQL comes with a comprehensive logging system by default. In my 9.5.4 instance there are 28 parameters related to logging:

(postgres@[local]:5438) [postgres] > select count(*) from pg_settings where name like 'log%';
 count 
-------
    28
(1 row)

Not all of them are relevant when it comes to auditing, but some can be used for a minimal auditing setup. For logons and logoffs there are “log_connections” and “log_disconnections”:

(postgres@[local]:5438) [postgres] > alter system set log_connections=on;
ALTER SYSTEM
(postgres@[local]:5438) [postgres] > alter system set log_disconnections=on;
ALTER SYSTEM
(postgres@[local]:5438) [postgres] > select context from pg_settings where name in ('log_dicconnections','log_connections');
      context      
-------------------
 superuser-backend
(1 row)
(postgres@[local]:5438) [postgres] > select pg_reload_conf();
 pg_reload_conf 
----------------
 t
(1 row)

From now on, as soon as someone connects to or disconnects from the instance it is reported in the logfile:

2016-09-02 10:35:56.983 CEST - 2 - 13021 - [local] - postgres@postgres LOG:  connection authorized: user=postgres database=postgres
2016-09-02 10:36:04.820 CEST - 3 - 13021 - [local] - postgres@postgres LOG:  disconnection: session time: 0:00:07.837 user=postgres database=postgres host=[local]

Another parameter that might be useful for auditing is “log_statement”. When you set this to “ddl”, all DDLs are logged; when you set it to “mod”, all DDLs plus all statements that modify data are logged. To log all statements there is the value “all”.

(postgres@[local]:5438) [postgres] > alter system set log_statement='all';
ALTER SYSTEM

For new sessions, all statements will be logged from now on:

2016-09-02 10:45:15.859 CEST - 3 - 13086 - [local] - postgres@postgres LOG:  statement: create table t ( a int );
2016-09-02 10:46:44.064 CEST - 4 - 13098 - [local] - postgres@postgres LOG:  statement: insert into t values (1);
2016-09-02 10:47:00.162 CEST - 5 - 13098 - [local] - postgres@postgres LOG:  statement: update t set a = 2;
2016-09-02 10:47:10.606 CEST - 6 - 13098 - [local] - postgres@postgres LOG:  statement: delete from t;
2016-09-02 10:47:22.012 CEST - 7 - 13098 - [local] - postgres@postgres LOG:  statement: truncate table t;
2016-09-02 10:47:25.284 CEST - 8 - 13098 - [local] - postgres@postgres LOG:  statement: drop table t;

Be aware that your logfile can grow significantly if you turn this on and especially if you set the value to “all”.
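If you do turn it on, it is worth making sure the logging collector rotates the files. A possible starting point (the values are only examples, not recommendations, and rotation only applies when logging_collector is enabled):

alter system set log_rotation_size = '100MB';
alter system set log_rotation_age = '1d';
select pg_reload_conf();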

That’s more or less it when it comes to auditing with the default logging system: you can audit logons, logoffs and SQL statements. This might be sufficient for your requirements, but it also might not be. What do you do if you need, for example, to audit at the object level? With the default logging parameters you cannot do this. But, as always in PostgreSQL, there is an extension: pgaudit.

If you want to install this extension you’ll need the PostgreSQL source code. To show the complete procedure, here is a PostgreSQL setup from source. Obviously the first step is to download and extract the source code:

postgres@pgbox:/u01/app/postgres/software/ [PG953] cd /u01/app/postgres/software/
postgres@pgbox:/u01/app/postgres/software/ [PG953] wget https://ftp.postgresql.org/pub/source/v9.5.4/postgresql-9.5.4.tar.bz2
--2016-09-02 09:39:29--  https://ftp.postgresql.org/pub/source/v9.5.4/postgresql-9.5.4.tar.bz2
Resolving ftp.postgresql.org (ftp.postgresql.org)... 213.189.17.228, 217.196.149.55, 87.238.57.227, ...
Connecting to ftp.postgresql.org (ftp.postgresql.org)|213.189.17.228|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 18496299 (18M) [application/x-bzip-compressed-tar]
Saving to: ‘postgresql-9.5.4.tar.bz2’

100%[==================================================================================>] 18'496'299  13.1MB/s   in 1.3s   

2016-09-02 09:39:30 (13.1 MB/s) - ‘postgresql-9.5.4.tar.bz2’ saved [18496299/18496299]

postgres@pgbox:/u01/app/postgres/software/ [PG953] tar -axf postgresql-9.5.4.tar.bz2 
postgres@pgbox:/u01/app/postgres/software/ [PG953] cd postgresql-9.5.4

Then do the usual configure, make and make install:

postgres@pgbox:/u01/app/postgres/software/ [PG953] PGHOME=/u01/app/postgres/product/95/db_4
postgres@pgbox:/u01/app/postgres/software/ [PG953] SEGSIZE=2
postgres@pgbox:/u01/app/postgres/software/ [PG953] BLOCKSIZE=8
postgres@pgbox:/u01/app/postgres/software/ [PG953] ./configure --prefix=${PGHOME} \
            --exec-prefix=${PGHOME} \
            --bindir=${PGHOME}/bin \
            --libdir=${PGHOME}/lib \
            --sysconfdir=${PGHOME}/etc \
            --includedir=${PGHOME}/include \
            --datarootdir=${PGHOME}/share \
            --datadir=${PGHOME}/share \
            --with-pgport=5432 \
            --with-perl \
            --with-python \
            --with-tcl \
            --with-openssl \
            --with-pam \
            --with-ldap \
            --with-libxml \
            --with-libxslt \
            --with-segsize=${SEGSIZE} \
            --with-blocksize=${BLOCKSIZE} \
            --with-wal-segsize=16  \
            --with-extra-version=" dbi services build"
postgres@pgbox:/u01/app/postgres/software/postgresql-9.5.4/ [PG953] make world
postgres@pgbox:/u01/app/postgres/software/postgresql-9.5.4/ [PG953] make install
postgres@pgbox:/u01/app/postgres/software/postgresql-9.5.4/ [PG953] cd contrib
postgres@pgbox:/u01/app/postgres/software/postgresql-9.5.4/ [PG953] make install

Once this is done you can continue with the installation of the pgaudit extension:

postgres@pgbox:/u01/app/postgres/software/postgresql-9.5.4/ [PG953] git clone https://github.com/pgaudit/pgaudit.git
postgres@pgbox:/u01/app/postgres/software/postgresql-9.5.4/contrib/ [PG953] cd pgaudit/
postgres@pgbox:/u01/app/postgres/software/postgresql-9.5.4/contrib/pgaudit/ [PG953] make -s check
============== creating temporary instance            ==============
============== initializing database system           ==============
============== starting postmaster                    ==============
running on port 57736 with PID 8635
============== creating database "contrib_regression" ==============
CREATE DATABASE
ALTER DATABASE
============== running regression test queries        ==============
test pgaudit                  ... ok
============== shutting down postmaster               ==============
============== removing temporary instance            ==============

=====================
 All 1 tests passed. 
=====================

postgres@pgbox:/u01/app/postgres/software/postgresql-9.5.4/contrib/pgaudit/ [PG953] make install
/usr/bin/mkdir -p '/u01/app/postgres/product/95/db_4/lib'
/usr/bin/mkdir -p '/u01/app/postgres/product/95/db_4/share/extension'
/usr/bin/mkdir -p '/u01/app/postgres/product/95/db_4/share/extension'
/usr/bin/install -c -m 755  pgaudit.so '/u01/app/postgres/product/95/db_4/lib/pgaudit.so'
/usr/bin/install -c -m 644 ./pgaudit.control '/u01/app/postgres/product/95/db_4/share/extension/'
/usr/bin/install -c -m 644 ./pgaudit--1.0.sql  '/u01/app/postgres/product/95/db_4/share/extension/'

That’s it. Initialize a new cluster:

postgres@pgbox:/u01/app/postgres/software/ [PG954] initdb -D /u02/pgdata/PG954 -X /u03/pgdata/PG954
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locales
  COLLATE:  en_US.UTF-8
  CTYPE:    en_US.UTF-8
  MESSAGES: en_US.UTF-8
  MONETARY: de_CH.UTF-8
  NUMERIC:  de_CH.UTF-8
  TIME:     en_US.UTF-8
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

creating directory /u02/pgdata/PG954 ... ok
creating directory /u03/pgdata/PG954 ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
creating template1 database in /u02/pgdata/PG954/base/1 ... ok
initializing pg_authid ... ok
initializing dependencies ... ok
creating system views ... ok
loading system objects' descriptions ... ok
creating collations ... ok
creating conversions ... ok
creating dictionaries ... ok
setting privileges on built-in objects ... ok
creating information schema ... ok
loading PL/pgSQL server-side language ... ok
vacuuming database template1 ... ok
copying template1 to template0 ... ok
copying template1 to postgres ... ok
syncing data to disk ... ok

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

    pg_ctl -D /u02/pgdata/PG954 -l logfile start

… and install the extension:

postgres@pgbox:/u02/pgdata/PG954/ [PG954] psql postgres
psql (9.5.4 dbi services build)
Type "help" for help.

(postgres@[local]:5438) [postgres] > create extension pgaudit;
ERROR:  pgaudit must be loaded via shared_preload_libraries
Time: 2.226 ms

(postgres@[local]:5438) [postgres] > alter system set shared_preload_libraries='pgaudit';
ALTER SYSTEM
Time: 18.236 ms

##### Restart the PostgreSQL instance

(postgres@[local]:5438) [postgres] > show shared_preload_libraries ;
 shared_preload_libraries 
--------------------------
 pgaudit
(1 row)

Time: 0.278 ms
(postgres@[local]:5438) [postgres] > create extension pgaudit;
CREATE EXTENSION
Time: 4.688 ms

(postgres@[local]:5438) [postgres] > \dx
                   List of installed extensions
  Name   | Version |   Schema   |           Description           
---------+---------+------------+---------------------------------
 pgaudit | 1.0     | public     | provides auditing functionality
 plpgsql | 1.0     | pg_catalog | PL/pgSQL procedural language
(2 rows)

Ready. So, what can you do with it? As the documentation is quite good, here are just a few examples.

To log all statements related to roles and privileges (CREATE/ALTER/DROP ROLE, GRANT, REVOKE):

(postgres@[local]:5438) [postgres] > alter system set pgaudit.log = 'ROLE';

Creating, altering, or dropping roles is from now on reported in the logfile as:

2016-09-02 14:50:45.432 CEST - 9 - 13353 - [local] - postgres@postgres LOG:  AUDIT: SESSION,2,1,ROLE,CREATE ROLE,,,create user uu login password ,
2016-09-02 14:52:03.745 CEST - 16 - 13353 - [local] - postgres@postgres LOG:  AUDIT: SESSION,3,1,ROLE,ALTER ROLE,,,alter user uu CREATEDB;,
2016-09-02 14:52:20.881 CEST - 18 - 13353 - [local] - postgres@postgres LOG:  AUDIT: SESSION,4,1,ROLE,DROP ROLE,,,drop user uu;,

Object level auditing can be implemented like this (check the documentation for the meaning of the pgaudit.role parameter):

(postgres@[local]:5438) [postgres] > create user audit;
CREATE ROLE
(postgres@[local]:5438) [postgres] > create table taudit ( a int );
CREATE TABLE
(postgres@[local]:5438) [postgres] > insert into taudit values ( 1 );
INSERT 0 1
(postgres@[local]:5438) [postgres] > grant select,delete on taudit to audit;
GRANT
(postgres@[local]:5438) [postgres] > alter system set pgaudit.role='audit';
ALTER SYSTEM
(postgres@[local]:5438) [postgres] > select pg_reload_conf();
 pg_reload_conf 
----------------
 t
(1 row)

Once we touch the table:

(postgres@[local]:5438) [postgres] > select * from taudit;
 a 
---
 1
(1 row)
(postgres@[local]:5438) [postgres] > update taudit set a = 4;

… the audit information appears in the logfile:

2016-09-02 14:57:10.198 CEST - 5 - 13708 - [local] - postgres@postgres LOG:  AUDIT: OBJECT,1,1,READ,SELECT,TABLE,public.taudit,select * from taudit;,
2016-09-02 15:00:59.537 CEST - 9 - 13708 - [local] - postgres@postgres LOG:  AUDIT: OBJECT,2,1,WRITE,UPDATE,TABLE,public.taudit,update taudit set a = 4;,

Have fun with auditing …

 

Cet article Auditing in PostgreSQL est apparu en premier sur Blog dbi services.

Oracle OpenWorld 2016 My Cloud ERP Sessions

David Haimes - Fri, 2016-09-02 09:33

    OpenWorld is this month, so time to start planning your agenda.  I’ll be presenting at a few different sessions this year with a focus on Financials Cloud (Part of the ERP Cloud) and E-Business Suite.

    I’m busy preparing for the first two sessions, hope to see you there.  Check out the content catalog for more details and to add them to your agenda.

    How Oracle E-Business Suite Customers Have Achieved Modern Reporting in the Cloud [CON7313]
    David Haimes, Senior Director, Financial Applications Development, Oracle
    Sara Mannur, Financial Systems Analyst, Niagara Bottling
    This session discusses how to leverage powerful modern reporting tools with no disruption to your existing Oracle E-Business Suite implementation. You will learn the steps required to start using the cloud service. Customers who have implemented Oracle Fusion Accounting Hub Reporting Cloud Service share their implementation experiences and the business benefits they have realized.
    Conference Session
    Tuesday, Sep 20, 12:15 p.m. – 1:00 p.m. | Moscone West—3016
    Oracle ERP Cloud UX Extensibility: From Mystery to Magic [CON7312]
    David Haimes, Senior Director, Financial Applications Development, Oracle
    Tim Dubois, Senior Director, Applications User Experience, Oracle
    The user experience (UX) design strategy of Oracle Enterprise Resource Planning Cloud (Oracle ERP Cloud)—including financials and product portfolio management—is about simplicity, mobility, and extensibility. Extensibility and admin personalization of Oracle ERP Cloud experiences cover a range of tools: from a simplified approach for rebranding the applications to match your company culture and image, to page-level admin personalization and extension, to building your own simplified cloud application in platform as a service using the UX Rapid Development Kit (RDK). In this session, learn about RDK code samples, wireframing tools, and design patterns. Get a view into the RDK roadmap and where Oracle is going next.
    Conference Session
    Thursday, Sep 22, 1:15 p.m. – 2:00 p.m. | Moscone West—3001
    Meet the Experts: Oracle Financials Cloud [MTE7784]
    Do not miss this opportunity to meet with Oracle Financials Cloud experts—the people who design and build the applications. In this session, you can have discussions regarding the Oracle Applications Cloud strategy and your specific business and IT strategy. The experts are available to discuss the value of the latest releases and share insights into the best path for your enterprise, so come ready with your questions.
    Meet the Experts Session
    Wednesday, Sep 21, 3:00 p.m. – 3:45 p.m. | Moscone West—3001A

Categories: APPS Blogs

Brand Transformation Starts with Oracle Marketing Cloud at OpenWorld 2016!

Linda Fishman Hoyle - Fri, 2016-09-02 09:13

A Guest Post by Jennifer Dennis, Director of Marketing, Oracle (pictured left)

Learn how leading companies deliver the best of their brand with Oracle Marketing Cloud at Oracle OpenWorld 2016 in San Francisco, September 18–22, 2016.

Hear Laura Ipsen, General Manager and Senior Vice President, Oracle Marketing Cloud (pictured right), at our general session on Brand Experience: Modern Marketing Transformation. You’ll get insights into how brands are making this data-driven digital transformation and achieving dramatic results. Laura will be joined by Ninish Ukkan, Senior Vice President, Head of Technology for eHarmony.

Then see how Modern Marketing works in sessions featuring MongoDB, DocuSign, Nestle USA, Team One, LinkedIn, CSC, Clorox, technology partners, and Modern Marketing experts. Attendees will experience how Oracle Marketing Cloud is the solution that marketers love and IT trusts.

Whether you’re focused on marketing automation or cross-channel marketing, you’ll get insights into solutions for data-driven, mobile, and account-based marketing to help you achieve a personalized customer experience.

Modern Marketing experts will offer demonstrations and an in-depth look at some of the exciting new solutions and features released for Oracle Marketing Cloud.

You won’t want to miss these opportunities to transform yourself into a Modern Marketer. If you have questions, contact me at jennifer.dennis@oracle.com.

Use these links to get more information and to register


.... this is only a psuedo object?

Darwin IT - Fri, 2016-09-02 06:00
Yesterday I was working on a BPEL project that I created before the summer holidays. I wanted to implement it further. But on first redeployment I ran into:
[12:18:01 PM] ----  Deployment started.  ----
[12:18:01 PM] Target platform is (Weblogic 12.x).
[12:18:01 PM] Running dependency analysis...
[12:18:01 PM] Building...
[12:18:08 PM] Deploying profile...
[12:18:09 PM] Wrote Archive Module to D:\Projects\2016DWN\SOASuite\HRApplication\DWN_CdmHR\trunk\SOA\DWN_CdmHR\CDMHRDomainService\deploy\sca_CDMHRDomainService.jar
[12:18:18 PM] Deploying sca_CDMHRDomainService.jar to partition "default" on server SoaServer1 [http://darlin-vce-db.darwin-it.local:8001]
[12:18:18 PM] Processing sar=/D:/Projects/2016DWN/SOASuite/HRApplication/DWN_CdmHR/trunk/SOA/DWN_CdmHR/CDMHRDomainService/deploy/sca_CDMHRDomainService.jar
[12:18:18 PM] Adding sar file - D:\Projects\2016DWN\SOASuite\HRApplication\DWN_CdmHR\trunk\SOA\DWN_CdmHR\CDMHRDomainService\deploy\sca_CDMHRDomainService.jar
[12:18:18 PM] Preparing to send HTTP request for deployment
[12:18:18 PM] Creating HTTP connection to host:darlin-vce-db.darwin-it.local, port:8001
[12:18:18 PM] Sending internal deployment descriptor
[12:18:19 PM] Sending archive - sca_CDMHRDomainService.jar
[12:18:19 PM] Received HTTP response from the server, response code=500
[12:18:19 PM] Error deploying archive sca_CDMHRDomainService.jar to partition "default" on server SoaServer1 [http://darlin-vce-db.darwin-it.local:8001]
[12:18:19 PM] HTTP error code returned [500]
[12:18:19 PM] Error message from server:
There was an error deploying the composite on SoaServer1: Deployment Failed: Error occurred during deployment of component: HREmployeeProcess to service engine: implementation.bpel, for composite: CDMHRDomainService: ORABPEL-05215

Error while loading process.
The process domain is encountering the following errors while loading the process "HREmployeeProcess" (composite "default/CDMHRDomainService!1.0*soa_6e4206b5-3297-4f53-9944-734349aed8ab"): this is only a psuedo object.
This error contained an exception thrown by the underlying process loader module.
Check the exception trace in the log (with logging level set to debug mode). If there is a patch installed on the server, verify that the bpelcClasspath domain property includes the patch classes.
.

[12:18:19 PM] Check server log for more details.
[12:18:19 PM] Error deploying archive sca_CDMHRDomainService.jar to partition "default" on server SoaServer1 [http://darlin-vce-db.darwin-it.local:8001]
[12:18:19 PM] Deployment cancelled.
[12:18:19 PM] ---- Deployment incomplete ----.
[12:18:19 PM] Error deploying archive file:/D:/Projects/2016DWN/SOASuite/HRApplication/DWN_CdmHR/trunk/SOA/DWN_CdmHR/CDMHRDomainService/deploy/sca_CDMHRDomainService.jar
(oracle.tip.tools.ide.fabric.deploy.common.SOARemoteDeployer)

So I was googling around and found this blog entry. It suggested a mismatch between the project and the wsdl’s/xsd’s referenced in the MDS.

So I refreshed the MDS, restarted the whole SOA Server, but no luck.

On the verge of removing the lot of components and references, I decided to take one last, closer look at the composite.xml.

The BPEL process component HREmployeeProcess had a reference to the service HREmployeeProcessSubscriber. The latter was based on a wsdl in the mds:
  <reference name="HREmployeeProcessSubscriber"
             ui:wsdlLocation="oramds:/apps/CDM/services/domain/operations/hrm/v2/EmployeeDomainEntityEventService.wsdl">
    <interface.wsdl interface="http://hhs.nl/services/domain/operations/hrm/v2/#wsdl.interface(EmployeeDomainEntityEventServicePortType)"/>
    <binding.ws port="http://hhs.nl/services/domain/operations/hrm/v2/#wsdl.endpoint(hremployeeprocessa_client/EmployeeDomainEntityEventServicePort)"
                location="http://darlin-vce-db:8001/soa-infra/services/default/HRSubscriberA/HREmployeeEventServiceA?WSDL"
                soapVersion="1.1"/>
  </reference>
But the reference in the BPEL component referred to the BPEL process on the server:
  <reference name="HREmployeeProcessSubscriber"
             ui:wsdlLocation="http://darlin-vce-db:8001/soa-infra/services/default/HRSubscriberA/HREmployeeEventServiceA?WSDL">
    <interface.wsdl interface="http://hhs.nl/services/domain/operations/hrm/v2/#wsdl.interface(EmployeeDomainEntityEventServicePortType)"/>
  </reference>

Since the wsdl defined in the ui:wsdlLocation attribute needs to be available when the component is compiled and loaded by the component engine, it is recommended to reference an abstract wsdl in the MDS. In this case I had replaced the ui:wsdlLocation of the service reference with the MDS location, but apparently I forgot the BPEL component. To replace that one, you should do it in the partnerlink definition in the BPEL process, because the composite.xml is then updated automatically. And because the abstract wsdl lacks the partnerlink types, as you might know, JDeveloper offers to create a wrapper wsdl for you. The component reference should end up pointing to the MDS as well, as sketched below.
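A sketch of what the corrected component reference would look like (my assumption: it reuses the same abstract wsdl path as the service reference shown earlier):

  <reference name="HREmployeeProcessSubscriber"
             ui:wsdlLocation="oramds:/apps/CDM/services/domain/operations/hrm/v2/EmployeeDomainEntityEventService.wsdl">
    <interface.wsdl interface="http://hhs.nl/services/domain/operations/hrm/v2/#wsdl.interface(EmployeeDomainEntityEventServicePortType)"/>
  </reference>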

Now, because of the synchronization between the BPEL process and the composite, you might need to hack both the composite and the BPEL process to get things consistent again (at least I had to). But then, having resolved that, the composite was deployable again... And the BPEL process wasn't so pseudo anymore.

Understanding Row Level Security on PostgreSQL

Yann Neuhaus - Fri, 2016-09-02 03:11

In this article we will talk about a nice PostgreSQL feature: Row Level Security. We are using EDB Postgres Advanced Server 9.5.
Suppose that I am a team manager and that employee bonuses are stored in a table Bonus. I want each employee to be able to see only the data related to him, and not the data of others. How can I implement this? I can simply use Row Level Security.
Let’s go on. Below is the structure of my table Bonus:

testdb=# \d Bonus
            Table "public.bonus"
 Column |         Type          | Modifiers
--------+-----------------------+-----------
 id     | numeric               | not null
 login   | character varying(20) |
 bonus  | numeric               |
Indexes:
    "bonus_pkey" PRIMARY KEY, btree (id)

Below data inside Bonus

testdb=# table bonus;
 id |        login         | bonus
----+----------------------+-------
  1 | james@example.com    |  2500
  2 | Moise@example.com    |  1500
  3 | Mikael@example.com   |  7500
  4 | jennifer@example.com |  3520
(4 rows)

Let’s create users with corresponding logins

testdb=# select current_user;
 current_user
--------------
 enterprisedb
(1 row)

testdb=# create user "james@example.com" password 'secret';
CREATE ROLE
testdb=# create user "Moise@example.com" password 'secret';
CREATE ROLE
testdb=# create user "jennifer@example.com" password 'secret';
CREATE ROLE
testdb=# create user "Mikael@example.com" password 'secret';
CREATE ROLE

And let’s grant them select on Table Bonus

testdb=# select current_user;
 current_user
--------------
 enterprisedb
(1 row)

testdb=# grant select on bonus to "james@example.com";
GRANT
testdb=# grant select on bonus to "Moise@example.com";
GRANT
testdb=# grant select on bonus to "Mikael@example.com";
GRANT
testdb=# grant select on bonus to "jennifer@example.com";
GRANT

We can verify that by default each user can see all data (which I don’t want). For example with user james@example.com:

testdb=> select current_user;
   current_user
-------------------
 james@example.com
(1 row)

testdb=> select * from bonus;
 id |        login         | bonus
----+----------------------+-------
  1 | james@example.com    |  2500
  2 | Moise@example.com    |  1500
  3 | Mikael@example.com   |  7500
  4 | jennifer@example.com |  3520
(4 rows)

And with user jennifer@example.com:

testdb=> select current_user;
     current_user
----------------------
 jennifer@example.com
(1 row)

testdb=> select * from bonus;
 id |        login         | bonus
----+----------------------+-------
  1 | james@example.com    |  2500
  2 | Moise@example.com    |  1500
  3 | Mikael@example.com   |  7500
  4 | jennifer@example.com |  3520
(4 rows)

To allow each user to see only his own data, I first have to create a policy on the table Bonus with an expression that will filter the data.

testdb=# select current_user;
 current_user
--------------
 enterprisedb
(1 row)

testdb=# create policy bonus_policy on bonus for all to public using (login=current_user);
CREATE POLICY
testdb=#

After creating the policy, let’s enable RLS on the table Bonus:

testdb=# select current_user;
 current_user
--------------
 enterprisedb
(1 row)

testdb=# alter table bonus enable row level security;
ALTER TABLE
testdb=#

And now, bingo, we can verify that each user can see only his corresponding data:

testdb=> select current_user;
     current_user
----------------------
 jennifer@example.com
(1 row)

testdb=> select * from bonus;
 id |        login         | bonus
----+----------------------+-------
  4 | jennifer@example.com |  3520
(1 row)
testdb=> select current_user;
   current_user
-------------------
 james@example.com
(1 row)

testdb=> select * from bonus;
 id |       login       | bonus
----+-------------------+-------
  1 | james@example.com |  2500
(1 row)

testdb=>

Now let’s drop the policy but let’s still keep table bonus with the RLS enabled. What happens?

testdb=# select current_user;
 current_user
--------------
 enterprisedb
(1 row)

testdb=# drop policy bonus_policy on bonus;
DROP POLICY
testdb=#

Let’s query table bonus with user james@example.com, for example:

testdb=> select current_user;
   current_user
-------------------
 james@example.com
(1 row)

testdb=> select * from bonus;
 id | login | bonus
----+-------+-------
(0 rows)

testdb=>

But if we query the table with user enterprisedb, which is the table owner (and also a superuser):

testdb=# select current_user;
 current_user
--------------
 enterprisedb
(1 row)

testdb=# select * from bonus;
 id |        login         | bonus
----+----------------------+-------
  1 | james@example.com    |  2500
  2 | Moise@example.com    |  1500
  3 | Mikael@example.com   |  7500
  4 | jennifer@example.com |  3520
(4 rows)

So we see that if RLS is enabled on a table and there is no defined policy, a default-deny policy is applied: only owners, superusers, and users with the BYPASSRLS attribute will be able to see the data in the table.
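Two related options worth knowing about, shown here as a sketch (the reporting user below is hypothetical):

-- hypothetical account that bypasses all row level security policies
create user reporting password 'secret' bypassrls;

-- make even the table owner subject to the policies defined on the table
alter table bonus force row level security;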

 

Cet article Understanding Row Level Security on PostgreSQL est apparu en premier sur Blog dbi services.

Links for 2016-09-01 [del.icio.us]

Categories: DBA Blogs

Oracle 12c: Indexing JSON in the Database Part III (Paperback Writer)

Richard Foote - Fri, 2016-09-02 00:13
In Part I and Part II, we looked at how to index specific attributes within a JSON document stored within an Oracle 12c database. But what if we’re not sure which specific attributes might benefit from an index or indeed, as JSON is by its nature a schema-less way to store data, what if we’re not entirely sure […]
Categories: DBA Blogs


Subscribe to Oracle FAQ aggregator