Feed aggregator

Oracle Service Secrets: quiesce tactically

Pythian Group - Fri, 2016-09-02 10:18

In the last post of this series about Oracle net services, I talked about how services can help you identify performance issues more quickly and easily by tagging connections with service names. Today I am introducing the idea of temporarily disabling connections during maintenance with the help of services.

During deployments, testing or reorganizations it might be necessary to prevent clients from connecting to the database while still allowing access for DBAs to do their work. Some methods to do this include temporarily locking application user accounts or putting the database in quiesce mode. But with services, you now also have a more tactical approach to this issue.

My example assumes a single instance with two services DEMO_BATCH and DEMO_OLTP. And let’s assume that we need to temporarily disable batch services, maybe just to reduce system load due to those activities or maybe because we are reorganizing the objects used by the batch processes.

To disable a service in a single instance we can either remove it from the SERVICE_NAMES instance parameter or use the DBMS_SERVICE package.

SELECT NAME FROM V$ACTIVE_SERVICES;

NAME
----------------------------------------------------------------
DEMO_BATCH
DEMO_OLTP
ORCLXDB
ORCL.PYTHIAN.COM
SYS$BACKGROUND
SYS$USERS

exec DBMS_SERVICE.STOP_SERVICE('DEMO_BATCH');

PL/SQL procedure successfully completed.

New sessions using the service name will receive an ORA-12514 error when trying to connect:

brbook:~ brost$ ./sqlcl/bin/sql brost/******@192.168.78.101:1521/DEMO_BATCH.PYTHIAN.COM

SQLcl: Release 4.2.0.16.175.1027 RC on Thu Aug 18 13:12:27 2016

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

  USER          = brost
  URL           = jdbc:oracle:thin:@192.168.78.101:1521/DEMO_BATCH.PYTHIAN.COM
  Error Message = Listener refused the connection with the following error:
ORA-12514, TNS:listener does not currently know of service requested in connect descriptor
Existing sessions are allowed to continue

Note that stopping will only affect new connections. Existing sessions that used the DEMO_BATCH service are allowed to continue until they disconnect or you kill them. This gives you the flexibility of a grace period where you just wait for existing sessions to finish their work and disconnect by themselves.

SELECT NAME FROM V$ACTIVE_SERVICES WHERE NAME = 'DEMO_BATCH';
no rows selected

SELECT SERVICE_NAME, USERNAME FROM V$SESSION WHERE SERVICE_NAME='DEMO_BATCH';

SERVICE_NAME         USERNAME
-------------------- ------------------------------
DEMO_BATCH           BROST
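
If you cannot wait for all remaining sessions to disconnect by themselves, they can be removed manually. A minimal sketch (the SID/SERIAL# values are placeholders to be taken from the query; the DBMS_SERVICE.DISCONNECT_SESSION variant assumes 11g or later):

```sql
-- Find sessions still connected through the stopped service; SID and
-- SERIAL# identify each session.
SELECT SID, SERIAL#, USERNAME
  FROM V$SESSION
 WHERE SERVICE_NAME = 'DEMO_BATCH';

-- Terminate a lingering session, substituting the values found above.
ALTER SYSTEM KILL SESSION 'sid,serial#' IMMEDIATE;

-- Alternatively, DBMS_SERVICE can disconnect all sessions of the
-- service in one call (POST_TRANSACTION waits for ongoing work).
exec DBMS_SERVICE.DISCONNECT_SESSION('DEMO_BATCH', DBMS_SERVICE.POST_TRANSACTION);
```
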
Grid Infrastructure has an option to force disconnects

If you are using Grid Infrastructure and manage services through srvctl, the behaviour is basically the same, but you get an extra “force” switch to also disconnect existing sessions while stopping a service.

[oracle@ractrial1 ~]$ srvctl stop service -db orcl42 -service racdemo_batch [-force]

[oracle@ractrial1 ~]$ srvctl stop service -h

Stops the service.

Usage: srvctl stop service -db <db_unique_name> [-service  "<service_name_list>"] [-serverpool <pool_name>] [-node <node_name> | -instance <inst_name>] [-pq] [-global_override] [-force [-noreplay]] [-eval] [-verbose]
    -db <db_unique_name>           Unique name for the database
    -service "<serv,...>"          Comma separated service names
    -serverpool <pool_name>        Server pool name
    -node <node_name>              Node name
    -instance <inst_name>          Instance name
    -pq                            To perform the action on parallel query service
    -global_override               Override value to operate on a global service. Ignored for a non-global service
    -force                         Disconnect all sessions during stop or relocate service operations
    -noreplay                      Disable session replay during disconnection
    -eval                          Evaluates the effects of event without making any changes to the system
    -verbose                       Verbose output
    -help                          Print usage
Conclusion

Creating extra services on a database allows you to stop and start them for maintenance. This is a convenient way to lock out only certain parts of an application while leaving user accounts unlocked, so they can still connect via other services.

Categories: DBA Blogs

Initiate a local GIT repository in command line

Yann Neuhaus - Fri, 2016-09-02 10:08

GIT

The objective of this document is to describe how to manually start a development project from an existing GIT repository, using the command line.

Using GIT for software development empowers project team management. It eases source code management in terms of versioning, branching and sharing between all team members.

    GIT platform Architecture

GIT is a distributed version control system, which means developers can share source code from their workstation with others without the need for any centralized repository. However, at dbi-services we made the choice to deploy a centralized repository platform: first, to avoid manual synchronization between all developers, and second, to benefit from a shared common project collaboration platform, like GitHub or GitLab.

Before being allowed to push to the centralized repository, a developer must first make sure he has the latest source code revision in his local workstation’s repository (pull). He can then commit his changes locally, resolve any merge conflicts, and finally push to the centralized platform.
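
The cycle above can be sketched as a script. This is a minimal sketch reduced to the purely local steps so it runs anywhere; the remote URL and file names are examples only:

```shell
# Minimal sketch of the local side of the pull/commit/push cycle.
set -e
repo=$(mktemp -d)
cd "$repo"
git init .                              # create a local repository
git config user.name  "user"           # per-repository identity (example values)
git config user.email "user@example.com"
echo "project notes" > README.txt
git add README.txt                      # stage the change
git commit -m "adding README.txt"       # commit it locally
# With a remote configured, the cycle would continue with:
#   git remote add origin http://<your git server>/<your repo>.git
#   git pull origin master    # get the latest revision first
#   git push origin master    # then publish your commits
```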

GIT Architecture

 

    Manual / Command line management

This section will demonstrate how to initiate a developer’s local source code management with a remote GIT repository (for instance on a collaboration platform like GitLab), using the command line.

These commands run out of the box in a Linux operating system.
Under Windows, you must install “git-bash” application.

There are 2 cases for a project initialization:

–    Starting a project from your source code
–    Getting source code from a shared repository

First of all, a GIT repository has to be created on the GIT collaboration platform. Ask the platform’s administrators to create the project.

Before starting, it is recommended to update your GIT personal information:

git config --global user.name user
git config --global user.email user@xxx.com

 

Check status of your GIT configuration:

git config --list

       

        Project initialization from local source code

First you must go to your project folder. It is recommended to have the “src” folder underneath.

GIT repository initialization:

git init

 

Create a “master” branch on your local and on remote GIT repository

For local branch creation, you will need to add and commit something (like a README.txt file):

git add README.txt
git commit -m "adding README.txt"
git branch
* master

 

For remote branch creation, you must first create the local branch, add the remote repository “origin”, then push to the shared repository:

git remote add origin http://<your git server>/<your repo>.git
git push origin master

“origin” is the name that points to the remote repository.

 

        Project initialization getting source code from shared repository

Get source code from the repository:

git clone http://<your git server>/<your repo>.git <your destination folder>

 

Congratulations, you are now ready to use GIT with your new project!

 

The article Initiate a local GIT repository in command line first appeared on Blog dbi services.

Auditing in PostgreSQL

Yann Neuhaus - Fri, 2016-09-02 09:51

Today, especially in the Pharma and Banking sectors, sooner or later you will be faced with the requirement of auditing. Detailed requirements vary, but usually at least tracking logons to the database is a must. Some companies need more information to pass their internal audits, such as: who created which objects, who ran which SQL against the database, who was given which permissions, and so on. In this post we’ll look at what PostgreSQL can offer here.

PostgreSQL comes with a comprehensive logging system by default. In my 9.5.4 instance there are 28 parameters related to logging:

(postgres@[local]:5438) [postgres] > select count(*) from pg_settings where name like 'log%';
 count 
-------
    28
(1 row)

Not all of them are relevant when it comes to auditing but some can be used for a minimal auditing setup. For logons and logoffs there are “log_connections” and “log_disconnections”:

(postgres@[local]:5438) [postgres] > alter system set log_connections=on;
ALTER SYSTEM
(postgres@[local]:5438) [postgres] > alter system set log_disconnections=on;
ALTER SYSTEM
(postgres@[local]:5438) [postgres] > select context from pg_settings where name in ('log_disconnections','log_connections');
      context      
-------------------
 superuser-backend
 superuser-backend
(2 rows)
(postgres@[local]:5438) [postgres] > select pg_reload_conf();
 pg_reload_conf 
----------------
 t
(1 row)

From now on, as soon as someone connects to or disconnects from the instance it is reported in the logfile:

2016-09-02 10:35:56.983 CEST - 2 - 13021 - [local] - postgres@postgres LOG:  connection authorized: user=postgres database=postgres
2016-09-02 10:36:04.820 CEST - 3 - 13021 - [local] - postgres@postgres LOG:  disconnection: session time: 0:00:07.837 user=postgres database=postgres host=[local]

Another parameter that might be useful for auditing is “log_statement”. When you set this to “ddl” all DDLs are logged, when you set it to “mod” all DDLs plus all statements that modify data will be logged. To log all statements there is the value of “all”.

(postgres@[local]:5438) [postgres] > alter system set log_statement='all';
ALTER SYSTEM

For new sessions, all statements will be logged from now on:

2016-09-02 10:45:15.859 CEST - 3 - 13086 - [local] - postgres@postgres LOG:  statement: create table t ( a int );
2016-09-02 10:46:44.064 CEST - 4 - 13098 - [local] - postgres@postgres LOG:  statement: insert into t values (1);
2016-09-02 10:47:00.162 CEST - 5 - 13098 - [local] - postgres@postgres LOG:  statement: update t set a = 2;
2016-09-02 10:47:10.606 CEST - 6 - 13098 - [local] - postgres@postgres LOG:  statement: delete from t;
2016-09-02 10:47:22.012 CEST - 7 - 13098 - [local] - postgres@postgres LOG:  statement: truncate table t;
2016-09-02 10:47:25.284 CEST - 8 - 13098 - [local] - postgres@postgres LOG:  statement: drop table t;

Be aware that your logfile can grow significantly if you turn this on and especially if you set the value to “all”.
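
If you do turn it on, it may be worth pairing it with the logfile rotation settings so the log directory does not fill up. A sketch with example values (assuming the logging collector is enabled):

```sql
-- Example values only: rotate the logfile daily or at 100MB,
-- whichever comes first.
alter system set log_rotation_age = '1d';
alter system set log_rotation_size = '100MB';
select pg_reload_conf();
```
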

That’s it, more or less, when it comes to auditing: You can audit logons, logoffs and SQL statements. This might be sufficient for your requirements, but it also might not be. What do you do if you need to audit at the object level, for example? With the default logging parameters you cannot do this. But, as always in PostgreSQL, there is an extension: pgaudit.

If you want to install this extension you’ll need the PostgreSQL source code. To show the complete procedure, here is a PostgreSQL setup from source. Obviously the first step is to download and extract the source code:

postgres@pgbox:/u01/app/postgres/software/ [PG953] cd /u01/app/postgres/software/
postgres@pgbox:/u01/app/postgres/software/ [PG953] wget https://ftp.postgresql.org/pub/source/v9.5.4/postgresql-9.5.4.tar.bz2
--2016-09-02 09:39:29--  https://ftp.postgresql.org/pub/source/v9.5.4/postgresql-9.5.4.tar.bz2
Resolving ftp.postgresql.org (ftp.postgresql.org)... 213.189.17.228, 217.196.149.55, 87.238.57.227, ...
Connecting to ftp.postgresql.org (ftp.postgresql.org)|213.189.17.228|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 18496299 (18M) [application/x-bzip-compressed-tar]
Saving to: ‘postgresql-9.5.4.tar.bz2’

100%[==================================================================================>] 18'496'299  13.1MB/s   in 1.3s   

2016-09-02 09:39:30 (13.1 MB/s) - ‘postgresql-9.5.4.tar.bz2’ saved [18496299/18496299]

postgres@pgbox:/u01/app/postgres/software/ [PG953] tar -axf postgresql-9.5.4.tar.bz2 
postgres@pgbox:/u01/app/postgres/software/ [PG953] cd postgresql-9.5.4

Then do the usual configure, make and make install:

postgres@pgbox:/u01/app/postgres/software/ [PG953] PGHOME=/u01/app/postgres/product/95/db_4
postgres@pgbox:/u01/app/postgres/software/ [PG953] SEGSIZE=2
postgres@pgbox:/u01/app/postgres/software/ [PG953] BLOCKSIZE=8
postgres@pgbox:/u01/app/postgres/software/ [PG953] ./configure --prefix=${PGHOME} \
            --exec-prefix=${PGHOME} \
            --bindir=${PGHOME}/bin \
            --libdir=${PGHOME}/lib \
            --sysconfdir=${PGHOME}/etc \
            --includedir=${PGHOME}/include \
            --datarootdir=${PGHOME}/share \
            --datadir=${PGHOME}/share \
            --with-pgport=5432 \
            --with-perl \
            --with-python \
            --with-tcl \
            --with-openssl \
            --with-pam \
            --with-ldap \
            --with-libxml \
            --with-libxslt \
            --with-segsize=${SEGSIZE} \
            --with-blocksize=${BLOCKSIZE} \
            --with-wal-segsize=16  \
            --with-extra-version=" dbi services build"
postgres@pgbox:/u01/app/postgres/software/postgresql-9.5.4/ [PG953] make world
postgres@pgbox:/u01/app/postgres/software/postgresql-9.5.4/ [PG953] make install
postgres@pgbox:/u01/app/postgres/software/postgresql-9.5.4/ [PG953] cd contrib
postgres@pgbox:/u01/app/postgres/software/postgresql-9.5.4/ [PG953] make install

Once this is done you can continue with the installation of the pgaudit extension:

postgres@pgbox:/u01/app/postgres/software/postgresql-9.5.4/ [PG953] git clone https://github.com/pgaudit/pgaudit.git
postgres@pgbox:/u01/app/postgres/software/postgresql-9.5.4/contrib/ [PG953] cd pgaudit/
postgres@pgbox:/u01/app/postgres/software/postgresql-9.5.4/contrib/pgaudit/ [PG953] make -s check
============== creating temporary instance            ==============
============== initializing database system           ==============
============== starting postmaster                    ==============
running on port 57736 with PID 8635
============== creating database "contrib_regression" ==============
CREATE DATABASE
ALTER DATABASE
============== running regression test queries        ==============
test pgaudit                  ... ok
============== shutting down postmaster               ==============
============== removing temporary instance            ==============

=====================
 All 1 tests passed. 
=====================

postgres@pgbox:/u01/app/postgres/software/postgresql-9.5.4/contrib/pgaudit/ [PG953] make install
/usr/bin/mkdir -p '/u01/app/postgres/product/95/db_4/lib'
/usr/bin/mkdir -p '/u01/app/postgres/product/95/db_4/share/extension'
/usr/bin/mkdir -p '/u01/app/postgres/product/95/db_4/share/extension'
/usr/bin/install -c -m 755  pgaudit.so '/u01/app/postgres/product/95/db_4/lib/pgaudit.so'
/usr/bin/install -c -m 644 ./pgaudit.control '/u01/app/postgres/product/95/db_4/share/extension/'
/usr/bin/install -c -m 644 ./pgaudit--1.0.sql  '/u01/app/postgres/product/95/db_4/share/extension/'

That’s it. Initialize a new cluster:

postgres@pgbox:/u01/app/postgres/software/ [PG954] initdb -D /u02/pgdata/PG954 -X /u03/pgdata/PG954
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locales
  COLLATE:  en_US.UTF-8
  CTYPE:    en_US.UTF-8
  MESSAGES: en_US.UTF-8
  MONETARY: de_CH.UTF-8
  NUMERIC:  de_CH.UTF-8
  TIME:     en_US.UTF-8
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

creating directory /u02/pgdata/PG954 ... ok
creating directory /u03/pgdata/PG954 ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
creating template1 database in /u02/pgdata/PG954/base/1 ... ok
initializing pg_authid ... ok
initializing dependencies ... ok
creating system views ... ok
loading system objects' descriptions ... ok
creating collations ... ok
creating conversions ... ok
creating dictionaries ... ok
setting privileges on built-in objects ... ok
creating information schema ... ok
loading PL/pgSQL server-side language ... ok
vacuuming database template1 ... ok
copying template1 to template0 ... ok
copying template1 to postgres ... ok
syncing data to disk ... ok

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

    pg_ctl -D /u02/pgdata/PG954 -l logfile start

… and install the extension:

postgres@pgbox:/u02/pgdata/PG954/ [PG954] psql postgres
psql (9.5.4 dbi services build)
Type "help" for help.

(postgres@[local]:5438) [postgres] > create extension pgaudit;
ERROR:  pgaudit must be loaded via shared_preload_libraries
Time: 2.226 ms

(postgres@[local]:5438) [postgres] > alter system set shared_preload_libraries='pgaudit';
ALTER SYSTEM
Time: 18.236 ms

##### Restart the PostgreSQL instance

(postgres@[local]:5438) [postgres] > show shared_preload_libraries ;
 shared_preload_libraries 
--------------------------
 pgaudit
(1 row)

Time: 0.278 ms
(postgres@[local]:5438) [postgres] > create extension pgaudit;
CREATE EXTENSION
Time: 4.688 ms

(postgres@[local]:5438) [postgres] > \dx
                   List of installed extensions
  Name   | Version |   Schema   |           Description           
---------+---------+------------+---------------------------------
 pgaudit | 1.0     | public     | provides auditing functionality
 plpgsql | 1.0     | pg_catalog | PL/pgSQL procedural language
(2 rows)

Ready. So, what can you do with it? As the documentation is quite good, here are just a few examples.

To log all statements related to roles and privileges:

(postgres@[local]:5438) [postgres] > alter system set pgaudit.log = 'ROLE';

Creating, altering, or dropping roles is from now on reported in the logfile as:

2016-09-02 14:50:45.432 CEST - 9 - 13353 - [local] - postgres@postgres LOG:  AUDIT: SESSION,2,1,ROLE,CREATE ROLE,,,create user uu login password ,
2016-09-02 14:52:03.745 CEST - 16 - 13353 - [local] - postgres@postgres LOG:  AUDIT: SESSION,3,1,ROLE,ALTER ROLE,,,alter user uu CREATEDB;,
2016-09-02 14:52:20.881 CEST - 18 - 13353 - [local] - postgres@postgres LOG:  AUDIT: SESSION,4,1,ROLE,DROP ROLE,,,drop user uu;,
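
ROLE is only one of the session log classes pgaudit understands. According to the pgaudit documentation, classes such as READ, WRITE, FUNCTION, DDL and MISC can be combined; a sketch (adjust the list to your requirements):

```sql
-- Session-level auditing of all reads, writes and DDL
-- (log classes as documented for pgaudit 1.0).
alter system set pgaudit.log = 'read, write, ddl';
select pg_reload_conf();
```
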

Object level auditing can be implemented like this (check the documentation for the meaning of the pgaudit.role parameter):

(postgres@[local]:5438) [postgres] > create user audit;
CREATE ROLE
(postgres@[local]:5438) [postgres] > create table taudit ( a int );
CREATE TABLE
(postgres@[local]:5438) [postgres] > insert into taudit values ( 1 );
INSERT 0 1
(postgres@[local]:5438) [postgres] > grant select,delete on taudit to audit;
GRANT
(postgres@[local]:5438) [postgres] > alter system set pgaudit.role='audit';
ALTER SYSTEM
(postgres@[local]:5438) [postgres] > select pg_reload_conf();
 pg_reload_conf 
----------------
 t
(1 row)

Once we touch the table:

(postgres@[local]:5438) [postgres] > select * from taudit;
 a 
---
 1
(1 row)
(postgres@[local]:5438) [postgres] > update taudit set a = 4;

… the audit information appears in the logfile:

2016-09-02 14:57:10.198 CEST - 5 - 13708 - [local] - postgres@postgres LOG:  AUDIT: OBJECT,1,1,READ,SELECT,TABLE,public.taudit,select * from taudit;,
2016-09-02 15:00:59.537 CEST - 9 - 13708 - [local] - postgres@postgres LOG:  AUDIT: OBJECT,2,1,WRITE,UPDATE,TABLE,public.taudit,update taudit set a = 4;,

Have fun with auditing …

 

The article Auditing in PostgreSQL first appeared on Blog dbi services.

Oracle OpenWorld 2016 My Cloud ERP Sessions

David Haimes - Fri, 2016-09-02 09:33

    OpenWorld is this month, so time to start planning your agenda.  I’ll be presenting at a few different sessions this year with a focus on Financials Cloud (Part of the ERP Cloud) and E-Business Suite.

    I’m busy preparing for the first two sessions, hope to see you there.  Check out the content catalog for more details and to add them to your agenda.

    How Oracle E-Business Suite Customers Have Achieved Modern Reporting in the Cloud [CON7313]
    David Haimes, Senior Director, Financial Applications Development, Oracle
    Sara Mannur, Financial Systems Analyst, Niagara Bottling
    This session discusses how to leverage powerful modern reporting tools with no disruption to your existing Oracle E-Business Suite implementation. You will learn the steps required to start using the cloud service. Customers who have implemented Oracle Fusion Accounting Hub Reporting Cloud Service share their implementation experiences and the business benefits they have realized.
    Conference Session
    Tuesday, Sep 20, 12:15 p.m. – 1:00 p.m. | Moscone West—3016
    Oracle ERP Cloud UX Extensibility: From Mystery to Magic [CON7312]
    David Haimes, Senior Director, Financial Applications Development, Oracle
    Tim Dubois, Senior Director, Applications User Experience, Oracle
    The user experience (UX) design strategy of Oracle Enterprise Resource Planning Cloud (Oracle ERP Cloud)—including financials and product portfolio management—is about simplicity, mobility, and extensibility. Extensibility and admin personalization of Oracle ERP Cloud experiences include a range of tools: from a simplified approach for rebranding the applications to match your company culture and image, to page-level admin personalization and extension, to building your own simplified cloud application in platform as a service using the UX Rapid Development Kit (RDK). In this session, learn about RDK code samples, wireframing tools, and design patterns. Get a view into the RDK roadmap and where Oracle is going next.
    Conference Session
    Thursday, Sep 22, 1:15 p.m. – 2:00 p.m. | Moscone West—3001
    Meet the Experts: Oracle Financials Cloud [MTE7784]
    Do not miss this opportunity to meet with Oracle Financials Cloud experts—the people who design and build the applications. In this session, you can have discussions regarding the Oracle Applications Cloud strategy and your specific business and IT strategy. The experts are available to discuss the value of the latest releases and share insights into the best path for your enterprise, so come ready with your questions.
    Meet the Experts Session
    Wednesday, Sep 21, 3:00 p.m. – 3:45 p.m. | Moscone West—3001A

Categories: APPS Blogs

Brand Transformation Starts with Oracle Marketing Cloud at OpenWorld 2016!

Linda Fishman Hoyle - Fri, 2016-09-02 09:13

A Guest Post by Jennifer Dennis, Director of Marketing, Oracle (pictured left)

Learn how leading companies deliver the best of their brand with Oracle Marketing Cloud at Oracle OpenWorld 2016 in San Francisco, September 18–22, 2016.

Hear Laura Ipsen, General Manager and Senior Vice President, Oracle Marketing Cloud (pictured right), at our general session on Brand Experience: Modern Marketing Transformation. You’ll get insights into how brands are making this data-driven digital transformation and achieving dramatic results. Laura will be joined by Ninish Ukkan, Senior Vice President, Head of Technology for eHarmony.

Then see how Modern Marketing works in sessions featuring MongoDB, DocuSign, Nestle USA, Team One, LinkedIn, CSC, Clorox, technology partners, and Modern Marketing experts. Attendees will experience how Oracle Marketing Cloud is the solution that marketers love and IT trusts.

Whether you’re focused on marketing automation or cross-channel marketing, you’ll get insights into solutions for data-driven, mobile, and account-based marketing to help you achieve a personalized customer experience.

Modern Marketing experts will offer demonstrations and an in-depth look at some of the exciting new solutions and features released for Oracle Marketing Cloud.

You won’t want to miss these opportunities to transform yourself into a Modern Marketer. If you have questions, contact me at jennifer.dennis@oracle.com.


.... this is only a psuedo object?

Darwin IT - Fri, 2016-09-02 06:00
Yesterday I was working on a BPEL project that I created before the summer holidays. I wanted to implement it further. But on first redeployment I ran into:
[12:18:01 PM] ----  Deployment started.  ----
[12:18:01 PM] Target platform is (Weblogic 12.x).
[12:18:01 PM] Running dependency analysis...
[12:18:01 PM] Building...
[12:18:08 PM] Deploying profile...
[12:18:09 PM] Wrote Archive Module to D:\Projects\2016DWN\SOASuite\HRApplication\DWN_CdmHR\trunk\SOA\DWN_CdmHR\CDMHRDomainService\deploy\sca_CDMHRDomainService.jar
[12:18:18 PM] Deploying sca_CDMHRDomainService.jar to partition "default" on server SoaServer1 [http://darlin-vce-db.darwin-it.local:8001]
[12:18:18 PM] Processing sar=/D:/Projects/2016DWN/SOASuite/HRApplication/DWN_CdmHR/trunk/SOA/DWN_CdmHR/CDMHRDomainService/deploy/sca_CDMHRDomainService.jar
[12:18:18 PM] Adding sar file - D:\Projects\2016DWN\SOASuite\HRApplication\DWN_CdmHR\trunk\SOA\DWN_CdmHR\CDMHRDomainService\deploy\sca_CDMHRDomainService.jar
[12:18:18 PM] Preparing to send HTTP request for deployment
[12:18:18 PM] Creating HTTP connection to host:darlin-vce-db.darwin-it.local, port:8001
[12:18:18 PM] Sending internal deployment descriptor
[12:18:19 PM] Sending archive - sca_CDMHRDomainService.jar
[12:18:19 PM] Received HTTP response from the server, response code=500
[12:18:19 PM] Error deploying archive sca_CDMHRDomainService.jar to partition "default" on server SoaServer1 [http://darlin-vce-db.darwin-it.local:8001]
[12:18:19 PM] HTTP error code returned [500]
[12:18:19 PM] Error message from server:
There was an error deploying the composite on SoaServer1: Deployment Failed: Error occurred during deployment of component: HREmployeeProcess to service engine: implementation.bpel, for composite: CDMHRDomainService: ORABPEL-05215

Error while loading process.
The process domain is encountering the following errors while loading the process "HREmployeeProcess" (composite "default/CDMHRDomainService!1.0*soa_6e4206b5-3297-4f53-9944-734349aed8ab"): this is only a psuedo object.
This error contained an exception thrown by the underlying process loader module.
Check the exception trace in the log (with logging level set to debug mode). If there is a patch installed on the server, verify that the bpelcClasspath domain property includes the patch classes.
.

[12:18:19 PM] Check server log for more details.
[12:18:19 PM] Error deploying archive sca_CDMHRDomainService.jar to partition "default" on server SoaServer1 [http://darlin-vce-db.darwin-it.local:8001]
[12:18:19 PM] Deployment cancelled.
[12:18:19 PM] ---- Deployment incomplete ----.
[12:18:19 PM] Error deploying archive file:/D:/Projects/2016DWN/SOASuite/HRApplication/DWN_CdmHR/trunk/SOA/DWN_CdmHR/CDMHRDomainService/deploy/sca_CDMHRDomainService.jar
(oracle.tip.tools.ide.fabric.deploy.common.SOARemoteDeployer)

So I was googling around and found a blog entry that suggested a mismatch between the project and the referenced wsdl’s/xsd’s in the MDS.

So I refreshed the MDS, restarted the whole SOA Server, but no luck.

On the verge of removing the whole lot of components and references, I decided to take one last, closer look at the composite.xml.

The BPEL process component HREmployeeProcess had a reference to the service HREmployeeProcessSubscriber. The latter was based on a wsdl in the MDS:
  <reference name="HREmployeeProcessSubscriber"
ui:wsdlLocation="oramds:/apps/CDM/services/domain/operations/hrm/v2/EmployeeDomainEntityEventService.wsdl">
<interface.wsdl interface="http://hhs.nl/services/domain/operations/hrm/v2/#wsdl.interface(EmployeeDomainEntityEventServicePortType)"/>
<binding.ws port="http://hhs.nl/services/domain/operations/hrm/v2/#wsdl.endpoint(hremployeeprocessa_client/EmployeeDomainEntityEventServicePort)"
location="http://darlin-vce-db:8001/soa-infra/services/default/HRSubscriberA/HREmployeeEventServiceA?WSDL"
soapVersion="1.1"/>
</reference>
But the reference in the BPEL component referred to the BPEL process on the server:
<reference name="HREmployeeProcessSubscriber"
ui:wsdlLocation="http://darlin-vce-db:8001/soa-infra/services/default/HRSubscriberA/HREmployeeEventServiceA?WSDL">
<interface.wsdl interface="http://hhs.nl/services/domain/operations/hrm/v2/#wsdl.interface(EmployeeDomainEntityEventServicePortType)"/>
</reference>

Since the wsdl defined in the ui:wsdlLocation attribute needs to be available when the component engine compiles and loads the component, it is recommended to reference an abstract wsdl in the MDS. In this case I had replaced the ui:wsdlLocation in the service reference with an MDS reference, but apparently I forgot the BPEL component. To fix that, you should change the wsdl in the partnerlink definition in the BPEL process, because the composite.xml is then updated automatically. And because the abstract wsdl lacks the partnerlink types, as you might know, JDeveloper offers to create a wrapper wsdl for you.

Now, because of the synchronization between the BPEL process and the composite, you might need to hack both the composite and the BPEL process to get things consistent again (at least I had to). But then, with that resolved, the composite was deployable again... And the BPEL process wasn’t so pseudo anymore.

Understanding Row Level Security on PostgreSQL

Yann Neuhaus - Fri, 2016-09-02 03:11

In this article we will talk about a nice feature Row Level Security on PostgreSQL. We are using EDB Postgres Advanced Server 9.5.
Suppose that I am a team manager and that employee bonuses are stored in a table Bonus. I want each employee to be able to see only the data related to him, and not the data of others. How can I implement this? I can simply use Row Level Security.
Let’s go on. Below is the structure of my table Bonus

testdb=# \d Bonus
            Table "public.bonus"
 Column |         Type          | Modifiers
--------+-----------------------+-----------
 id     | numeric               | not null
 login  | character varying(20) |
 bonus  | numeric               |
Indexes:
    "bonus_pkey" PRIMARY KEY, btree (id)

Below data inside Bonus

testdb=# table bonus;
 id |        login         | bonus
----+----------------------+-------
  1 | james@example.com    |  2500
  2 | Moise@example.com    |  1500
  3 | Mikael@example.com   |  7500
  4 | jennifer@example.com |  3520
(4 rows)

Let’s create users with corresponding logins

testdb=# select current_user;
 current_user
--------------
 enterprisedb
(1 row)

testdb=# create user "james@example.com" password 'secret';
CREATE ROLE
testdb=# create user "Moise@example.com" password 'secret';
CREATE ROLE
testdb=# create user "jennifer@example.com" password 'secret';
CREATE ROLE
testdb=# create user "Mikael@example.com" password 'secret';
CREATE ROLE

And let’s grant them select on table Bonus

testdb=# select current_user;
 current_user
--------------
 enterprisedb
(1 row)

testdb=# grant select on bonus to "james@example.com";
GRANT
testdb=# grant select on bonus to "Moise@example.com";
GRANT
testdb=# grant select on bonus to "Mikael@example.com";
GRANT
testdb=# grant select on bonus to "jennifer@example.com";
GRANT

We can verify that, by default, each user can see all the data (which I don’t want). For example with user james@example.com

testdb=> select current_user;
   current_user
-------------------
 james@example.com
(1 row)

testdb=> select * from bonus;
 id |        login         | bonus
----+----------------------+-------
  1 | james@example.com    |  2500
  2 | Moise@example.com    |  1500
  3 | Mikael@example.com   |  7500
  4 | jennifer@example.com |  3520
(4 rows)

And with user jennifer@example.com

testdb=> select current_user;
     current_user
----------------------
 jennifer@example.com
(1 row)

testdb=> select * from bonus;
 id |        login         | bonus
----+----------------------+-------
  1 | james@example.com    |  2500
  2 | Moise@example.com    |  1500
  3 | Mikael@example.com   |  7500
  4 | jennifer@example.com |  3520
(4 rows)

To allow each user to see only his own data, I first have to create a policy on the table Bonus with an expression that will filter the data.

testdb=# select current_user;
 current_user
--------------
 enterprisedb
(1 row)

testdb=# create policy bonus_policy on bonus for all to public using (login=current_user);
CREATE POLICY
testdb=#

After creating the policy, let’s enable RLS on table Bonus:

testdb=# select current_user;
 current_user
--------------
 enterprisedb
(1 row)

testdb=# alter table bonus enable row level security;
ALTER TABLE
testdb=#

And now, bingo, we can verify that each user can only see his corresponding data:

testdb=> select current_user;
     current_user
----------------------
 jennifer@example.com
(1 row)

testdb=> select * from bonus;
 id |        login         | bonus
----+----------------------+-------
  4 | jennifer@example.com |  3520
(1 row)
testdb=> select current_user;
   current_user
-------------------
 james@example.com
(1 row)

testdb=> select * from bonus;
 id |       login       | bonus
----+-------------------+-------
  1 | james@example.com |  2500
(1 row)

testdb=>

Now let’s drop the policy but keep RLS enabled on table bonus. What happens?

testdb=# select current_user;
 current_user
--------------
 enterprisedb
(1 row)

testdb=# drop policy bonus_policy on bonus;
DROP POLICY
testdb=#

Let’s query table bonus with user james@example.com, for example:

testdb=> select current_user;
   current_user
-------------------
 james@example.com
(1 row)

testdb=> select * from bonus;
 id | login | bonus
----+-------+-------
(0 rows)

testdb=>

But if we query the table with user enterprisedb, which is the table owner (and in this case also a superuser):

testdb=# select current_user;
 current_user
--------------
 enterprisedb
(1 row)

testdb=# select * from bonus;
 id |        login         | bonus
----+----------------------+-------
  1 | james@example.com    |  2500
  2 | Moise@example.com    |  1500
  3 | Mikael@example.com   |  7500
  4 | jennifer@example.com |  3520
(4 rows)

So we see that if RLS is enabled on a table and there is no defined policy, a default-deny policy is applied. Only table owners, superusers, and users with the BYPASSRLS attribute will be able to see data in the table.
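As a quick illustration (the user name here is hypothetical, and this assumes the table is still RLS-enabled with no policy defined), a user created with the BYPASSRLS attribute is not subject to the default-deny policy:

```sql
-- as enterprisedb (superuser)
create user "auditor@example.com" password 'secret' bypassrls;
grant select on bonus to "auditor@example.com";

-- connected as auditor@example.com: RLS is bypassed, all 4 rows are visible
select * from bonus;
```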


The article Understanding Row Level Security on PostgreSQL first appeared on the Blog dbi services.

Links for 2016-09-01 [del.icio.us]

Categories: DBA Blogs

Oracle 12c: Indexing JSON in the Database Part III (Paperback Writer)

Richard Foote - Fri, 2016-09-02 00:13
In Part I and Part II, we looked at how to index specific attributes within a JSON document stored within an Oracle 12c database. But what if we’re not sure which specific attributes might benefit from an index or indeed, as JSON is by its nature a schema-less way to store data, what if we’re not entirely sure […]
Categories: DBA Blogs

New graph: Average Active Sessions per minute

Bobby Durrett's DBA Blog - Thu, 2016-09-01 17:25

I am working on a production issue. I do not think that we have a database issue but I am graphing some performance metrics to make sure. I made a new graph in my PythonDBAGraphs program.

ash_active_session_count_today

It shows the average number of active sessions for a given minute. It prompts you for start and stop date and time. It works best with a relatively small interval or the graph gets too busy. Red is sessions active on CPU and blue is all active sessions. This graph is from a production database today. Activity peaked around midday.

It is kind of like the OEM performance screen but at least having it in Python lets me tinker with the graph to meet my needs. Check out the README on the GitHub link above if you want to run this in your environment.
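The exact query ships with PythonDBAGraphs; as a rough sketch of the idea only (assuming the one-second sampling interval of v$active_session_history, so the per-minute sample count divided by 60 approximates average active sessions), something like this could produce the two series:

```sql
-- average active sessions per minute, total and on CPU
-- (:start_time and :end_time are bind variables for the prompted interval)
SELECT TRUNC(sample_time, 'MI') AS minute,
       COUNT(*) / 60 AS avg_active_sessions,
       SUM(CASE WHEN session_state = 'ON CPU' THEN 1 ELSE 0 END) / 60 AS avg_on_cpu
  FROM v$active_session_history
 WHERE sample_time BETWEEN :start_time AND :end_time
 GROUP BY TRUNC(sample_time, 'MI')
 ORDER BY 1;
```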

Bobby

Categories: DBA Blogs

Another Update on PeopleSoft's Plans for Elasticsearch

PeopleSoft Technology Blog - Thu, 2016-09-01 15:43

In an earlier post, we indicated that Elasticsearch would be available in PeopleTools 8.55.10. However, it will not be available with that specific patch. We are working on completing release requirements and will announce the exact 8.55 patch number when we make Elasticsearch generally available for PeopleSoft. As mentioned previously, Elasticsearch will be the only search engine supported in PeopleTools 8.56.

For those using SES and Verity...

  • SES: We plan to support SES in PeopleTools 8.55 for 18 months after Elasticsearch is generally available.
  • Verity: We plan to support Verity for our 9.1 application releases through September 2018. Note that Verity will not be supported in 8.56, which means that if customers wish to use Verity, they will need to stay on 8.55. Our agreement with Verity ends in September 2017 but we will do our best to support customers through the support policy of the 9.1 applications.

For those attending Oracle Open World this September, we offer a session on Elasticsearch that you should find interesting and informative. In the presentation, we will describe all the benefits of Elasticsearch integration for PeopleSoft including its ease of deployment, transition, and operations. Here are the session logistics:

Session ID: CON7066
Session Title: Getting the Most Out of PeopleSoft: Transitioning to Elasticsearch 
Room: Moscone West—2004
Date and Time: 09/21/16, 12:15:00 PM - 01:00:00 PM


Steve Miranda’s CXOTALK: Will Data Make Apps Smarter?

Linda Fishman Hoyle - Thu, 2016-09-01 15:08

If you want to know what Steve Miranda is going to talk about at OpenWorld, his interview with ZDNet columnist Michael Krigsman provides a few clues. One topic sure to get top billing is data—as in Data Cloud’s ability to combine third-party data (cookies, mobile ID, credit card purchases, etc.) with first-party data.

You’ve probably heard about Data Cloud in the context of marketing. If you’ve ever clicked on an item in one web site and then seen ads for that item on another web site, you’ve experienced the power of data. Instead of targeting broad customer segments, marketers can now connect digital identities across marketing channels (Facebook and Twitter handles) and devices (desktops, tablets, mobile devices) and target anonymous—with emphasis on anonymous—individuals.

But here’s where Miranda's talk gets really interesting. He says that we’re only at the beginning of potential uses of Data Cloud. For example, you could create highly compelling, next-best offers by combining purchase data from Commerce Cloud with external purchase intent data. Or you could determine whether or not to take a supplier discount based on real-time currency rates or supplier ratings. To hear Steve describe his vision, click here.

Steve sums up the future of apps in this way: “Collect as much data as you can, use that data to make better product, and that’s where we think it’s going to go in SaaS.”

Guaranteed, this won’t be the last time you hear the words data and applications together.

Oracle TO_LOB Function with Examples

Complete IT Professional - Thu, 2016-09-01 14:05
The TO_LOB function is an Oracle conversion function. Learn what it is and see some examples in this article. Purpose of the Oracle TO_LOB Function The purpose of the TO_LOB function is to convert LONG or LONG RAW values to LOB values.   Syntax The syntax of the TO_LOB function is: TO_LOB ( long_value ) […]
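As a hedged sketch (the table and column names here are hypothetical), a typical use is migrating a legacy LONG column to a CLOB in an INSERT ... SELECT, which is where TO_LOB is allowed:

```sql
-- copy a legacy LONG column into a CLOB column of a new table
INSERT INTO documents_new (id, doc_text)
SELECT id, TO_LOB(doc_long)
  FROM documents_old;
```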
Categories: Development

Setting up Spark Dynamic Allocation on MapR

Tugdual Grall - Thu, 2016-09-01 13:30
Read this article on my new blog. Apache Spark can use various cluster managers to execute applications (Standalone, YARN, Apache Mesos). When you install Apache Spark on MapR you can submit applications in Standalone mode or using YARN. This article focuses on YARN and Dynamic Allocation, a feature that lets Spark add or remove executors dynamically based on the workload.
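The article's own steps are on the linked blog; as a sketch only, the standard Spark properties involved (values here are just examples) go into spark-defaults.conf, together with the external shuffle service that dynamic allocation requires:

```
spark.dynamicAllocation.enabled              true
spark.shuffle.service.enabled                true
spark.dynamicAllocation.minExecutors         1
spark.dynamicAllocation.maxExecutors         20
spark.dynamicAllocation.executorIdleTimeout  60s
```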

The third one on Creating weblogic user, now for SOA Suite

Darwin IT - Thu, 2016-09-01 08:43
A few months ago I figured out how to create specific users with restricted access to Service Bus components. I blogged about it in part 2 of creating WebLogic users for Service Bus 12c. But the series lacks an explanation of restricted user access on SOA Suite.

Today, in a question about Roles on the Oracle Community Forums, a reference was given to this elaborate blog entry: Restricted View, by Antony Reynolds.

I think that blog explains it well. Unfortunately the link to 7.2 Partition Roles that Antony mentioned did not work. What I found instead is 7.3 Securing Access to Partitions (12.1.3) and 7.3 Securing Access to SOA Folders (12.2.1). (Apparently from 12.2.1 onwards, partitions are called SOA folders...)




Inside the #OOW16 Session: Digital Experience in the Cloud: Strategy & Roadmap

WebCenter Team - Thu, 2016-09-01 07:02

Authored by Mariam Tariq, Senior Director, Product Management, Oracle

Digital Experience in the Cloud: Strategy and Roadmap [CON7260]
Igor Polyakov and Mariam Tariq, Oracle Product Management
Tuesday, Sep 20, 5:15PM | Moscone West—2014

In this session, you will hear Igor Polyakov and Mariam Tariq from Oracle Product Management cover the strategy and roadmap of Oracle’s Cloud Portfolio for Digital Experience.  This session will help IT and Business professionals understand how to leverage Oracle Cloud technology to build engaging, mobile-first web experiences. For different lines of business including Marketing, Sales, Business Development, and HR, the goal is to engage and interact with customers, partners and employees. For IT, the focus is to establish governance so that there is clear visibility into the business activities, as well as to enforce security and compliance.

Putting more control in the hands of business users is definitely a theme that’s been advocated by enterprise software vendors for years. For Oracle, for example, WebCenter has established fundamentals in areas such as content reuse, data models, integrations, and templating that have enabled organizations to solve complex use cases and allowed lines of business to take more control. But we’re at a crossroads now, where demands for engagement are skyrocketing. We live in a day and age where information moves quickly and expectations are very high to get access to what you want, whenever you want. Simply put, businesses need to move even faster.

According to Forbes, 55% of customers prefer automated self-service, which is double the figure seen in the past 5 years. This statistic is very telling. Customers are constantly driving at the need for faster turnarounds and access to information. Furthermore, the need for engagement exists across all lines of business, both internally and externally facing.  An engaging mobile web experience must bring together content and data, self-service, and communities.  For the average business user, building a true integrated engagement experience simply cannot be done without heavy reliance on IT or by working within tight constraints.

This is precisely where the cloud can come into the picture to help meet the enterprise needs to drive better internal and external digital engagement. With Content and Experience Management, Oracle is providing a cloud platform to enable organizations to meet demands, engage stakeholders and move business forward. In this session, we’ll review our key accomplishments in the last year and demonstrate the ease of building engagement experiences. This includes looking at both the business platform to build experiences and also the developer platform to handle integrations and build solution templates leveraged by the business.

We will also give insights into our roadmap including covering upcoming features in areas such as Digital Asset Management and Content-as-a-Service (CaaS). CaaS is a particularly interesting vision for Oracle in that it will provide a unified content platform across Oracle products. Imagine ALL of your digital channels: web, email, marketing, social and more, accessing a single source to manage, search and retrieve content. Content could be unstructured documents like images, videos and presentations or structured like knowledge articles and press releases. Content could also include user-generated things including comments or discussions. Which content was used in what channel, how often it was accessed in each channel, segmented by profile, all accessible through a single set of services accessible to all your channels! This is a vision we believe will revolutionize digital experience. Please join us at OpenWorld to learn more about the future of Digital Experience at Oracle.



Creating compelling digital engagement is as simple as picking a template and assembling rich content and components onto a page.

See you here at this OOW session:

Digital Experience in the Cloud: Strategy and Roadmap [CON7260]
Igor Polyakov and Mariam Tariq, Oracle Product Management
Tuesday, Sep 20, 5:15PM | Moscone West—2014

And don’t forget to follow all things #OracleDX and #OOW16 on twitter!

See you at Oracle Open World!

Javier Delgado - Thu, 2016-09-01 06:08
This year I will be attending Oracle Open World again, as I did in 2014. I'm really looking forward to learning what is new in PeopleSoft and to networking with lots of interesting people from around the globe.




However, this year will be special for me as we will be presenting one of the sessions. The presentation will be delivered by César García Galán, a fellow consultant at BNB, Daniel Plaza Pardo, PeopleSoft HCM key user at Consum Cooperativa, one of the leading retailers in Spain, and me.

The session will be about how Consum is investing in the Fluid user interface in order to improve the user experience. It will take place on Monday, 19th September at 4.15pm. If you are attending Oracle Open World and are interested in taking part in the session, please register for session CON2405. I hope you can make it!

Oracle LOCALTIMESTAMP Function with Examples

Complete IT Professional - Thu, 2016-09-01 06:00
In this article, I’ll explain what the Oracle LOCALTIMESTAMP function does and show some examples. Purpose of the Oracle LOCALTIMESTAMP Function The LOCALTIMESTAMP function will return the current date and time of the session time zone. This means it’s likely in the time zone that you’re located in (as opposed to the server time zone). […]
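For example (the session time zone value here is just an illustration):

```sql
ALTER SESSION SET TIME_ZONE = 'Europe/Amsterdam';

-- returns the current date and time in the session time zone,
-- as a TIMESTAMP value (no time zone information in the result)
SELECT LOCALTIMESTAMP FROM dual;
```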
Categories: Development

How do I get my query results paginated?

Bar Solutions - Thu, 2016-09-01 04:01

Dear Patrick,

I have got a website with a search form. I want to display a limited number of results to the user and have him/her navigate through different pages. Is this possible using plain SQL?

Kindest regards,

Mitchell Ian

Dear Mitchell,

Of course this is possible. It might take some thinking, but that has never hurt anyone (yet). First we need a table with some randomly sorted data in it. In this example I am just using 20 records, but you can use this approach on bigger tables of course.

[PATRICK]SQL>CREATE TABLE t AS
             SELECT LEVEL val#, to_char(LEVEL, '9999') value_in_text
               FROM dual
             CONNECT BY LEVEL < 21
              ORDER BY dbms_random.random
             /

Table created.

The order by dbms_random.random is to ensure the data is inserted in random order. If you just select from this new table, your data will be unordered.

Now we select the first ‘page’ from this table. Our page size is 5 records. So the query will be:

[PATRICK]SQL>SELECT *
              FROM t
             WHERE ROWNUM <= 5
            /
VAL#       VALUE
---------- -----
10         10
20         20
16         16
1          1
17         17

This results in the first 5 rows from the table. If we want to get the next 5, rownums 6 through 10, we might try something like this.

[PATRICK]SQL>SELECT *
               FROM t
              WHERE ROWNUM > 5 AND ROWNUM <= 10
             /

no rows selected

Unfortunately this doesn’t work. Because ROWNUM is assigned to a row only as it is returned, the first candidate row always gets ROWNUM 1, which fails the ROWNUM > 5 predicate, so this query can never return any rows. The solution to this issue is the use of a subquery:

[PATRICK]SQL>SELECT val#, value_in_text
               FROM (SELECT t.val#, t.value_in_text, ROWNUM rn
                       FROM t)
              WHERE rn > 5 AND rn <= 10
             /

VAL#       VALUE
---------- -----
13         13
4          4
5          5
3          3
14         14

In this query we first select all the rows we might need for the pages, and from this resultset we select just the rows we are interested in for our page.

If your table is rather big you may want to include the maximum rownum in the inline view.

[PATRICK]SQL>SELECT val#, value_in_text
               FROM (SELECT t.val#, t.value_in_text, ROWNUM rn
                       FROM t
                      WHERE ROWNUM <= 10)
              WHERE rn > 5 AND rn <= 10
             /

VAL#       VALUE
---------- -----
13         13
4          4
5          5
3          3
14         14

As you are probably aware, there is no guarantee on how the rows are returned unless you specify an order by clause. But what happens if you just include this order by in your query? Let’s see what happens when we include it in the first query, for the first page:

[PATRICK]SQL>SELECT *
               FROM t
              WHERE ROWNUM <= 5
             ORDER BY t.val#
             /

VAL#       VALUE
---------- -----
12         12
13         13
15         15
17         17
19         19

The rows returned are in order, but they are definitely not the first 5 values currently in the table. That is how the SQL engine works: it first gets the first 5 rows to honor the predicate in the query and then sorts the result before returning it to the caller.

What we should do to get the correct behavior of our query is use a subquery to get the results in order and apply the rownum clause to that result.

[PATRICK]SQL>SELECT *
               FROM (SELECT *
                       FROM t
                     ORDER BY t.val#)
              WHERE ROWNUM <= 5
             /

VAL#       VALUE
---------- -----
1          1
2          2
3          3
4          4
5          5

We can now use this to build a query to get the next page of results:

[PATRICK]SQL>SELECT val#, value_in_text
               FROM (SELECT val#, value_in_text, ROWNUM rn
                       FROM (SELECT *
                               FROM t
                             ORDER BY t.val#)
                     ORDER BY rn)
              WHERE rn > 5 AND rn <= 10
             /

VAL#       VALUE
---------- -----
6          6
7          7
8          8
9          9
10         10

When you have access to an Oracle 12c database, it is a lot easier. To get the first page of the ordered results, you can issue this statement:

[PATRICK]SQL>SELECT *
               FROM t
            ORDER BY t.val#
            FETCH FIRST 5 ROWS ONLY
            /

To get another page, you can provide the query with an offset of how many rows to skip:

[PATRICK]SQL>SELECT *
               FROM t
             ORDER BY t.val#
             OFFSET 5 ROWS FETCH NEXT 5 ROWS ONLY
             /

Under the covers Oracle still issues queries similar to the ones we built earlier, but these are a lot easier to write.
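As a final sketch, the two values generalize to any page: for page number N with page size P, skip (N - 1) * P rows. With bind variables (the names are just examples):

```sql
SELECT *
  FROM t
 ORDER BY t.val#
OFFSET (:page_no - 1) * :page_size ROWS
 FETCH NEXT :page_size ROWS ONLY;
```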

Hope this sheds a bit of light on your issue.

Happy Oracle’ing,

Patrick Barel

If you have a question you want answered, please send an email to patrick[at]bar-solutions[dot]com. If I know the answer, or can find it for you, maybe I can help.

This question has been published in OTech Magazine of Summer 2014

Send data to oracle procedure using ref cursor

Tom Kyte - Thu, 2016-09-01 01:26
Hi Tom, In our application (.Net & Oracle) we have an address table with 20+ fields, and I was wondering if it's a good idea to pass address to SP using REF CURSOR insted of multiple params (SP below will be called from other SPs as well as from C...
Categories: DBA Blogs
