Feed aggregator

Oracle COMPOSE Function with Examples

Complete IT Professional - Thu, 2016-09-08 14:40
In this article, I’ll explain what the Oracle COMPOSE function is and show you some examples. Purpose of the Oracle COMPOSE Function The COMPOSE function allows you to provide a string or a string expression, and returns a Unicode string of that input. It’s helpful for combining a letter and an accent character into a single accented letter. […]
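
For instance, a minimal sketch (using the standard UNISTR function to supply the combining acute accent, U+0301):

SELECT COMPOSE('e' || UNISTR('\0301')) AS accented FROM dual;
-- returns the single accented character 'é'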
Categories: Development

Come See Us at Open World

PeopleSoft Technology Blog - Thu, 2016-09-08 11:41

Oracle's Open World conference is just around the corner (September 18-22).  PeopleSoft always has a significant presence at Open World, and this year we have lots of great sessions and exciting new things to announce.  If you are interested in PeopleSoft technology and how PeopleTools and related technology can provide value to your PeopleSoft ecosystem, consider attending some of these sessions. 

Our general sessions and keynotes are popular every year.  This year, Paco Aubrejuan, PeopleSoft's Senior Vice President, will give you an overview of all things PeopleSoft, including how you can deploy PeopleSoft in the Cloud to achieve cost savings and business agility while preserving your investment in PeopleSoft.  Jeff Robbins will once again be presenting the PeopleSoft Technology Roadmap, where you can learn how we increase the value of PeopleSoft applications with important technology innovations.  With all the talk about the Cloud these days, customers may not know about their options with PeopleSoft in the cloud.  To address that, we have several sessions dedicated to PeopleSoft Cloud deployment.  Jeff Robbins will teach you how to use Oracle Cloud to efficiently manage Oracle’s PeopleSoft infrastructure, whether it is development, test, training, demo, or production environments.  Biju Narayanan and Mark Hoernemann will tell you about the PeopleSoft Cloud Architecture and Deployment Framework as well as PeopleSoft’s upcoming Cloud Manager.  This new application can help customers and partners achieve higher service levels and greater automation in managing their PeopleSoft environments deployed to the Oracle Cloud.  Ravi Shankar will inform attendees about Oracle Managed Cloud Services and how they can help you manage Oracle’s PeopleSoft applications deployed on Oracle Cloud.  Mark Hoernemann will present the Platform and Infrastructure Overview, which provides the information you need to plan your PeopleSoft environments. 

Learn all about PeopleSoft's Fluid User Interface in several OpenWorld sessions.  Dave Bain will cover how to extend your Fluid applications.  Dave will also present with a customer who has deployed Oracle Fluid User Interface. You will learn about the customer’s experiences and discover what to do to prepare, deploy, and deliver PeopleSoft Fluid–based applications.  Matthew Haavisto and Pramod Agrawal will show you some simple configuration techniques that can enrich the Fluid content delivered by our Applications teams.  You will also hear from several of our customers, who will relate their experiences adopting Fluid.  Matthew and Ramasimha Rangaraju will also talk about PeopleSoft's move to use Elastic Search.  With Search becoming such an important part of the user experience, learn what the transition to Elastic means for you.

Security is always an important topic.  Greg Kelly will describe how to secure your PeopleSoft environments when deployed in the Cloud. In Greg's session you'll find out how to achieve the same levels of security and governance as you do on premise. Greg will also show you how to take advantage of the latest PeopleTools security features to protect against both internal and external threats to your systems and data. 

PeopleSoft has made some important advancements in Reporting and Analytics.  Matthew Haavisto and Jody Schnell offer a roadmap on PeopleSoft's rich set of tools for reporting and analytics, which are focused on enabling the end user to be more productive than ever. 

With so many new things happening in PeopleSoft, how does one learn about them after Open World?  In a live PeopleSoft Talk session, Greg Parikh discusses the many information sources Oracle offers for PeopleSoft, from videos and planning tools to concept portal pages and blogs. You will understand what resources exist, where to find them, and what to use them for.

Customers and Partners are also presenting many important and interesting sessions.  They cover everything from Selective Adoption and PeopleSoft's new life-cycle and delivery model to successful adoption of the Fluid UI and rapid deployment in the Cloud. 

We're offering several panel sessions featuring customers, partners, and Oracle staff.  In these lively sessions we'll also be talking about Selective Adoption and the new life-cycle, Cloud deployment, and adopting PeopleSoft's Fluid UI. 

For detailed information on all these sessions and many more--including the dates, times, and locations--see our Focus on PeopleSoft page located here.

EDB Failover Manager 2.1, upgrading

Yann Neuhaus - Thu, 2016-09-08 05:53

Some days ago EnterpriseDB released a new version of its EDB Failover Manager, which brings one feature that really sounds great: “Controlled switchover and switchback for easier maintenance and disaster recovery tests”. This is exactly what you want when you are used to operating Oracle Data Guard: switching back and forth as you like without caring much about the old master. The old master shall just be converted to a standby which follows the new master automatically. This post is about upgrading EFM from version 2.0 to 2.1.
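
In 2.1 this controlled switchover is exposed through the promote command (a sketch; verify the exact flags against the 2.1 documentation for your version):

[root@edbbart ~]$ /usr/efm-2.1/bin/efm promote efm -switchover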

As I still have the environment available which was used for describing the maintenance scenarios with EDB Failover Manager (Maintenance scenarios with EDB Failover Manager (1) – Standby node, Maintenance scenarios with EDB Failover Manager (2) – Primary node and Maintenance scenarios with EDB Failover Manager (3) – Witness node) I will use the same environment to upgrade to the new release. Let’s start…

This is the current status of my failover cluster:

[root@edbbart ~]$ /usr/edb-efm/bin/efm cluster-status efm 
Cluster Status: efm
Automatic failover is disabled.

	Agent Type  Address              Agent  DB       Info
	--------------------------------------------------------------
	Master      192.168.22.245       UP     UP        
	Witness     192.168.22.244       UP     N/A       
	Standby     192.168.22.243       UP     UP        

Allowed node host list:
	192.168.22.244 192.168.22.245 192.168.22.243

Standby priority host list:
	192.168.22.243

Promote Status:

	DB Type     Address              XLog Loc         Info
	--------------------------------------------------------------
	Master      192.168.22.245       0/3B01C5E0       
	Standby     192.168.22.243       0/3B01C5E0       

	Standby database(s) in sync with master. It is safe to promote.
[root@edbbart ~]$ 

Obviously you have to download the new version to begin the upgrade. Once the rpm is available on all nodes, simply install it:

[root@edbppas tmp]$ yum localinstall efm21-2.1.0-1.rhel7.x86_64.rpm

EFM 2.1 comes with a utility command that helps in upgrading a cluster. You should invoke it on each node:

[root@edbbart tmp]$ /usr/efm-2.1/bin/efm upgrade-conf efm
Processing efm.properties file.
Setting new property node.timeout to 40 (sec) based on existing timeout 5000 (ms) and max tries 8.

Processing efm.nodes file.

Upgrade of files is finished. Please ensure that the new file permissions match those of the template files before starting EFM.
The db.service.name property should be set before starting a non-witness agent.

This created a new configuration file in the new directory under /etc which was created when the new version was installed:

[root@edbbart tmp]$ ls /etc/efm-2.1
efm.nodes  efm.nodes.in  efm.properties  efm.properties.in

All the values from the old EFM cluster should be there in the new configuration files:

[root@edbbart efm-2.1]$ pwd
/etc/efm-2.1
[root@edbbart efm-2.1]$ cat efm.properties | grep daniel
user.email=daniel.westermann...

Before going further, check the new configuration parameters for EFM 2.1 (a sketch of sample settings follows the list):

auto.allow.hosts
auto.resume.period
db.service.name
jvm.options
minimum.standbys
node.timeout
promotable
recovery.check.period
script.notification
script.resumed
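
For illustration, a minimal sketch of how a few of these could look in /etc/efm-2.1/efm.properties (the property names come from the list above; the values are assumptions for this environment):

# service to control on the database nodes (assumed name for this PPAS setup)
db.service.name=ppas-9.5
# seconds before a node is considered down (derived by upgrade-conf above)
node.timeout=40
# whether this node may be promoted to master
promotable=true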

I’ll leave everything as it was before for now. Notice that a new service got created:

[root@edbppas efm-2.1]$ systemctl list-unit-files | grep efm
efm-2.0.service                             enabled 
efm-2.1.service                             disabled

Let’s try to shut down the old service on all nodes and then start the new one. Step 1 (on all nodes):

[root@edbppas efm-2.1]$ systemctl stop efm-2.0.service
[root@edbppas efm-2.1]$ systemctl disable efm-2.0.service
rm '/etc/systemd/system/multi-user.target.wants/efm-2.0.service'

Then enable the new service:

[root@edbppas efm-2.1]$ systemctl enable efm-2.1.service
ln -s '/usr/lib/systemd/system/efm-2.1.service' '/etc/systemd/system/multi-user.target.wants/efm-2.1.service'
[root@edbppas efm-2.1]$ systemctl list-unit-files | grep efm
efm-2.0.service                             disabled
efm-2.1.service                             enabled 

Make sure your efm.nodes file contains all the nodes which make up the cluster, in my case:

[root@edbppas efm-2.1]$ cat efm.nodes
# List of node address:port combinations separated by whitespace.
# The list should include at least the membership coordinator's address.
192.168.22.243:9998 192.168.22.244:9998 192.168.22.245:9998

Let’s try to start the new service on the witness node first:

[root@edbbart efm-2.1]$ systemctl start efm-2.1.service
[root@edbbart efm-2.1]$ /usr/edb-efm/bin/efm cluster-status efm 
Cluster Status: efm
VIP: 192.168.22.250
Automatic failover is disabled.

	Agent Type  Address              Agent  DB       Info
	--------------------------------------------------------------
	Witness     192.168.22.244       UP     N/A       

Allowed node host list:
	192.168.22.244

Membership coordinator: 192.168.22.244

Standby priority host list:
	(List is empty.)

Promote Status:

Did not find XLog location for any nodes.

Looks good. Are we really running the new version?

[root@edbbart efm-2.1]$ /usr/edb-efm/bin/efm -v
Failover Manager, version 2.1.0

Looks fine as well. Time to add the other nodes:

[root@edbbart efm-2.1]$ /usr/edb-efm/bin/efm add-node efm 192.168.22.243
add-node signal sent to local agent.
[root@edbbart efm-2.1]$ /usr/edb-efm/bin/efm add-node efm 192.168.22.245
add-node signal sent to local agent.
[root@edbbart efm-2.1]$ /usr/edb-efm/bin/efm cluster-status efm 
Cluster Status: efm
VIP: 192.168.22.250
Automatic failover is disabled.

	Agent Type  Address              Agent  DB       Info
	--------------------------------------------------------------
	Witness     192.168.22.244       UP     N/A       

Allowed node host list:
	192.168.22.244 192.168.22.243

Membership coordinator: 192.168.22.244

Standby priority host list:
	(List is empty.)

Promote Status:

Did not find XLog location for any nodes.

Proceed on the master:

[root@ppasstandby efm-2.1]$ systemctl start efm-2.1.service
[root@ppasstandby efm-2.1]$ systemctl status efm-2.1.service
efm-2.1.service - EnterpriseDB Failover Manager 2.1
   Loaded: loaded (/usr/lib/systemd/system/efm-2.1.service; enabled)
   Active: active (running) since Thu 2016-09-08 12:04:11 CEST; 25s ago
  Process: 4020 ExecStart=/bin/bash -c /usr/efm-2.1/bin/runefm.sh start ${CLUSTER} (code=exited, status=0/SUCCESS)
 Main PID: 4075 (java)
   CGroup: /system.slice/efm-2.1.service
           └─4075 /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.77-0.b03.el7_2.x86_64/jre/bin/java -cp /usr/e...

Sep 08 12:04:07 ppasstandby systemd[1]: Starting EnterpriseDB Failover Manager 2.1...
Sep 08 12:04:08 ppasstandby sudo[4087]: efm : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/usr/efm-... efm
Sep 08 12:04:08 ppasstandby sudo[4098]: efm : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/usr/efm-... efm
Sep 08 12:04:08 ppasstandby sudo[4114]: efm : TTY=unknown ; PWD=/ ; USER=postgres ; COMMAND=/usr/... efm
Sep 08 12:04:08 ppasstandby sudo[4125]: efm : TTY=unknown ; PWD=/ ; USER=postgres ; COMMAND=/usr/... efm
Sep 08 12:04:10 ppasstandby sudo[4165]: efm : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/usr/efm-...9998
Sep 08 12:04:10 ppasstandby sudo[4176]: efm : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/usr/efm-...4075
Sep 08 12:04:11 ppasstandby systemd[1]: Started EnterpriseDB Failover Manager 2.1.
Hint: Some lines were ellipsized, use -l to show in full.

And then continue on the standby:

[root@edbppas efm-2.1]$ systemctl start efm-2.1.service
[root@edbppas efm-2.1]$ systemctl status efm-2.1.service
efm-2.1.service - EnterpriseDB Failover Manager 2.1
   Loaded: loaded (/usr/lib/systemd/system/efm-2.1.service; enabled)
   Active: active (running) since Thu 2016-09-08 12:05:28 CEST; 3s ago
  Process: 3820 ExecStart=/bin/bash -c /usr/efm-2.1/bin/runefm.sh start ${CLUSTER} (code=exited, status=0/SUCCESS)
 Main PID: 3875 (java)
   CGroup: /system.slice/efm-2.1.service
           └─3875 /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.77-0.b03.el7_2.x86_64/jre/bin/jav...

Sep 08 12:05:24 edbppas systemd[1]: Starting EnterpriseDB Failover Manager 2.1...
Sep 08 12:05:25 edbppas sudo[3887]: efm : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/u...efm
Sep 08 12:05:25 edbppas sudo[3898]: efm : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/u...efm
Sep 08 12:05:25 edbppas sudo[3914]: efm : TTY=unknown ; PWD=/ ; USER=postgres ; COMMAN...efm
Sep 08 12:05:25 edbppas sudo[3925]: efm : TTY=unknown ; PWD=/ ; USER=postgres ; COMMAN...efm
Sep 08 12:05:25 edbppas sudo[3945]: efm : TTY=unknown ; PWD=/ ; USER=postgres ; COMMAN...efm
Sep 08 12:05:28 edbppas sudo[3981]: efm : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/u...998
Sep 08 12:05:28 edbppas sudo[3994]: efm : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/u...875
Sep 08 12:05:28 edbppas systemd[1]: Started EnterpriseDB Failover Manager 2.1.
Hint: Some lines were ellipsized, use -l to show in full.

What is the cluster status now?

[root@edbbart efm-2.1]$ /usr/edb-efm/bin/efm cluster-status efm 
Cluster Status: efm
VIP: 192.168.22.250
Automatic failover is disabled.

	Agent Type  Address              Agent  DB       Info
	--------------------------------------------------------------
	Master      192.168.22.245       UP     UP        
	Witness     192.168.22.244       UP     N/A       
	Standby     192.168.22.243       UP     UP        

Allowed node host list:
	192.168.22.244 192.168.22.243 192.168.22.245

Membership coordinator: 192.168.22.244

Standby priority host list:
	192.168.22.243

Promote Status:

	DB Type     Address              XLog Loc         Info
	--------------------------------------------------------------
	Master      192.168.22.245       0/3B01C7A0       
	Standby     192.168.22.243       0/3B01C7A0       

	Standby database(s) in sync with master. It is safe to promote.

Cool. Back in operation on the new release. Quite easy.

PS: Remember to re-point your symlinks in /etc and /usr if you created symlinks for ease of use.
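
For example (a sketch, assuming the symlinks point at the versioned directories as in this setup):

[root@edbbart ~]$ ln -sfn /usr/efm-2.1 /usr/edb-efm
[root@edbbart ~]$ ln -sfn /etc/efm-2.1 /etc/edb-efm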

 

The article EDB Failover Manager 2.1, upgrading appeared first on Blog dbi services.

What is the difference between NVL and COALESCE?

Bar Solutions - Thu, 2016-09-08 04:08

Dear Patrick,

Could you tell me what the difference is between NVL and COALESCE?

Kindest regards,
Paul McCurdey

Dear Paul,

NVL returns the value of the first argument if it doesn’t evaluate to NULL, otherwise it will return the value of the second argument.
COALESCE returns the first argument that doesn’t evaluate to NULL. That can be any one of the arguments.
So they are definitely similar – but there are significant differences.
First of all, COALESCE is a function that’s part of the ANSI-92 standard, whereas NVL was made in the ’80s when there were no standards. Since COALESCE is the newer function of the two (available since 9i), it is better equipped for multiple values and it does less work, which makes it the greener option (doing less work means using fewer resources, like power).
How is doing less work better, you might ask? Well, would you want to do the work for which you know the result is never going to be used? I know I wouldn’t. That is one thing COALESCE does for you. It does not evaluate an argument if its result is not needed.
A big advantage of using the COALESCE function is its short-circuit evaluation. Whereas NVL evaluates both arguments, whether the second argument is needed or not, COALESCE only evaluates an argument if it is needed.
For example:
If you run the following statement:

[PATRICK]SQL>SELECT NVL(1, 1/0) FROM dual
             /

you will see the

ORA-01476: division by zero

error.
In this statement the first argument (1) is NOT NULL, so the second argument (1/0) should not be returned; but since NVL always evaluates both arguments, this statement results in the exception.
The COALESCE function only evaluates the arguments if they are needed.
If you run the following statement:

[PATRICK]SQL>SELECT COALESCE(1, 1/0) FROM dual
          2  /

you will not get an error, since the second argument is not evaluated. In other words it evaluates exprN only if expr(N-1) evaluates to NULL.

A simple test shows the difference again:
First we create a simple package which holds a variable and a function to increase that variable:

[PATRICK]SQL>CREATE OR REPLACE PACKAGE nvl_test
             IS
               g_value NUMBER := 0;
               FUNCTION increase_value RETURN NUMBER;
             END nvl_test;
             /
[PATRICK]SQL>CREATE OR REPLACE PACKAGE BODY nvl_test
             IS
               FUNCTION increase_value RETURN NUMBER
               IS
                l_returnvalue NUMBER;
               BEGIN
                 dbms_output.put_line('nvl_test.increase_value');
                 nvl_test.g_value := nvl_test.g_value + 1;
                 l_returnvalue := nvl_test.g_value;
                 RETURN l_returnvalue;
               END increase_value;
             END nvl_test;
             /

Then a script demonstrates what happens. First display the value of the variable. Then call the NVL function where the first value is NULL. As you can see, the function in the package is called, hence the variable is increased. Then another call to the NVL function, this time with a non-NULL value. The function in the package is still called, even though its value is not being used.
Then we reset the value of the variable and run the same tests, but this time using the COALESCE function. As you can see, the function is only being called if the previous argument(s) evaluate to NULL.

[PATRICK]SQL>DECLARE
               l_dummy NUMBER;
               l_foo NUMBER;
             BEGIN
               dbms_output.put_line('====reset package====');
               nvl_test.g_value := 0;
               l_dummy := nvl_test.g_value;
               dbms_output.put_line(l_dummy);
               l_foo := NVL(NULL,nvl_test.increase_value);
               dbms_output.put_line(l_foo);
               l_dummy := nvl_test.g_value;
               dbms_output.put_line(l_dummy);
               l_foo := NVL(2912,nvl_test.increase_value);
               dbms_output.put_line(l_foo);
               l_dummy := nvl_test.g_value;
               dbms_output.put_line(l_dummy);
               dbms_output.put_line('====reset package====');
               nvl_test.g_value := 0;
               l_dummy := nvl_test.g_value;
               dbms_output.put_line(l_dummy);
               l_foo := coalesce(NULL,nvl_test.increase_value);
               dbms_output.put_line(l_foo);
               l_dummy := nvl_test.g_value;
               dbms_output.put_line(l_dummy);
               l_foo := coalesce(2912,nvl_test.increase_value);
               dbms_output.put_line(l_foo);
               l_dummy := nvl_test.g_value;
               dbms_output.put_line(l_dummy);
             END;
             /

====reset package====
0
nvl_test.increase_value
1
1
nvl_test.increase_value
2912
2
====reset package====
0
nvl_test.increase_value
1
1
2912
1

If you run the anonymous block in an IDE where you can step through the code, you can see when the code is executed and when it is bypassed.
So, if you don’t need or want the code executed when the value of a variable or result of a function is not NULL, then you should use COALESCE to prevent this from happening. But there might be a use case in which you always want a piece of code executed whether the first argument is NULL or not. Then you should use (or stick to) NVL.
I think the rule should be: Use COALESCE unless…

Hope this answers your question.
Happy Oracle’ing,
Patrick Barel

If you have a question you want answered, please send an email to patrick[at]bar-solutions[dot]com. If I know the answer, or can find it for you, maybe I can help.

This question has been published in OTech Magazine of Fall 2014

Large row by row table update with values from another table using cursor

Tom Kyte - Wed, 2016-09-07 22:26
Hello, I have a requirement to update 2 columns (A, B) in an arbitrarily (very, very) large table. The values that need to be updated rely on values from another table and must be consistent with the values from the other table. Since the values...
Categories: DBA Blogs

How to ensure that both parent and child rows are created in 1:1 relationship

Tom Kyte - Wed, 2016-09-07 22:26
Hi Tom, One of our applications has a basic 'customer' type schema: customer table, address, emails, orders, etc. At the heart of it is 'customer' itself defined with the following (all not null) <code> id int (pk), customer_type char(1), cus...
Categories: DBA Blogs

ORA-01034: ORACLE NOT AVAILABLE

Tom Kyte - Wed, 2016-09-07 22:26
connect system password: password. I got the following errors: ORA-01034: ORACLE NOT AVAILABLE ORA-27101: shared memory realm does not exist
Categories: DBA Blogs

How to optimize these two Queries

Tom Kyte - Wed, 2016-09-07 22:26
How can we optimize the below query? MERGE INTO Department dept USING (SELECT DISTINCT emp_name, hire_date FROM employees WHERE emp_id = '10') emp_details ON (dept.emp_name = emp_details.emp_name AND emp_details.hire_date IS NOT NULL...
Categories: DBA Blogs

Recover tablespace on another host or database

Tom Kyte - Wed, 2016-09-07 22:26
Hello, friends! I can't find any info to solve my issue. We need to restore some old data from a table. The full database is too large and we don't have enough space to restore the full database. That's why we need to restore only a tablespace or (ideally) t...
Categories: DBA Blogs

Compress for OLTP

Tom Kyte - Wed, 2016-09-07 22:26
Hi, We have reached 90% of our tablespace (disk space) full, and we thought to compress the tables so that we may free up some space. <code> select segment_name, segment_type, bytes/1024/1024 MB from user_segments where segment_type='TA...
Categories: DBA Blogs

oracle pl/sql

Tom Kyte - Wed, 2016-09-07 22:26
how to break a large database table result into regular intervals
Categories: DBA Blogs

New OA Framework 12.2.5 Update 6 Now Available

Steven Chan - Wed, 2016-09-07 16:27

Web-based content in Oracle E-Business Suite 12 runs on the Oracle Application Framework (OAF or "OA Framework") user interface libraries and infrastructure.  Since the release of Oracle E-Business Suite 12.2 in 2013, we have released several cumulative updates to Oracle Application Framework to fix performance, security, and stability issues. 

These updates are provided in cumulative Release Update Packs, and cumulative Bundle Patches that can be applied on top of the Release Update Pack. "Cumulative" means that the latest RUP or Bundle Patch contains everything released earlier.

The latest OAF update for EBS 12.2.5 is now available:

[Screenshot: EBS 12.2.5 OAF Bundle Patch 6]

Where is the documentation for this update?

Instructions for installing this OAF Release Update Pack are here:

Who should apply this patch?

All EBS 12.2.5 users should apply this patch.  Future OAF patches for EBS 12.2.5 will require this patch as a prerequisite. 

What's new in this update?

This bundle patch is cumulative: it includes all fixes released in previous EBS 12.2.5 bundle patches.

This latest bundle patch includes new fixes for the following critical issues:

  • The table header content overlapped with the records in the immediate row when a specific personalization was applied to the page.
  • In environments with Multi-Org Access Control (MOAC) enabled, the List of Values (LOV) search returns empty results with a newly added search query filter while creating a personalized view.
  • On specific JTT Interop pages, the LOV search window opens up only on double click if the LOV is dependent on the Partial Page Refresh (PPR) enabled field.

This Bundle Patch requires the R12.ATG_PF.C.Delta.5 Release Update Pack as a mandatory prerequisite.


Categories: APPS Blogs

Ask Tom table about NOLOGGING and redo generation

Bobby Durrett's DBA Blog - Wed, 2016-09-07 14:34

I was googling for things related to NOLOGGING operations and found this useful post on the Ask Tom web site: url

There is a nice table in the post that shows when insert operations generate redo log activity. But it isn’t formatted very well so I thought I would format the table here so it lines up better.

Table Mode    Insert Mode     Archive Log Mode     Result
-----------   -------------   ------------------   --------------
LOGGING       APPEND          archive log mode     redo generated
NOLOGGING     APPEND          archive log mode     no redo
LOGGING       no append       archive log mode     redo generated
NOLOGGING     no append       archive log mode     redo generated
LOGGING       APPEND          noarchive log mode   no redo
NOLOGGING     APPEND          noarchive log mode   no redo
LOGGING       no append       noarchive log mode   redo generated
NOLOGGING     no append       noarchive log mode   redo generated

All of this is from Ask Tom. My contribution here is just the formatting.

I ran a couple of tests whose results agree with this table. I ran insert append on a database that was not in archivelog mode and the insert ran for the same amount of time with the table set for LOGGING as it did with the table set for NOLOGGING. I ran the same test on a database that is in archivelog mode and saw a big difference in run time between LOGGING and NOLOGGING. I didn’t prove it but I assume that the redo generation caused the difference in run time.
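
One way to verify that assumption (a sketch, assuming you have SELECT privileges on the v$ views) is to query your session's 'redo size' statistic before and after the insert and compare the two values:

select sn.name, ms.value
from v$mystat ms, v$statname sn
where sn.statistic# = ms.statistic#
and sn.name = 'redo size';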

No archivelog and logging:

insert /*+append*/ into target select * from source;

64000 rows created.

Elapsed: 00:00:00.36

No archivelog and nologging:

insert /*+append*/ into target select * from source;

64000 rows created.

Elapsed: 00:00:00.38

Archivelog and logging:

insert /*+append*/ into target select * from source;

64000 rows created.

Elapsed: 00:00:00.84

Archivelog and nologging:

insert /*+append*/ into target select * from source;

64000 rows created.

Elapsed: 00:00:00.53

I haven’t tested all the table options but I thought it was worth formatting for my reference and for others who find it useful.

Bobby

Categories: DBA Blogs

Speaking on Oracle OOW'16 - MCS, JET, ACS, JCS and MAF

Andrejus Baranovski - Wed, 2016-09-07 13:08
With Oracle Open World'16 around the corner, I have prepared a demo use case including Mobile Cloud Service (MCS), JavaScript Extension Toolkit (JET), Application Cloud Service (ACS) and Java Cloud Service (JCS). I will describe what Oracle Cloud offers to implement an end-to-end enterprise solution.

This year I will be speaking on two sessions.

- End-to-End Cloud: Oracle Java Cloud, Oracle Mobile Cloud Service, Oracle MAF, and Oracle JET [CON2388]

Monday, Sep 19, 12:30 p.m. - 1:15 p.m. | Moscone West - 2012

I will be co-presenting and talking about Oracle JET Hybrid implementation:

- Building Enterprise-Grade Mobile Apps with Oracle JET and Cordova [CON5731]

Thursday, Sep 22, 12:00 p.m. - 12:45 p.m. | Moscone West - 2016

The demo use case will be based on a JET application running on Application Container Cloud and integrated with Mobile Cloud Service (MCS) REST services:


There will be a JET Hybrid application listening for MCS notifications:


You will learn how to process notifications:


Display and synch data from MCS in the JET Hybrid application:


Functionality to be described during [CON2388] session:

MAF:

- Integration with MCS

JCS:

- ADF BC REST development and deployment
- Security implementation

ACS:

- JET application deployment process with Node.js application

JET:

- JET application implementation tips & tricks
- JET OAuth security integration with MCS
- JET and REST calls
- JET and MCS notifications

MCS:

- Custom API implementation tips & tricks
- Security configuration
- MCS DB API
- MCS Notifications API
- MCS Connector API to call ADF BC REST

AIOUG Sangam'16 - India's Largest Annual Oracle Conference - DON'T MISS IT

Syed Jaffar - Wed, 2016-09-07 07:53
AIOUG Sangam'16, India's largest annual Oracle conference, will be held in Bangalore this year on 11th and 12th November.

I am glad and super excited to present and attend the conference. I am going to present the following two sessions:


  • Exadata Migration Recipes - Secret Sauce for a Tasty Migration
Abstract
The purpose of this presentation is to provide real-world database migration procedures that allowed us to successfully achieve the migration of 54 business-critical databases of different sizes and operating systems to Exadata X5-2. The presentation will lay out the different possible migration technologies that meet the business demands, such as Data Pump, RMAN, and XTTS with incremental backups.  The presentation will also highlight and focus on the challenges that were encountered and their solutions. A very specific scenario will also be presented: a cross-platform Oracle EBS migration of 30TB size.
  • Cost Optimization and Better Database Management with Oracle 12c Multitenant: A 360 Degree Overview
Abstract
Organizations that deploy and manage large numbers of standalone databases in their complex environments typically face the uphill task of easier database management and cost optimization. Oracle 12c Multitenant has the ability to consolidate databases, which addresses the major concerns of those organizations and provides easier database management to the DBAs. The presentation will focus on the key advantages and benefits of 12c Multitenant features, and how one can reduce IT cost as well as manage databases efficiently. It outlines the architecture, deployment scenarios, and easy management of Multitenant features.


Why should you attend
If you are around in India during those dates, I strongly recommend you avail yourself of the opportunity to meet and learn from many international stars. AIOUG really did a tremendous job bringing famous Oracle personalities together on board for this year's event. I am really honored, and it is indeed my privilege, to present and to listen to the other fellow speakers.

Visit the website for more details (enrollment, the list of topics and speakers, etc.): http://www.aioug.org/sangam16/

Looking forward to seeing you at the event!


How to solve JSchException Algorithm negotiation fail. And get logging out of JSch in SoapUI.

Darwin IT - Wed, 2016-09-07 05:43
I was so glad with my SoapUI solution to SFTP files to a server, but so disappointed that I couldn't get it working at my customer's site.

After I changed the log entries to log with e.message, I got the line:
Wed Sep 07 11:17:43 CEST 2016:INFO:JSchException Algorithm negotiation fail

Now I needed more information than that. But the hint is at least that there is a mismatch in the available ciphers client-side versus server-side.

So first I wanted to get more logging out of JSch. It turns out that it has its own Logger framework, but the bright side of that is that you can easily plug in your own logging mechanism. In the case of SoapUI that is Log4J. So create a Java project with the library jsch-0.1.54.jar and, from the SoapUI libs, log4j-1.2.14.jar. Then I created the following class file from an example in the answer to this stackoverflow question.

My version is:
package nl.darwin.jsch.log;

import com.jcraft.jsch.Logger;

/**
 * Class to route log messages generated by JSch to Apache Log4J Logging.
 *
 * @author mrabbitt
 * @see com.jcraft.jsch.Logger
 */
public class JSchLog4JLogger implements Logger {
    private org.apache.log4j.Logger logger;

    /**
     * Constructor with custom category name
     *
     * @param logger the logger from Apache Log4J.
     */
    public JSchLog4JLogger(org.apache.log4j.Logger logger) {
        this.logger = logger;
    }

    /**
     * Default constructor
     */
    public JSchLog4JLogger() {
        this(org.apache.log4j.Logger.getLogger(Logger.class.getName()));
    }

    /* (non-Javadoc)
     * @see com.jcraft.jsch.Logger#isEnabled(int)
     */
    public boolean isEnabled(int level) {
        switch (level) {
            case DEBUG:
                return logger.isDebugEnabled();
            case INFO:
                return logger.isInfoEnabled();
            case WARN:
                return logger.isInfoEnabled();
            case ERROR:
                return logger.isInfoEnabled();
            case FATAL:
                return logger.isInfoEnabled();
        }
        return false;
    }

    /* (non-Javadoc)
     * @see com.jcraft.jsch.Logger#log(int, java.lang.String)
     */
    public void log(int level, String message) {
        switch (level) {
            case DEBUG:
                logger.debug(message);
                break;
            case INFO:
                logger.info(message);
                break;
            case WARN:
                logger.warn(message);
                break;
            case ERROR:
                logger.error(message);
                break;
            case FATAL:
                logger.fatal(message);
                break;
        }
    }
}

Then jar it and add it to the bin/ext folder of SoapUI (like in the previous blog post).
Now only an extra import and a single extra line are needed in your Groovy script:
import nl.darwin.jsch.log.JSchLog4JLogger
...
JSch.setLogger(new JSchLog4JLogger(log))
JSch ssh = new JSch()

So simply set the logger on the JSch class, before the instantiation. Then the logging of JSch appears in the SoapUI logging, as easy as that.
It turned out that the server required the use of aes256-ctr, while the JRE of SoapUI (which is Java 7 in SoapUI 5.2.1) has a limited JCE policy, as is suggested here.
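
A quick way to check which policy is active (a sketch using the standard javax.crypto API, runnable from a SoapUI Groovy script step; it prints 128 with the limited policy and 2147483647 with the unlimited one):

import javax.crypto.Cipher
log.info("Max AES key length: " + Cipher.getMaxAllowedKeyLength("AES"))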

You can download the unlimited JCE policies here:
JDK 1.6: http://www.oracle.com/technetwork/java/javase/downloads/jce-6-download-429243.html
JDK 1.7: http://www.oracle.com/technetwork/java/javase/downloads/jce-7-download-432124.html
JDK 1.8: http://www.oracle.com/technetwork/java/javase/downloads/jce8-download-2133166.html
For SoapUI, download the JDK 1.7 policy. Go to your SoapUI Home folder, and navigate to the security library folder within the JRE. For instance: c:\Program Files\SmartBear\SoapUI-5.2.1\jre\lib\security.

Unzip the JCE to a new folder UnlimitedJCEPolicy within the security folder. Create another backup folder, LimitedJCEPolicy, and copy the jars US_export_policy.jar and local_policy.jar to the LimitedJCEPolicy folder. Then copy the corresponding files from UnlimitedJCEPolicy to the security folder, replacing the original ones.
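
In commands, those steps look something like this (Windows cmd, a sketch assuming the default SoapUI install path):

cd "c:\Program Files\SmartBear\SoapUI-5.2.1\jre\lib\security"
rem unzip the downloaded JCE zip into UnlimitedJCEPolicy first
mkdir LimitedJCEPolicy
copy US_export_policy.jar LimitedJCEPolicy
copy local_policy.jar LimitedJCEPolicy
copy UnlimitedJCEPolicy\*.jar .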

Restart SoapUI and you're good to go.


Use SoapUI to test your SFTP based services

Darwin IT - Wed, 2016-09-07 04:13
SoapUI is my favorite tool for unit tests. I try to keep myself to test-based development and build up tests along with the service under development. For SOAP or REST based services this goes quite intuitively using SoapUI. For database-driven services it is a little harder, but SoapUI has a nice JDBC activity that supports DML as well as callable statements such as stored procedures.

But for files, and especially SFTP, it's a little more complicated. For a while I've been working on a file-based integration with SAP as the source system.

I configured and defined the SOASuite FTP adapter to use my SSH user (oracle) on my VirtualBox VM. Until now I tested it using the SFTP/SCP client from MobaXTerm (enthusiastically recommended: download here). But that's not so handy for unit tests.

I wanted to automate this using SoapUI. With some searching I found that the JCraft Java Secure Channel (JSch) library was the best and easiest option. I did take a look at Apache Commons Net, but couldn't get it to work so easily. Download the jsch-0.1.54.jar (or newer) file and copy it to the ./bin/ext folder in your SoapUI home:


And restart SoapUI.

Create a new empty SoapUI project, create a TestSuite called something like 'Utilities' and a TestCase called 'TC-FTP'. Add the following properties to the TestCase:

Property         Value
ftpHost          darlin-vce-db
ftpPort          22
ftpUsername      oracle
ftpPassword      welcome1
localFilePath    d:/Projects/2016MySapProject/ExampleFiles/SAP-MESSAGE.XML
remoteFileDir    /home/oracle/SapHR/in
In the TestCase add a Groovy Script step called FTP with the script below. I took the example from snip2code.com (also found elsewhere) and refactored it to:
//Send Files to SSH Location
//
//Download jsch-0.1.54.jar from http://www.jcraft.com/jsch/ and copy it to SoapUI-Home/bin/ext location
//Example from: https://www.snip2code.com/Snippet/413499/SoapUI-Groovy-Script-compatible-SFTP-fil

//import org.apache.commons.net.ftp.FTPSClient
import com.jcraft.jsch.*
//
// Properties
//
def testCase = testRunner.testCase;

def String ftpHost = testCase.getPropertyValue("ftpHost")
def int ftpPort = testCase.getPropertyValue("ftpPort").toInteger()
def String ftpUsername = testCase.getPropertyValue("ftpUsername")
def String ftpPassword = testCase.getPropertyValue("ftpPassword")
def String localFilePath = testCase.getPropertyValue("localFilePath")
def String remoteFileDir = testCase.getPropertyValue("remoteFileDir")
//
//
Session session = null
Channel channel = null
try {
    log.info("Starting sftp upload process")
    JSch ssh = new JSch()

    session = ssh.getSession(ftpUsername, ftpHost, ftpPort)
    session.setConfig("StrictHostKeyChecking", "no"); //auto accept secure host
    session.setPassword(ftpPassword)
    session.connect()
    log.info("Connected to session")

    channel = session.openChannel("sftp")
    channel.connect()
    log.info("Connected to channel")

    ChannelSftp sftp = (ChannelSftp) channel;
    sftp.put(localFilePath, remoteFileDir);
    log.info("File Uploaded " + localFilePath + " TO " + remoteFileDir)

} catch (JSchException e) {
    e.printStackTrace()
    log.info("JSchException " + e.message)
} catch (SftpException e) {
    e.printStackTrace()
    log.info("SftpException " + e.message)
} finally {
    if (channel != null) {
        channel.disconnect()
        log.info("Disconnected from channel")
    }
    if (session != null) {
        session.disconnect()
        log.info("Disconnected from session")
    }
    log.info("sftp upload process complete")
}

The changes I made were to define the input values based on the properties from the test case, to move the session and channel variable declarations out of the try block to make them accessible from the finally branch, and to replace e.printStackTrace in the logging with e.message, to get a proper message in the logging (e.printStackTrace returns null).

The reason that I suggest having it in a separate test case is to enable it to be called from actual test cases with parameters. To do so, add a call-testcase activity to your test case:

Set the following properties:

Choose 'Run primary TestCase (wait for running to finish, Thread-Safe)' option as Run Mode.

And provide the property values.

This script copies a file from a local location and uploads it. But I want to be able to insert some runtime-specific values to refer to in asserts and later JDBC calls, to check on specific running instances. So I want to be able to adapt the content within my test case. Actually, I want to upload a string fetched from a property, maybe with expanded properties.

So I copied the test case and Groovy activity and adapted the script to:
//Send Files to SSH Location
//
//Download jsch-0.1.54.jar from http://www.jcraft.com/jsch/ and copy it to SoapUI-Home/bin/ext location
//Example from: https://www.snip2code.com/Snippet/413499/SoapUI-Groovy-Script-compatible-SFTP-fil

//import org.apache.commons.net.ftp.FTPSClient
import com.jcraft.jsch.*
import java.nio.charset.StandardCharsets
//
// Properties
//
def testCase = testRunner.testCase;
//
def String ftpHost = testCase.getPropertyValue("ftpHost")
def int ftpPort = testCase.getPropertyValue("ftpPort").toInteger()
def String ftpUsername = testCase.getPropertyValue("ftpUsername")
def String ftpPassword = testCase.getPropertyValue("ftpPassword")
def String fileContent = testCase.getPropertyValue("fileContent")
def String remoteFile = testCase.getPropertyValue("remoteFile")
//
Channel channel = null
Session session = null
try {
    log.info("Starting sftp upload process")
    JSch ssh = new JSch()

    session = ssh.getSession(ftpUsername, ftpHost, ftpPort)
    session.setConfig("StrictHostKeyChecking", "no"); //auto accept secure host
    session.setPassword(ftpPassword)
    session.connect()
    log.info("Connected to session")

    channel = session.openChannel("sftp")
    channel.connect()
    log.info("Connected to channel")

    ChannelSftp sftp = (ChannelSftp) channel;

    byte[] fileContentBytes = fileContent.getBytes(StandardCharsets.UTF_8)
    InputStream fileInputStream = new ByteArrayInputStream(fileContentBytes);
    log.info("Start upload to " + remoteFile)
    sftp.put(fileInputStream, remoteFile);
    log.info("File content uploaded to " + remoteFile)

} catch (JSchException e) {
    e.printStackTrace()
    log.info("JSchException " + e.message)
} catch (SftpException e) {
    e.printStackTrace()
    log.info("SftpException " + e.message)
} catch (Exception e) {
    e.printStackTrace()
    log.info("Exception " + e.message)
} finally {
    if (channel != null) {
        channel.disconnect()
        log.info("Disconnected from channel")
    }
    if (session != null) {
        session.disconnect()
        log.info("Disconnected from session")
    }
    log.info("sftp upload process complete")
}

Here the lines and related properties:
def String localFilePath = testCase.getPropertyValue("localFilePath")
def String remoteFileDir = testCase.getPropertyValue("remoteFileDir")

are changed to:
def String fileContent = testCase.getPropertyValue("fileContent")
def String remoteFile = testCase.getPropertyValue("remoteFile")
Then the lines:
byte[] fileContentBytes = fileContent.getBytes(StandardCharsets.UTF_8)
InputStream fileInputStream = new ByteArrayInputStream(fileContentBytes);

convert the fileContent property value to an InputStream, which is given as input to the statement sftp.put(fileInputStream, remoteFile). Notice that since we upload file content, we need to provide a remoteFile path including the file name, instead of a remote directory, and that we need an extra import: java.nio.charset.StandardCharsets.
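
If you also want SoapUI property expansion applied to the content before the upload (a sketch using SoapUI's standard context.expand), you could fetch the property like this:

def String fileContent = context.expand( testCase.getPropertyValue("fileContent") )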

It would be nice if the guys from SmartBear added both put and get as separate activities.

JSON_VALUE() and JSON_TABLE(...COLUMNS...NUMBER...) honour NLS_NUMERIC_CHARACTERS while they shouldn't.

Tom Kyte - Wed, 2016-09-07 04:06
The following shows that the same (JSON) table data and the same queries yield different results when different NLS_NUMERIC_CHARACTERS are used. The above sentence is no surprise, obviously, but in this case I consider the behavior to be wrong; Plea...
Categories: DBA Blogs

Oracle database not responding

Tom Kyte - Wed, 2016-09-07 04:06
We have a Java application that uses a connection pool with Oracle database, during the week these conections are frequently refreshed (closed and opened again), the problem is at some point (after two or three days) the application is unable to acqu...
Categories: DBA Blogs

connection issue

Tom Kyte - Wed, 2016-09-07 04:06
Hi Tom, I have deployed Oracle RDBMS software and created a new database in the Azure cloud, and also created application users in this database. If I connect as an application user using SQL Developer I am getting an error: IO Error: Connection reset ...
Categories: DBA Blogs
