Feed aggregator

Wordpress Fatal error: Call to undefined method PHPMailer::getCustomHeaders()

Pakistan's First Oracle Blog - Sat, 2016-07-30 03:06

With the many new themes and new versions of WordPress, when you try to publish a post or page, send an email, or use a contact form, you might get the following error:


Fatal error: Call to undefined method PHPMailer::getCustomHeaders()

This error normally occurs if you are using incompatible WordPress, theme, and plugin versions. The best course of action is to update WordPress, your theme, and all the plugins, and make sure that the Contact Form 7 plugin isn't complaining about the WordPress version.

If you cannot upgrade, or if the error keeps occurring even after the upgrade, you can use the following workaround:

Workaround:

Make the following change in the mail.php file in the directory wordpress/wp-content/plugins/contact-form-7/includes.

Go to line 171 and change:
 
add_action( 'phpmailer_init', 'wpcf7_phpmailer_init' );


to
 

//add_action( 'phpmailer_init', 'wpcf7_phpmailer_init' );
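
The same edit can be scripted. Below is a sketch using sed, demonstrated on a scratch copy of the file; point it at the real wp-content/plugins/contact-form-7/includes/mail.php when you use it for real, and keep the .bak backup it creates.

```shell
# Demo on a scratch copy; substitute the real mail.php path.
printf "add_action( 'phpmailer_init', 'wpcf7_phpmailer_init' );\n" > mail.php

# Prefix the hook registration with //, keeping a .bak backup.
sed -i.bak "s|^add_action( 'phpmailer_init', 'wpcf7_phpmailer_init' );|//&|" mail.php

cat mail.php
```

Remember that a plugin update overwrites mail.php, so the change has to be re-applied (or becomes unnecessary) after each update.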

Hope that helps.
Categories: DBA Blogs

Hit by the ‘Tech Support’ Scammers

Duncan Davies - Fri, 2016-07-29 12:19

I got a call earlier today from the Tech Support Scammers. You’ve probably heard of this horribly unethical practice already, but the premise is that they cold-call seemingly at random and try to convince you that there is a problem with your PC/router, and then attempt to get you to allow them remote access to your PC to ‘fix it’. Some then claim problems are due to expired warranties on the computer and demand payment; others set up a boot password and demand money for unlocking it. Either way, it’s a nasty thing to do and is taking advantage of people while pretending to help.

So, I thought I’d play along and see what they’d do.

The call appeared to come from a UK number, however it clearly originated from an offshore data centre. They introduced themselves as calling from my ISP and said that they’d detected a problem and needed to fix it on my PC. They could show me some symptoms to reassure me that they were legitimate. I asked them “who is my ISP?” as I wasn’t sure whether they’d know, however they were able to accurately answer.

The nice lady then got me to open Event Viewer and tell her how many errors and warnings were listed.

Event Viewer

I’d wager that most computers have a heap of entries here, but when I said that there were over 8,000 she did some great play-acting: claiming that 8-10 is a typical number, feigning shock, and explaining how this proved the appalling state of my laptop.

Next, she asked me to open a command-prompt and run the ASSOC command. This lists all the file associations on my laptop, and she read out a string and asked me to verify that it matched the CLSID below.

assoc

This, of course, proves nothing as it would be the same on all versions of Windows. However, she said that CLSID stands for “Computer Licence Security ID” and that by knowing this code it proved her authenticity.

At this point she asked me to visit http://www.support.me, which forwards to LogMeIn – a remote control system – and said she was going to pass me over to a technical colleague called ‘David’.

When David came on the line I said I’d used the lengthy delay to google what had happened and found out what they were doing. He then swore at me and hung up.

I feel sorry for the poor people that fall victim to these horrible practices and wish there was some way of stopping them preying on the innocent.


Real Time Scenarios & duplicate rows

Tom Kyte - Fri, 2016-07-29 11:26
I have two questions 1.) Deleting Duplicate rows DELETE FROM EMP1 WHERE ROWID NOT IN (SELECT MIN(ROWID) FROM EMP1 GROUP BY EMPNO); the above command deletes the duplicate rows from a given table such as emp1 in this case, group by eliminat...
Categories: DBA Blogs

DBA essentials workshop with release Oracle 12c is Available

Yann Neuhaus - Fri, 2016-07-29 03:23

It is with pleasure that we announce that dbi services has upgraded the infrastructure for the Oracle DBA essentials workshop.
We have migrated from Oracle 11g R2 to the latest version, Oracle 12c, and installed the latest PSU (April 2016).

During this course you will cover the following topics:

  • Understand the Oracle database architecture
  • Install and manage Oracle databases
  • Administer Oracle databases with a dbi expert, certified OCM or ACE Director :-)

For more information about the workshop, please click on the link

Oracle Database release:

SQL> select product, version, status from product_component_version where product like '%Database%';


PRODUCT                                    VERSION              STATUS
------------------------------------------ -------------------- ------------------------------
Oracle Database 12c Enterprise Edition     12.1.0.2.0           64bit Production

Last PSU installed:

SQL> select patch_id, version, status, description from dba_registry_sqlpatch;


PATCH_ID   VERSION              STATUS          DESCRIPTION
---------- -------------------- --------------- ------------------------------------------------------
22291127   12.1.0.2             SUCCESS         Database Patch Set Update : 12.1.0.2.160419 (22291127)


The article “DBA essentials workshop with release Oracle 12c is Available” appeared first on the dbi services blog.

Shrinking Oracle VM VirtualBox with Zerofree

Yann Neuhaus - Fri, 2016-07-29 03:17

In this blog I would like to talk about shrinking a virtual machine that has Oracle and MySQL databases installed.
Unfortunately, whilst VirtualBox will dynamically expand the hard drive as required, it won’t dynamically shrink it again if you free up space in the VM. The good news is that you can shrink a dynamic VirtualBox disk image and so reduce the size of your VM infrastructure.

I used the Zerofree utility, which scans the free blocks in an ext2 or ext3 file system and fills any non-zero free blocks with zeroes. Source

Step by step:
  • Convert .vmdk to .vdi
  • Mount the .vdi to another VM
  • Stop processes Oracle and MySQL
  • Mount file system on read only
  • Use zerofree
  • Shutdown VM
  • Vboxmanage compact
  • Convert .vdi back to .vmdk if you find any reason not to keep the .vdi
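
Put together, the workflow looks roughly like the sketch below. It is written as a dry run (the `run` helper only echoes each command), using the disk and device names from this post; remove the wrapper once the names match your environment.

```shell
# Dry-run sketch of the shrink workflow; `run` only prints the commands.
run() { echo "+ $*"; }

# On the host: convert the .vmdk to a .vdi
run vboxmanage clonehd vmrefdba01.vmdk vmrefdba01.vdi --format vdi

# Inside the VM, after stopping Oracle and MySQL:
run mount -o ro /dev/mapper/vgdata-lvdata /oracle
run zerofree /dev/mapper/vgdata-lvdata
run shutdown -h now

# Back on the host: compact the dynamic disk
run vboxmanage modifyhd vmrefdba01.vdi --compact
```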

I have my VirtualBox disk in “.vmdk” format. The first step is to clone the virtual disk to the “.vdi” format.

root@computer:/media/pio/Elements1/Workshop Oracle DBA1 11g/vmreforadba01# vboxmanage clonehd "vmrefdba01.vmdk" "vmrefdba01.vdi" --format vdi
0%...10%...20%...30%...40%...50%...60%...70%...80%...

When the clone is finished, start the VM from the new “.vdi” file.

Install Zerofree on your VM:

[root@vmrefdba01 zerofree-1.0.4]# yum install e2fsprogs-devel -y
[root@vmrefdba01 zerofree-1.0.4]# wget http://frippery.org/uml/zerofree-1.0.3.tgz
[root@vmrefdba01 zerofree-1.0.4]# tar -zxf zerofree-1.0.3.tgz
[root@vmrefdba01 zerofree-1.0.4]# cd zerofree-1.0.3
[root@vmrefdba01 zerofree-1.0.3]# make
[root@vmrefdba01 zerofree-1.0.3]# cp zerofree /usr/bin

Stop all Oracle and MySQL processes so that the file system can be unmounted:

mysql@vmrefdba01:/home/mysql/ [DUMMY] mysqld_multi stop
oracle@vmrefdba01:/home/oracle/ [WSDBA2] sqh
SQL*Plus: Release 12.1.0.2.0 Production on Thu Jul 28 10:27:29 2016
Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> shutdown;
Database closed.
Database dismounted.
ORACLE instance shut down.

Mount the file system in read-only mode:

[root@vmrefdba01 ~]# mount -o ro /dev/mapper/vgdata-lvdata /oracle/
[root@vmrefdba01 ~]# mount -l
/dev/sda2 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw,size=3G)
/dev/sda1 on /boot type ext3 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
gvfs-fuse-daemon on /root/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev)
/dev/mapper/vgdata-lvdata on /oracle type ext3 (ro)

Run the Zerofree utility to zero out the free blocks:

[root@vmrefdba01 ~]# zerofree /dev/mapper/vgdata-lvdata

Shutdown the VM and compact the disk:

[root@vmrefdba01 ~]# shutdown -h now
root@computer:/media/pio/Elements1/Workshop Oracle DBA1 11g/vmreforadba01# vboxmanage modifyhd vmrefdba01.vdi --compact
0%...10%...20%...30%...40%...50%...

Check the size of the new virtual disk and you can see: we have reclaimed 20 GB with the shrink :D

root@computer:/media/pio/Elements1/Workshop Oracle DBA1 11g/vmreforadba01# du -sh *
740K Logs
44M Snapshots
34G vmrefdba01.vdi --> New disk
54G vmrefdba01.vmdk --> Old disk

Optional: if you want, you can clone the VM back to the “.vmdk” format.

root@computer:/media/pio/Elements1/Workshop Oracle DBA1 11g/vmreforadba01# vboxmanage clonehd "vmrefdba01.vdi" "vmrefdba01_v1.vmdk" --format vmdk
0%...10%...20%...30%...40%...

 

The article “Shrinking Oracle VM VirtualBox with Zerofree” appeared first on the dbi services blog.

Exadata X-5 Bare Metal vs. OVM performance

Yann Neuhaus - Fri, 2016-07-29 02:54

The Exadata X5 can be installed Bare Metal or Virtualized. The latter, Oracle VM, allows you to create multiple clusters on one machine, but is more complex to install and to plan storage capacity for. On the other hand, it allows some flexibility in option licensing. Those are the real challenges behind the choice. However, when we talk about virtualization, most questions are about the overhead. Last week we ran some tests on the same machine with the two configurations, thanks to the Arrow Oracle Authorized Solution Center.

Comparison is not easy. Bare Metal has all the resources; Virtualized has to distribute them. And this test is very specific: all I/O hits the ‘extreme’ flash cache, because I don’t expect any virtualization overhead to be in the milliseconds. So don’t expect universal conclusions from these tests, and don’t hesitate to comment on my way of reading the numbers.

CPU

Do not expect a benchmark that shows the maximum capacity of the machine here. I’m comparing a bare metal node with 36 cores to a VM with 4 vCPUs, so I’ll compare a single-thread workload only: SLOB with one session and SCALE=100M UPDATE_PCT=0 RUN_TIME=120 WORK_UNIT=64
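
For reference, those settings correspond to a slob.conf fragment along these lines (a sketch; parameter names as in SLOB 2.x, check your own copy):

```shell
# slob.conf fragment for the CPU test (sketch)
UPDATE_PCT=0   # read-only workload
RUN_TIME=120   # run for 120 seconds
WORK_UNIT=64   # blocks touched per unit of work
SCALE=100M     # small working set, mostly served from the buffer cache
```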

Bare Metal load profile:

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 1.0 30.5 0.00 2.91
DB CPU(s): 1.0 29.0 0.00 2.76
Background CPU(s): 0.0 0.2 0.00 0.00
Redo size (bytes): 14,172.4 432,594.0
Logical read (blocks): 810,244.4 24,731,696.3
Block changes: 41.7 1,271.3
Physical read (blocks): 111.6 3,407.8
Physical write (blocks): 0.0 0.3
Read IO requests: 111.3 3,397.3
Write IO requests: 0.0 0.3
Read IO (MB): 0.9 26.6
Write IO (MB): 0.0 0.0
Executes (SQL): 12,285.1 374,988.5

Virtualized load profile:

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 1.0 30.6 0.00 4.37
DB CPU(s): 1.0 29.8 0.00 4.26
Background CPU(s): 0.0 0.2 0.00 0.00
Redo size (bytes): 13,316.5 410,404.0
Logical read (blocks): 848,095.1 26,137,653.8
Block changes: 41.1 1,266.3
Physical read (blocks): 109.1 3,361.3
Physical write (blocks): 0.0 0.3
Read IO requests: 103.8 3,198.5
Write IO requests: 0.0 0.3
Read IO (MB): 0.9 26.3
Write IO (MB): 0.0 0.0
Executes (SQL): 13,051.2 402,228.0

We can say that CPU and RAM performance is similar.

I/O

Now about IOPS on the storage cell flash cache.
I’ll compare SLOB with one session and SCALE=100000M UPDATE_PCT=100 RUN_TIME=120 WORK_UNIT=64

Bare Metal load profile:

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 1.0 0.0 0.02 4.06
DB CPU(s): 0.1 0.0 0.00 0.49
Background CPU(s): 0.1 0.0 0.00 0.00
Redo size (bytes): 1,652,624.9 51,700.6
Logical read (blocks): 2,582.2 80.8
Block changes: 4,214.5 131.9
Physical read (blocks): 2,060.6 64.5
Physical write (blocks): 1,818.0 56.9
Read IO requests: 2,051.0 64.2
Write IO requests: 1,738.6 54.4
Read IO (MB): 16.1 0.5
Write IO (MB): 14.2 0.4
Executes (SQL): 66.3 2.1
Rollbacks: 0.0 0.0
Transactions: 32.0

Virtualized load profile:

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 1.0 0.0 0.01 3.49
DB CPU(s): 0.3 0.0 0.00 1.01
Background CPU(s): 0.2 0.0 0.00 0.00
Redo size (bytes): 2,796,963.3 51,713.3
Logical read (blocks): 4,226.0 78.1
Block changes: 7,107.0 131.4
Physical read (blocks): 3,470.6 64.2
Physical write (blocks): 3,278.7 60.6
Read IO requests: 3,462.0 64.0
Write IO requests: 3,132.0 57.9
Read IO (MB): 27.1 0.5
Write IO (MB): 25.6 0.5
Executes (SQL): 86.9 1.6
Rollbacks: 0.0 0.0
Transactions: 54.1

In two minutes we did more work here. The timed events show statistics for ‘cell single block physical read’, which is nothing other than ‘db file sequential read’ renamed to look more ‘Exadata’. No SmartScan happens here, as the reads go through the buffer cache and we cannot filter blocks that will be shared with other sessions.

Bare Metal:
Total Wait Wait % DB Wait
Event Waits Time (sec) Avg(ms) time Class
------------------------------ ----------- ---------- ---------- ------ --------
cell single block physical rea 249,854 115.7 0.46 94.9 User I/O
DB CPU 14.6 12.0

Virtualized:
Total Wait Wait % DB Wait
Event Waits Time (sec) Avg(ms) time Class
------------------------------ ----------- ---------- ---------- ------ --------
cell single block physical rea 425,071 109.3 0.26 89.4 User I/O
DB CPU 35.2 28.8

Lower latency here on average, which explains why we did more work. But no conclusions before we know where the latency comes from. Averages hide the details, and it’s the same with the ‘IO Profile’ section:

Bare Metal
IO Profile Read+Write/Second Read/Second Write/Second
~~~~~~~~~~ ----------------- --------------- ---------------
Total Requests: 3,826.6 2,055.1 1,771.5
Database Requests: 3,789.5 2,051.0 1,738.6
Optimized Requests: 3,720.7 1,985.1 1,735.6
Redo Requests: 32.5 0.0 32.5
Total (MB): 32.0 16.2 15.9
Database (MB): 30.3 16.1 14.2
Optimized Total (MB): 29.3 15.6 13.7
Redo (MB): 1.7 0.0 1.7
Database (blocks): 3,878.6 2,060.6 1,818.0
Via Buffer Cache (blocks): 3,878.6 2,060.6 1,818.0
Direct (blocks): 0.0 0.0 0.0

Virtualized
IO Profile Read+Write/Second Read/Second Write/Second
~~~~~~~~~~ ----------------- --------------- ---------------
Total Requests: 6,652.2 3,467.0 3,185.2
Database Requests: 6,594.0 3,462.0 3,132.0
Optimized Requests: 6,582.7 3,461.2 3,121.5
Redo Requests: 54.7 0.0 54.7
Total (MB): 55.6 27.2 28.4
Database (MB): 52.7 27.1 25.6
Optimized Total (MB): 51.8 27.1 24.6
Redo (MB): 2.8 0.0 2.8
Database (blocks): 6,749.3 3,470.6 3,278.7
Via Buffer Cache (blocks): 6,749.3 3,470.6 3,278.7
Direct (blocks): 0.0 0.0 0.0

And the same from the I/O statistics by function.
Bare Metal:
Reads: Reqs Data Writes: Reqs Data Waits: Avg
Function Name Data per sec per sec Data per sec per sec Count Tm(ms)
--------------- ------- ------- ------- ------- ------- ------- ------- -------
Buffer Cache Re 1.9G 2050.9 16.093M 0M 0.0 0M 250.2K 0.5
DBWR 0M 0.0 0M 1.7G 1740.5 14.216M 0 N/A
LGWR 0M 0.0 0M 201M 32.5 1.648M 3914 0.3
Others 8M 4.1 .066M 1M 0.5 .008M 560 0.0
TOTAL: 1.9G 2055.0 16.159M 1.9G 1773.4 15.872M 254.6K 0.5

Virtualized:
Reads: Reqs Data Writes: Reqs Data Waits: Avg
Function Name Data per sec per sec Data per sec per sec Count Tm(ms)
--------------- ------- ------- ------- ------- ------- ------- ------- -------
Buffer Cache Re 3.3G 3462.7 27.12M 0M 0.0 0M 425.6K 0.3
DBWR 0M 0.0 0M 3.1G 3133.9 25.639M 0 N/A
LGWR 0M 0.0 0M 341M 54.7 2.775M 6665 0.3
Others 10M 5.0 .081M 1M 0.5 .008M 514 0.3
TOTAL: 3.3G 3467.7 27.202M 3.4G 3189.0 28.422M 432.7K 0.3

I’ve put the physical read statistics side-by-side to compare:


BARE METAL VIRTUALIZED
 
Statistic Total per Trans Total per Trans
-------------------------------- ------------------ ------------- ------------------ -------------
cell flash cache read hits 242,142 62.1 425,365 64.0
cell logical write IO requests 5,032 1.3 8,351 1.3
cell overwrites in flash cache 200,897 51.5 937,973 141.1
cell physical IO interconnect by 8,145,832,448 2,089,210.7 14,331,230,720 2,156,044.9
cell writes to flash cache 638,514 163.8 1,149,990 173.0
physical read IO requests 250,168 64.2 425,473 64.0
physical read bytes 2,059,042,816 528,095.1 3,494,084,608 525,663.4
physical read partial requests 4 0.0 0 0.0
physical read requests optimized 242,136 62.1 425,365 64.0
physical read total IO requests 250,671 64.3 426,089 64.1
physical read total bytes 2,067,243,008 530,198.3 3,504,136,192 527,175.6
physical read total bytes optimi 1,993,089,024 511,179.5 3,497,918,464 526,240.2
physical read total multi block 0 0.0 0 0.0
physical reads 251,348 64.5 426,524 64.2
physical reads cache 251,348 64.5 426,524 64.2
physical reads cache prefetch 1,180 0.3 1,051 0.2
physical reads direct 0 0.0 0 0.0
physical reads direct (lob) 0 0.0 0 0.0
physical reads prefetch warmup 1,165 0.3 1,016 0.2
physical write IO requests 212,061 54.4 384,909 57.9
physical write bytes 1,816,551,424 465,901.9 3,300,933,632 496,605.0
physical write requests optimize 211,699 54.3 383,624 57.7
physical write total IO requests 216,077 55.4 391,445 58.9
physical write total bytes 2,026,819,072 519,830.5 3,656,793,600 550,142.0
physical write total bytes optim 1,755,620,352 450,274.5 3,171,875,328 477,189.0
physical write total multi block 531 0.1 942 0.1
physical writes 221,747 56.9 402,946 60.6
physical writes direct 0 0.0 0 0.0
physical writes direct (lob) 0 0.0 0 0.0
physical writes from cache 221,747 56.9 402,946 60.6
physical writes non checkpoint 221,694 56.9 402,922 60.6

We already know that there was more work in the OVM run, but the ‘per transaction’ statistics are similar, with a bit more ‘flash cache’ ‘optimized’ I/O in the second run.
Of course, even if it’s the same machine, it has been re-imaged and the database re-created, with different volumes and capacity. So maybe I hit the cell flash cache more in the second run than in the first, and more reads from spinning disks could explain the difference in single block read latency.

We need to get beyond the averages with the wait event histograms. They don’t show anything lower than one millisecond in the AWR report (I’ve opened an enhancement request for 12.2 about that), but I collected them from V$EVENT_HISTOGRAM_MICRO.
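
The figures below can be pulled with a query along these lines (a sketch against V$EVENT_HISTOGRAM_MICRO, available from 12.1; column names as they appear in the output):

```sql
-- Microsecond-bucket wait histogram for the single block reads
select event, wait_time_micro, wait_count, wait_time_format
from   v$event_histogram_micro
where  event = 'cell single block physical read'
order  by wait_time_micro;
```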

Bare Metal:
EVENT WAIT_TIME_MICRO WAIT_COUNT WAIT_TIME_FORMAT
---------------------------------------- --------------- ---------- ------------------------------
cell single block physical read 1 0 1 microsecond
cell single block physical read 2 0 2 microseconds
cell single block physical read 4 0 4 microseconds
cell single block physical read 8 0 8 microseconds
cell single block physical read 16 0 16 microseconds
cell single block physical read 32 0 32 microseconds
cell single block physical read 64 0 64 microseconds
cell single block physical read 128 533 128 microseconds
cell single block physical read 256 240142 256 microseconds
cell single block physical read 512 7818 512 microseconds
cell single block physical read 1024 949 1 millisecond
cell single block physical read 2048 491 2 milliseconds
cell single block physical read 4096 1885 4 milliseconds
cell single block physical read 8192 3681 8 milliseconds
cell single block physical read 16384 2562 16 milliseconds
cell single block physical read 32768 257 32 milliseconds
cell single block physical read 65536 52 65 milliseconds
cell single block physical read 131072 3 131 milliseconds
cell single block physical read 262144 0 262 milliseconds
cell single block physical read 524288 1 524 milliseconds

Virtualized:
EVENT WAIT_TIME_MICRO WAIT_COUNT WAIT_TIME_FORMAT
---------------------------------------- --------------- ---------- ------------------------------
cell single block physical read 1 0 1 microsecond
cell single block physical read 2 0 2 microseconds
cell single block physical read 4 0 4 microseconds
cell single block physical read 8 0 8 microseconds
cell single block physical read 16 0 16 microseconds
cell single block physical read 32 0 32 microseconds
cell single block physical read 64 0 64 microseconds
cell single block physical read 128 1 128 microseconds
cell single block physical read 256 322113 256 microseconds
cell single block physical read 512 105055 512 microseconds
cell single block physical read 1024 1822 1 millisecond
cell single block physical read 2048 813 2 milliseconds
cell single block physical read 4096 681 4 milliseconds
cell single block physical read 8192 283 8 milliseconds
cell single block physical read 16384 231 16 milliseconds
cell single block physical read 32768 64 32 milliseconds
cell single block physical read 65536 11 65 milliseconds
cell single block physical read 131072 3 131 milliseconds

In the first run we see more reads around 8 ms, which confirms the earlier guess that we had more flash cache hits in the second run.
The waits between 128 and 512 microseconds come from the cell flash storage, and this is where I want to see whether virtualization has an overhead.
I’ve put it in color there where it’s easier to visualize that most of the reads are in the 128-256 range. Bare Metal in blue, OVM in orange.

[chart: single block read latency histogram, Bare Metal (blue) vs. OVM (orange)]

In Bare Metal, most of the reads are faster than 256 microseconds. In Virtualized, a significant number of reads are above that. This may be caused by virtualization, but either way it’s not a big difference. I don’t think virtualization overhead is an important criterion when choosing how to install your Exadata. Storage capacity planning is the major criterion: consolidate all storage into two diskgroups (DATA and RECO) for all databases, or partition it per cluster. The choice is about manageability and agility in provisioning vs. licence optimization.

 

The article “Exadata X-5 Bare Metal vs. OVM performance” appeared first on the dbi services blog.

Links for 2016-07-28 [del.icio.us]

Categories: DBA Blogs

Cascading calculated LAG values

Tom Kyte - Thu, 2016-07-28 17:06
Oracle 11.2.0.4 Example SQL to explain my question: https://livesql.oracle.com/apex/livesql/file/content_DMZ4GO3W9QL4BONQQB8I7NP5X.html The SQL I am working on is monthly data based on two categories (CATEGORY_A and CATEGORY_B). While we get valu...
Categories: DBA Blogs

DDL timeout parameter limit

Tom Kyte - Thu, 2016-07-28 17:06
Hi Tom I have read in the below link that DDL will not require any downtime starting from Oracle 11g (understood that same applies to Oracle 12c): http://www.oracle.com/technetwork/database/features/online-ops-087977.html I am using Oracle 12c...
Categories: DBA Blogs

Return first character of each word

Tom Kyte - Thu, 2016-07-28 17:06
Sir can you help me regarding this query in SQL by using character functions only.... I/P : ABC DEF GHI Output: ADG
Categories: DBA Blogs

ORA-00980 translation is no longer valid

Tom Kyte - Thu, 2016-07-28 17:06
Dear Sir, I am exporting a database from one system and importing it in another system. after importing its shows the error ORA-00980 tranlation is no longer valid everywhere i can see the solution but my question is 1)its working well with wit...
Categories: DBA Blogs

Invalid rowid in deadlock trace file...I must be missing something

Tom Kyte - Thu, 2016-07-28 17:06
Hi! We recently had an event where a trigger was changed by only a few lines resulting in hundreds of deadlocks a minute. We changed it back and all is well. I'm trying to figure out what happened though. The change in the trigger was if inserting s...
Categories: DBA Blogs

defferable

Tom Kyte - Thu, 2016-07-28 17:06
Hi Sir i am new for oracle. ____Q1) 2. Disabled validate - 3. Enabled novalidate 4. Enabled validate Please explain briefly with primary key constraints only with example (either 3Q or 4Q take any one ) Q2)while doing outer join(left/reght) ...
Categories: DBA Blogs

Code enchancement

Tom Kyte - Thu, 2016-07-28 17:06
Hell Chris, i have tables called t1 and t2. Table T1 Script as follows: <code> DROP TABLE T1 CASCADE CONSTRAINTS; CREATE TABLE T1 ( ID NUMBER ) Insert into T1(ID) Values(2001); Insert into T1(ID) Values(3001); Insert into...
Categories: DBA Blogs

CASE COLUMN WHEN NULL

Tom Kyte - Thu, 2016-07-28 17:06
hi Why doesn't NULL_CASE2 return the same result as NULL_CASE1? CREATE TABLE CASENULL (DUMMY VARCHAR(10)) INSERT INTO CASENULL VALUES (NULL); INSERT INTO CASENULL VALUES ('X'); COMMIT; SELECT NVL(DUMMY,'NULL') AS DUMMY, CASE WHE...
Categories: DBA Blogs

Banishing some PeopleSoft Myths

Duncan Davies - Thu, 2016-07-28 11:05

There seems to be quite a bit of uncertainty and misinformation in the PeopleSoft marketplace currently, so I thought it might be time to banish a few myths:

1. PeopleSoft isn’t being improved/getting investment

There are those with vested interests or other biases that will always deny this regardless of the evidence put in front of them, but PeopleSoft is improving in leaps and bounds. In the last couple of years PeopleSoft has been moving at an increasing speed, and one of the biggest questions on many clients’ minds right now is “how do I keep up?”. My Cedar colleague Graham Smith has a long list of recent new features in his PeopleSoft Predictions 2016 post, but if you just counted Fluid, Selective Adoption and Cloud Delivery Architecture I don’t think there’s ever been a time in which PeopleSoft has improved so much in a short space of time, pre or post-acquisition. Paco has repeatedly committed to keeping PeopleSoft around until at least 2027, and using the last few years as evidence I’m inclined to believe him.

2. PeopleSoft is Legacy/Old Fashioned

PeopleSoft is certainly very well established, with a long history and a wide customer base, that much is true. This can be a good thing, in that much of the system is tried and tested with the wrinkles ironed out. It is also true that some aspects of PeopleSoft do things the ‘old way’. The majority of deployments are on-premises, without the flexibility and efficiency that cloud deployment and automation can bring. You also pay for PeopleSoft up-front, rather than monthly. It doesn’t have to be this way, however. PeopleSoft can be deployed in an agile and efficient manner, taking advantage of some of the new technology available to us now, and the new enhancements within the product.

3. To be a future-proof system it needs to be SaaS

This is tricky as different people hold varying ideas about what constitutes SaaS, however PeopleSoft does get regular updates containing new functionality from the vendor (every ~10 weeks), can be deployed in the Cloud (to gain auto-scaling/elasticity of resources, a predictable monthly subscription and theoretically even multi-tenancy – at least above the database-level, if it was something that clients wanted). Furthermore, the application of the updates and the management of the cloud architecture can be handed over to a friendly partner in order to get even closer to purists’ definitions of what SaaS entails – if that is important to you.

4. PeopleSoft’s UI is out-dated

Up until a few years ago there wasn’t the focus on ‘consumer grade UI’ that there is now. In that era, PeopleSoft’s UI measured up pretty well. It was certainly more attractive than SAP and/or E-Business Suite. Recently this greater focus on the User Interface has introduced new competitors but PeopleSoft still keeps up well. Of course you can compare an old version of PeopleSoft with the latest from a competitor and it will appear dated in comparison, however if you use the latest PeopleSoft UI in the comparison it’ll fare rather better. The Fluid UI is (at least) the equal of anything out there, regardless of which device you view it on.

5. The PeopleSoft Market is Quiet

From what I can see there are fewer green-field implementations than 5 years ago, but there is a lot more upgrade activity. Clients are moving to 9.2 at a far greater rate than for 9.0 or 9.1, and then starting initiatives to really get value from the software once they get there. As a company I’m not sure Cedar has ever been busier.

So, now I’ve got that off my chest, I might do some predictions like Graham next …


Fishbowl’s Agile (like) Approach to Oracle WebCenter Portal Projects

In this video blog, Fishbowl Solutions’ Technical Project Manager, Justin Ames, and Marketing Team Lead, Jason Lamon, discuss Fishbowl’s Agile (like) approach to managing Oracle WebCenter portal projects. Justin shares an overview of what Agile and Scrum mean, how it is applied to portal development, and the customer benefits of applying Agile to an overall portal project.

Customer Testimonial:

“This is my first large project being managed with an Agile-like approach, and it has made a believer out of me. The Sprints and Scrum meetings led by the Fishbowl Solutions team enable us to focus on producing working portal features that can be quickly validated. And because it is an iterative build process, we can quickly make changes. This has led to the desired functionality we are looking for within our new employee portal based on Oracle WebCenter.”

Michael Berry

Staff VP, Compensation and HRIS

Large Health Insurance Provider

The post Fishbowl’s Agile (like) Approach to Oracle WebCenter Portal Projects appeared first on Fishbowl Solutions' C4 Blog.

Categories: Fusion Middleware, Other

How to manage your containers in Oracle Storage Cloud Service?

Yann Neuhaus - Thu, 2016-07-28 07:57

Oracle Storage Cloud Service provides secure, scalable, and consistent data storage from any environment.
To get more flexibility for your backups, I suggest using the CloudBerry application with the Oracle Public Cloud.

This application can be used for Cloud Backup and Recovery in PaaS and IaaS infrastructures.
We are going to use it for our DBaaS in a PaaS infrastructure.
A user-friendly third-party tool such as CloudBerry Explorer for OpenStack can also be used to manage storage containers.

Before starting with CloudBerry, you need to subscribe to a trial access or buy a subscription billed hourly or monthly. In both cases, you get the same functionality.

Let’s go !

Follow the steps to subscribe to your free trial access in the PaaS / DbaaS category on https://cloud.oracle.com/home.
Once your account is registered, you will receive an e-mail from Oracle with subscription details.

This e-mail lists a lot of services, but we will only use the Oracle Database Backup Service. Notice that our Backup Service has a data region (UK) and a link to the associated documentation (the same goes for the other services).

Oracle Database Backup Service
Data Region: UK001
Get Started: http://docs.oracle.com/cloud/latest/dbbackup_gs/index.html

In the next steps, we will have a look at the different types of Backup Services. Access to the Backup Services is linked to your account.

You need:
  • Your email address
  • Your password
  • Your Identity Domain

Start with CloudBerry by downloading it from http://www.cloudberrylab.com and installing it.

Step 1 – Creating an Oracle connection

File > New Oracle Account


The account location depends on the Datacenter where you subscribed; it is given in your confirmation e-mail. Note that your storage space is linked to your identity domain and that all services belonging to your subscription are stored in this area.

Once the connection is established we are going to create a container for the backup storage area (process below).

[screenshot: container creation steps]

This page shows the other containers as well as the “SandBox” container we just created. The “_apaas” container belongs to an Application Container Cloud service I created on my Oracle Public Cloud.

We will use our SandBox container for our Oracle Backup Service. There are two kinds of containers: Standard and Archive.

Standard: containers are similar to a directory structure, with one key distinction: unlike directories, containers cannot be nested. By default, all containers are of the standard storage class (as opposed to the archive storage class).

Archive : You can use Archive containers to store data that won’t be accessed for a while or that will be accessed infrequently.
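Under the hood, Oracle Storage Cloud Service exposes an OpenStack Swift-compatible REST API, so a container can also be created without a GUI tool. The following request templates are a sketch only; the endpoint, identity domain, user and container names are hypothetical:

```
# Authenticate: the token is returned in the X-Auth-Token response header
GET /auth/v1.0 HTTP/1.1
Host: mydomain.storage.oraclecloud.com
X-Storage-User: Storage-mydomain:user@example.com
X-Storage-Pass: <password>

# Create a standard container
PUT /v1/Storage-mydomain/SandBox HTTP/1.1
X-Auth-Token: <token>

# Create an archive container by adding the storage-class header
PUT /v1/Storage-mydomain/MyArchive HTTP/1.1
X-Auth-Token: <token>
X-Storage-Class: Archive
```

The only difference between the two kinds of container at creation time is the `X-Storage-Class: Archive` header; the storage class of a container cannot be changed afterwards.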

For example :

Storage_quota

In the Cloud console, you can monitor the storage capacity of your Oracle Database Backup Service.

billing

Step 2 – Connecting to your Oracle Public Cloud

Process_to_connection

Once the connection to your Oracle Public Cloud is established, we will create a new database with a backup storage area. I am not going to describe all the steps to create a database in the Cloud, only those related to backup storage, along with a summary before creation.

Step 3 – Creating a Cloud Database with Backup and Recovery configuration

Here is the summary of our DBaaS creation:

summary

As you will notice in the backup destination section, we chose a backup location option for the database and, depending on that choice, provided information about the Oracle Storage Cloud Service container where cloud backups are to be stored.

Backup Destination: the backup can be stored in the Cloud and/or in local storage.

Capture20

For the configuration, remember your:

Cloud Storage Container: <storage service name>-<identity domain name>/<container name>
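For illustration, the container reference expected by the backup configuration can be assembled as below. The service, identity domain and container names are hypothetical placeholders:

```shell
#!/bin/sh
# Assemble the Cloud Storage Container reference:
# <storage service name>-<identity domain name>/<container name>
STORAGE_SERVICE="Storage"
IDENTITY_DOMAIN="mydomain"
CONTAINER="SandBox"

CLOUD_CONTAINER="${STORAGE_SERVICE}-${IDENTITY_DOMAIN}/${CONTAINER}"
echo "$CLOUD_CONTAINER"
```

With the values above, this prints `Storage-mydomain/SandBox`, which is the string entered in the DBaaS creation wizard.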

Step 4 – Backup Tests

First, we execute a backup of the database with an RMAN command, then we check the results.

Rman_backup
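As a sketch, the backup performed here boils down to standard RMAN commands run against the configured SBT device (connection details are assumed):

```
RMAN> BACKUP DEVICE TYPE SBT DATABASE PLUS ARCHIVELOG;
RMAN> LIST BACKUP SUMMARY;
```

`LIST BACKUP SUMMARY` lets you confirm that the backup pieces were written to the SBT device rather than to local disk.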

The backup completed correctly via RMAN. Now let's check whether the backup pieces are in our container.

Backup_container

 As shown in the picture above, we find the backups in our “SandBox” container.

Step 5 – How to check my backup parameters ?

When you run the SHOW ALL command in RMAN, you can check how the channel is configured for backups. You should see a library parameter set up in RMAN as follows:

Show_all

We can see SBT_TAPE channels for the device, with the cloud backup library (libopc.so), and a configuration limited to a maximum piece size of 2 GB.
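The channel configuration shown by SHOW ALL looks roughly like this; the library and parameter-file paths are hypothetical and depend on your installation:

```
RMAN> CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE'
        PARMS 'SBT_LIBRARY=/u01/app/oracle/lib/libopc.so,
        ENV=(OPC_PFILE=/u01/app/oracle/dbs/opcSDBX.ora)'
        MAXPIECESIZE 2 G;
RMAN> CONFIGURE DEFAULT DEVICE TYPE TO 'SBT_TAPE';
```

`SBT_LIBRARY` points RMAN at the cloud backup module, while `OPC_PFILE` names the parameter file holding the cloud access settings.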

We also see a parameter file called opcSDBX.ora, which contains the settings to access the backup cloud service container. You can open it with a text editor to check it.

Config_cloud

Here we find the parameters entered previously when we set up access to the Oracle Cloud backup. One important thing to note: a wallet is used to secure backups and recovery in the Cloud.
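For reference, such a parameter file typically contains entries along these lines; the endpoint, wallet location and alias below are hypothetical examples, not the actual values from this environment:

```
OPC_HOST=https://mydomain.storage.oraclecloud.com/v1/Storage-mydomain
OPC_WALLET='LOCATION=file:/u01/app/oracle/dbs/opc_wallet CREDENTIAL_ALIAS=alias_opc'
OPC_CONTAINER=SandBox
OPC_AUTH_SCHEME=BASIC
```

`OPC_CONTAINER` is what ties the RMAN channel to the container we created earlier, and `OPC_WALLET` points to the wallet holding the cloud credentials.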

Conclusion

Easy to use, simple to set up, just try it !

 

The article How to manage your containers in Oracle Storage Cloud Service? appeared first on Blog dbi services.

Oracle Buys NetSuite

Oracle Press Releases - Thu, 2016-07-28 06:52
Press Release
Oracle Buys NetSuite

Redwood Shores, Calif.—Jul 28, 2016

Oracle (NYSE: ORCL) today announced that it has entered into a definitive agreement to acquire NetSuite (NYSE: N), the very first cloud company. The transaction is valued at $109.00 per share in cash, or approximately $9.3 billion.

“Oracle and NetSuite cloud applications are complementary, and will coexist in the marketplace forever,” said Mark Hurd, Chief Executive Officer, Oracle. “We intend to invest heavily in both products—engineering and distribution.”

“We expect this acquisition to be immediately accretive to Oracle’s earnings on a non-GAAP basis in the first full fiscal year after closing,” said Safra Catz, Chief Executive Officer, Oracle.

“NetSuite has been working for 18 years to develop a single system for running a business in the cloud,” said Evan Goldberg, Founder, Chief Technology Officer and Chairman, NetSuite. “This combination is a winner for NetSuite’s customers, employees and partners.”

“NetSuite will benefit from Oracle’s global scale and reach to accelerate the availability of our cloud solutions in more industries and more countries,” said Zach Nelson, Chief Executive Officer, NetSuite. “We are excited to join Oracle and accelerate our pace of innovation.”

The evaluation and negotiation of the transaction was led by a Special Committee of Oracle’s Board of Directors consisting solely of independent directors. The Special Committee unanimously approved the transaction on behalf of Oracle and its Board of Directors.

The transaction is expected to close in 2016. The closing of the transaction is subject to receiving certain regulatory approvals and satisfying other closing conditions including NetSuite stockholders tendering a majority of NetSuite’s outstanding shares in the tender offer. In addition, the closing is subject to a condition that a majority of NetSuite’s outstanding shares not owned by executive officers or directors of NetSuite, or persons affiliated with Larry Ellison, his family members and any affiliated entities, be tendered in the tender offer.

More information about this announcement is available at www.oracle.com/netsuite.

Contact Info
Deborah Hellinger
Oracle Corporate Communications
+1.212.508.7935
deborah.hellinger@oracle.com
Ken Bond
Oracle Investor Relations
+1.650.607.0349
ken.bond@oracle.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation.

Cautionary Statement Regarding Forward-Looking Statements

This document contains certain forward-looking statements about Oracle and NetSuite, including statements that involve risks and uncertainties concerning Oracle's proposed acquisition of NetSuite, anticipated customer benefits and general business outlook. When used in this document, the words "anticipates", "can", "will", "look forward to", "expected" and similar expressions and any other statements that are not historical facts are intended to identify those assertions as forward-looking statements. Any such statement may be influenced by a variety of factors, many of which are beyond the control of Oracle or NetSuite, that could cause actual outcomes and results to be materially different from those projected, described, expressed or implied in this document due to a number of risks and uncertainties. Potential risks and uncertainties include, among others, the possibility that the transaction will not close or that the closing may be delayed, the anticipated synergies of the combined companies may not be achieved after closing, the combined operations may not be successfully integrated in a timely manner, if at all, general economic conditions in regions in which either company does business, and the possibility that Oracle or NetSuite may be adversely affected by other economic, business, and/or competitive factors. Accordingly, no assurances can be given that any of the events anticipated by the forward-looking statements will transpire or occur, or if any of them do so, what impact they will have on the results of operations or financial condition of Oracle or NetSuite.

In addition, please refer to the documents that Oracle and NetSuite, respectively, file with the U.S. Securities and Exchange Commission (the "SEC") on Forms 10-K, 10-Q and 8-K. These filings identify and address other important factors that could cause Oracle's and NetSuite's respective operational and other results to differ materially from those contained in the forward-looking statements set forth in this document. You are cautioned to not place undue reliance on forward-looking statements, which speak only as of the date of this document. Neither Oracle nor NetSuite is under any duty to update any of the information in this document.

Oracle is currently reviewing the existing NetSuite product roadmap and will be providing guidance to customers in accordance with Oracle's standard product communication policies. Any resulting features and timing of release of such features as determined by Oracle's review of NetSuite's product roadmap are at the sole discretion of Oracle. All product roadmap information, whether communicated by NetSuite or by Oracle, does not represent a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decision. It is intended for information purposes only, and may not be incorporated into any contract.

Additional Information about the Acquisition and Where to Find It
In connection with the proposed acquisition, Oracle will commence a tender offer for the outstanding shares of NetSuite. The tender offer has not yet commenced. This document is for informational purposes only and is neither an offer to purchase nor a solicitation of an offer to sell shares of NetSuite, nor is it a substitute for the tender offer materials that Oracle and its acquisition subsidiary will file with the SEC upon commencement of the tender offer. At the time the tender is commenced, Oracle and its acquisition subsidiary will file tender offer materials on Schedule TO, and NetSuite will file a Solicitation/Recommendation Statement on Schedule 14D-9 with the SEC with respect to the tender offer. The tender offer materials (including an Offer to Purchase, a related Letter of Transmittal and certain other tender offer documents) and the Solicitation/Recommendation Statement will contain important information. Holders of shares of NetSuite are urged to read these documents when they become available because they will contain important information that holders of NetSuite securities should consider before making any decision regarding tendering their securities. The Offer to Purchase, the related Letter of Transmittal and certain other tender offer documents, as well as the Solicitation/Recommendation Statement, will be made available to all holders of shares of NetSuite at no expense to them. The tender offer materials and the Solicitation/Recommendation Statement will be made available for free at the SEC's web site at www.sec.gov.

In addition to the Offer to Purchase, the related Letter of Transmittal and certain other tender offer documents, as well as the Solicitation/Recommendation Statement, Oracle and NetSuite file annual, quarterly and special reports and other information with the SEC. You may read and copy any reports or other information filed by Oracle or NetSuite at the SEC public reference room at 100 F Street, N.E., Washington, D.C. 20549. Please call the Commission at 1-800-SEC-0330 for further information on the public reference room. Oracle's and NetSuite's filings with the SEC are also available to the public from commercial document-retrieval services and at the website maintained by the SEC at http://www.sec.gov. 

