Feed aggregator

CPU_COUNT

Yann Neuhaus - Fri, 2016-06-17 06:48

When you have fewer CPU threads than processes with something to run on CPU, the OS will schedule them to share the CPU resource. Increasing the workload at that point will not increase throughput, because you have reached the capacity of your system, and response time will increase because of queuing. Actually, performance will even decrease because of the overhead of context switching when trying to share the processors.
When you don’t want the OS scheduler to do the resource-sharing job, you can, and should, use Instance Caging. For sure, the database instance can do resource sharing more intelligently than the OS, as it knows the kind of workload and the performance requirement of each process.
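Enabling Instance Caging takes only two settings, both dynamic (a minimal sketch; any resource manager plan will do, DEFAULT_PLAN is just an example):

alter system set resource_manager_plan = 'DEFAULT_PLAN' scope=both;  -- a plan must be active
alter system set cpu_count = 8 scope=both;                           -- cap the instance at 8 CPU threads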

I did some tests on an 8-CPU machine running SLOB from 32 concurrent sessions, then 31, then 30,… down to the last run with 1 session, each for 5 minutes. This is what you see on the right-most dark green triangle here:
[Image: CaptureSLOBCPUCOUNT01]
After a very short burst of library cache contention while all 32 sessions parse their statements, each successive run decreases. The dark green here is labelled ‘CPU + CPU wait’ and comes from ASH, where all sessions are in state ‘ON CPU’ even when they are actually in the OS run queue. Of course, I’ve only 8 CPU threads, so I cannot have 32 sessions running on CPU.

The runs on the left, where you can see the same pattern but with some light green, are from the same runs but with Instance Caging active. I’ve a resource manager plan set and I’ve set CPU_COUNT to 8 (the first run on the left), then 7, … down to 1. The dark green is still the ‘ON CPU’ state, and with Instance Caging Oracle allows at most CPU_COUNT processes in that state. The remaining processes are switched to a waiting state, instrumented as ‘resmgr: cpu quantum’ and displayed in light green.

My goal is to show that you can increase the throughput with Instance Caging. I measured the logical reads per second and made an Excel chart from them. The blue lines are from different CPU_COUNT settings, from 8 down to 1. The orange line is from runs with no CPU_COUNT setting, which means that instance caging is not enabled. On the X axis you have the number of concurrent SLOB sessions I’ve run. What you see from the blue lines is that the throughput increases linearly with the number of concurrent sessions until it reaches the limit: either the CPU_COUNT limit, or the physical limit when CPU_COUNT is not set. Note that the CPU threads are not cores here. Tests were done on Oracle Public Cloud 4 OCPUs (aka OC5 compute shape), which are actually 8 threads from E5-2690 v2 Intel processors. This is why running on two threads here does not double the throughput. Actually, when running 8 sessions on 8 threads the throughput is only 6x that of running one session on one thread.

[Image: CaptureSLOBCPUCOUNT]

The second goal is to compare Oracle instance caging with the OS scheduler when the instance is using the full capacity of the server. On the top you can see the darker blue line, which is when CPU_COUNT is set to the actual number of CPU threads (CPU_COUNT=8). The orange line is when no CPU_COUNT is set: instance caging is disabled. The maximum throughput, 3.6 MLR/s, is then reached when we run the same number of sessions as the number of CPU threads. What you see here is that when the server is overloaded, scheduling at instance level is more efficient than scheduling at OS level. Without instance caging (the orange line), the LR/s degrades because of context switching overhead. So the recommendation here is to always do instance caging, even if you have only one instance on your server.

Why is the instance caging algorithm better than the OS scheduler? Because it is focused on the database processes’ workload. Here are the graphs of the ‘resmgr: cpu quantum’ wait times.

[Image: CaptureSLOBCPUCOUNT02]

On the left, I’ve run with CPU_COUNT=8. When I have 32 concurrent sessions, each of them spends 3/4 of its time waiting for CPU. Those waits are about 300 milliseconds. When I’ve only 9 sessions, each one has to spend only a small part of its response time waiting. They wait about 25 milliseconds on ‘resmgr: cpu quantum’. The wait time is not fixed and depends on the load. This makes sense: when you know you will have to spend a long time waiting, it’s better to have longer waits in order to avoid too many context switches. On the right, it’s the same but with CPU_COUNT=1, which gives the processes 8x less CPU time. They will have to spend more time waiting. And we see that the wait time is adjusted: it can go up to 4-second time slices. The OS scheduler will never do that, putting a process on the run queue for several seconds, because the scheduler focuses on response time. It’s different with instance caging. When you know that you will have to spend a long time waiting, then it’s better to optimize throughput by lowering the context switching.
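You can check this wait-time distribution on your own system without going through ASH (a quick query; v$event_histogram is a standard view):

select wait_time_milli, wait_count
from v$event_histogram
where event = 'resmgr: cpu quantum'
order by wait_time_milli;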

The recommendation is to enable instance caging: set a resource manager plan and set CPU_COUNT. It’s not an option. There’s no additional cost for it. And it will always be better than letting the OS manage CPU starvation.

On Standard Edition 2, it’s even easier: Oracle Corp. enabled instance caging for you ;)

 

The article CPU_COUNT appeared first on Blog dbi services.

database options - review and looking forward

Pat Shuff - Fri, 2016-06-17 02:07
For the past two weeks we have been looking at database as a service (DBaaS) offered as platform as a service (PaaS) on the Oracle Public Cloud. We started this journey on the 2nd of this month by looking at the differences between Standard Edition, Enterprise Edition, High Performance Edition, and Extreme Performance Edition. To quickly review, Standard Edition is the basic database, with table encryption as the only option. This is a full-featured database but without the ability to replicate data with any tool other than copying files and RMAN backup. You can't do things like transportable tablespaces, streams, Data Guard, or any other replication technologies to make a system more highly available. Enterprise Edition is a more fully featured database that allows for data replication from the installation and comes with Advanced Security (TDE) as a basis for the installation. This edition does not come with Data Guard but does have the option for transportable tablespaces, external references, and ways of replicating data manually from the database that Standard Edition does not contain. We then looked at the High Performance Edition which comes with
  • Transparent Data Encryption
  • Diagnostics
  • Tuning
  • Partitioning
  • Advanced Compression
  • Advanced Security
  • Data Guard
  • Label Security
  • Multitenant
  • Audit Vault
  • Database Vault
  • Real Application Testing
  • OLAP
  • Spatial and Graph
We then looked at Extreme Performance Edition that contains all of the above plus
  • Active Data Guard
  • In Memory
  • Real Application Clusters (RAC)
  • RAC One
We then went into a simple description of each option and prepared for the following blogs, which will go into more detail and code samples, showing not only what the options are but also how to use them and trying to tie them back to business benefits. Part of our description was a financial analysis of running a database in infrastructure as a service (IaaS) vs PaaS and the time and efficiency benefits that we get from PaaS over IaaS.

We wrapped up the week on the 3rd with a blog detailing what it takes to get a Windows desktop prepared to use a database in the cloud. The list of software is relatively generic and is not unique to Windows. We could just as easily have selected MacOSX or Linux but selected a Windows 2012 Server running in the Oracle Public Compute Cloud as IaaS. We did this primarily so that we would have a teaching platform that can be saved with a snapshot, reloaded for hands-on classes, and accessible from a customer site to demonstrate cloud services. The software that we loaded on the Windows platform includes

  • To access cloud storage
    • Mozilla Firefox
    • RestClient extension for Firefox
    • Google Chrome
    • Postman extension for Chrome
    • CloudBerry for OpenStack (Windows only right now)
  • To access files in our instance
    • Putty
    • Filezilla
    • Cygwin (only needed on Windows)
  • To access our database instance
    • SQL Developer
    • Microsoft Visual C++ libraries (only needed on Windows)
We installed this on the Oracle Public Cloud because we have free accounts on this platform and we can keep them persistent. It would normally cost $150/month to keep this instance active if we purchased it as IaaS. We could just as easily have done this on Amazon EC2 or Microsoft Azure at a similar cost. We provisioned 30 GB of disk for the operating system and 10 GB for the binaries. We requested a 2 vCPU shape with 30 GB of RAM. If we were doing this on Amazon or Azure we probably would have gone for a smaller memory footprint, but this is the base configuration for IaaS and Windows with 2 vCPUs in the Oracle Public Cloud. The idea is that a class of 15-30 people can log into this system with different user names and do minimal configuration to get started on workshops. We can refresh the users but not refresh the system for classes the following day or week. To provision this instance we went to the Oracle Cloud Marketplace to get a pre-configured Windows 2012 Server instance. We then downloaded the list of software and installed it on the desktop.

On the 6th we dove into database partitioning to figure out that we can reduce storage costs and improve performance by fitting active data into memory rather than the years or months of data that we typically throw away with a select statement. We talked about using partitioning to tier storage on premise and how this makes sense in the cloud but does not have as much impact as it does on premise. We talked about different partitioning strategies and how it can be beneficial to use tools like Enterprise Manager and the partition advisor to look at how you are accessing the data and how you might partition it to improve performance. On the 7th we looked at code samples for partitioning and talked about table extents and file system storage. We talked about the drawbacks to Amazon RDS and how not having file system access, having to use custom system calls, and not having sys access to the database causes potential issues with partitioning. We walked through a range partition example where we segmented the data by date and stored the different dates in different tablespaces.
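A range partition by date that maps each date range to its own tablespace looks something like this (a minimal sketch; the table, column, and tablespace names are made up for illustration):

create table orders (
  order_id   number,
  order_date date
)
partition by range (order_date) (
  partition p2014 values less than (to_date('01-JAN-2015','DD-MON-YYYY')) tablespace archive_ts,
  partition p2015 values less than (to_date('01-JAN-2016','DD-MON-YYYY')) tablespace cold_ts,
  partition pmax  values less than (maxvalue) tablespace hot_ts
);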

On the 8th we focused on database compression. This, in conjunction with partitioning, allows us to take older data that we typically don't access and compress it for query or historical storage. We talked about the different compression methodologies

  • using historic access patterns (heat map and ADO options)
  • using row compression (by analyzing update and insert operations as they occur)
  • file compression (duplicate file links and file compression of LOBs, BLOBs, and CLOBs)
  • backup data compression
  • Data Guard compression of redo logs before transmission
  • index compressions
  • network transmission compression of results to client systems
  • hybrid columnar compression (Exadata and ZFS only)
  • storage snapshot optimization (ZFS only)
We did not really dive into the code for compression but referred to a variety of books and blogs that have good code samples. We did look at the compression advisor and talked about how to use it to estimate how your mileage could potentially vary. On the 9th we dove into an Oracle by Example tutorial on compression and followed the example using DBaaS. The examples that we followed were for an 11g instance but could have been done in a 12c instance if we had the demo tables installed on the 12c instance.
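For reference, the compression advisor can be invoked along these lines (a sketch using the 11g DBMS_COMPRESSION signature to match the 11g examples; the schema, table, and scratch tablespace names are made up):

set serveroutput on
declare
  l_blkcnt_cmp   pls_integer;
  l_blkcnt_uncmp pls_integer;
  l_row_cmp      pls_integer;
  l_row_uncmp    pls_integer;
  l_cmp_ratio    number;
  l_comptype_str varchar2(32767);
begin
  dbms_compression.get_compression_ratio(
    scratchtbsname => 'USERS',   -- scratch tablespace for the sampled copy
    ownname        => 'DEMO',    -- made-up schema
    tabname        => 'ORDERS',  -- made-up table
    partname       => null,
    comptype       => dbms_compression.comp_for_oltp,
    blkcnt_cmp     => l_blkcnt_cmp,
    blkcnt_uncmp   => l_blkcnt_uncmp,
    row_cmp        => l_row_cmp,
    row_uncmp      => l_row_uncmp,
    cmp_ratio      => l_cmp_ratio,
    comptype_str   => l_comptype_str);
  dbms_output.put_line('Estimated compression ratio: ' || l_cmp_ratio);
end;
/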

On the 10th we focused on database tuning options and dove into how to use the SQL Tuning Advisor. In the example that we used, we referenced an external table that was not part of the select statement, which caused an unnecessary table index and full table scan. The example we used was again for 11g, to utilize the sample database that we have installed, but could just as easily have worked with 12c. On the 13th we dove a little deeper into tuning with a focus on Enterprise Manager and the Performance Advisor utilities. We followed the Oracle by Example Tuning Tutorial to diagnose a performance problem with a sql statement.
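The advisor can also be driven entirely from PL/SQL (a minimal sketch; the sql_id and task name are placeholders):

declare
  l_task varchar2(64);
begin
  l_task := dbms_sqltune.create_tuning_task(
              sql_id    => 'abcd1234efgh5',    -- placeholder sql_id
              task_name => 'tune_demo_stmt');
  dbms_sqltune.execute_tuning_task(task_name => 'tune_demo_stmt');
end;
/
select dbms_sqltune.report_tuning_task('tune_demo_stmt') from dual;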

On the 14th we looked at transparent data encryption (TDE) and how to enable and disable table encryption in the cloud. We talked about the risks of not encrypting by default and tried to draw lessons from Target Corporation and how failure to protect credit card data with encryption led to job losses across the company.
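With a keystore already configured and open, TDE can be applied at the column or tablespace level along these lines (a sketch; the names are made up):

create table payment_cards (
  card_holder varchar2(100),
  card_no     varchar2(19) encrypt using 'AES256'  -- column-level encryption
);

create tablespace secure_ts
  datafile size 100m          -- assumes OMF; otherwise name the datafile
  encryption using 'AES256'
  default storage (encrypt);  -- everything stored in it is encrypted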

On the 15th we looked at the backup and restore utilities in the cloud and how they differ from traditional RMAN utilities. You can use RMAN just like you do today and replicate your backup and restore as you do today but there are automation tools that monitor RMAN and kick off alternate processes if the backup fails. There are also tools to help restore for those unfamiliar with RMAN and the depths and details of this powerful package.

Today we are reviewing what we have done in the last week and a half and looking forward to the next week or two. We are going to finish out the database options. On Monday we are going to dive into multitenant and talk about pluggable databases. This discussion will probably spill over into Tuesday, with an overview happening on Monday and code samples and demos on Tuesday. We will need to use a 12c database since this is a new feature that is specific to 12c only. We might split our code samples into using SQL Developer to clone and manage PDBs on Tuesday and cover the same functionality with Enterprise Manager on Wednesday. Following this discussion we will do a high level discussion on Data Guard and look at the change log and log replication strategies that can be used for physical and logical replication. The following days we will look at code samples and configurations from the command line, Enterprise Manager, and SQL Developer. We will look at what it will take to set up a primary on premise and a standby in the cloud. We will also look at what is required to have both in the cloud and what it takes to flip primary and standby to emulate a failure or maintenance action, then flip the two back.

Once we cover Data Guard we will be in a position to talk about real application testing. In essence, Data Guard copies all of the writes that happen on the primary and replays them on the standby. Real Application Testing records the reads as well and replays the reads and writes to help measure performance differences between configurations. This is good for code change testing, patch testing, configuration change testing, and other compare/contrast changes to your production system in a safe environment.

Once we finish the high availability and data replication options we will dive into the OLAP and Spatial options. OLAP reorganizes the data for data warehouse analysis, and Spatial allows you to run geographic select statements like "show me all crimes that happened within a mile of this address". Both are optimizations on select statements to help optimize usage of the database in specific cases.

We will wrap up our coverage by looking at Audit Vault and Database Vault. Both of these options are additional levels of security that not only help us protect data but restrict and track access to data. Many financial and healthcare institutions require interfaces like this to show separation of duty as well as traceability to see who accessed what when.

Once we finish the High Performance Edition options we will dive into the Extreme Performance Edition options looking at Active Data Guard, In Memory, RAC and RAC One. Going through all of the options will probably take us through the month of June. We will probably look at the other PaaS options listed in cloud.oracle.com starting some time in July and see how they relate or differ from the DBaaS services that we are currently covering.

OTN Summit - July 12th, 13th and 14th!

OTN TechBlog - Thu, 2016-06-16 18:28

The Oracle Technology Network team invites you to attend the OTN Summit on July 12, 13 and 14, 2016 to explore the latest demos and insights from Oracle and community experts.

Inderjeet Singh, Executive Vice President, Fusion Middleware Development, will kick off the OTN Summit to discuss the Oracle Cloud based on questions from the Oracle Community.


This free, online event also includes:


· 3 presentations per track for each of the Database, Java, Middleware and Systems categories

· Over 12 hours of content in one 3.5-hour event

· Access to 15 community and Oracle product experts who will be online to answer your questions

Don’t miss out on the latest trends; sign up today for the live event!

July 12
 - 9:30 am - 1 pm (PST)

July 13
 - 9:30 am - 1 pm (BST)

July 14
 - 9:30 am - 1 pm (IST)


You can view the full agenda and abstracts here.

Kscope16 Scavenger Hunt

Oracle AppsLab - Thu, 2016-06-16 17:34

[Image: Kscope16_1024_500]

Are you attending Kscope16? If so, you are in luck: the @theappslab team will be back this year (by popular demand) to do a Scavenger Hunt. This year there are even more chances to win; plus, check out these prizes:

  • First place: DJI Phantom Drone
  • Second place: Amazon Echo
  • Third place: Raspberry Pi

Our first scavenger hunt took place last year at Kscope15. Here’s a quick video detailing the whats, whys and wherefores of the game from our fearless leader and Group Vice President, Jeremy Ashley (@jrwashley) and me.

After that, we replicated the experience for an OTN Community Quest at OpenWorld and JavaOne 2015 and then for the UKOUG Apps15 and Tech15 Conference Explorer. We have had great fun seeing participants engaged. We are very proud of the game engine we built for the scavenger hunt, bringing together software and IoT. If you are interested in seeing how it all works, check out our post “Game Mechanics of a Scavenger Hunt“.

Check the Kscope16 Scavenger Hunt site for more information on how to join and play during the annual ODTUG user group shindig. You can even sign up to play during your registration process.

We have some interesting twists in store this year, and we’re hoping for an even larger group of engaged players.

See you there!

Multi-Factor Authentication possibilities in OAM 11g

Online Apps DBA - Thu, 2016-06-16 17:27

Readers, it’s been some time since I last wrote here, but I believe I am back with a good, informative post today. MFA is a very common ask these days, and there are multiple ways to achieve it in Oracle Access Management, using either OAM 11g alone or a combination of OAM and OAAM 11g (and/or 3rd-party products). […]

The post Multi-Factor Authentication possibilities in OAM 11g appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Applying Patches in Oracle Fusion Middleware? Welcome to Zero DownTime (ZDT) Patch?

Online Apps DBA - Thu, 2016-06-16 15:29

In my Oracle Fusion Middleware Training for Apps DBAs, DBAs, and Middleware Admins, I was discussing Patching as part of Module 10, where we cover various patching tools like opatch, bsu, and psa for various products like SOA, WebLogic, OHS, WebCenter, etc., including Schema Patches. One of the topics that came up was how to apply patches without impacting […]

The post Applying Patches in Oracle Fusion Middleware? Welcome to Zero DownTime (ZDT) Patch? appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Graph frequently executed SQL by FORCE_MATCHING_SIGNATURE

Bobby Durrett's DBA Blog - Thu, 2016-06-16 15:10

I made a new graph in my PythonDBAGraphs program. Here is an example with real data but the database name blanked out:

[Image: sql_matching_group_of_signatures_blog]

My graphs are all sized for 1920 x 1080 monitors so I can see all the detail in the lines using my entire screen. The idea for this graph is to show how the performance of the queries that matter to the users changes as we add more load and data to this production database. I knew that this database had many queries with literals in their where clauses. I decided to pick a group of SQL by FORCE_MATCHING_SIGNATURE and to graph the average elapsed run time against the total number of executions.

I used this query to list all the SQL by signature:

column FORCE_MATCHING_SIGNATURE format 99999999999999999999

select FORCE_MATCHING_SIGNATURE,
sum(ELAPSED_TIME_DELTA)/1000000 total_seconds,
sum(executions_delta) total_executions,
count(distinct sql_id) number_sqlids,
count(distinct snap_id) number_hours,
min(PARSING_SCHEMA_NAME)
from DBA_HIST_SQLSTAT
group by FORCE_MATCHING_SIGNATURE
order by number_hours desc;

This is an edited version of the output – cut down to fit the page:

FORCE_MATCHING_SIGNATURE TOTAL_SECONDS TOTAL_EXECUTIONS NUMBER_HOURS
------------------------ ------------- ---------------- ------------
    14038313233049026256     22621.203         68687024         1019
    18385146879684525921    18020.9776        157888956         1013
     2974462313782736551    22875.4743           673687          993
    12492389898598272683    6203.78985         66412941          992
    14164303807833460050    4390.32324           198997          980
    10252833433610975622    6166.07675           306373          979
    17697983043057986874    17391.0907         25914398          974
    15459941437096211273    9869.31961          7752698          967
     2690518030862682918    15308.8561          5083672          952
     1852474737868084795    50095.5382          3906220          948
     6256114255890028779    380.095915          4543306          947
    16226347765919129545    9199.14289           215756          946
    13558933806438570935    394.913411          4121336          945
    12227994223267192558    369.784714          3970052          945
    18298186003132032869    296.887075          3527130          945
    17898820371160082776    184.125159          3527322          944
    10790121820101128903    2474.15195          4923888          943
     2308739084210563004    265.395538          3839998          941
    13580764457377834041    2807.68503         62923457          934
    12635549236735416450    1023.42959           702076          918
    17930064579773119626    2423.03972         61576984          914
    14879486686694324607     33.253284            17969          899
     9212708781170196788     7292.5267           126641          899
      357347690345658614    6321.51612           182371          899
    15436428048766097389     11986.082           334125          886
     5089204714765300123    6858.98913           190700          851
    11165399311873161545    4864.60469         45897756          837
    12042794039346605265    11223.0792           179064          835
    15927676903549361476    505.624771          3717196          832
     9120348263769454156    12953.0746           230090          828
    10517599934976061598     311.61394          3751259          813
     6987137087681155918    540.565595          3504784          809
    11181311136166944889      5018.309         59540417          808
      187803040686893225    3199.87327         12788206          800

I picked the ones that had executed in 800 or more hours. Our AWR has about 1000 hours of history so 800 hours represents about 80% of the AWR snapshots. I ended up pulling one of these queries out because it was a select for update and sometimes gets hung on row locks and skews the graph. So, the graph above has that one pulled out.

I based the graph above on this query:

select
sn.END_INTERVAL_TIME,
sum(ss.executions_delta) total_executions,
sum(ELAPSED_TIME_DELTA)/((sum(executions_delta)+1))
from DBA_HIST_SQLSTAT ss,DBA_HIST_SNAPSHOT sn
where ss.snap_id=sn.snap_id
and ss.INSTANCE_NUMBER=sn.INSTANCE_NUMBER
and ss.FORCE_MATCHING_SIGNATURE in
(
14038313233049026256,
18385146879684525921,
2974462313782736551,
12492389898598272683,
14164303807833460050,
10252833433610975622,
17697983043057986874,
15459941437096211273,
2690518030862682918,
6256114255890028779,
16226347765919129545,
13558933806438570935,
12227994223267192558,
18298186003132032869,
17898820371160082776,
10790121820101128903,
2308739084210563004,
13580764457377834041,
12635549236735416450,
17930064579773119626,
14879486686694324607,
9212708781170196788,
357347690345658614,
15436428048766097389,
5089204714765300123,
11165399311873161545,
12042794039346605265,
15927676903549361476,
9120348263769454156,
10517599934976061598,
6987137087681155918,
11181311136166944889,
187803040686893225
)
group by sn.END_INTERVAL_TIME
order by sn.END_INTERVAL_TIME;

Only time will tell if this really is a helpful way to check system performance as the load grows, but I thought it was worth sharing what I had done. Some part of this might be helpful to others.

Bobby

Categories: DBA Blogs

Q4 FY16 SAAS AND PAAS REVENUES WERE UP 66%, AND UP 68% IN CONSTANT CURRENCY

Oracle Press Releases - Thu, 2016-06-16 15:00
Press Release
Q4 FY16 SAAS AND PAAS REVENUES WERE UP 66%, AND UP 68% IN CONSTANT CURRENCY Q1 FY17 SaaS and PaaS Revenues Expected to be Up 75% to 80%

Redwood Shores, Calif.—Jun 16, 2016

Oracle Corporation (NYSE: ORCL) today announced fiscal 2016 Q4 results. Total Q4 Revenues were $10.6 billion, down 1% in U.S. dollars and flat in constant currency. Cloud plus On-Premise Software Revenues were $8.4 billion, flat in U.S. dollars and up 2% in constant currency. Cloud software as a service (SaaS) and platform as a service (PaaS) revenues were $690 million, up 66% in U.S. dollars and up 68% in constant currency. Total Cloud revenues, including infrastructure as a service (IaaS), were $859 million, up 49% in U.S. dollars and up 51% in constant currency. Operating Income was $4.0 billion, and Operating Margin was 37%. Non-GAAP Operating Income was $4.8 billion, and the non-GAAP Operating Margin was 45%. Net Income was $2.8 billion while non-GAAP Net Income was $3.4 billion. Earnings Per Share was $0.66, while non-GAAP Earnings Per Share was $0.81. Without the impact of the U.S. dollar strengthening compared to foreign currencies, Oracle’s reported GAAP Earnings Per Share would have been 2 cents higher and non-GAAP Earnings Per Share would have been 1 cent higher.

Short-term deferred revenues were $7.7 billion, up 6% in U.S. dollars and up 7% in constant currency compared with a year ago. Operating cash flow on a trailing twelve-month basis was $13.6 billion.

For fiscal 2016, Total Revenues were $37.0 billion, down 3% in U.S. dollars and up 2% in constant currency. Cloud plus On-Premise Software Revenues were $29.0 billion, down 2% in U.S. dollars and up 3% in constant currency. Cloud SaaS and PaaS revenues were $2.2 billion, up 49% in U.S. dollars and up 52% in constant currency. Total Cloud revenues, including IaaS, were $2.9 billion, up 36% in U.S. dollars and up 40% in constant currency. Operating Income was $12.6 billion, and Operating Margin was 34%. Non-GAAP Operating Income was $15.8 billion and non-GAAP Operating Margin was 43%. Net Income was $8.9 billion while non-GAAP Net Income was $11.2 billion. Earnings Per Share was $2.07, while Non-GAAP Earnings Per Share was $2.61. Without the impact of the U.S. dollar strengthening compared to foreign currencies, Oracle’s reported GAAP and non-GAAP Earnings Per Share would have been 17 cents higher.

“Fourth quarter SaaS and PaaS revenue growth accelerated to 68% in constant currency, significantly higher than my guidance,” said Oracle CEO, Safra Catz. “SaaS and PaaS gross margins continued to improve throughout the year, exiting FY16 at 56%. Bookings in Q4 were also very strong enabling us to raise our guidance for Q1 SaaS and PaaS revenue growth, which we now expect to be between 75% and 80%.”

“We added more than 1,600 new SaaS customers and more than 2,000 new PaaS customers in Q4,” said Oracle CEO, Mark Hurd. “In Fusion ERP alone, we added more than 800 new cloud customers. Today, Oracle has nearly 2,600 Fusion ERP customers in the Oracle Public Cloud – that’s ten-times more cloud ERP customers than Workday.”

“We expect that the SaaS and PaaS hyper-growth we experienced in FY16 will continue on for the next few years,” said Oracle Executive Chairman and CTO, Larry Ellison. “That gives us a fighting chance to be the first cloud company to reach $10 billion in SaaS and PaaS revenue. We’re also very excited about the availability of version 2 of Oracle’s Infrastructure as a Service (IaaS)—which will enable us to speed up the growth of our IaaS business, which customers want to buy in conjunction with our SaaS and PaaS.”

The Board of Directors also declared a quarterly cash dividend of $0.15 per share of outstanding common stock. This dividend will be paid to stockholders of record as of the close of business on July 6, 2016, with a payment date of July 27, 2016.

Q4 Fiscal 2016 Earnings Conference Call and Webcast

Oracle will hold a conference call and webcast today to discuss these results at 2:00 p.m. Pacific. You may listen to the call by dialing (816) 287-5563, Passcode: 425392. To access the live webcast of this event, please visit the Oracle Investor Relations website at http://www.oracle.com/investor. In addition, Oracle’s Q4 results and fiscal 2016 financial tables are available on the Oracle Investor Relations website.

A replay of the conference call will also be available by dialing (855) 859-2056 or (404) 537-3406, Pass Code: 28515799.

Contact Info
Ken Bond
Oracle Investor Relations
+1.650.607.0349
ken.bond@oracle.com
Deborah Hellinger
Oracle Corporate Communications
+1.212.508.7935
deborah.hellinger@oracle.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE: ORCL), visit www.oracle.com or contact Investor Relations at investor_us@oracle.com or (650) 506-4073.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

"Safe Harbor" Statement

Statements in this press release relating to Oracle's future plans, expectations, beliefs, intentions and prospects, including statements regarding our future guidance, and expectations of future SaaS, PaaS and IaaS growth, are "forward-looking statements" and are subject to material risks and uncertainties. Many factors could affect our current expectations and our actual results, and could cause actual results to differ materially. We presently consider the following to be among the important factors that could cause actual results to differ materially from expectations: (1) Our cloud computing strategy, including our Oracle Cloud SaaS, PaaS, IaaS and data as a service offerings, may not be successful. (2) If we are unable to develop new or sufficiently differentiated products and services, or to enhance and improve our products and support services in a timely manner or to position and/or price our products and services to meet market demand, customers may not buy new software licenses, cloud software subscriptions or hardware systems products or purchase or renew support contracts. (3) If the security measures for our software, hardware, services or Oracle Cloud offerings are compromised or if such offerings contain significant coding, manufacturing or configuration errors, we may experience reputational harm, legal claims and financial exposure. (4) We may fail to achieve our financial forecasts due to such factors as delays or size reductions in transactions, fewer large transactions in a particular quarter, fluctuations in currency exchange rates, delays in delivery of new products or releases or a decline in our renewal rates for support contracts. (5) Our international sales and operations subject us to additional risks that can adversely affect our operating results, including risks relating to foreign currency gains and losses. (6) Economic, geopolitical and market conditions, including the continued slow economic recovery in the U.S. and other parts of the world, can adversely affect our business, results of operations and financial condition, including our revenue growth and profitability, which in turn could adversely affect our stock price. (7) We have an active acquisition program and our acquisitions may not be successful, may involve unanticipated costs or other integration issues or may disrupt our existing operations. A detailed discussion of these factors and other risks that affect our business is contained in our SEC filings, including our most recent reports on Form 10-K and Form 10-Q, particularly under the heading "Risk Factors." Copies of these filings are available online from the SEC or by contacting Oracle Corporation's Investor Relations Department at (650) 506-4073 or by clicking on SEC Filings on Oracle’s Investor Relations website at http://www.oracle.com/investor. All information set forth in this press release is current as of June 16, 2016. Oracle undertakes no duty to update any statement in light of new information or future events. 

Talk to a Press Contact

Ken Bond

  • +1.650.607.0349

Deborah Hellinger

  • +1.212.508.7935

Patch analysis fails at step "Target Status" and at "Build Target Patch" for EM12cR5 GI+DB APR2016 Patches

Arun Bavera - Thu, 2016-06-16 14:10
Issue: Patch analysis fails at step "Target Status" and at "Build Target Patch" for EM12cR5 GI+DB APR2016 Patches
Requirement:
Patch to be applied for DB+GI for 12.1.0.2 using EM Patch Automation
Description: APR2016
DB PSU: 22291127 (12.1.0.2.160419)
GI PSU: 22646084 (12.1.0.2.160419)
Proactive Bundle Patch: 22899531 (includes 22291127) - 1.7GB
From <https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=417353674774707&id=1454618.1>
Reference Notes:
DB Patches:
12.1.0.2 Patch Set Updates - List of Fixes in each PSU (Doc ID 1924126.1)
12.1.0.2 Patch Set - Availability and Known Issues (Doc ID 1683799.1)
Quick Reference to Patch Numbers for Database PSU, SPU(CPU), Bundle Patches and Patchsets (Doc ID 1454618.1)
From <http://oraforms.blogspot.com/2015/09/em12c-12105-upgrade-tasks.html>

EM12c Patches:
Applying Enterprise Manager 12c Recommended Patches (Doc ID 1664074.1)
From <http://oraforms.blogspot.com/2015/09/em12c-12105-upgrade-tasks.html>
Enterprise Manager 12.1.0.5.0 (PS4) Master Bundle Patch List (Doc ID 2038446.1)
Enterprise Manager Patches required for setting up Provisioning, Patching and Cloning (Deployment Procedures) (Doc ID 427577.1)


System Details:
EM 12.1.0.5, DB Plugin 12.1.0.8
OMS Side Patches:
  • OMS PSU APR Patch: 22570344
  • OMS System Side Bundle Patch: 23218275
  • Patch: BUG:21364690

Agent Side Patches:
  • Agent-Side 12.1.0.5.160531 (May 31, 2016): Patch 23124338
  • Agent-Side Monitoring 12.1.0.8.160531 (May 31, 2016): Patch 23217639
  • Agent-Side Discovery 12.1.0.8.5 (November 30, 2015): Patch 22135601


Steps Taken:
 
1. We started with the GRID Infrastructure PSU APR2016 patch in the lab cluster: 22646084 (12.1.0.2.160419).
During the EM patch analysis we encountered a mismatch between the target configuration info collected in the EM Repository from the agent and that collected by the “oplan” tool of OPatch.
A similar bug (in that issue the hostname case differed between the EM collection and the Oplan collection; the customer had an uppercase short hostname in /etc/hosts):
BUG 20323468 - Run Configuration Compare fails with - invalid instances monitored by EM  -> BUG 21258331 - CONFIGURATIONCOMPARE SHOULD BE CASE INSENSITIVE REGARDING THE HOSTNAME
In our case we were seeing discrepancies in the target name between FQDN and short name.


2. We then tried DB PSU 22291127 (12.1.0.2.160419), which failed in the step ‘Target Status Check’. We determined this to be a concurrency issue; we were able to overcome it by removing the additional ClusterDBService, and patch analysis was then successful for the DB-side patch. We decided to create one more ClusterDBService to test this DB PSU again. If we retry this step one node at a time we can overcome the concurrency.
Errors: Another datapatch session is currently running.
Waiting for that session to complete before continuing...
Queryable inventory could not determine the current opatch status.
Execute 'select dbms_sqlpatch.verify_queryable_inventory from dual' and/or check the invocation log /ora01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_54049_2016_06_16_14_37_26/sqlpatch_invocation.log for the complete error.
Prereq check failed, exiting without installing any patches.
Error Codes: ORA-20009: Job Load_opatch_inventory_1 execution failed
We have a workaround of sorts, but to fully automate we need fixes for:
a. BUG 22204310 - CANNOT INVOKE DATAPATCH -PREREQ IN A CONCURRENT FASHION 
b. Bug 19178275 - fixes concurrent access to QI when multiple DBs access it in a shared OH  - is not backported yet
c. Bug 20879709 - concurrency with dbms_qopatch apis with fixed table implementation  -  is not backported yet




3. We again tried troubleshooting the GI PSU and were able to find the root cause: 2 of the nodes were referring to the wrong Oracle Central Inventory, which is where the EM patch analysis was failing during the compare configuration step. We fixed this.
The GI PSU patch plan then failed at the last step, with related errors in EMOMS.log such as:
2016-06-15 15:46:35,246 [RJob Step 250083] ERROR dpGenerator.MultiPatchProcessor logp.251 - 22646084{, root=/%emd_emstagedir%/p22646084_600000000009300_226_0/oracle, oracle=/%emd_emstagedir%/p22646084_600000000009300_226_0/oracle} 2
2016-06-15 15:48:26,137 [RJob Step 250083] ERROR dpIntg.BuildPrereqProcAndConfig logp.251 - Exception while building the prerequisite procedure
oracle.oplan.sdk.oplan.OPlanInternalException: Unsupported Action Type:oracle.opatch.system.automation.jaxb.CommandAction subtype:preRACPatchApplyshell
Error: oracle.opatch.system.automation.jaxb.CommandAction subtype:preRACPatchApplyshell



Analysis:
The first hit you get on this issue is Bug 21919097: The Prerequisite Checks Required by OPlan Fail With Unsupported Action Type - closed as not a bug, the issue being in BUG 20950255 - UNABLE TO APPLY APR PSU VIA EM - FAILS AT ANALYSIS AND UNABLE TO PROCEED -- this is fixed in EM DB Plugin 12.1.0.8, which we have already applied.

Another fix required for this issue is in:
Patch 21364690: Tracking bug to deliver one-off patch for oplan bugs 20978207 and 20995318, DB Plugin (OMS) 12.1.0.8.0
Action: Fix available at: https://support.oracle.com/epmos/faces/PatchDetail?requestId=19551356&_afrLoop=418222758860502&patchId=21364690&_afrWindowMode=0&_adf.ctrl-state=n3gg18298_1124 – we need to apply this patch.




Summary:
We have two issues here:
1. Concurrency issue with the “Target Status Check” step – we can check with support whether it is related and get fixes for BUG 22204310, Bug 19178275, and Bug 20879709.
2. GI PSU failing at the last-but-one step, “building the prerequisite procedure” – we applied the fix for 21364690 and it fixed the issue.

Categories: Development

SQLcl and Query Change Notification

Kris Rice - Thu, 2016-06-16 12:23
The database has had Query Change Notification (QCN) for a while, but using it required a custom program, such as the one Tim outlines on his blog https://oracle-base.com/articles/10g/dbms_change_notification_10gR2. Since SQLcl has Nashorn, it can now be integrated with a few lines of JavaScript to get notified when changes happen. The script is below and posted. The catch is QCN only works on Varchars and

Services -- 2 : Starting and Connecting to Services (non-RAC)

Hemant K Chitale - Thu, 2016-06-16 10:22
Continuing with the 12.1.0.2 non-RAC MultiTenant environment and two services demonstrated earlier,

I have restarted the environment today:

[grid@ora12102 ~]$ lsnrctl services

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 16-JUN-2016 22:57:17

Copyright (c) 1991, 2014, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
The command completed successfully
[grid@ora12102 ~]$

[oracle@ora12102 Desktop]$ sqlplus '/ as sysdba'

SQL*Plus: Release 12.1.0.2.0 Production on Thu Jun 16 22:57:31 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area 1644167168 bytes
Fixed Size 2925024 bytes
Variable Size 1040191008 bytes
Database Buffers 587202560 bytes
Redo Buffers 13848576 bytes
Database mounted.
Database opened.
SQL> alter pluggable database open;
alter pluggable database open
*
ERROR at line 1:
ORA-65000: missing or invalid pluggable database name


SQL> alter pluggable database pdb1 open;

Pluggable database altered.

SQL>

grid@ora12102 ~]$ lsnrctl services

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 16-JUN-2016 23:00:11

Copyright (c) 1991, 2014, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "CDB1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "CDB1XDB" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"D000" established:0 refused:0 current:0 max:1022 state:ready
DISPATCHER
(ADDRESS=(PROTOCOL=tcp)(HOST=ora12102)(PORT=9213))
Service "pdb1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
The command completed successfully
[grid@ora12102 ~]$


The two manually created services (NEW_APP1 and NEW_APP2) don't start up automatically.

SQL> exec dbms_service.start_service('NEW_APP1');
BEGIN dbms_service.start_service('NEW_APP1'); END;

*
ERROR at line 1:
ORA-44773: Cannot perform requested service operation.
ORA-06512: at "SYS.DBMS_SERVICE_ERR", line 63
ORA-06512: at "SYS.DBMS_SERVICE", line 395
ORA-06512: at line 1


SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options
[oracle@ora12102 Desktop]$ sqlplus sys/oracle@PDB1 as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Thu Jun 16 23:03:05 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> exec dbms_service.start_service('NEW_APP1');

PL/SQL procedure successfully completed.

SQL> exec dbms_service.start_service('NEW_APP2');

PL/SQL procedure successfully completed.

SQL>


[grid@ora12102 ~]$ lsnrctl services

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 16-JUN-2016 23:04:36

Copyright (c) 1991, 2014, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "CDB1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "CDB1XDB" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"D000" established:0 refused:0 current:0 max:1022 state:ready
DISPATCHER
(ADDRESS=(PROTOCOL=tcp)(HOST=ora12102)(PORT=9213))
Service "NEW_APP1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "NEW_APP2" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "pdb1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
The command completed successfully
[grid@ora12102 ~]$


When I attempted to start the custom service (which, per the previous post, was created in PDB1) while still connected to CDB$ROOT, the command failed. I had to connect to PDB1 to start the service.

SQL> create tablespace hemant ;                           

Tablespace created.

SQL> create user hemant identified by hemant default tablespace hemant quota unlimited on hemant;

User created.

SQL> grant create session to hemant;

Grant succeeded.

SQL>
[oracle@ora12102 Desktop]$ tnsping NEW_APP1

TNS Ping Utility for Linux: Version 12.1.0.2.0 - Production on 16-JUN-2016 23:09:00

Copyright (c) 1997, 2014, Oracle. All rights reserved.

Used parameter files:


Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = ora12102)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = NEW_APP1)))
OK (0 msec)
[oracle@ora12102 Desktop]$ tnsping NEW_APP2

TNS Ping Utility for Linux: Version 12.1.0.2.0 - Production on 16-JUN-2016 23:09:05

Copyright (c) 1997, 2014, Oracle. All rights reserved.

Used parameter files:


Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = ora12102)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = NEW_APP2)))
OK (0 msec)
[oracle@ora12102 Desktop]$
[oracle@ora12102 Desktop]$ sqlplus hemant/hemant@NEW_APP1

SQL*Plus: Release 12.1.0.2.0 Production on Thu Jun 16 23:09:29 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> show con_id

CON_ID
------------------------------
3
SQL> show con_name

CON_NAME
------------------------------
PDB1

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options
[oracle@ora12102 Desktop]$ sqlplus hemant/hemant@NEW_APP2

SQL*Plus: Release 12.1.0.2.0 Production on Thu Jun 16 23:09:48 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Last Successful login time: Thu Jun 16 2016 23:09:29 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> show con_id

CON_ID
------------------------------
3
SQL> show conn_name
SP2-0158: unknown SHOW option "conn_name"
SQL> show con_name

CON_NAME
------------------------------
PDB1
SQL>


The newly created user HEMANT belongs to the Database. The user is not tied to a Service. He can use either Service (whichever is running) to connect to the Database. But the two Services have two different TNSNAMES.ORA entries, differing only in the SERVICE_NAME specification.

Thus, the DBA could configure some application servers to use one service name and other application servers to use another service name.
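Based on the connect descriptors shown by tnsping above, the two tnsnames.ora entries would look something like this (a sketch; only the SERVICE_NAME differs):

NEW_APP1 =
  (DESCRIPTION =
    (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = ora12102)(PORT = 1521)))
    (CONNECT_DATA = (SERVICE_NAME = NEW_APP1))
  )

NEW_APP2 =
  (DESCRIPTION =
    (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = ora12102)(PORT = 1521)))
    (CONNECT_DATA = (SERVICE_NAME = NEW_APP2))
  )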

An example of such a configuration is where the first set of application servers could be for Finance Applications called "FINANCE" and the second set of servers could be for HR Applications (in the *same* database) called "HR".  Here I create the two services but start only the FINANCE service.

[oracle@ora12102 Desktop]$ sqlplus system/oracle@pdb1 

SQL*Plus: Release 12.1.0.2.0 Production on Thu Jun 16 23:13:55 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> exec dbms_service.create_service('FINANCE','FINANCE');

PL/SQL procedure successfully completed.

SQL> exec dbms_service.start_service('FINANCE');

PL/SQL procedure successfully completed.

SQL> exec dbms_service.create_service('HR','HR');

PL/SQL procedure successfully completed.

SQL>

[grid@ora12102 ~]$ lsnrctl services

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 16-JUN-2016 23:15:45

Copyright (c) 1991, 2014, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "CDB1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "CDB1XDB" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"D000" established:0 refused:0 current:0 max:1022 state:ready
DISPATCHER
(ADDRESS=(PROTOCOL=tcp)(HOST=ora12102)(PORT=9213))
Service "FINANCE" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "NEW_APP1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "NEW_APP2" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "pdb1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
The command completed successfully
[grid@ora12102 ~]$


Now the user should be able to connect to FINANCE, but not to HR.

[oracle@ora12102 Desktop]$ tnsping FINANCE

TNS Ping Utility for Linux: Version 12.1.0.2.0 - Production on 16-JUN-2016 23:18:16

Copyright (c) 1997, 2014, Oracle. All rights reserved.

Used parameter files:


Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = ora12102)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = FINANCE)))
OK (0 msec)
[oracle@ora12102 Desktop]$ tnsping HR

TNS Ping Utility for Linux: Version 12.1.0.2.0 - Production on 16-JUN-2016 23:18:18

Copyright (c) 1997, 2014, Oracle. All rights reserved.

Used parameter files:


Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = ora12102)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = HR)))
OK (0 msec)
[oracle@ora12102 Desktop]$
[oracle@ora12102 Desktop]$ sqlplus hemant/hemant@FINANCE

SQL*Plus: Release 12.1.0.2.0 Production on Thu Jun 16 23:18:57 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Last Successful login time: Thu Jun 16 2016 23:09:48 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options
[oracle@ora12102 Desktop]$
[oracle@ora12102 Desktop]$ sqlplus hemant/hemant@HR

SQL*Plus: Release 12.1.0.2.0 Production on Thu Jun 16 23:19:23 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

ERROR:
ORA-12514: TNS:listener does not currently know of service requested in connect
descriptor


Enter user-name:


The user could connect to FINANCE but the request to HR returned ORA-12514.
(Notice how "tnsping HR" is successful but the connection is not? That is because tnsping only tests whether the listener is running; it does not test whether the database instance and service are both running.)

So, using multiple services, the DBA can "provision" the same database to multiple applications.  The entry-point is the SERVICE_NAME,  not the USERNAME.   Users and Services are independent of each other.

.
.
.
Categories: DBA Blogs

ORA-00020: maximum number of processes exceeded

Learn DB Concepts with me... - Thu, 2016-06-16 09:45


ORA-00020: maximum number of processes



This error occurs when the total number of sessions connecting to the Oracle database has exceeded the max limit set in the parameter file. The simplest way to overcome this error is to reset the max processes value to something greater than the existing one. Here is how to do it.
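Before raising anything, it is worth checking how close you actually are to the limits whenever you can still get a SYSDBA session (a quick check; v$resource_limit tracks both processes and sessions):

select resource_name, current_utilization, max_utilization, limit_value
from v$resource_limit
where resource_name in ('processes', 'sessions');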
 



oracle@LINUX201:[~] $ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Mon Jun 13 10:20:26 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

ERROR:
ORA-00020: maximum number of processes (500) exceeded




oracle@LINUX201:[~] $ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Mon Jun 13 10:23:42 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

ERROR:
ORA-00020: maximum number of processes (500) exceeded


Enter user-name:

Disconnected from ORACLE

I wasn't able to get into the Oracle database to kill some database sessions, so I tried to kill a few sessions at the OS level to make my way into the DB.

oracle@LINUX201:[~] $ ps -ef|grep oracle
.
.
.
.
.
.
.

oracle   64373     1  0 Jun08 ?        00:00:03 oracleQPDEV (LOCAL=NO)
oracle   64540     1  0 Jun08 ?        00:00:00 oracleQPDEV (LOCAL=NO)
oracle   64554     1  0 Jun08 ?        00:00:01 oracleQPDEV (LOCAL=NO)
oracle   64633     1  0 Jun08 ?        00:00:03 oracleQPDEV (LOCAL=NO)
oracle   64637     1  0 Jun08 ?        00:00:00 oracleQPDEV (LOCAL=NO)
.
.
.
oracle   65186     1  0 Jun08 ?        00:00:04 oracleQPDEV (LOCAL=NO)
oracle   65190     1  0 Jun08 ?        00:00:00 oracleQPDEV (LOCAL=NO)
oracle   65192     1  0 Jun08 ?        00:00:01 oracleQPDEV (LOCAL=NO)
oracle   65202     1  0 Jun08 ?        00:00:00 oracleQPDEV (LOCAL=NO)
oracle   65206     1  0 Jun08 ?        00:00:02 oracleQPDEV (LOCAL=NO)
root     65407 65381  0 May16 pts/2    00:00:00 sudo -u oracle -i
oracle   65408 65407  0 May16 pts/2    00:00:00 -bash
oracle   65458 65408  0 May16 pts/2    00:00:00 sqlplus
oracle   65459 65458  0 May16 ?        00:00:00 oracleQPDEV (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
oracle   65518     1  0 Jun08 ?        00:00:00 oracleQPDEV (LOCAL=NO)
oracle   65520     1  0 Jun08 ?        00:00:02 oracleQPDEV (LOCAL=NO)
oracle   65534     1  0 Jun08 ?        00:00:00 oracleQPDEV (LOCAL=NO)

oracle@LINUX201:[~] $ kill -9 64785
oracle@LINUX201:[~] $ sqlplus "/as sysdba"

SQL*Plus: Release 11.2.0.4.0 Production on Mon Jun 13 10:26:25 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

ERROR:
ORA-00020: maximum number of processes (500) exceeded


Enter user-name: ^C

Killing a few processes on Linux:

oracle@LINUX201:[~] $ kill -9 65192 65085 64785 64777 64655 64653 64637


oracle@LINUX201:[~] $ ps -ef|grep 65192 65085 64785 64777 64655 64653 64637

.
.
.
.
oracle   50258     1  0 Jun07 ?        00:00:04 oracleQPDEV (LOCAL=NO)
oracle   50264     1  0 Jun07 ?        00:00:03 oracleQPDEV (LOCAL=NO)
oracle   50268     1  0 Jun07 ?        00:00:02 oracleQPDEV (LOCAL=NO)
.
.
.
oracle   64554     1  0 Jun08 ?        00:00:01 oracleQPDEV (LOCAL=NO)
oracle   64633     1  0 Jun08 ?        00:00:03 oracleQPDEV (LOCAL=NO)
oracle   65186     1  0 Jun08 ?        00:00:04 oracleQPDEV (LOCAL=NO)
oracle   65190     1  0 Jun08 ?        00:00:00 oracleQPDEV (LOCAL=NO)
oracle   65202     1  0 Jun08 ?        00:00:00 oracleQPDEV (LOCAL=NO)
oracle   65206     1  0 Jun08 ?        00:00:02 oracleQPDEV (LOCAL=NO)
.
.
.

oracle@LINUX201:[~] $ sqlplus "/as sysdba"

SQL*Plus: Release 11.2.0.4.0 Production on Mon Jun 13 10:30:07 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> show parameter process;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
aq_tm_processes                      integer     1
cell_offload_processing              boolean     TRUE
db_writer_processes                  integer     1
gcs_server_processes                 integer     0
global_txn_processes                 integer     1
job_queue_processes                  integer     1000
log_archive_max_processes            integer     4
processes                            integer     500
processor_group_name                 string


Now raise the processes limit to a greater value:

SQL> alter system set processes=1200 scope=spfile;

System altered.

SQL>  show parameter process;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
aq_tm_processes                      integer     1
cell_offload_processing              boolean     TRUE
db_writer_processes                  integer     1
gcs_server_processes                 integer     0
global_txn_processes                 integer     1
job_queue_processes                  integer     1000
log_archive_max_processes            integer     4
processes                            integer     500
processor_group_name                 string
SQL> select name,open_mode from v$database;

NAME      OPEN_MODE
--------- --------------------
QPDEV     READ WRITE

The change was made with scope=spfile, which is why show parameter still reports 500; the instance needs a restart for the new value to take effect.
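
Before bouncing the instance you can verify that the pending value actually landed in the spfile. A quick sanity check against the standard v$spparameter view, which reads the spfile rather than the running instance:

SQL> select name, value from v$spparameter where name = 'processes';

This should report 1200 even while show parameter still shows 500.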


SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup;
ORACLE instance started.

Total System Global Area 3206836224 bytes
Fixed Size                  2257520 bytes
Variable Size            1275071888 bytes
Database Buffers         1912602624 bytes
Redo Buffers               16904192 bytes
Database mounted.
Database opened.

SQL> show parameter process

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
aq_tm_processes                      integer     1
cell_offload_processing              boolean     TRUE
db_writer_processes                  integer     1
gcs_server_processes                 integer     0
global_txn_processes                 integer     1
job_queue_processes                  integer     1000
log_archive_max_processes            integer     4
processes                            integer     1200
processor_group_name                 string

SQL>
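
To avoid hitting ORA-00020 again, it is worth watching how close the instance runs to the new ceiling. The standard v$resource_limit view tracks current and high-water utilization, so a query like the following can be checked ad hoc or scheduled:

SQL> select resource_name, current_utilization, max_utilization, limit_value
     from v$resource_limit
     where resource_name in ('processes', 'sessions');

If max_utilization keeps creeping toward limit_value, either the application is leaking connections or the limit is still too low.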
Categories: DBA Blogs

Oracle Partner Community - EPM BI Big Data Bulletin June 2016

When you prototype and pilot a Big Data Hadoop solution, you may start on the cloud or a cluster of commodity servers.  However, I have heard from many partners that the transition to production...

We share our skills to maximize your revenue!
Categories: DBA Blogs

database option - backup and restore

Pat Shuff - Thu, 2016-06-16 02:07
Backup and recovery abilities are arguably the most critical skills required of a database administrator. Recovery Manager (RMAN) is Oracle’s standard backup and recovery tool; every Oracle DBA should be familiar with utilizing RMAN. Some DBAs use alternate tools since RMAN is an Oracle specific tool to back up data in a database. Alternatives include Veritas Backup Exec, CommVault Simpana, Legato NetWorker, EMC and NetApp tools, and other packages. I am not going to list books and software packages in this blog. A search on Safari Books for rman returns 9 books published in 2016, 16 in 2015, and 20 in 2014. There are also a ton of blogs, so I suggest that you go to your favorite blog and search for rman or backup. There are hundreds of different ways to back up a database and restore it, as well as to optimize how much space the database takes up in cold storage.

The important things to consider when you look at backup and recovery are:

  • full or incremental backups
  • backing up to disk, tape, or cloud
  • replicating data to another site with disk mirroring, Data Guard, or Golden Gate
  • hot backup or cold backup along with middleware and file system backup synchronization
  • recovery point objective and recovery time objective
  • compression, deduplication, and encryption of data at rest
  • backup windows and restore windows

It is important to note that when you purchase DBaaS, independent of any edition, you get backup done for you based on the options you select at creation. When you create a database you can opt for no backups, local backups, or full backups. The no-backup option can be used for development and test instances; we might just want a sandbox to play in and don't care about keeping the data, because if we lose it for any reason we can recreate it from our production system. When you select local backups you get incremental backups daily at 2am and a full backup Sunday morning at 2am. This gives you a seven day window for recovery so that you can restore data back to your last full backup plus the daily incrementals. These backups go to the /u03 file system on the instance that you create. Nothing is copied off the instance, so a complete failure of the system could result in data loss. The full backup option does a daily incremental to /u03 and a full backup Sunday morning at 2am to /u03 as well; the backup file is also copied to the Cloud Object Storage Container that you created prior to creating the database. When you create the database you specify how many days of backups to retain. If, for example, you ask for 90 days of backups you get 13 full backups (one per week) copied to object storage. If you have a Java Cloud Service connected to this database, the Java configuration and war files are copied to this same area as well. The backup is done for you automatically and can be reconfigured and run manually using the cloud specific commands. Refer to the Using Oracle Database Cloud - Database as a Service documentation (chapter 6) to understand how to back up using the cloud commands and keep backups in sync with the automated utilities, rather than running rman manually.

You can generate a backup before you make a change to the database with the following command:

sudo /var/opt/oracle/bkup_api/bkup_api bkup_start

This command must be executed from the opc account, not the oracle account, because it needs root privileges to store data and update the logs associated with the cron jobs. To restore the database from the latest backup, execute:

sudo dbaascli orec --args -latest

You can list the database backups that exist with:

sudo dbaascli orec --args -list

To restore from a specific backup, execute:

sudo dbaascli orec --args -pitr backup-tag

where backup-tag is the name listed by the -list command.

The dbaascli command shuts down the database, extracts and restores configuration files, prepares for recovery, performs the recovery, and restarts the database instance afterwards. You can still use the rman utility directly to restore individual tables or tablespaces as usual, but the tools above manage the rman repository for you for scheduled backups, full backups, and restores.
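
For instance, a conventional RMAN tablespace restore still works underneath all of this tooling. A minimal sketch, assuming a tablespace named USERS and a usable backup of it:

rman target /

run {
  sql 'alter tablespace users offline immediate';
  restore tablespace users;
  recover tablespace users;
  sql 'alter tablespace users online';
}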

One of the nice features of RMAN is that you can duplicate a database to an alternate location and restore from an alternate location. For example, you can take a backup of your on premise database and write it to the Oracle Object Cloud Service. You can then create a database in the cloud using DBaaS and load it from that backup in cloud object storage. You can also go further: provision the cloud database, configure it as a Data Guard replica, back up your on premise instance, restore it on the cloud instance, then configure log shipping and start the Data Guard service on both instances. We will cover this in an upcoming blog.

Using DBaaS does get rid of the discussion of snap mirror backups. This is where you take the tablespace .dbf files, shut down the database, copy the files to another database, and ingest them into the second database. Many users like this method for backup and recovery because it works for MySQL and PostgreSQL. It is not a recommended backup and restore mechanism for an Oracle database: synchronization issues can result if changes are being made while data is partitioned across multiple dbf files. Tools like EMC or NetApp mirroring software work well with VMware for cloning data between data centers, but this technology does not extend to the cloud, and data replication is more difficult if the source and target are from different hardware vendors. The Oracle database also has problems ingesting this data at times and fails to start up from mirror instances if file consistency is not maintained between the system.dbf, user.dbf, and other tablespace files.

Data compression is also a tricky issue. The Oracle database has a compression option that it uses to compress data to minimize disk usage in the tablespace files. If you take an already compressed dbf file and try to compress it again, you typically just consume excess processor cycles, and it can take up to eight times as long to restore the doubly compressed data. It is important to look not only at your backup window but also at your restore window: it might take an hour to back up and compress your data but up to 8 hours to restore it. One of the key performance benefits that Oracle brings to the table is using ZFS storage as a back end for a database. Since the database knows what the ZFS storage appliance can and can't do, it can offload the compression processing and algorithms to the storage appliance. Unfortunately, this does not work for other storage vendors. The cloud database instance uses this to its advantage, so data is automatically compressed when copies are made to object storage; the compression does not take processing power from the database engine but is offloaded to the storage engine.

You can customize the backup definition and parameters. The files are located in /var/opt/oracle/ocde/assistants/bkup/, and the bkup.cfg file defines how backup operates. Note that there is a different set of command line tools if your database is a RAC configuration, since you have to reconfigure files on both systems that comprise your cluster, but both use this directory and file to configure the backup process, timing, and procedures.

In summary, database backup is part of the database as a service at all levels. The foundation tool is RMAN. Command line api tools allow you to back up and restore without having to learn RMAN, and if you want to drill down into the depths and bowels of RMAN you can do that too. The backups are automatically encrypted using the wallet that we talked about a couple of days ago. You can also use Enterprise Manager to run RMAN for your cloud service just like you do with your on premise database instances. The important thing to remember is that you don't need to learn RMAN unless you want to. If you deploy a database on infrastructure as a service you do have to learn RMAN, because backup is not automated or configured for you. If you use Amazon RDS, backups are done by doing a disk clone to another zone. You cannot set up Data Guard replication to the RDS instance because you don't have file system access to copy the change logs from your on premise database to the RDS storage. You can do this with EC2 and S3, but you lose the automated configurations and services that you get with database as a service. You will also need to purchase Advanced Security to do secure backups to IaaS instances from all cloud vendors, but not with DBaaS or PaaS from Oracle.


ORACLE AND SNAPCHAT MEASURE IMPACT OF SNAP ADS ON SALES IN STORES

Oracle Press Releases - Wed, 2016-06-15 18:38
Press Release
ORACLE AND SNAPCHAT MEASURE IMPACT OF SNAP ADS ON SALES IN STORES
New Measurement Research for Snapchat Advertisers Finds 92% of Ad Campaigns Saw Positive Offline Sales Increases

Redwood Shores, Calif.—Jun 15, 2016

Oracle and Snapchat today announced a partnership to measure the impact of digital advertising in the physical world. By teaming together, Oracle Data Cloud and Snapchat measure incremental store sales resulting from marketing campaigns run on Snapchat. By analyzing the impact of campaigns on in-store sales, Oracle Data Cloud will help consumer packaged goods (CPG) advertisers quantify and improve their ongoing marketing efforts on Snapchat.
 
Oracle Data Cloud also released new research analyzing the in-store sales lift from 12 CPG campaigns run on Snapchat for leading cosmetic, personal care, cleaning, packaged food, and beverage brands. Among the findings:

  • 92 percent of the Snapchat ad campaigns drove a positive lift in in-store sales
  • Snapchat ad campaigns exceeded Oracle Data Cloud norms on all key metrics, with sales lift being the primary driver.
 
“We are delighted to be working with one of the fastest-growing platforms in history to help Snapchat advertisers understand and improve the effectiveness of their campaigns,” said Eric Roza, Senior Vice President of the Oracle Data Cloud. “By combining offline sales data with cutting-edge analytics, we can help Snapchat advertisers maximize their return on investment and drive higher profits.”
 
“We’ve been listening closely to advertisers and delivering the data they need,” said Clement Xue, Global Head of Revenue Operations at Snapchat. “This research shows that Snapchat is more effective at driving sales for advertisers, and we’re delighted to offer this measurement solution going forward.”
 
Oracle Data Cloud is the largest global data-as-a-service solution, with access to more than $3 trillion in consumer transaction data, 2 billion global consumer profiles, and 1,500+ data partners. Oracle Data Cloud integrates that data with more than 200 major media companies, including publisher exchanges, ad networks, DSPs, DMPs, and agency trading desks.
 
Contact Info
Erik Kingham
Oracle
650-506-8298
erik.kingham@oracle.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 


5 Days Expert Oracle Security Training In Paris - 20th June 2016

Pete Finnigan - Wed, 2016-06-15 17:35

I will be teaching my 5-day Oracle security class in Paris from 20th June to 24th June with Oracle University at their offices and training suite. Details of the Oracle Security Event and how to register on Oracles....[Read More]

Posted by Pete On 06/06/16 At 09:59 AM

Categories: Security Blogs

Amis Conference June 2nd and 3rd

Pete Finnigan - Wed, 2016-06-15 17:35

I will be at the Amis conference next Friday in Leiden, not far from Amsterdam in Holland. The conference is held over two days, June 2nd and 3rd, but I will be there just on the Friday due to other....[Read More]

Posted by Pete On 26/05/16 At 11:28 AM

Categories: Security Blogs

Are Zero Days or Bugs Fixed by CPU The Worst?

Pete Finnigan - Wed, 2016-06-15 17:35

I spoke yesterday about compartmentalising Oracle Security and one element that comes out of this is the need to consider what you are trying to achieve; secure actual data and also secure the platform. In general applying security patches will....[Read More]

Posted by Pete On 25/05/16 At 12:51 PM

Categories: Security Blogs
