DBA Blogs

Create second Ora 12c environment on same AWS server, now CDB/PDB type

Tom Kyte - Thu, 2016-06-23 15:09
Hi, I have an Oracle 12c running on AWS on CentOS Linux rel 7.1. The database has been installed as a standalone non-CDB database. This DEV database is used for development. I have to install a second environment, for QA, on the same server. This ...
Categories: DBA Blogs

Oracle Partner Community - SOA and BPM Suite 12c Resources

SOA Suite 12c and BPM Suite 12c bring exciting new features around the key themes of Developer Productivity and Industrial SOA or Integration & Process Cloud Services. To support & grow...

We share our skills to maximize your revenue!
Categories: DBA Blogs

With a Modern Storage Infrastructure, Companies Must Find an Excellent Data Management Tool

Kubilay Çilkara - Tue, 2016-06-21 15:54
One of the more “weighty” questions within the IT world is in reference to the value of each company’s particular data.  Many wonder what the true value of protected data is in the long-run, eventually view it as a cost center where money continuously gets used up.  In order to make data work in the favor of a business and to help generate some income, companies must get smarter with their approaches to business and stop looking at their data this way!

The majority of companies out there admit to wanting a Big Data system as a part of their layout, but ultimately have nothing to show for these desires.  There have been many failed implementations despite lots of money spent on resources to help.  You might say that many businesses have the best intentions when comes to incorporating a data management plan, yet intention without action is arguably useless at the end of the day.  You might wonder why companies fail to follow through on their desires for a Big Data system.  Well, the answer is really quite simple.  The amount of data out there is staggering, and trying to manage it all would be like trying to boil the vast ocean.  You can imagine how difficult-if not impossible-that would be! 

Many question if the cloud would be a good solution, and if everyone should just get started on moving their data up there.  Or perhaps virtualization?  Would that be the answer?  These two tools are valuable, but the question stands on whether or not they are the best way to utilize company resources.  What companies really need is a tool that will aid in organizing data into appropriate categories.  In other words, this tool would be able to filter out what data should be implemented in order to create the most value for the company.

As stated above, the majority of corporations view their data as simply a cost center, aiding in the draining of company resources.  As human nature would have it, a lot of times the attitudes in reference to existing data reflects the heart of "out with the old, in with the new," and new projects that allow for more capacity or faster processing take precedence in time and funding.  It is as if the old data is a bothersome reality, always needing a little extra attention and focus, demanding greater capacity, and continuously needing backups and protection.  What companies forget to keep at the forefront of their minds however, is the fact that taking care of this old and “bothersome” data is a great way to clear up space on the primary storage, and who doesn’t appreciate that?  It also aids in generating a faster backup process (with a reduced backup size), which is again, a bonus for any company!  It should not be so quickly forgotten that this nagging and consistently needy data is essentially the ESSENCE of the company.  Would you believe that this “cost center” is also where it gains some of its income?  If companies kept these points in their minds, we would not be seeing such poor practices when it comes to data management.  One thing is clear however, and that is the point that initiative with managing data can be difficult, and many view the overwhelming ocean of growing data and the daunting task of trying to manage it as too great a calling to handle.
 
However, IT departments and administrators must bear in mind that there are tools out there to help them classify and organize data, which ultimately will be their proverbial life-boat when it comes time to accepting the challenge of managing data.  Look at it this way.  Let's say you are trying to find every single chemical engineer on earth.  Does that sound a bit overwhelming?  The question is, how would you go about doing this?  After all, there are over 7 billion people on this planet!  Where do you begin?  Do you profile EVERY person?  Of course not.  What you would likely do, in order to simplify this complex process, is organizing people into broad groups, maybe by profession.  After that, you would probably do something like research what particular people in that profession graduated with a degree in engineering.  Though basic, you can use these same principles when narrowing down data and trying to sort through the piles of information in a company.  One must use their intelligence and business knowledge to better grasp corporate data, and in return, this will help companies benefit from their data assets.  In order to do this, there are tools available to help administrators better comprehend and manage their data. 

These tools exist to give IT departments the upper hand in managing their spheres.  Can you imagine trying to manage large amounts of company data on your own?  Luckily, we don’t have to do such things, and we live in an age in which a variety of solutions are available to help companies not only survive, but thrive.  These tools are out there to empower IT teams to successfully weather the storms and changing tides of data types and volumes.  After using such tools, companies often experience improved storage capacity, less costs associated with backup and data protection -- and let’s not forget compliance!  Compliance is a hot topic, and with the help of appropriate data management solutions, companies will be guaranteed to meet the various regulatory compliance rules in today’s business world.

In closing, it is important to note that more networking tools will not do anything close to what the appropriate data management solutions can do.  Companies should be looking for solutions that can help them as well with tracking, classifying, and organizing file systems over their whole lifespan.  When the right tools get into the right hands, IT managers are better able to do their jobs! 


About the author: Jason Zhang is the product marketing person for Rocket Software's Backup, Storage, and Cloud solutions.

Categories: DBA Blogs

Time Slider Bug In Solaris 11: Snapshots Created Only For Leaf Datasets

Pythian Group - Tue, 2016-06-21 14:06

Time Slider is a feature in Solaris that allows you to open past versions of your files.

It is implemented via a service which creates automatic ZFS snapshots every 15 minutes (frequent), hourly, daily, weekly and monthly. By default it retains only 3 frequent, 23 hourly, 6 daily, 3 weekly and 12 monthly snapshots.

I am using Time Slider on several Solaris 11.x servers and I found the same problem on all of them – it doesn’t create any automatic snapshots for some datasets.
For example, it doesn’t create any snapshots for the Solaris root dataset rpool/ROOT/solaris. However it creates snapshots for the leaf dataset rpool/ROOT/solaris/var. The rpool/ROOT dataset also doesn’t have any automatic snapshots, but rpool itself has snapshots, so it’s not easy to understand what is happening.

I searched for this problem and found that other people have noticed it as well.

There is a thread about it in the Oracle Community:

Solaris 11, 11.1, 11.2 and 11.3: Possible Bug in Time-Slider/ZFS Auto-Snapshot

The last message mentions the following bug in My Oracle Support:

Bug 15775093 : SUNBT7148449 time-slider only taking snapshots of leaf datasets

This bug has been created on 24-Feb-2012 but still isn’t fixed.

After more searching, I found this issue in the bug tracker of the illumos project:

Bug #1013 : time-slider fails to take snapshots on complex dataset/snapshot configuration

The original poster (OP) has encountered a problem with a complex pool configuration with many nested datasets having different values for the com.sun:auto-snapshot* properties.
He has dug into the Time Slider Python code and has proposed a change, which has been blindly accepted without proper testing and has ended up in Solaris 11.
Unfortunately, this change has introduced a serious bug which has destroyed the logic for creating recursive snapshots where they are possible.

Let me quickly explain how this is supposed to work.
If a pool has com.sun:auto-snapshot=true for the main dataset and all child datasets inherit this property, Time Slider can create a recursive snapshot for the main dataset and skip all child datasets, because they should already have the same snapshot.
However, if any child dataset has com.sun:auto-snapshot=false, Time Slider can no longer do this.
In this case the intended logic is to create recursive snapshots for all sub-trees which don’t have any excluded children and then create non-recursive snapshots for the remaining datasets which also have com.sun:auto-snapshot=true.
The algorithm is building separate lists of datasets for recursive snapshots and for single snapshots.

Here is an excerpt from /usr/share/time-slider/lib/time_slider/zfs.py:

        # Now figure out what can be recursively snapshotted and what
        # must be singly snapshotted. Single snapshot restrictions apply
        # to those datasets who have a child in the excluded list.
        # 'included' is sorted in reverse alphabetical order.
        for datasetname in included:
            excludedchild = False
            idx = bisect_right(everything, datasetname)
            children = [name for name in everything[idx:] if \
                        name.find(datasetname) == 0]
            for child in children:
                idx = bisect_left(excluded, child)
                if idx < len(excluded) and excluded[idx] == child:
                    excludedchild = True
                    single.append(datasetname)
                    break
            if excludedchild == False:
                # We want recursive list sorted in alphabetical order
                # so insert instead of append to the list.
                recursive.append (datasetname)

This part is the same in all versions of Solaris 11 (from 11-11 to 11.3, which is currently the latest).
If we look at the comment above the last line, it says that it should do “insert instead of append to the list”.
This is because the included list is sorted in reverse alphabetical order when it is built.
And this is the exact line that has been modified by the OP. When append is used instead of insert the recursive list becomes sorted in reverse alphabetical order as well.
The next part of the code is traversing the recursive list and is trying to skip all child datasets which already have their parent marked for recursive snapshot:

        for datasetname in recursive:
            parts = datasetname.rsplit('/', 1)
            parent = parts[0]
            if parent == datasetname:
                # Root filesystem of the Zpool, so
                # this can't be inherited and must be
                # set locally.
                finalrecursive.append(datasetname)
                continue
            idx = bisect_right(recursive, parent)
            if len(recursive) > 0 and \
               recursive[idx-1] == parent:
                # Parent already marked for recursive snapshot: so skip
                continue
            else:
                finalrecursive.append(datasetname)

This code heavily relies on the sort order and fails to do its job when the list is sorted in reverse order.
What happens is that all datasets remain in the list with child datasets being placed before their parents.
Then the code tries to create recursive snapshot for each of these datasets.
The operation is successful for the leaf datasets, but fails for the parent datasets because their children already have a snapshot with the same name.
The snapshots are also successful for the datasets in the single list (ones that have excluded children).
The rpool/dump and rpool/swap volumes have com.sun:auto-snapshot=false. That’s why rpool has snapshots.

Luckily, the original code was posted in the same thread so I just reverted the change:

            if excludedchild == False:
                # We want recursive list sorted in alphabetical order
                # so insert instead of append to the list.
                recursive.insert(0, datasetname)

After doing this, Time Slider immediately started creating snapshots for all datasets that have com.sun:auto-snapshot=true, including rpool/ROOT and rpool/ROOT/solaris.
So far I haven’t found any issue and snapshots work as expected.
There may be some issues with very complex structure like the OP had, but his change has completely destroyed the clever algorithm for doing recursive snapshots where they are possible.

Final Thoughts.

It is very strange that Oracle hasn’t paid attention to this bug and has left it hanging for more than 4 years. Maybe they consider Time Slider a non-important Desktop feature. However I think that it’s fairly useful for servers as well.

The solution is simple – a single line change, but it will be much better if this is resolved in a future Solaris 11.3 SRU. Until then I hope that my blog post will be useful for anyone who is trying to figure out why the automatic snapshots are not working as intended.

Categories: DBA Blogs

ASMCMD&gt: A Better DU, Version 2

Pythian Group - Tue, 2016-06-21 13:03

A while ago, I posted a better “du” for asmcmd . Since then, Oracle 12cR2 beta has been released but it seems that our poor old “du” will not be improved.

I then wrote a better “better du for asmcmd” with some cool new features compared to the previous one which was quite primitive.

In this second version you will find :

  • No need to set up your environment, asmdu will do it for you
  • If no parameter is passed to the script, asmdu will show you a summary of all the diskgroups : asmdu_1
  • If a parameter (a diskgroup) is passed to the script, asmdu will show you a summary of the diskgroup size with its filling rate and the list of the directories it contains with their sizes :asmdu_2Note : you can quickly see in this screenshot that “DB9” consumes the most space in the FRA diskgroup; it is perhaps worth to have a closer look
  • A list of all running instances on the server is now shown on top of the asmdu output; I found that handy to have that list here
  • I also put some colored thresholds (red, yellow and green) to be able to quickly see which diskgroup has a space issue; you can modify it easily in the script :
#
# Colored thresholds (Red, Yellow, Green)
#
 CRITICAL=90
 WARNING=75
  • The only pre-requisite is that oraenv has to work

Here is the code :

#!/bin/bash
# Fred Denis -- denis@pythian.com -- 2016
#
# - If no parameter specified, show a du of each DiskGroup
# - If a parameter, print a du of each subdirectory
#

D=$1

#
# Colored thresholds (Red, Yellow, Green)
#
 CRITICAL=90
 WARNING=75

#
# Set the ASM env
#
ORACLE_SID=`ps -ef | grep pmon | grep asm | awk '{print $NF}' | sed s'/asm_pmon_//' | egrep "^[+]"`
export ORAENV_ASK=NO
. oraenv > /dev/null 2>&1

#
# A quick list of what is running on the server
#
ps -ef | grep pmon | grep -v grep | awk '{print $NF}' | sed s'/.*_pmon_//' | egrep "^([+]|[Aa-Zz])" | sort | awk -v H="`hostname -s`" 'BEGIN {printf("%s", "Databases on " H " : ")} { printf("%s, ", $0)} END{printf("\n")}' | sed s'/, $//'

#
# Manage parameters
#
if [[ -z $D ]]
then # No directory provided, will check all the DG
 DG=`asmcmd lsdg | grep -v State | awk '{print $NF}' | sed s'/\///'`
 SUBDIR="No" # Do not show the subdirectories details if no directory is specified
else
 DG=`echo $D | sed s'/\/.*$//g'`
fi

#
# A header
#
printf "\n%25s%16s%16s%14s" "DiskGroup" "Total_MB" "Free_MB" "% Free"
printf "\n%25s%16s%16s%14s\n" "---------" "--------" "-------" "------"

#
# Show DG info
#
for X in ${DG}
do
 asmcmd lsdg ${X} | tail -1 |\
 awk -v DG="$X" -v W="$WARNING" -v C="$CRITICAL" '\
 BEGIN \
 {COLOR_BEGIN = "\033[1;" ;
 COLOR_END = "\033[m" ;
 RED = COLOR_BEGIN"31m" ;
 GREEN = COLOR_BEGIN"32m" ;
 YELLOW = COLOR_BEGIN"33m" ;
 COLOR = GREEN ;
 }
 { FREE = sprintf("%12d", $8/$7*100) ;
 if ((100-FREE) > W) {COLOR=YELLOW ;}
 if ((100-FREE) > C) {COLOR=RED ;}
 printf("%25s%16s%16s%s\n", DG, $7, $8, COLOR FREE COLOR_END) ; }'
done
printf "\n"

#
# Subdirs info
#
if [ -z ${SUBDIR} ]
then
(for DIR in `asmcmd ls ${D}`
do
 echo ${DIR} `asmcmd du ${D}/${DIR} | tail -1`
done) | awk -v D="$D" ' BEGIN { printf("\n\t\t%40s\n\n", D " subdirectories size") ;
 printf("%25s%16s%16s\n", "Subdir", "Used MB", "Mirror MB") ;
 printf("%25s%16s%16s\n", "------", "-------", "---------") ;}
 {
 printf("%25s%16s%16s\n", $1, $2, $3) ;
 use += $2 ;
 mir += $3 ;
 }
 END { printf("\n\n%25s%16s%16s\n", "------", "-------", "---------") ;
 printf("%25s%16s%16s\n\n", "Total", use, mir) ;} '
fi


#************************************************************************#
#* E N D O F S O U R C E *#
#************************************************************************#

We use it a lot in my team and found no issue with the script so far. Let me know if you find one and enjoy!

 

Categories: DBA Blogs

Oracle Mobile Application Accelerator (MAX): Constructing Mobile Apps for Business

As you may already know, Oracle Mobile Cloud Service 2.0 was released, and with this release we got a new capability known as Oracle Mobile Application Accelerator (MAX). This tool was built to...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Pester: Cannot bind argument to parameter ‘Actual’ because it is an empty string.

Matt Penny - Tue, 2016-06-21 03:23

I’m just getting started with Pester and I got this error

   Cannot bind argument to parameter 'Actual' because it is an empty string.
   at line: 18 in C:\Program Files\WindowsPowerShell\Modules\pester\3.3.5\Functions\Assertions\Be.ps1

The code I’m testing is very simple – it just separates a ‘Property Name’ and a ‘Property Value’.

So, when it’s working it does this:

get-HugoNameAndValue -FrontMatterLine "Weighting: 103"
DEBUG: 09:15:37.6806 Start: get-HugoNameAndValue
DEBUG: - FrontMatterLine=Weighting: 103
DEBUG: - get-HugoNameAndValue.ps1: line 5
DEBUG: $PositionOfFirstColon: 9
DEBUG: $PropertyName : {Weighting}
DEBUG: $PropertyValue : { 103}
DEBUG: $PropertyValue : {103}

PropertyName PropertyValue
------------ -------------
Weighting    103          

When I ran it from Pester I got this

GetHugoNameAndValue 06/21/2016 08:45:19 $ invoke-pester
Describing get-HugoNameAndValue
DEBUG: 08:45:56.3377 Start: get-HugoNameAndValue
DEBUG: - FrontMatterLine=Weighting: 103
DEBUG: - get-HugoNameAndValue.ps1: line 5
DEBUG: $PositionOfFirstColon: 9
DEBUG: $PropertyName : {Weighting}
DEBUG: $PropertyValue : { 103}
DEBUG: $PropertyValue : {103}
 [-] returns name and value 189ms
   Cannot bind argument to parameter 'Actual' because it is an empty string.
   at line: 18 in C:\Program Files\WindowsPowerShell\Modules\pester\3.3.5\Functions\Assertions\Be.ps1
Tests completed in 189ms
Passed: 0 Failed: 1 Skipped: 0 Pending: 0

My Pester code was:

$here = Split-Path -Parent $MyInvocation.MyCommand.Path
$sut = (Split-Path -Leaf $MyInvocation.MyCommand.Path).Replace(".Tests.", ".")
. "$here\$sut"

Describe "get-HugoNameAndValue" {
    It "returns name and value" {
        $Hugo = get-HugoNameAndValue -FrontMatterLine "Weighting: 103"
        $value = $Hugo.Value
        $value | Should Be '103'
    }
}

The problem here was simply that I’d got the name of the Property wrong. It was ‘PropertyName’ not just ‘Name’

So I changed the Pester

$here = Split-Path -Parent $MyInvocation.MyCommand.Path
$sut = (Split-Path -Leaf $MyInvocation.MyCommand.Path).Replace(".Tests.", ".")
. "$here\$sut"

Describe "get-HugoNameAndValue" {
    It "returns name and value" {
        $Hugo = get-HugoNameAndValue -FrontMatterLine "Weighting: 103"
        $value = $Hugo.PropertyValue
        $value | Should Be '103'
    }
}

….and then it worked

invoke-pester
Describing get-HugoNameAndValue
DEBUG: 09:22:21.2291 Start: get-HugoNameAndValue
DEBUG: - FrontMatterLine=Weighting: 103
DEBUG: - get-HugoNameAndValue.ps1: line 5
DEBUG: $PositionOfFirstColon: 9
DEBUG: $PropertyName : {Weighting}
DEBUG: $PropertyValue : { 103}
DEBUG: $PropertyValue : {103}
 [+] returns name and value 99ms
Tests completed in 99ms
Passed: 1 Failed: 0 Skipped: 0 Pending: 0

Categories: DBA Blogs

Links for 2016-06-20 [del.icio.us]

Categories: DBA Blogs

HOWTO solve any problem recursively, PL/SQL edition…

RDBMS Insight - Mon, 2016-06-20 17:47
PROCEDURE solve (my_problem IN varchar2) IS
BEGIN
  my_idea := have_great_idea (my_problem) ;
  my_code := start_coding (my_idea) ;
  IF i_hit_complications (my_idea)
  THEN 
    new_problem := the_complications (my_idea);
    solve (new_problem);
  ELSE
    NULL; --we will never get here
  END IF;
END solve;

This abuse of recursion was inspired by @ThePracticalDev !

Categories: DBA Blogs

Services -- 3 : Monitoring Usage of Custom Services

Hemant K Chitale - Mon, 2016-06-20 10:04
In my previous blog post, I had demonstrated a few custom services created and started with DBMS_SERVICE.

Let's look at a couple of examples of monitoring usage of these services.

[oracle@ora12102 Desktop]$ sqlplus system/oracle@PDB1

SQL*Plus: Release 12.1.0.2.0 Production on Mon Jun 20 22:51:08 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Last Successful login time: Thu Jun 16 2016 23:23:50 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> execute dbms_service.start_service('NEW_APP1');

PL/SQL procedure successfully completed.

SQL> execute dbms_service.start_service('FINANCE');

PL/SQL procedure successfully completed.

SQL> grant create table to hemant;

Grant succeeded.

SQL> grant select_Catalog_role to hemant;

Grant succeeded.

SQL>

[oracle@ora12102 Desktop]$ sqlplus hemant/hemant@NEW_APP1

SQL*Plus: Release 12.1.0.2.0 Production on Mon Jun 20 22:52:27 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Last Successful login time: Thu Jun 16 2016 23:28:01 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> create table obj_t1 tablespace hemant as select * from dba_objects;

Table created.

SQL> insert into obj_t1 select * from obj_t1;

90935 rows created.

SQL>

[oracle@ora12102 Desktop]$ sqlplus hemant/hemant@FINANCE

SQL*Plus: Release 12.1.0.2.0 Production on Mon Jun 20 22:53:54 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Last Successful login time: Mon Jun 20 2016 22:52:27 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> create table obj_t2_small tablespace hemant as select * from obj_T1 where rownum < 11;

Table created.

SQL>

SQL> show user
USER is "SYSTEM"
SQL> select sid,serial#, to_char(logon_time,'DD-MON HH24:MI:SS'), service_name
2 from v$session
3 where username = 'HEMANT'
4 order by logon_time
5 /

SID SERIAL# TO_CHAR(LOGON_TIME,'DD-M
---------- ---------- ------------------------
SERVICE_NAME
----------------------------------------------------------------
61 50587 20-JUN 22:52:27
NEW_APP1

76 43919 20-JUN 22:53:54
FINANCE


SQL>


Thus, we can see that V$SESSION tracks the SERVICE_NAME in use --- even though the USERNAME is the same in both sessions, the SERVICE_NAME is different.

SQL> col svc_name format a10
SQL> col stat_name format a25 trunc
SQL> select
2 con_id, service_name SVC_NAME, stat_name, value
3 from v$service_stats
4 where service_name in ('NEW_APP1','FINANCE')
5 and
6 (stat_name like 'DB%' or stat_name like '%block%' or stat_name like 'redo%')
7 order by 1,2,3
8 /

CON_ID SVC_NAME STAT_NAME VALUE
---------- ---------- ------------------------- ----------
3 FINANCE DB CPU 168973
3 FINANCE DB time 771742
3 FINANCE db block changes 653
3 FINANCE gc cr block receive time 0
3 FINANCE gc cr blocks received 0
3 FINANCE gc current block receive 0
3 FINANCE gc current blocks receive 0
3 FINANCE redo size 100484

CON_ID SVC_NAME STAT_NAME VALUE
---------- ---------- ------------------------- ----------
3 NEW_APP1 DB CPU 869867
3 NEW_APP1 DB time 17415363
3 NEW_APP1 db block changes 11101
3 NEW_APP1 gc cr block receive time 0
3 NEW_APP1 gc cr blocks received 0
3 NEW_APP1 gc current block receive 0
3 NEW_APP1 gc current blocks receive 0
3 NEW_APP1 redo size 25057520

16 rows selected.

SQL>


So, even some statistics (unfortunately, not all -- the last time I checked in 11.2) are reported at the Service Level.  Thus, I can see that the users of NEW_APP1 consumed more CPU and DB Time and generated more changes and redo than users of FINANCE !  (Obviously, V$SERVICE_STATS reports statistics from the beginning of the instance so you should either user StatsPack (I haven't verified StatsPack reporting of statistics by individual service) or AWR (if you have the Diagnostic Pack licence) or your own collection scripts to report statistics for a specific window of time).
.
.
.


Categories: DBA Blogs

What’s in a name? – “Brittany” edition

RDBMS Insight - Mon, 2016-06-20 07:46

In my last post, I loaded US SSA names data into my dev instance to play with. In this post, I’ll play around with it a bit and take a look at the name “Brittany” and all its variant spellings.

I found nearly 100 different spellings of “Brittany” in the US SSA data thanks to a handy regexp:

SELECT name nm, SUM(freq) FROM names 
 WHERE regexp_like(UPPER(name),'^BR(I|E|O|U|Y)[T]+[AEIOUY]*N[AEIOUY]+$' )
 AND sex='F'
GROUP BY name
ORDER BY SUM(freq) DESC;
NM				SUM(FREQ)
------------------------------ ----------
Brittany			   357159
Brittney			    81648
Britney 			    34182
Brittani			    11703
Britany 			     6291
Brittni 			     5985
Brittanie			     4725
Britni				     4315
Brittny 			     3584
Brittaney			     3280
...
Bryttnee			       10
Britttany				7
Brytanie				7
Brittanae				6
Bryttnii				6
...
Brittanii				5
Brittiana				5
 
91 rows selected.

The regexp isn’t perfect. It returns a few uncommon names which aren’t pronounced “Brittany”: “Brittiana”, “Brittiani”, “Britane”, “Brittina”, “Britanya”, “Brittine” – and one I’m not sure about, “Brittnae”. But on the other hand, it did let me discover that 7 “Britttany”s applied for SSNs in 1990. Yes, that’s “Britttany” with 3 “T”s.

Fortunately, all the “non-Brittanys” the regexp returns are quite uncommon and not even in the top 20. So the regexp will do for a graph of the top spellings. Let’s get the data by year and look at the percentage of girls in each year named Brittany/Brittney/Britney/Brittani:

WITH n AS (SELECT name nm, YEAR yr, sex, freq FROM names 
 WHERE regexp_like(UPPER(name),'^BR(I|E|O|U|Y)[T]+[AEIOUY]*N[AEIOUY]+$' )
 AND sex='F'),
y AS (SELECT  YEAR yr, sex, SUM(freq) tot FROM names GROUP BY YEAR, sex)
SELECT y.yr, 
decode(n.nm,'Brittany','Brittany', -- like Brittany Furlan
'Brittney','Brittney', -- like Brittney Griner
'Britney','Britney', -- like Britney Spears
'Brittani','Brittani', -- like Brittani Fulfer
'Other Brits') AS thename,
nvl(100*freq/tot,0) pct  FROM n, y 
WHERE  n.sex(+)=y.sex AND n.yr(+)=y.yr AND y.yr >= 1968
ORDER BY y.yr, nvl(n.nm,' ')

I graphed this in SQL Developer:
britts
From the graph it’s clear that “Brittany” is by far the most popular spelling, followed by “Brittney”. The sum of all Brittany-spellings peaked in 1989, but “Britney” has a sharp peak in 2000 – the year that singer Britney Spears released Oops I Did It Again, “one of the best-selling albums of all time” per Wikipedia.

This makes Brittany, however you spell it, a very early-90s-baby kind of name. “Brittany” was the #3 girls’ name in 1989, behind Jessica and Ashley, and was not nearly as popular in decades before or since. In subsequent posts I’ll look some more at names we can identify with specific decades.

Categories: DBA Blogs

Next Round Of ANZ “Let’s Talk Database” Events (July/August 2016)

Richard Foote - Mon, 2016-06-20 00:51
I’ll be presenting the next round of “Let’s Talk Database” events around Australia and NZ this winter in July/August 2016. These are free events but due to limited places have often “sold out” in the past, so booking early is recommended to avoid disappointment. All events run between 9:00am – 12:30pm and are followed by a networking lunch. We always have […]
Categories: DBA Blogs

Links for 2016-06-18 [del.icio.us]

Categories: DBA Blogs

An Eye Opener - Oracle Data Visualization

Making sense of your data shouldn't be tough! Visualizing data is a big part of making it understandable, actionable and in general useful. Oracle Data Visualization is stunningly visual and...

We share our skills to maximize your revenue!
Categories: DBA Blogs

How to Clean Up GoldenGate Integrated Extract if the Delete Extract fails?

VitalSoftTech - Thu, 2016-06-16 15:13
If the extract process was not un-registered from the database then Cleaning Up GoldenGate Integrated Extract Using DBMS_CAPTURE_ADM.DROP_CAPTURE will be necessary.
Categories: DBA Blogs

Graph frequently executed SQL by FORCE_MATCHING_SIGNATURE

Bobby Durrett's DBA Blog - Thu, 2016-06-16 15:10

I made a new graph in my PythonDBAGraphs program. Here is an example with real data but the database name blanked out:

sql_matching_group_of_signatures_blog

My graphs are all sized for 1920 x 1080 monitors so I can see all the detail in the lines using my entire screen. The idea for this graph is to show how the performance of the queries that matter to the users changes as we add more load and data to this production database. I knew that this database had many queries with literals in their where clauses. I decided to pick a group of SQL by FORCE_MATCHING_SIGNATURE and to graph the average elapsed run time against the total number of executions.

I used this query to list all the SQL by signature:

column FORCE_MATCHING_SIGNATURE format 99999999999999999999

select FORCE_MATCHING_SIGNATURE,
sum(ELAPSED_TIME_DELTA)/1000000 total_seconds,
sum(executions_delta) total_executions,
count(distinct sql_id) number_sqlids,
count(distinct snap_id) number_hours,
min(PARSING_SCHEMA_NAME)
from DBA_HIST_SQLSTAT
group by FORCE_MATCHING_SIGNATURE
order by number_hours desc;

This is an edited version of the output – cut down to fit the page:

FORCE_MATCHING_SIGNATURE TOTAL_SECONDS TOTAL_EXECUTIONS NUMBER_HOURS
------------------------ ------------- ---------------- ------------
    14038313233049026256     22621.203         68687024         1019
    18385146879684525921    18020.9776        157888956         1013
     2974462313782736551    22875.4743           673687          993
    12492389898598272683    6203.78985         66412941          992
    14164303807833460050    4390.32324           198997          980
    10252833433610975622    6166.07675           306373          979
    17697983043057986874    17391.0907         25914398          974
    15459941437096211273    9869.31961          7752698          967
     2690518030862682918    15308.8561          5083672          952
     1852474737868084795    50095.5382          3906220          948
     6256114255890028779    380.095915          4543306          947
    16226347765919129545    9199.14289           215756          946
    13558933806438570935    394.913411          4121336          945
    12227994223267192558    369.784714          3970052          945
    18298186003132032869    296.887075          3527130          945
    17898820371160082776    184.125159          3527322          944
    10790121820101128903    2474.15195          4923888          943
     2308739084210563004    265.395538          3839998          941
    13580764457377834041    2807.68503         62923457          934
    12635549236735416450    1023.42959           702076          918
    17930064579773119626    2423.03972         61576984          914
    14879486686694324607     33.253284            17969          899
     9212708781170196788     7292.5267           126641          899
      357347690345658614    6321.51612           182371          899
    15436428048766097389     11986.082           334125          886
     5089204714765300123    6858.98913           190700          851
    11165399311873161545    4864.60469         45897756          837
    12042794039346605265    11223.0792           179064          835
    15927676903549361476    505.624771          3717196          832
     9120348263769454156    12953.0746           230090          828
    10517599934976061598     311.61394          3751259          813
     6987137087681155918    540.565595          3504784          809
    11181311136166944889      5018.309         59540417          808
      187803040686893225    3199.87327         12788206          800

I picked the ones that had executed in 800 or more hours. Our AWR has about 1000 hours of history, so 800 hours represents about 80% of the AWR snapshots. I pulled one of these queries out because it is a SELECT FOR UPDATE that sometimes gets hung on row locks and skews the graph, so it is excluded from the graph above.
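The 800-hour cutoff amounts to a simple filter over the summary query's rows. A minimal Python sketch of that selection (the tuples mirror a few rows from the table above; the last row is a hypothetical low-activity SQL added for contrast):

```python
# Each tuple mirrors (FORCE_MATCHING_SIGNATURE, TOTAL_SECONDS,
# TOTAL_EXECUTIONS, NUMBER_HOURS) from the summary query's output.
rows = [
    (14038313233049026256, 22621.203, 68687024, 1019),
    (12635549236735416450, 1023.42959, 702076, 918),
    (5089204714765300123, 6858.98913, 190700, 851),
    (9999999999999999999, 12.5, 100, 250),  # hypothetical low-activity SQL
]

AWR_HOURS = 1000   # approximate hours of AWR history retained
THRESHOLD = 0.8    # keep SQL present in >= 80% of the snapshots

chosen = [sig for (sig, _secs, _execs, hours) in rows
          if hours >= AWR_HOURS * THRESHOLD]
print(chosen)
```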

I based the graph above on this query:

select
sn.END_INTERVAL_TIME,
sum(ss.executions_delta) total_executions,
sum(ELAPSED_TIME_DELTA)/(sum(executions_delta)+1) avg_elapsed_us -- +1 avoids divide by zero
from DBA_HIST_SQLSTAT ss,DBA_HIST_SNAPSHOT sn
where ss.snap_id=sn.snap_id
and ss.INSTANCE_NUMBER=sn.INSTANCE_NUMBER
and ss.FORCE_MATCHING_SIGNATURE in
(
14038313233049026256,
18385146879684525921,
2974462313782736551,
12492389898598272683,
14164303807833460050,
10252833433610975622,
17697983043057986874,
15459941437096211273,
2690518030862682918,
6256114255890028779,
16226347765919129545,
13558933806438570935,
12227994223267192558,
18298186003132032869,
17898820371160082776,
10790121820101128903,
2308739084210563004,
13580764457377834041,
12635549236735416450,
17930064579773119626,
14879486686694324607,
9212708781170196788,
357347690345658614,
15436428048766097389,
5089204714765300123,
11165399311873161545,
12042794039346605265,
15927676903549361476,
9120348263769454156,
10517599934976061598,
6987137087681155918,
11181311136166944889,
187803040686893225
)
group by sn.END_INTERVAL_TIME
order by sn.END_INTERVAL_TIME;
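The value the graph plots for each snapshot interval is just the elapsed-time sum divided by the execution count; a small Python sketch of the same arithmetic (the +1 mirrors the denominator in the SQL above and guards against intervals with zero executions):

```python
# Average elapsed microseconds per execution for one AWR snapshot interval.
def avg_elapsed_us(elapsed_time_delta_sum, executions_delta_sum):
    # ELAPSED_TIME_DELTA is in microseconds; +1 prevents division by zero
    # for intervals where the SQL parsed but never executed.
    return elapsed_time_delta_sum / (executions_delta_sum + 1)

# Illustrative interval: 5 seconds of total elapsed time over 1000 runs.
print(avg_elapsed_us(5_000_000, 1000))  # roughly 4995 microseconds/execution
```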

Only time will tell if this really is a helpful way to check system performance as the load grows, but I thought it was worth sharing what I had done. Some part of this might be helpful to others.

Bobby

Categories: DBA Blogs

Services -- 2 : Starting and Connecting to Services (non-RAC)

Hemant K Chitale - Thu, 2016-06-16 10:22
Continuing with the 12.1.0.2 non-RAC MultiTenant environment and two services demonstrated earlier,

I have restarted the environment today :

[grid@ora12102 ~]$ lsnrctl services

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 16-JUN-2016 22:57:17

Copyright (c) 1991, 2014, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
The command completed successfully
[grid@ora12102 ~]$

[oracle@ora12102 Desktop]$ sqlplus '/ as sysdba'

SQL*Plus: Release 12.1.0.2.0 Production on Thu Jun 16 22:57:31 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area 1644167168 bytes
Fixed Size 2925024 bytes
Variable Size 1040191008 bytes
Database Buffers 587202560 bytes
Redo Buffers 13848576 bytes
Database mounted.
Database opened.
SQL> alter pluggable database open;
alter pluggable database open
*
ERROR at line 1:
ORA-65000: missing or invalid pluggable database name


SQL> alter pluggable database pdb1 open;

Pluggable database altered.

SQL>

[grid@ora12102 ~]$ lsnrctl services

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 16-JUN-2016 23:00:11

Copyright (c) 1991, 2014, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "CDB1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "CDB1XDB" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"D000" established:0 refused:0 current:0 max:1022 state:ready
DISPATCHER
(ADDRESS=(PROTOCOL=tcp)(HOST=ora12102)(PORT=9213))
Service "pdb1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
The command completed successfully
[grid@ora12102 ~]$


The two manually created services (NEW_APP1 and NEW_APP2) don't start up automatically.

SQL> exec dbms_service.start_service('NEW_APP1');
BEGIN dbms_service.start_service('NEW_APP1'); END;

*
ERROR at line 1:
ORA-44773: Cannot perform requested service operation.
ORA-06512: at "SYS.DBMS_SERVICE_ERR", line 63
ORA-06512: at "SYS.DBMS_SERVICE", line 395
ORA-06512: at line 1


SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options
[oracle@ora12102 Desktop]$ sqlplus sys/oracle@PDB1 as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Thu Jun 16 23:03:05 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> exec dbms_service.start_service('NEW_APP1');

PL/SQL procedure successfully completed.

SQL> exec dbms_service.start_service('NEW_APP2');

PL/SQL procedure successfully completed.

SQL>


[grid@ora12102 ~]$ lsnrctl services

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 16-JUN-2016 23:04:36

Copyright (c) 1991, 2014, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "CDB1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "CDB1XDB" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"D000" established:0 refused:0 current:0 max:1022 state:ready
DISPATCHER
(ADDRESS=(PROTOCOL=tcp)(HOST=ora12102)(PORT=9213))
Service "NEW_APP1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "NEW_APP2" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "pdb1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
The command completed successfully
[grid@ora12102 ~]$
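The service registrations shown by `lsnrctl services` can also be scraped programmatically; a rough Python sketch (this assumes the plain-text output format above and is simple line scraping, not an official API):

```python
import re

# Pull the service names out of plain-text `lsnrctl services` output.
def registered_services(lsnrctl_output):
    return re.findall(r'Service "([^"]+)" has \d+ instance', lsnrctl_output)

sample = '''Services Summary...
Service "+ASM" has 1 instance(s).
Service "CDB1" has 1 instance(s).
Service "pdb1" has 1 instance(s).
'''
print(registered_services(sample))  # ['+ASM', 'CDB1', 'pdb1']
```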


When I attempted to start the custom service (which, per the previous post, was created in PDB1) while still connected to CDB$ROOT, the command failed with ORA-44773. I had to connect to PDB1 to start the services.

SQL> create tablespace hemant ;                           

Tablespace created.

SQL> create user hemant identified by hemant default tablespace hemant quota unlimited on hemant;

User created.

SQL> grant create session to hemant;

Grant succeeded.

SQL>
[oracle@ora12102 Desktop]$ tnsping NEW_APP1

TNS Ping Utility for Linux: Version 12.1.0.2.0 - Production on 16-JUN-2016 23:09:00

Copyright (c) 1997, 2014, Oracle. All rights reserved.

Used parameter files:


Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = ora12102)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = NEW_APP1)))
OK (0 msec)
[oracle@ora12102 Desktop]$ tnsping NEW_APP2

TNS Ping Utility for Linux: Version 12.1.0.2.0 - Production on 16-JUN-2016 23:09:05

Copyright (c) 1997, 2014, Oracle. All rights reserved.

Used parameter files:


Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = ora12102)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = NEW_APP2)))
OK (0 msec)
[oracle@ora12102 Desktop]$
[oracle@ora12102 Desktop]$ sqlplus hemant/hemant@NEW_APP1

SQL*Plus: Release 12.1.0.2.0 Production on Thu Jun 16 23:09:29 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> show con_id

CON_ID
------------------------------
3
SQL> show con_name

CON_NAME
------------------------------
PDB1

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options
[oracle@ora12102 Desktop]$ sqlplus hemant/hemant@NEW_APP2

SQL*Plus: Release 12.1.0.2.0 Production on Thu Jun 16 23:09:48 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Last Successful login time: Thu Jun 16 2016 23:09:29 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> show con_id

CON_ID
------------------------------
3
SQL> show conn_name
SP2-0158: unknown SHOW option "conn_name"
SQL> show con_name

CON_NAME
------------------------------
PDB1
SQL>


The newly created user HEMANT belongs to the Database. The user is not tied to a Service and can use either Service (whichever is running) to connect to the Database. But the two Services have two different TNSNAMES.ORA entries, differing only in the SERVICE_NAME specification.

Thus, the DBA could configure some application servers to use one service name and other application servers to use another service name.
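Since the two TNSNAMES.ORA entries differ only in their SERVICE_NAME token, a hypothetical Python helper that builds such connect descriptors makes the point concrete (hostname and port are the ones used in this post; a real deployment would substitute its own):

```python
# Build an Oracle connect descriptor string of the shape shown in the
# tnsping output in this post.
def descriptor(host, port, service):
    return ("(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)"
            f"(HOST={host})(PORT={port})))"
            f"(CONNECT_DATA=(SERVICE_NAME={service})))")

app1 = descriptor("ora12102", 1521, "NEW_APP1")
app2 = descriptor("ora12102", 1521, "NEW_APP2")
# The two descriptors are identical except for the SERVICE_NAME token.
print(app1.replace("NEW_APP1", "X") == app2.replace("NEW_APP2", "X"))  # True
```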

An example of such a configuration is where the first set of application servers could be for Finance Applications called "FINANCE" and the second set of servers could be for HR Applications (in the *same* database) called "HR".  Here I create the two services but start only the FINANCE service.

[oracle@ora12102 Desktop]$ sqlplus system/oracle@pdb1 

SQL*Plus: Release 12.1.0.2.0 Production on Thu Jun 16 23:13:55 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> exec dbms_service.create_service('FINANCE','FINANCE');

PL/SQL procedure successfully completed.

SQL> exec dbms_service.start_service('FINANCE');

PL/SQL procedure successfully completed.

SQL> exec dbms_service.create_service('HR','HR');

PL/SQL procedure successfully completed.

SQL>

[grid@ora12102 ~]$ lsnrctl services

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 16-JUN-2016 23:15:45

Copyright (c) 1991, 2014, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "CDB1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "CDB1XDB" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"D000" established:0 refused:0 current:0 max:1022 state:ready
DISPATCHER
(ADDRESS=(PROTOCOL=tcp)(HOST=ora12102)(PORT=9213))
Service "FINANCE" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "NEW_APP1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "NEW_APP2" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "pdb1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
The command completed successfully
[grid@ora12102 ~]$


Now the user should be able to connect to FINANCE, but not to HR.

[oracle@ora12102 Desktop]$ tnsping FINANCE

TNS Ping Utility for Linux: Version 12.1.0.2.0 - Production on 16-JUN-2016 23:18:16

Copyright (c) 1997, 2014, Oracle. All rights reserved.

Used parameter files:


Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = ora12102)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = FINANCE)))
OK (0 msec)
[oracle@ora12102 Desktop]$ tnsping HR

TNS Ping Utility for Linux: Version 12.1.0.2.0 - Production on 16-JUN-2016 23:18:18

Copyright (c) 1997, 2014, Oracle. All rights reserved.

Used parameter files:


Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = ora12102)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = HR)))
OK (0 msec)
[oracle@ora12102 Desktop]$
[oracle@ora12102 Desktop]$ sqlplus hemant/hemant@FINANCE

SQL*Plus: Release 12.1.0.2.0 Production on Thu Jun 16 23:18:57 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Last Successful login time: Thu Jun 16 2016 23:09:48 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options
[oracle@ora12102 Desktop]$
[oracle@ora12102 Desktop]$ sqlplus hemant/hemant@HR

SQL*Plus: Release 12.1.0.2.0 Production on Thu Jun 16 23:19:23 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

ERROR:
ORA-12514: TNS:listener does not currently know of service requested in connect
descriptor


Enter user-name:


The user could connect to FINANCE, but the request to HR returned ORA-12514.
(Notice how "tnsping HR" succeeds while the connection does not? That is because tnsping only tests whether the listener is running; it does not test whether the database instance and the service are both running.)
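That gap between tnsping succeeding and the connection failing can be simulated without Oracle at all; a Python sketch (a bare TCP socket stands in for the listener, and the `registered` set is illustrative):

```python
import socket

# tnsping only proves that *something* is listening on host:port, while
# ORA-12514 is about whether the requested SERVICE_NAME is registered
# with that listener.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))   # ephemeral port, like the listener's 1521
srv.listen(1)
host, port = srv.getsockname()

def tnsping_like(host, port, timeout=2.0):
    """Pure reachability check -- roughly what tnsping establishes."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

registered = {"FINANCE"}              # services this listener knows about

ping_ok = tnsping_like(host, port)    # True: the listener endpoint is up
service_known = "HR" in registered    # False: connecting would fail (ORA-12514)
print(ping_ok, service_known)
srv.close()
```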

So, using multiple services, the DBA can "provision" the same database to multiple applications.  The entry-point is the SERVICE_NAME,  not the USERNAME.   Users and Services are independent of each other.

.
.
.
Categories: DBA Blogs

ORA-00020: maximum number of processes exceeded

Learn DB Concepts with me... - Thu, 2016-06-16 09:45



This error occurs when the total number of server processes connected to the Oracle database has exceeded the PROCESSES limit set in the parameter file. The simplest way to overcome it is to raise that limit. Here is how to do it.
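One thing to keep in mind when raising PROCESSES: SESSIONS is, by default, derived from it. A sketch of the arithmetic, assuming the formula documented for 11gR2, (1.5 * PROCESSES) + 22 (verify against your own version's reference documentation):

```python
# Default derived value of SESSIONS for a given PROCESSES setting,
# per the 11gR2 reference documentation (an assumption -- check your version).
def derived_sessions(processes):
    return int(1.5 * processes) + 22

print(derived_sessions(500))    # 772
print(derived_sessions(1200))   # 1822
```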
 



oracle@LINUX201:[~] $ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Mon Jun 13 10:20:26 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

ERROR:
ORA-00020: maximum number of processes (500) exceeded




oracle@LINUX201:[~] $ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Mon Jun 13 10:23:42 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

ERROR:
ORA-00020: maximum number of processes (500) exceeded


Enter user-name:

Disconnected from ORACLE

I wasn't able to get into the Oracle database to kill any database sessions, so I killed a few session processes at the OS level to make my way into the DB.

oracle@LINUX201:[~] $ ps -ef|grep oracle
.
.
.
.
.
.
.

oracle   64373     1  0 Jun08 ?        00:00:03 oracleQPDEV (LOCAL=NO)
oracle   64540     1  0 Jun08 ?        00:00:00 oracleQPDEV (LOCAL=NO)
oracle   64554     1  0 Jun08 ?        00:00:01 oracleQPDEV (LOCAL=NO)
oracle   64633     1  0 Jun08 ?        00:00:03 oracleQPDEV (LOCAL=NO)
oracle   64637     1  0 Jun08 ?        00:00:00 oracleQPDEV (LOCAL=NO)
.
.
.
oracle   65186     1  0 Jun08 ?        00:00:04 oracleQPDEV (LOCAL=NO)
oracle   65190     1  0 Jun08 ?        00:00:00 oracleQPDEV (LOCAL=NO)
oracle   65192     1  0 Jun08 ?        00:00:01 oracleQPDEV (LOCAL=NO)
oracle   65202     1  0 Jun08 ?        00:00:00 oracleQPDEV (LOCAL=NO)
oracle   65206     1  0 Jun08 ?        00:00:02 oracleQPDEV (LOCAL=NO)
root     65407 65381  0 May16 pts/2    00:00:00 sudo -u oracle -i
oracle   65408 65407  0 May16 pts/2    00:00:00 -bash
oracle   65458 65408  0 May16 pts/2    00:00:00 sqlplus
oracle   65459 65458  0 May16 ?        00:00:00 oracleQPDEV (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
oracle   65518     1  0 Jun08 ?        00:00:00 oracleQPDEV (LOCAL=NO)
oracle   65520     1  0 Jun08 ?        00:00:02 oracleQPDEV (LOCAL=NO)
oracle   65534     1  0 Jun08 ?        00:00:00 oracleQPDEV (LOCAL=NO)

oracle@LINUX201:[~] $ kill -9 64785
oracle@LINUX201:[~] $ sqlplus "/as sysdba"

SQL*Plus: Release 11.2.0.4.0 Production on Mon Jun 13 10:26:25 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

ERROR:
ORA-00020: maximum number of processes (500) exceeded


Enter user-name: ^C

Killing a few processes on Linux:

oracle@LINUX201:[~] $ kill -9 65192 65085 64785 64777 64655 64653 64637


oracle@LINUX201:[~] $ ps -ef | egrep '65192|65085|64785|64777|64655|64653|64637'

.
.
.
.
oracle   50258     1  0 Jun07 ?        00:00:04 oracleQPDEV (LOCAL=NO)
oracle   50264     1  0 Jun07 ?        00:00:03 oracleQPDEV (LOCAL=NO)
oracle   50268     1  0 Jun07 ?        00:00:02 oracleQPDEV (LOCAL=NO)
.
.
.
oracle   64554     1  0 Jun08 ?        00:00:01 oracleQPDEV (LOCAL=NO)
oracle   64633     1  0 Jun08 ?        00:00:03 oracleQPDEV (LOCAL=NO)
oracle   65186     1  0 Jun08 ?        00:00:04 oracleQPDEV (LOCAL=NO)
oracle   65190     1  0 Jun08 ?        00:00:00 oracleQPDEV (LOCAL=NO)
oracle   65202     1  0 Jun08 ?        00:00:00 oracleQPDEV (LOCAL=NO)
oracle   65206     1  0 Jun08 ?        00:00:02 oracleQPDEV (LOCAL=NO)
.
.
.

oracle@LINUX201:[~] $ sqlplus "/as sysdba"

SQL*Plus: Release 11.2.0.4.0 Production on Mon Jun 13 10:30:07 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> show parameter process;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
aq_tm_processes                      integer     1
cell_offload_processing              boolean     TRUE
db_writer_processes                  integer     1
gcs_server_processes                 integer     0
global_txn_processes                 integer     1
job_queue_processes                  integer     1000
log_archive_max_processes            integer     4
processes                            integer     500
processor_group_name                 string


Now reset the max processes to a greater value:

SQL> alter system set processes=1200 scope=spfile;

System altered.

SQL>  show parameter process;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
aq_tm_processes                      integer     1
cell_offload_processing              boolean     TRUE
db_writer_processes                  integer     1
gcs_server_processes                 integer     0
global_txn_processes                 integer     1
job_queue_processes                  integer     1000
log_archive_max_processes            integer     4
processes                            integer     500
processor_group_name                 string
SQL> select name,open_mode from v$database;

NAME      OPEN_MODE
--------- --------------------
QPDEV     READ WRITE

The change was made with scope=spfile, so it needs an instance restart to take effect:


SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup;
ORACLE instance started.

Total System Global Area 3206836224 bytes
Fixed Size                  2257520 bytes
Variable Size            1275071888 bytes
Database Buffers         1912602624 bytes
Redo Buffers               16904192 bytes
Database mounted.
Database opened.

SQL> show parameter process

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
aq_tm_processes                      integer     1
cell_offload_processing              boolean     TRUE
db_writer_processes                  integer     1
gcs_server_processes                 integer     0
global_txn_processes                 integer     1
job_queue_processes                  integer     1000
log_archive_max_processes            integer     4
processes                            integer     1200
processor_group_name                 string

SQL>
Categories: DBA Blogs
