DBA Blogs

Maintaining Partitioned Tables

Tom Kyte - Thu, 2016-06-23 15:09
Hi Tom, I need to build a table that will hold read-only data for up to 2 months. The table will have a load (via a perl script run half hourly) of 3 million new rows a day. Queries will be using the date col in the table for data eliminati...
Categories: DBA Blogs

Handling ORA-12170: TNS:Connect timeout occurred & ORA-03114: not connected to ORACLE failures

Tom Kyte - Thu, 2016-06-23 15:09
Hi Tom, We had a scenario where the sqlplus connection failed with "ORA-12170: TNS:Connect timeout occurred" in one instance & "ORA-03114: not connected to ORACLE" in another instance while executing from a shell script, but in both the cases retu...
Categories: DBA Blogs

Single Sign on ( SSO ) in Oracle Apex

Tom Kyte - Thu, 2016-06-23 15:09
Hi Tom, I am struggling to implement Single Sign On (SSO) in Oracle APEX using a custom authentication scheme. I have two applications (App id / Application Name): 101 - ...
Categories: DBA Blogs

Version Control for PLSQL

Tom Kyte - Thu, 2016-06-23 15:09
Hi Tom, A couple of years ago, I saw a demo of how to version your PLSQL code. I have been searching for the code all morning and I cannot find it anywhere. Can you point me to where/how I can version my PLSQL package? Thanks, Greg
Categories: DBA Blogs

Create second Ora 12c environment on same AWS server, now CDB/PDB type

Tom Kyte - Thu, 2016-06-23 15:09
Hi, I have an Oracle 12c running on AWS on CentOS Linux rel 7.1. The database has been installed as a standalone non-CDB database. This DEV database is used for development. I have to install a second environment, for QA, on the same server. This ...
Categories: DBA Blogs

Oracle Partner Community - SOA and BPM Suite 12c Resources

SOA Suite 12c and BPM Suite 12c bring exciting new features around the key themes of Developer Productivity and Industrial SOA or Integration & Process Cloud Services. To support & grow...

We share our skills to maximize your revenue!
Categories: DBA Blogs

With a Modern Storage Infrastructure, Companies Must Find an Excellent Data Management Tool

Kubilay Çilkara - Tue, 2016-06-21 15:54
One of the more “weighty” questions within the IT world concerns the value of each company’s particular data. Many wonder what protected data is really worth in the long run, and eventually come to view it as a cost center where money continuously gets used up. To make data work in a business’s favor and help generate income, companies must get smarter in their approach and stop looking at their data this way!

The majority of companies admit to wanting a Big Data system as part of their landscape, but ultimately have nothing to show for it. There have been many failed implementations despite lots of money spent on resources to help. You might say that many businesses have the best intentions when it comes to incorporating a data management plan, yet intention without action is arguably useless at the end of the day. Why do companies fail to follow through? The answer is really quite simple: the amount of data out there is staggering, and trying to manage it all would be like trying to boil the ocean. You can imagine how difficult, if not impossible, that would be!

Many ask whether the cloud is the answer, and whether everyone should simply start moving their data there. Or perhaps virtualization? Both are valuable tools, but the question remains whether they are the best use of company resources. What companies really need is a tool that helps organize data into appropriate categories; in other words, a tool that can identify which data will create the most value for the company.

As stated above, the majority of corporations view their data simply as a cost center that drains company resources. As human nature would have it, attitudes toward existing data often reflect "out with the old, in with the new," and new projects that promise more capacity or faster processing take precedence in time and funding. It is as if the old data were a bothersome reality, always needing a little extra attention, demanding greater capacity, and continuously requiring backups and protection. What companies forget to keep at the forefront of their minds, however, is that taking care of this old and “bothersome” data is a great way to clear up space on primary storage, and who doesn’t appreciate that? It also makes for a faster backup process with a reduced backup size, which again is a bonus for any company. It should not be forgotten that this nagging and consistently needy data is essentially the ESSENCE of the company; would you believe that this “cost center” is also where it earns some of its income? If companies kept these points in mind, we would not see such poor data management practices. One thing is clear, however: getting started with managing data can be difficult, and many view the overwhelming ocean of growing data, and the daunting task of trying to manage it, as too great a calling to handle.
 
However, IT departments and administrators must bear in mind that there are tools out there to help them classify and organize data, which will ultimately be their proverbial lifeboat when it comes time to accept the challenge of managing data. Look at it this way: let's say you are trying to find every single chemical engineer on earth. Sound a bit overwhelming? How would you go about it? After all, there are over 7 billion people on this planet. Where do you begin? Do you profile EVERY person? Of course not. To simplify the process, you would likely organize people into broad groups, perhaps by profession, and then narrow that group down to those who graduated with a degree in chemical engineering. Though basic, the same principles apply when sorting through the piles of information in a company. Use your intelligence and business knowledge to better grasp corporate data, and in return this will help the company benefit from its data assets. To do this, there are tools available to help administrators better understand and manage their data.

These tools exist to give IT departments the upper hand in managing their spheres. Can you imagine trying to manage large amounts of company data on your own? Luckily, we don’t have to: we live in an age in which a variety of solutions are available to help companies not only survive, but thrive. These tools empower IT teams to weather the storms and changing tides of data types and volumes. After adopting such tools, companies often see improved storage capacity, lower costs for backup and data protection, and, let’s not forget, compliance. Compliance is a hot topic, and with the help of appropriate data management solutions, companies are far better positioned to meet the various regulatory compliance rules in today’s business world.

In closing, it is important to note that more networking tools will not come close to doing what the appropriate data management solutions can do. Companies should look for solutions that also help them track, classify, and organize file systems across their whole lifecycle. When the right tools get into the right hands, IT managers are better able to do their jobs!


About the author: Jason Zhang is the product marketing person for Rocket Software's Backup, Storage, and Cloud solutions.

Categories: DBA Blogs

Time Slider Bug In Solaris 11: Snapshots Created Only For Leaf Datasets

Pythian Group - Tue, 2016-06-21 14:06

Time Slider is a feature in Solaris that allows you to open past versions of your files.

It is implemented via a service which creates automatic ZFS snapshots every 15 minutes (frequent), hourly, daily, weekly and monthly. By default it retains only 3 frequent, 23 hourly, 6 daily, 3 weekly and 12 monthly snapshots.

I am using Time Slider on several Solaris 11.x servers and I found the same problem on all of them – it doesn’t create any automatic snapshots for some datasets.
For example, it doesn’t create any snapshots for the Solaris root dataset rpool/ROOT/solaris. However it creates snapshots for the leaf dataset rpool/ROOT/solaris/var. The rpool/ROOT dataset also doesn’t have any automatic snapshots, but rpool itself has snapshots, so it’s not easy to understand what is happening.

I searched for this problem and found that other people have noticed it as well.

There is a thread about it in the Oracle Community:

Solaris 11, 11.1, 11.2 and 11.3: Possible Bug in Time-Slider/ZFS Auto-Snapshot

The last message mentions the following bug in My Oracle Support:

Bug 15775093 : SUNBT7148449 time-slider only taking snapshots of leaf datasets

This bug was created on 24-Feb-2012 but still isn't fixed.

After more searching, I found this issue in the bug tracker of the illumos project:

Bug #1013 : time-slider fails to take snapshots on complex dataset/snapshot configuration

The original poster (OP) encountered a problem with a complex pool configuration with many nested datasets having different values for the com.sun:auto-snapshot* properties.
He dug into the Time Slider Python code and proposed a change, which was blindly accepted without proper testing and ended up in Solaris 11.
Unfortunately, this change introduced a serious bug which destroyed the logic for creating recursive snapshots where they are possible.

Let me quickly explain how this is supposed to work.
If a pool has com.sun:auto-snapshot=true for the main dataset and all child datasets inherit this property, Time Slider can create a recursive snapshot for the main dataset and skip all child datasets, because they should already have the same snapshot.
However, if any child dataset has com.sun:auto-snapshot=false, Time Slider can no longer do this.
In this case the intended logic is to create recursive snapshots for all sub-trees which don’t have any excluded children and then create non-recursive snapshots for the remaining datasets which also have com.sun:auto-snapshot=true.
The algorithm builds separate lists of datasets: one for recursive snapshots and one for single snapshots.
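
To make the intended classification concrete, here is a minimal, self-contained Python sketch (not the Time Slider code itself), assuming a hypothetical pool where only rpool/dump and rpool/swap have com.sun:auto-snapshot=false:

everything = sorted([
    "rpool", "rpool/ROOT", "rpool/ROOT/solaris", "rpool/ROOT/solaris/var",
    "rpool/dump", "rpool/swap",
])
excluded = ["rpool/dump", "rpool/swap"]      # com.sun:auto-snapshot=false
included = [d for d in everything if d not in excluded]

single, recursive = [], []
for ds in included:
    # A dataset with an excluded descendant can only be snapshotted non-recursively
    has_excluded_child = any(e.startswith(ds + "/") for e in excluded)
    (single if has_excluded_child else recursive).append(ds)

print(single)     # ['rpool'] -- dump and swap are excluded, so no recursion at the pool root
print(recursive)  # rpool/ROOT and everything below it

The second pass shown further down then drops rpool/ROOT/solaris and rpool/ROOT/solaris/var from the recursive list, because a single recursive snapshot of rpool/ROOT already covers them.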

Here is an excerpt from /usr/share/time-slider/lib/time_slider/zfs.py:

        # Now figure out what can be recursively snapshotted and what
        # must be singly snapshotted. Single snapshot restrictions apply
        # to those datasets who have a child in the excluded list.
        # 'included' is sorted in reverse alphabetical order.
        for datasetname in included:
            excludedchild = False
            idx = bisect_right(everything, datasetname)
            children = [name for name in everything[idx:] if \
                        name.find(datasetname) == 0]
            for child in children:
                idx = bisect_left(excluded, child)
                if idx < len(excluded) and excluded[idx] == child:
                    excludedchild = True
                    single.append(datasetname)
                    break
            if excludedchild == False:
                # We want recursive list sorted in alphabetical order
                # so insert instead of append to the list.
                recursive.append (datasetname)

This part is the same in all versions of Solaris 11 (from 11-11 to 11.3, which is currently the latest).
If we look at the comment above the last line, it says that it should do “insert instead of append to the list”.
This is because the included list is sorted in reverse alphabetical order when it is built.
And this is the exact line that was modified by the OP. When append is used instead of insert, the recursive list ends up sorted in reverse alphabetical order as well.
The next part of the code traverses the recursive list and tries to skip all child datasets whose parent is already marked for a recursive snapshot:

        for datasetname in recursive:
            parts = datasetname.rsplit('/', 1)
            parent = parts[0]
            if parent == datasetname:
                # Root filesystem of the Zpool, so
                # this can't be inherited and must be
                # set locally.
                finalrecursive.append(datasetname)
                continue
            idx = bisect_right(recursive, parent)
            if len(recursive) > 0 and \
               recursive[idx-1] == parent:
                # Parent already marked for recursive snapshot: so skip
                continue
            else:
                finalrecursive.append(datasetname)

This code heavily relies on the sort order and fails to do its job when the list is sorted in reverse order.
What happens is that all datasets remain in the list with child datasets being placed before their parents.
Then the code tries to create a recursive snapshot for each of these datasets.
The operation is successful for the leaf datasets, but fails for the parent datasets because their children already have a snapshot with the same name.
The snapshots are also successful for the datasets in the single list (ones that have excluded children).
The rpool/dump and rpool/swap volumes have com.sun:auto-snapshot=false. That’s why rpool has snapshots.
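
A minimal Python sketch of the parent lookup, using a made-up dataset list, shows why the reverse order breaks the skip check:

from bisect import bisect_right

def parent_already_marked(recursive, name):
    # Same lookup as in zfs.py: is the parent of `name` already in the recursive list?
    parent = name.rsplit('/', 1)[0]
    idx = bisect_right(recursive, parent)    # bisect assumes an ascending sort order
    return len(recursive) > 0 and recursive[idx - 1] == parent

ascending  = ["rpool/ROOT", "rpool/export", "rpool/export/home"]
descending = list(reversed(ascending))

print(parent_already_marked(ascending,  "rpool/export/home"))   # True  -> child skipped
print(parent_already_marked(descending, "rpool/export/home"))   # False -> duplicate recursive snapshot attempted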

Luckily, the original code was posted in the same thread so I just reverted the change:

            if excludedchild == False:
                # We want recursive list sorted in alphabetical order
                # so insert instead of append to the list.
                recursive.insert(0, datasetname)

After doing this, Time Slider immediately started creating snapshots for all datasets that have com.sun:auto-snapshot=true, including rpool/ROOT and rpool/ROOT/solaris.
So far I haven't found any issues, and snapshots work as expected.
There may be some issues with a very complex structure like the OP's, but his change completely destroyed the clever algorithm for doing recursive snapshots where they are possible.

Final Thoughts.

It is very strange that Oracle hasn't paid attention to this bug and has left it hanging for more than 4 years. Maybe they consider Time Slider an unimportant desktop feature. However, I think it's fairly useful for servers as well.

The solution is simple – a single-line change – but it would be much better if this were resolved in a future Solaris 11.3 SRU. Until then I hope that my blog post will be useful for anyone who is trying to figure out why the automatic snapshots are not working as intended.

Categories: DBA Blogs

ASMCMD>: A Better DU, Version 2

Pythian Group - Tue, 2016-06-21 13:03

A while ago, I posted a better “du” for asmcmd. Since then, the Oracle 12cR2 beta has been released, but it seems that our poor old “du” will not be improved.

I then wrote a better “better du for asmcmd”, with some cool new features compared to the previous one, which was quite primitive.

In this second version you will find:

  • No need to set up your environment; asmdu will do it for you
  • If no parameter is passed to the script, asmdu will show you a summary of all the diskgroups [screenshot: asmdu_1]
  • If a parameter (a diskgroup) is passed to the script, asmdu will show you a summary of the diskgroup size with its filling rate and the list of the directories it contains with their sizes [screenshot: asmdu_2]. Note: you can quickly see in this screenshot that “DB9” consumes the most space in the FRA diskgroup; it is perhaps worth a closer look
  • A list of all running instances on the server is now shown at the top of the asmdu output; I found it handy to have that list there
  • I also put in some colored thresholds (red, yellow and green) to be able to quickly see which diskgroup has a space issue; you can modify them easily in the script:
#
# Colored thresholds (Red, Yellow, Green)
#
 CRITICAL=90
 WARNING=75
  • The only pre-requisite is that oraenv has to work

Here is the code :

#!/bin/bash
# Fred Denis -- denis@pythian.com -- 2016
#
# - If no parameter specified, show a du of each DiskGroup
# - If a parameter, print a du of each subdirectory
#

D=$1

#
# Colored thresholds (Red, Yellow, Green)
#
 CRITICAL=90
 WARNING=75

#
# Set the ASM env
#
ORACLE_SID=`ps -ef | grep pmon | grep asm | awk '{print $NF}' | sed s'/asm_pmon_//' | egrep "^[+]"`
export ORAENV_ASK=NO
. oraenv > /dev/null 2>&1

#
# A quick list of what is running on the server
#
ps -ef | grep pmon | grep -v grep | awk '{print $NF}' | sed s'/.*_pmon_//' | egrep "^([+]|[Aa-Zz])" | sort | awk -v H="`hostname -s`" 'BEGIN {printf("%s", "Databases on " H " : ")} { printf("%s, ", $0)} END{printf("\n")}' | sed s'/, $//'

#
# Manage parameters
#
if [[ -z $D ]]
then # No directory provided, will check all the DG
 DG=`asmcmd lsdg | grep -v State | awk '{print $NF}' | sed s'/\///'`
 SUBDIR="No" # Do not show the subdirectories details if no directory is specified
else
 DG=`echo $D | sed s'/\/.*$//g'`
fi

#
# A header
#
printf "\n%25s%16s%16s%14s" "DiskGroup" "Total_MB" "Free_MB" "% Free"
printf "\n%25s%16s%16s%14s\n" "---------" "--------" "-------" "------"

#
# Show DG info
#
for X in ${DG}
do
 asmcmd lsdg ${X} | tail -1 |\
 awk -v DG="$X" -v W="$WARNING" -v C="$CRITICAL" '\
 BEGIN \
 {COLOR_BEGIN = "\033[1;" ;
 COLOR_END = "\033[m" ;
 RED = COLOR_BEGIN"31m" ;
 GREEN = COLOR_BEGIN"32m" ;
 YELLOW = COLOR_BEGIN"33m" ;
 COLOR = GREEN ;
 }
 { FREE = sprintf("%12d", $8/$7*100) ;
 if ((100-FREE) > W) {COLOR=YELLOW ;}
 if ((100-FREE) > C) {COLOR=RED ;}
 printf("%25s%16s%16s%s\n", DG, $7, $8, COLOR FREE COLOR_END) ; }'
done
printf "\n"

#
# Subdirs info
#
if [ -z "${SUBDIR}" ]
then
(for DIR in `asmcmd ls ${D}`
do
 echo ${DIR} `asmcmd du ${D}/${DIR} | tail -1`
done) | awk -v D="$D" ' BEGIN { printf("\n\t\t%40s\n\n", D " subdirectories size") ;
 printf("%25s%16s%16s\n", "Subdir", "Used MB", "Mirror MB") ;
 printf("%25s%16s%16s\n", "------", "-------", "---------") ;}
 {
 printf("%25s%16s%16s\n", $1, $2, $3) ;
 use += $2 ;
 mir += $3 ;
 }
 END { printf("\n\n%25s%16s%16s\n", "------", "-------", "---------") ;
 printf("%25s%16s%16s\n\n", "Total", use, mir) ;} '
fi


#************************************************************************#
#* E N D O F S O U R C E *#
#************************************************************************#

We use it a lot in my team and have found no issues with the script so far. Let me know if you find one, and enjoy!

 

Categories: DBA Blogs

Oracle Mobile Application Accelerator (MAX): Constructing Mobile Apps for Business

As you may already know, Oracle Mobile Cloud Service 2.0 was released, and with this release we got a new capability known as Oracle Mobile Application Accelerator (MAX). This tool was built to...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Pester: Cannot bind argument to parameter ‘Actual’ because it is an empty string.

Matt Penny - Tue, 2016-06-21 03:23

I’m just getting started with Pester and I got this error

   Cannot bind argument to parameter 'Actual' because it is an empty string.
   at line: 18 in C:\Program Files\WindowsPowerShell\Modules\pester\3.3.5\Functions\Assertions\Be.ps1

The code I’m testing is very simple – it just separates a ‘Property Name’ and a ‘Property Value’.

So, when it’s working it does this:

get-HugoNameAndValue -FrontMatterLine "Weighting: 103"
DEBUG: 09:15:37.6806 Start: get-HugoNameAndValue
DEBUG: - FrontMatterLine=Weighting: 103
DEBUG: - get-HugoNameAndValue.ps1: line 5
DEBUG: $PositionOfFirstColon: 9
DEBUG: $PropertyName : {Weighting}
DEBUG: $PropertyValue : { 103}
DEBUG: $PropertyValue : {103}

PropertyName PropertyValue
------------ -------------
Weighting    103          

When I ran it from Pester I got this

GetHugoNameAndValue 06/21/2016 08:45:19 $ invoke-pester
Describing get-HugoNameAndValue
DEBUG: 08:45:56.3377 Start: get-HugoNameAndValue
DEBUG: - FrontMatterLine=Weighting: 103
DEBUG: - get-HugoNameAndValue.ps1: line 5
DEBUG: $PositionOfFirstColon: 9
DEBUG: $PropertyName : {Weighting}
DEBUG: $PropertyValue : { 103}
DEBUG: $PropertyValue : {103}
 [-] returns name and value 189ms
   Cannot bind argument to parameter 'Actual' because it is an empty string.
   at line: 18 in C:\Program Files\WindowsPowerShell\Modules\pester\3.3.5\Functions\Assertions\Be.ps1
Tests completed in 189ms
Passed: 0 Failed: 1 Skipped: 0 Pending: 0

My Pester code was:

$here = Split-Path -Parent $MyInvocation.MyCommand.Path
$sut = (Split-Path -Leaf $MyInvocation.MyCommand.Path).Replace(".Tests.", ".")
. "$here\$sut"

Describe "get-HugoNameAndValue" {
    It "returns name and value" {
        $Hugo = get-HugoNameAndValue -FrontMatterLine "Weighting: 103"
        $value = $Hugo.Value
        $value | Should Be '103'
    }
}

The problem here was simply that I'd got the name of the property wrong: it was 'PropertyValue', not just 'Value'.

So I changed the Pester code:

$here = Split-Path -Parent $MyInvocation.MyCommand.Path
$sut = (Split-Path -Leaf $MyInvocation.MyCommand.Path).Replace(".Tests.", ".")
. "$here\$sut"

Describe "get-HugoNameAndValue" {
    It "returns name and value" {
        $Hugo = get-HugoNameAndValue -FrontMatterLine "Weighting: 103"
        $value = $Hugo.PropertyValue
        $value | Should Be '103'
    }
}

…and then it worked:

invoke-pester
Describing get-HugoNameAndValue
DEBUG: 09:22:21.2291 Start: get-HugoNameAndValue
DEBUG: - FrontMatterLine=Weighting: 103
DEBUG: - get-HugoNameAndValue.ps1: line 5
DEBUG: $PositionOfFirstColon: 9
DEBUG: $PropertyName : {Weighting}
DEBUG: $PropertyValue : { 103}
DEBUG: $PropertyValue : {103}
 [+] returns name and value 99ms
Tests completed in 99ms
Passed: 1 Failed: 0 Skipped: 0 Pending: 0

Categories: DBA Blogs

Links for 2016-06-20 [del.icio.us]

Categories: DBA Blogs

HOWTO solve any problem recursively, PL/SQL edition…

RDBMS Insight - Mon, 2016-06-20 17:47
PROCEDURE solve (my_problem IN varchar2) IS
BEGIN
  my_idea := have_great_idea (my_problem) ;
  my_code := start_coding (my_idea) ;
  IF i_hit_complications (my_idea)
  THEN 
    new_problem := the_complications (my_idea);
    solve (new_problem);
  ELSE
    NULL; --we will never get here
  END IF;
END solve;

This abuse of recursion was inspired by @ThePracticalDev!

Categories: DBA Blogs

Services -- 3 : Monitoring Usage of Custom Services

Hemant K Chitale - Mon, 2016-06-20 10:04
In my previous blog post, I demonstrated a few custom services created and started with DBMS_SERVICE.

Let's look at a couple of examples of monitoring usage of these services.

[oracle@ora12102 Desktop]$ sqlplus system/oracle@PDB1

SQL*Plus: Release 12.1.0.2.0 Production on Mon Jun 20 22:51:08 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Last Successful login time: Thu Jun 16 2016 23:23:50 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> execute dbms_service.start_service('NEW_APP1');

PL/SQL procedure successfully completed.

SQL> execute dbms_service.start_service('FINANCE');

PL/SQL procedure successfully completed.

SQL> grant create table to hemant;

Grant succeeded.

SQL> grant select_Catalog_role to hemant;

Grant succeeded.

SQL>

[oracle@ora12102 Desktop]$ sqlplus hemant/hemant@NEW_APP1

SQL*Plus: Release 12.1.0.2.0 Production on Mon Jun 20 22:52:27 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Last Successful login time: Thu Jun 16 2016 23:28:01 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> create table obj_t1 tablespace hemant as select * from dba_objects;

Table created.

SQL> insert into obj_t1 select * from obj_t1;

90935 rows created.

SQL>

[oracle@ora12102 Desktop]$ sqlplus hemant/hemant@FINANCE

SQL*Plus: Release 12.1.0.2.0 Production on Mon Jun 20 22:53:54 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Last Successful login time: Mon Jun 20 2016 22:52:27 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> create table obj_t2_small tablespace hemant as select * from obj_T1 where rownum < 11;

Table created.

SQL>

SQL> show user
USER is "SYSTEM"
SQL> select sid,serial#, to_char(logon_time,'DD-MON HH24:MI:SS'), service_name
2 from v$session
3 where username = 'HEMANT'
4 order by logon_time
5 /

       SID    SERIAL# LOGON_TIME        SERVICE_NAME
---------- ---------- ----------------- ----------------
        61      50587 20-JUN 22:52:27   NEW_APP1
        76      43919 20-JUN 22:53:54   FINANCE


SQL>


Thus, we can see that V$SESSION tracks the SERVICE_NAME in use --- even though the USERNAME is the same in both sessions, the SERVICE_NAME is different.

SQL> col svc_name format a10
SQL> col stat_name format a25 trunc
SQL> select
2 con_id, service_name SVC_NAME, stat_name, value
3 from v$service_stats
4 where service_name in ('NEW_APP1','FINANCE')
5 and
6 (stat_name like 'DB%' or stat_name like '%block%' or stat_name like 'redo%')
7 order by 1,2,3
8 /

CON_ID SVC_NAME STAT_NAME VALUE
---------- ---------- ------------------------- ----------
3 FINANCE DB CPU 168973
3 FINANCE DB time 771742
3 FINANCE db block changes 653
3 FINANCE gc cr block receive time 0
3 FINANCE gc cr blocks received 0
3 FINANCE gc current block receive 0
3 FINANCE gc current blocks receive 0
3 FINANCE redo size 100484

CON_ID SVC_NAME STAT_NAME VALUE
---------- ---------- ------------------------- ----------
3 NEW_APP1 DB CPU 869867
3 NEW_APP1 DB time 17415363
3 NEW_APP1 db block changes 11101
3 NEW_APP1 gc cr block receive time 0
3 NEW_APP1 gc cr blocks received 0
3 NEW_APP1 gc current block receive 0
3 NEW_APP1 gc current blocks receive 0
3 NEW_APP1 redo size 25057520

16 rows selected.

SQL>


So, even some statistics (unfortunately, not all -- the last time I checked was in 11.2) are reported at the Service level.  Thus, I can see that the users of NEW_APP1 consumed more CPU and DB Time and generated more changes and redo than users of FINANCE!  (Obviously, V$SERVICE_STATS reports statistics from the beginning of the instance, so you should use either StatsPack (I haven't verified StatsPack reporting of statistics by individual service) or AWR (if you have the Diagnostic Pack licence) or your own collection scripts to report statistics for a specific window of time.)
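
For example, a rough Python sketch of such a collection script might look like this. It samples V$SERVICE_STATS twice and reports the deltas; the cx_Oracle driver, the localhost DSN and the 5-minute window are my assumptions, and the system/oracle@PDB1 credentials are simply the ones used in the session above:

import time
import cx_Oracle  # assumes the cx_Oracle driver is installed

SQL = """select service_name, stat_name, value
           from v$service_stats
          where service_name in ('NEW_APP1','FINANCE')"""

def sample(conn):
    cur = conn.cursor()
    cur.execute(SQL)
    return {(svc, stat): val for svc, stat, val in cur}

conn = cx_Oracle.connect("system", "oracle", "localhost/PDB1")  # hypothetical DSN
before = sample(conn)
time.sleep(300)          # the reporting window, here 5 minutes
after = sample(conn)

for key in sorted(after):
    delta = after[key] - before.get(key, 0)
    if delta:
        print(key[0], key[1], delta)
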
.
.
.


Categories: DBA Blogs

What’s in a name? – “Brittany” edition

RDBMS Insight - Mon, 2016-06-20 07:46

In my last post, I loaded US SSA names data into my dev instance to play with. In this post, I’ll play around with it a bit and take a look at the name “Brittany” and all its variant spellings.

I found nearly 100 different spellings of “Brittany” in the US SSA data thanks to a handy regexp:

SELECT name nm, SUM(freq) FROM names 
 WHERE regexp_like(UPPER(name),'^BR(I|E|O|U|Y)[T]+[AEIOUY]*N[AEIOUY]+$' )
 AND sex='F'
GROUP BY name
ORDER BY SUM(freq) DESC;
NM				SUM(FREQ)
------------------------------ ----------
Brittany			   357159
Brittney			    81648
Britney 			    34182
Brittani			    11703
Britany 			     6291
Brittni 			     5985
Brittanie			     4725
Britni				     4315
Brittny 			     3584
Brittaney			     3280
...
Bryttnee			       10
Britttany				7
Brytanie				7
Brittanae				6
Bryttnii				6
...
Brittanii				5
Brittiana				5
 
91 rows selected.

The regexp isn’t perfect. It returns a few uncommon names which aren’t pronounced “Brittany”: “Brittiana”, “Brittiani”, “Britane”, “Brittina”, “Britanya”, “Brittine” – and one I’m not sure about, “Brittnae”. But on the other hand, it did let me discover that 7 “Britttany”s applied for SSNs in 1990. Yes, that’s “Britttany” with 3 “T”s.
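
As a quick sanity check of the pattern outside the database, the same regexp can be tried in Python's re module (it only uses basic constructs, so the semantics match); the sample names below are just illustrative:

import re

brittany_re = re.compile(r'^BR(I|E|O|U|Y)[T]+[AEIOUY]*N[AEIOUY]+$')

for name in ["Brittany", "Britney", "Brittina", "Brittnae", "Bridget"]:
    print(name, bool(brittany_re.match(name.upper())))
# Brittany True, Britney True, Brittina True (a false positive),
# Brittnae True (the one I'm not sure about), Bridget False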

Fortunately, all the “non-Brittanys” the regexp returns are quite uncommon and not even in the top 20. So the regexp will do for a graph of the top spellings. Let’s get the data by year and look at the percentage of girls in each year named Brittany/Brittney/Britney/Brittani:

WITH n AS (SELECT name nm, YEAR yr, sex, freq FROM names 
 WHERE regexp_like(UPPER(name),'^BR(I|E|O|U|Y)[T]+[AEIOUY]*N[AEIOUY]+$' )
 AND sex='F'),
y AS (SELECT  YEAR yr, sex, SUM(freq) tot FROM names GROUP BY YEAR, sex)
SELECT y.yr, 
decode(n.nm,'Brittany','Brittany', -- like Brittany Furlan
'Brittney','Brittney', -- like Brittney Griner
'Britney','Britney', -- like Britney Spears
'Brittani','Brittani', -- like Brittani Fulfer
'Other Brits') AS thename,
nvl(100*freq/tot,0) pct  FROM n, y 
WHERE  n.sex(+)=y.sex AND n.yr(+)=y.yr AND y.yr >= 1968
ORDER BY y.yr, nvl(n.nm,' ')

I graphed this in SQL Developer:
[Graph: percentage of girls per year (1968 onwards) named Brittany, Brittney, Britney, Brittani, and "Other Brits"]
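
If you'd rather re-plot it outside SQL Developer, here is a rough matplotlib sketch; it assumes the query output was exported to a hypothetical brits.csv with columns YR, THENAME and PCT:

import csv
from collections import defaultdict
import matplotlib.pyplot as plt

series = defaultdict(dict)                      # {spelling: {year: pct}}
with open("brits.csv") as f:
    for row in csv.DictReader(f):
        series[row["THENAME"]][int(row["YR"])] = float(row["PCT"])

for spelling, points in sorted(series.items()):
    years = sorted(points)
    plt.plot(years, [points[y] for y in years], label=spelling)

plt.xlabel("Year")
plt.ylabel("% of girls given the name")
plt.legend()
plt.show()
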
From the graph it’s clear that “Brittany” is by far the most popular spelling, followed by “Brittney”. The sum of all Brittany-spellings peaked in 1989, but “Britney” has a sharp peak in 2000 – the year that singer Britney Spears released Oops I Did It Again, “one of the best-selling albums of all time” per Wikipedia.

This makes Brittany, however you spell it, a very early-90s-baby kind of name. “Brittany” was the #3 girls’ name in 1989, behind Jessica and Ashley, and was not nearly as popular in decades before or since. In subsequent posts I’ll look some more at names we can identify with specific decades.

Categories: DBA Blogs

Next Round Of ANZ “Let’s Talk Database” Events (July/August 2016)

Richard Foote - Mon, 2016-06-20 00:51
I’ll be presenting the next round of “Let’s Talk Database” events around Australia and NZ this winter in July/August 2016. These are free events, but due to limited places they have often “sold out” in the past, so booking early is recommended to avoid disappointment. All events run between 9:00am – 12:30pm and are followed by a networking lunch. We always have […]
Categories: DBA Blogs

Links for 2016-06-18 [del.icio.us]

Categories: DBA Blogs

An Eye Opener - Oracle Data Visualization

Making sense of your data shouldn't be tough! Visualizing data is a big part of making it understandable, actionable and in general useful. Oracle Data Visualization is stunningly visual and...

We share our skills to maximize your revenue!
Categories: DBA Blogs
