DBA Blogs

Pester: Cannot bind argument to parameter ‘Actual’ because it is an empty string.

Matt Penny - Tue, 2016-06-21 03:23

I’m just getting started with Pester and I got this error:

   Cannot bind argument to parameter 'Actual' because it is an empty string.
   at line: 18 in C:\Program Files\WindowsPowerShell\Modules\pester\3.3.5\Functions\Assertions\Be.ps1

The code I’m testing is very simple – it just separates a ‘Property Name’ and a ‘Property Value’.

So, when it’s working it does this:

get-HugoNameAndValue -FrontMatterLine "Weighting: 103"
DEBUG: 09:15:37.6806 Start: get-HugoNameAndValue
DEBUG: - FrontMatterLine=Weighting: 103
DEBUG: - get-HugoNameAndValue.ps1: line 5
DEBUG: $PositionOfFirstColon: 9
DEBUG: $PropertyName : {Weighting}
DEBUG: $PropertyValue : { 103}
DEBUG: $PropertyValue : {103}

PropertyName PropertyValue
------------ -------------
Weighting    103          

When I ran it from Pester I got this:

GetHugoNameAndValue 06/21/2016 08:45:19 $ invoke-pester
Describing get-HugoNameAndValue
DEBUG: 08:45:56.3377 Start: get-HugoNameAndValue
DEBUG: - FrontMatterLine=Weighting: 103
DEBUG: - get-HugoNameAndValue.ps1: line 5
DEBUG: $PositionOfFirstColon: 9
DEBUG: $PropertyName : {Weighting}
DEBUG: $PropertyValue : { 103}
DEBUG: $PropertyValue : {103}
 [-] returns name and value 189ms
   Cannot bind argument to parameter 'Actual' because it is an empty string.
   at line: 18 in C:\Program Files\WindowsPowerShell\Modules\pester\3.3.5\Functions\Assertions\Be.ps1
Tests completed in 189ms
Passed: 0 Failed: 1 Skipped: 0 Pending: 0

My Pester code was:

$here = Split-Path -Parent $MyInvocation.MyCommand.Path
$sut = (Split-Path -Leaf $MyInvocation.MyCommand.Path).Replace(".Tests.", ".")
. "$here\$sut"

Describe "get-HugoNameAndValue" {
    It "returns name and value" {
        $Hugo = get-HugoNameAndValue -FrontMatterLine "Weighting: 103"
        $value = $Hugo.Value
        $value | Should Be '103'
    }
}

The problem here was simply that I’d got the name of the property wrong. It was ‘PropertyValue’, not just ‘Value’.

So I changed the Pester test:

$here = Split-Path -Parent $MyInvocation.MyCommand.Path
$sut = (Split-Path -Leaf $MyInvocation.MyCommand.Path).Replace(".Tests.", ".")
. "$here\$sut"

Describe "get-HugoNameAndValue" {
    It "returns name and value" {
        $Hugo = get-HugoNameAndValue -FrontMatterLine "Weighting: 103"
        $value = $Hugo.PropertyValue
        $value | Should Be '103'
    }
}

…and then it worked:

invoke-pester
Describing get-HugoNameAndValue
DEBUG: 09:22:21.2291 Start: get-HugoNameAndValue
DEBUG: - FrontMatterLine=Weighting: 103
DEBUG: - get-HugoNameAndValue.ps1: line 5
DEBUG: $PositionOfFirstColon: 9
DEBUG: $PropertyName : {Weighting}
DEBUG: $PropertyValue : { 103}
DEBUG: $PropertyValue : {103}
 [+] returns name and value 99ms
Tests completed in 99ms
Passed: 1 Failed: 0 Skipped: 0 Pending: 0

Categories: DBA Blogs

Links for 2016-06-20 [del.icio.us]

Categories: DBA Blogs

HOWTO solve any problem recursively, PL/SQL edition…

RDBMS Insight - Mon, 2016-06-20 17:47
PROCEDURE solve (my_problem IN varchar2) IS
  my_idea     varchar2(4000);
  my_code     varchar2(4000);
  new_problem varchar2(4000);
BEGIN
  my_idea := have_great_idea (my_problem) ;
  my_code := start_coding (my_idea) ;
  IF i_hit_complications (my_idea)
  THEN 
    new_problem := the_complications (my_idea);
    solve (new_problem);
  ELSE
    NULL; --we will never get here
  END IF;
END solve;

This abuse of recursion was inspired by @ThePracticalDev !

Categories: DBA Blogs

Services -- 3 : Monitoring Usage of Custom Services

Hemant K Chitale - Mon, 2016-06-20 10:04
In my previous blog post, I had demonstrated a few custom services created and started with DBMS_SERVICE.

Let's look at a couple of examples of monitoring usage of these services.

[oracle@ora12102 Desktop]$ sqlplus system/oracle@PDB1

SQL*Plus: Release 12.1.0.2.0 Production on Mon Jun 20 22:51:08 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Last Successful login time: Thu Jun 16 2016 23:23:50 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> execute dbms_service.start_service('NEW_APP1');

PL/SQL procedure successfully completed.

SQL> execute dbms_service.start_service('FINANCE');

PL/SQL procedure successfully completed.

SQL> grant create table to hemant;

Grant succeeded.

SQL> grant select_Catalog_role to hemant;

Grant succeeded.

SQL>

[oracle@ora12102 Desktop]$ sqlplus hemant/hemant@NEW_APP1

SQL*Plus: Release 12.1.0.2.0 Production on Mon Jun 20 22:52:27 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Last Successful login time: Thu Jun 16 2016 23:28:01 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> create table obj_t1 tablespace hemant as select * from dba_objects;

Table created.

SQL> insert into obj_t1 select * from obj_t1;

90935 rows created.

SQL>

[oracle@ora12102 Desktop]$ sqlplus hemant/hemant@FINANCE

SQL*Plus: Release 12.1.0.2.0 Production on Mon Jun 20 22:53:54 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Last Successful login time: Mon Jun 20 2016 22:52:27 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> create table obj_t2_small tablespace hemant as select * from obj_T1 where rownum < 11;

Table created.

SQL>

SQL> show user
USER is "SYSTEM"
SQL> select sid,serial#, to_char(logon_time,'DD-MON HH24:MI:SS'), service_name
2 from v$session
3 where username = 'HEMANT'
4 order by logon_time
5 /

       SID    SERIAL# LOGON_TIME      SERVICE_NAME
---------- ---------- --------------- ----------------
        61      50587 20-JUN 22:52:27 NEW_APP1
        76      43919 20-JUN 22:53:54 FINANCE


SQL>


Thus, we can see that V$SESSION tracks the SERVICE_NAME in use --- even though the USERNAME is the same in both sessions, the SERVICE_NAME is different.

SQL> col svc_name format a10
SQL> col stat_name format a25 trunc
SQL> select
2 con_id, service_name SVC_NAME, stat_name, value
3 from v$service_stats
4 where service_name in ('NEW_APP1','FINANCE')
5 and
6 (stat_name like 'DB%' or stat_name like '%block%' or stat_name like 'redo%')
7 order by 1,2,3
8 /

    CON_ID SVC_NAME   STAT_NAME                      VALUE
---------- ---------- ------------------------- ----------
         3 FINANCE    DB CPU                        168973
         3 FINANCE    DB time                       771742
         3 FINANCE    db block changes                 653
         3 FINANCE    gc cr block receive time           0
         3 FINANCE    gc cr blocks received              0
         3 FINANCE    gc current block receive           0
         3 FINANCE    gc current blocks receive          0
         3 FINANCE    redo size                     100484
         3 NEW_APP1   DB CPU                        869867
         3 NEW_APP1   DB time                     17415363
         3 NEW_APP1   db block changes               11101
         3 NEW_APP1   gc cr block receive time           0
         3 NEW_APP1   gc cr blocks received              0
         3 NEW_APP1   gc current block receive           0
         3 NEW_APP1   gc current blocks receive          0
         3 NEW_APP1   redo size                   25057520

16 rows selected.

SQL>


So, even some statistics (unfortunately, not all -- the last time I checked was in 11.2) are reported at the Service level.  Thus, I can see that the users of NEW_APP1 consumed more CPU and DB Time and generated more changes and redo than the users of FINANCE!  (Obviously, V$SERVICE_STATS reports statistics accumulated since the instance started, so you should use either StatsPack (I haven't verified StatsPack reporting of statistics by individual service), or AWR (if you have the Diagnostic Pack licence), or your own collection scripts to report statistics for a specific window of time.)
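
For a specific window of time, one rough approach (a sketch from my side, not shown above) is to difference the cumulative values recorded in the AWR history view DBA_HIST_SERVICE_STAT between two snapshots.  The SNAP_ID range below is a placeholder, and the arithmetic assumes the instance was not restarted inside that window :

-- Hedged sketch : per-service statistics for an AWR snapshot window, computed
-- by differencing the cumulative values in DBA_HIST_SERVICE_STAT.
-- Replace the snap_id range (100 to 110) with your own window of interest.
select service_name, stat_name,
       max(value) - min(value) as delta_value
from   dba_hist_service_stat
where  snap_id between 100 and 110
and    service_name in ('NEW_APP1', 'FINANCE')
and    stat_name in ('DB CPU', 'DB time', 'db block changes', 'redo size')
group by service_name, stat_name
order by service_name, stat_name;
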
.
.
.


Categories: DBA Blogs

What’s in a name? – “Brittany” edition

RDBMS Insight - Mon, 2016-06-20 07:46

In my last post, I loaded US SSA names data into my dev instance to play with. In this post, I’ll play around with it a bit and take a look at the name “Brittany” and all its variant spellings.

I found nearly 100 different spellings of “Brittany” in the US SSA data thanks to a handy regexp:

SELECT name nm, SUM(freq) FROM names 
 WHERE regexp_like(UPPER(name),'^BR(I|E|O|U|Y)[T]+[AEIOUY]*N[AEIOUY]+$' )
 AND sex='F'
GROUP BY name
ORDER BY SUM(freq) DESC;
NM				SUM(FREQ)
------------------------------ ----------
Brittany			   357159
Brittney			    81648
Britney 			    34182
Brittani			    11703
Britany 			     6291
Brittni 			     5985
Brittanie			     4725
Britni				     4315
Brittny 			     3584
Brittaney			     3280
...
Bryttnee			       10
Britttany				7
Brytanie				7
Brittanae				6
Bryttnii				6
...
Brittanii				5
Brittiana				5
 
91 rows selected.

The regexp isn’t perfect. It returns a few uncommon names which aren’t pronounced “Brittany”: “Brittiana”, “Brittiani”, “Britane”, “Brittina”, “Britanya”, “Brittine” – and one I’m not sure about, “Brittnae”. But on the other hand, it did let me discover that 7 “Britttany”s applied for SSNs in 1990. Yes, that’s “Britttany” with 3 “T”s.
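
A quick way to sanity-check the pattern against a handful of sample spellings (just a throwaway test query, not part of the original analysis):

-- Hedged sketch: feed a few hypothetical sample names through the same REGEXP_LIKE pattern.
SELECT nm,
       CASE WHEN regexp_like(UPPER(nm), '^BR(I|E|O|U|Y)[T]+[AEIOUY]*N[AEIOUY]+$')
            THEN 'matches' ELSE 'no match' END AS result
  FROM (SELECT 'Brittany' nm FROM dual UNION ALL
        SELECT 'Britttany'   FROM dual UNION ALL
        SELECT 'Brittiana'   FROM dual UNION ALL
        SELECT 'Bridget'     FROM dual);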

Fortunately, all the “non-Brittanys” the regexp returns are quite uncommon and not even in the top 20. So the regexp will do for a graph of the top spellings. Let’s get the data by year and look at the percentage of girls in each year named Brittany/Brittney/Britney/Brittani:

WITH n AS (SELECT name nm, YEAR yr, sex, freq FROM names 
 WHERE regexp_like(UPPER(name),'^BR(I|E|O|U|Y)[T]+[AEIOUY]*N[AEIOUY]+$' )
 AND sex='F'),
y AS (SELECT  YEAR yr, sex, SUM(freq) tot FROM names GROUP BY YEAR, sex)
SELECT y.yr, 
decode(n.nm,'Brittany','Brittany', -- like Brittany Furlan
'Brittney','Brittney', -- like Brittney Griner
'Britney','Britney', -- like Britney Spears
'Brittani','Brittani', -- like Brittani Fulfer
'Other Brits') AS thename,
nvl(100*freq/tot,0) pct  FROM n, y 
WHERE  n.sex(+)=y.sex AND n.yr(+)=y.yr AND y.yr >= 1968
ORDER BY y.yr, nvl(n.nm,' ');

I graphed this in SQL Developer:
[Graph: Brittany-variant spellings as a percentage of girls’ names per year, 1968 onward]
From the graph it’s clear that “Brittany” is by far the most popular spelling, followed by “Brittney”. The sum of all Brittany-spellings peaked in 1989, but “Britney” has a sharp peak in 2000 – the year that singer Britney Spears released Oops I Did It Again, “one of the best-selling albums of all time” per Wikipedia.

This makes Brittany, however you spell it, a very early-90s-baby kind of name. “Brittany” was the #3 girls’ name in 1989, behind Jessica and Ashley, and was not nearly as popular in decades before or since. In subsequent posts I’ll look some more at names we can identify with specific decades.

Categories: DBA Blogs

Next Round Of ANZ “Let’s Talk Database” Events (July/August 2016)

Richard Foote - Mon, 2016-06-20 00:51
I’ll be presenting the next round of “Let’s Talk Database” events around Australia and NZ this winter in July/August 2016. These are free events but due to limited places have often “sold out” in the past, so booking early is recommended to avoid disappointment. All events run between 9:00am – 12:30pm and are followed by a networking lunch. We always have […]
Categories: DBA Blogs

Links for 2016-06-18 [del.icio.us]

Categories: DBA Blogs

An Eye Opener - Oracle Data Visualization

Making sense of your data shouldn't be tough! Visualizing data is a big part of making it understandable, actionable and in general useful. Oracle Data Visualization is stunningly visual and...

We share our skills to maximize your revenue!
Categories: DBA Blogs

How to Clean Up GoldenGate Integrated Extract if the Delete Extract fails?

VitalSoftTech - Thu, 2016-06-16 15:13
If the Extract process was not unregistered from the database, then cleaning up the GoldenGate Integrated Extract using DBMS_CAPTURE_ADM.DROP_CAPTURE will be necessary.
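
A minimal sketch of that clean-up, assuming the orphaned capture process is named OGG$CAP_EXT1 (the name is only a placeholder -- check DBA_CAPTURE for the actual capture name in your database):

-- Hedged sketch: drop the capture process left behind by an Integrated Extract
-- that was deleted without being unregistered. The capture name is a placeholder.
BEGIN
  DBMS_CAPTURE_ADM.DROP_CAPTURE(capture_name => 'OGG$CAP_EXT1');
END;
/
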
Categories: DBA Blogs

Graph frequently executed SQL by FORCE_MATCHING_SIGNATURE

Bobby Durrett's DBA Blog - Thu, 2016-06-16 15:10

I made a new graph in my PythonDBAGraphs program. Here is an example with real data but the database name blanked out:

[Graph: average elapsed time versus total executions for the selected group of SQL signatures]

My graphs are all sized for 1920 x 1080 monitors so I can see all the detail in the lines using my entire screen. The idea for this graph is to show how the performance of the queries that matter to the users changes as we add more load and data to this production database. I knew that this database had many queries with literals in their where clauses. I decided to pick a group of SQL by FORCE_MATCHING_SIGNATURE and to graph the average elapsed run time against the total number of executions.

I used this query to list all the SQL by signature:

column FORCE_MATCHING_SIGNATURE format 99999999999999999999

select FORCE_MATCHING_SIGNATURE,
sum(ELAPSED_TIME_DELTA)/1000000 total_seconds,
sum(executions_delta) total_executions,
count(distinct sql_id) number_sqlids,
count(distinct snap_id) number_hours,
min(PARSING_SCHEMA_NAME)
from DBA_HIST_SQLSTAT
group by FORCE_MATCHING_SIGNATURE
order by number_hours desc;

This is an edited version of the output – cut down to fit the page:

FORCE_MATCHING_SIGNATURE TOTAL_SECONDS TOTAL_EXECUTIONS NUMBER_HOURS
------------------------ ------------- ---------------- ------------
    14038313233049026256     22621.203         68687024         1019
    18385146879684525921    18020.9776        157888956         1013
     2974462313782736551    22875.4743           673687          993
    12492389898598272683    6203.78985         66412941          992
    14164303807833460050    4390.32324           198997          980
    10252833433610975622    6166.07675           306373          979
    17697983043057986874    17391.0907         25914398          974
    15459941437096211273    9869.31961          7752698          967
     2690518030862682918    15308.8561          5083672          952
     1852474737868084795    50095.5382          3906220          948
     6256114255890028779    380.095915          4543306          947
    16226347765919129545    9199.14289           215756          946
    13558933806438570935    394.913411          4121336          945
    12227994223267192558    369.784714          3970052          945
    18298186003132032869    296.887075          3527130          945
    17898820371160082776    184.125159          3527322          944
    10790121820101128903    2474.15195          4923888          943
     2308739084210563004    265.395538          3839998          941
    13580764457377834041    2807.68503         62923457          934
    12635549236735416450    1023.42959           702076          918
    17930064579773119626    2423.03972         61576984          914
    14879486686694324607     33.253284            17969          899
     9212708781170196788     7292.5267           126641          899
      357347690345658614    6321.51612           182371          899
    15436428048766097389     11986.082           334125          886
     5089204714765300123    6858.98913           190700          851
    11165399311873161545    4864.60469         45897756          837
    12042794039346605265    11223.0792           179064          835
    15927676903549361476    505.624771          3717196          832
     9120348263769454156    12953.0746           230090          828
    10517599934976061598     311.61394          3751259          813
     6987137087681155918    540.565595          3504784          809
    11181311136166944889      5018.309         59540417          808
      187803040686893225    3199.87327         12788206          800

I picked the ones that had executed in 800 or more hours. Our AWR has about 1000 hours of history so 800 hours represents about 80% of the AWR snapshots. I ended up pulling one of these queries out because it was a select for update and sometimes gets hung on row locks and skews the graph. So, the graph above has that one pulled out.
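
One way to see the statement text behind a given signature (for example, to spot a SELECT … FOR UPDATE like the one I excluded) is to join DBA_HIST_SQLSTAT to DBA_HIST_SQLTEXT. This is just a sketch, not part of my graphing script, and the signature used is simply the first one in the list above:

-- Hedged sketch: show the SQL text behind one FORCE_MATCHING_SIGNATURE.
select distinct t.sql_id,
       dbms_lob.substr(t.sql_text, 200, 1) as sql_text_start
from   dba_hist_sqlstat ss,
       dba_hist_sqltext t
where  ss.sql_id = t.sql_id
and    ss.force_matching_signature = 14038313233049026256;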

I based the graph above on this query:

select
sn.END_INTERVAL_TIME,
sum(ss.executions_delta) total_executions,
sum(ELAPSED_TIME_DELTA)/((sum(executions_delta)+1))
from DBA_HIST_SQLSTAT ss,DBA_HIST_SNAPSHOT sn
where ss.snap_id=sn.snap_id
and ss.INSTANCE_NUMBER=sn.INSTANCE_NUMBER
and ss.FORCE_MATCHING_SIGNATURE in
(
14038313233049026256,
18385146879684525921,
2974462313782736551,
12492389898598272683,
14164303807833460050,
10252833433610975622,
17697983043057986874,
15459941437096211273,
2690518030862682918,
6256114255890028779,
16226347765919129545,
13558933806438570935,
12227994223267192558,
18298186003132032869,
17898820371160082776,
10790121820101128903,
2308739084210563004,
13580764457377834041,
12635549236735416450,
17930064579773119626,
14879486686694324607,
9212708781170196788,
357347690345658614,
15436428048766097389,
5089204714765300123,
11165399311873161545,
12042794039346605265,
15927676903549361476,
9120348263769454156,
10517599934976061598,
6987137087681155918,
11181311136166944889,
187803040686893225
)
group by sn.END_INTERVAL_TIME
order by sn.END_INTERVAL_TIME;

Only time will tell if this really is a helpful way to check system performance as the load grows, but I thought it was worth sharing what I had done. Some part of this might be helpful to others.

Bobby

Categories: DBA Blogs

Services -- 2 : Starting and Connecting to Services (non-RAC)

Hemant K Chitale - Thu, 2016-06-16 10:22
Continuing with the 12.1.0.2 non-RAC MultiTenant environment and two services demonstrated earlier,

I have restarted the environment today :

[grid@ora12102 ~]$ lsnrctl services

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 16-JUN-2016 22:57:17

Copyright (c) 1991, 2014, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
The command completed successfully
[grid@ora12102 ~]$

[oracle@ora12102 Desktop]$ sqlplus '/ as sysdba'

SQL*Plus: Release 12.1.0.2.0 Production on Thu Jun 16 22:57:31 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area 1644167168 bytes
Fixed Size 2925024 bytes
Variable Size 1040191008 bytes
Database Buffers 587202560 bytes
Redo Buffers 13848576 bytes
Database mounted.
Database opened.
SQL> alter pluggable database open;
alter pluggable database open
*
ERROR at line 1:
ORA-65000: missing or invalid pluggable database name


SQL> alter pluggable database pdb1 open;

Pluggable database altered.

SQL>

grid@ora12102 ~]$ lsnrctl services

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 16-JUN-2016 23:00:11

Copyright (c) 1991, 2014, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "CDB1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "CDB1XDB" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"D000" established:0 refused:0 current:0 max:1022 state:ready
DISPATCHER
(ADDRESS=(PROTOCOL=tcp)(HOST=ora12102)(PORT=9213))
Service "pdb1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
The command completed successfully
[grid@ora12102 ~]$


The two manually created services (NEW_APP1 and NEW_APP2) don't start up automatically.

SQL> exec dbms_service.start_service('NEW_APP1');
BEGIN dbms_service.start_service('NEW_APP1'); END;

*
ERROR at line 1:
ORA-44773: Cannot perform requested service operation.
ORA-06512: at "SYS.DBMS_SERVICE_ERR", line 63
ORA-06512: at "SYS.DBMS_SERVICE", line 395
ORA-06512: at line 1


SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options
[oracle@ora12102 Desktop]$ sqlplus sys/oracle@PDB1 as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Thu Jun 16 23:03:05 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> exec dbms_service.start_service('NEW_APP1');

PL/SQL procedure successfully completed.

SQL> exec dbms_service.start_service('NEW_APP2');

PL/SQL procedure successfully completed.

SQL>


[grid@ora12102 ~]$ lsnrctl services

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 16-JUN-2016 23:04:36

Copyright (c) 1991, 2014, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "CDB1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "CDB1XDB" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"D000" established:0 refused:0 current:0 max:1022 state:ready
DISPATCHER
(ADDRESS=(PROTOCOL=tcp)(HOST=ora12102)(PORT=9213))
Service "NEW_APP1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "NEW_APP2" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "pdb1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
The command completed successfully
[grid@ora12102 ~]$


When I attempted to start the custom service (which, per the previous post, was created in PDB1) while still connected to CDB$ROOT, the command failed with ORA-44773.  I had to connect to PDB1 to start the service.
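
As a minimal sketch (not part of the session above), the 12c CDB_SERVICES view can be used from CDB$ROOT to confirm which container each service belongs to :

-- Hedged sketch : run from CDB$ROOT to see the container each service lives in.
select con_id, name, network_name
from   cdb_services
where  name in ('NEW_APP1', 'NEW_APP2');
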

SQL> create tablespace hemant ;                           

Tablespace created.

SQL> create user hemant identified by hemant default tablespace hemant quota unlimited on hemant;

User created.

SQL> grant create session to hemant;

Grant succeeded.

SQL>
[oracle@ora12102 Desktop]$ tnsping NEW_APP1

TNS Ping Utility for Linux: Version 12.1.0.2.0 - Production on 16-JUN-2016 23:09:00

Copyright (c) 1997, 2014, Oracle. All rights reserved.

Used parameter files:


Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = ora12102)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = NEW_APP1)))
OK (0 msec)
[oracle@ora12102 Desktop]$ tnsping NEW_APP2

TNS Ping Utility for Linux: Version 12.1.0.2.0 - Production on 16-JUN-2016 23:09:05

Copyright (c) 1997, 2014, Oracle. All rights reserved.

Used parameter files:


Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = ora12102)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = NEW_APP2)))
OK (0 msec)
[oracle@ora12102 Desktop]$
[oracle@ora12102 Desktop]$ sqlplus hemant/hemant@NEW_APP1

SQL*Plus: Release 12.1.0.2.0 Production on Thu Jun 16 23:09:29 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> show con_id

CON_ID
------------------------------
3
SQL> show con_name

CON_NAME
------------------------------
PDB1

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options
[oracle@ora12102 Desktop]$ sqlplus hemant/hemant@NEW_APP2

SQL*Plus: Release 12.1.0.2.0 Production on Thu Jun 16 23:09:48 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Last Successful login time: Thu Jun 16 2016 23:09:29 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> show con_id

CON_ID
------------------------------
3
SQL> show conn_name
SP2-0158: unknown SHOW option "conn_name"
SQL> show con_name

CON_NAME
------------------------------
PDB1
SQL>


The newly created user HEMANT belongs to the Database.  The user is not tied to a Service.  He can use either Service (whichever is running) to connect to the Database.  But the two Services have two different TNSNAMES.ORA entries --- differing by the SERVICE_NAME specification.

Thus, the DBA could configure some application servers to use one service name and other application servers to use another service name.

An example of such a configuration is where the first set of application servers could be for Finance Applications called "FINANCE" and the second set of servers could be for HR Applications (in the *same* database) called "HR".  Here I create the two services but start only the FINANCE service.

[oracle@ora12102 Desktop]$ sqlplus system/oracle@pdb1 

SQL*Plus: Release 12.1.0.2.0 Production on Thu Jun 16 23:13:55 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> exec dbms_service.create_service('FINANCE','FINANCE');

PL/SQL procedure successfully completed.

SQL> exec dbms_service.start_service('FINANCE');

PL/SQL procedure successfully completed.

SQL> exec dbms_service.create_service('HR','HR');

PL/SQL procedure successfully completed.

SQL>

[grid@ora12102 ~]$ lsnrctl services

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 16-JUN-2016 23:15:45

Copyright (c) 1991, 2014, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "CDB1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "CDB1XDB" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"D000" established:0 refused:0 current:0 max:1022 state:ready
DISPATCHER
(ADDRESS=(PROTOCOL=tcp)(HOST=ora12102)(PORT=9213))
Service "FINANCE" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "NEW_APP1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "NEW_APP2" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "pdb1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
The command completed successfully
[grid@ora12102 ~]$


Now the user should be able to connect to FINANCE, but not to HR.

[oracle@ora12102 Desktop]$ tnsping FINANCE

TNS Ping Utility for Linux: Version 12.1.0.2.0 - Production on 16-JUN-2016 23:18:16

Copyright (c) 1997, 2014, Oracle. All rights reserved.

Used parameter files:


Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = ora12102)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = FINANCE)))
OK (0 msec)
[oracle@ora12102 Desktop]$ tnsping HR

TNS Ping Utility for Linux: Version 12.1.0.2.0 - Production on 16-JUN-2016 23:18:18

Copyright (c) 1997, 2014, Oracle. All rights reserved.

Used parameter files:


Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = ora12102)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = HR)))
OK (0 msec)
[oracle@ora12102 Desktop]$
[oracle@ora12102 Desktop]$ sqlplus hemant/hemant@FINANCE

SQL*Plus: Release 12.1.0.2.0 Production on Thu Jun 16 23:18:57 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Last Successful login time: Thu Jun 16 2016 23:09:48 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options
[oracle@ora12102 Desktop]$
[oracle@ora12102 Desktop]$ sqlplus hemant/hemant@HR

SQL*Plus: Release 12.1.0.2.0 Production on Thu Jun 16 23:19:23 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

ERROR:
ORA-12514: TNS:listener does not currently know of service requested in connect
descriptor


Enter user-name:


The user could connect to FINANCE but the request to HR returned ORA-12514.
(Notice how "tnsping HR" is successful but the connection is not?  That is because tnsping only tests whether the listener is running; it does not test whether the database instance and the service are both running.)
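
To check from the database side which services are actually started in the PDB (a minimal sketch, not part of the transcript above) :

-- Hedged sketch : connected to PDB1, list the services that are currently started.
-- FINANCE should appear here ; HR should not, because it was never started.
select name from v$active_services order by name;
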

So, using multiple services, the DBA can "provision" the same database to multiple applications.  The entry-point is the SERVICE_NAME,  not the USERNAME.   Users and Services are independent of each other.

.
.
.
Categories: DBA Blogs

ORA-00020: maximum number of processes exceeded

Learn DB Concepts with me... - Thu, 2016-06-16 09:45


ORA-00020: maximum number of processes



This error occurs when the total number of processes connecting to the Oracle database has exceeded the maximum set by the PROCESSES initialization parameter. The simplest way to overcome this error is to raise that limit to a value greater than the existing one. Here is how to do it.
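
Once you can connect again, it also helps to check how close the instance actually runs to the limit before deciding on a new value. A minimal sketch using V$RESOURCE_LIMIT (not part of the original steps):

-- Hedged sketch: current and high-water utilization of processes and sessions
-- against their configured limits.
select resource_name, current_utilization, max_utilization, limit_value
from   v$resource_limit
where  resource_name in ('processes', 'sessions');
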
 



oracle@LINUX201:[~] $ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Mon Jun 13 10:20:26 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

ERROR:
ORA-00020: maximum number of processes (500) exceeded




oracle@LINUX201:[~] $ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Mon Jun 13 10:23:42 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

ERROR:
ORA-00020: maximum number of processes (500) exceeded


Enter user-name:

Disconnected from ORACLE

I wasn't able to get into the Oracle database to kill any sessions from within, so I tried killing a few session processes at the OS level to make my way into the DB.

oracle@LINUX201:[~] $ ps -ef|grep oracle
.
.
.
.
.
.
.

oracle   64373     1  0 Jun08 ?        00:00:03 oracleQPDEV (LOCAL=NO)
oracle   64540     1  0 Jun08 ?        00:00:00 oracleQPDEV (LOCAL=NO)
oracle   64554     1  0 Jun08 ?        00:00:01 oracleQPDEV (LOCAL=NO)
oracle   64633     1  0 Jun08 ?        00:00:03 oracleQPDEV (LOCAL=NO)
oracle   64637     1  0 Jun08 ?        00:00:00 oracleQPDEV (LOCAL=NO)
.
.
.
oracle   65186     1  0 Jun08 ?        00:00:04 oracleQPDEV (LOCAL=NO)
oracle   65190     1  0 Jun08 ?        00:00:00 oracleQPDEV (LOCAL=NO)
oracle   65192     1  0 Jun08 ?        00:00:01 oracleQPDEV (LOCAL=NO)
oracle   65202     1  0 Jun08 ?        00:00:00 oracleQPDEV (LOCAL=NO)
oracle   65206     1  0 Jun08 ?        00:00:02 oracleQPDEV (LOCAL=NO)
root     65407 65381  0 May16 pts/2    00:00:00 sudo -u oracle -i
oracle   65408 65407  0 May16 pts/2    00:00:00 -bash
oracle   65458 65408  0 May16 pts/2    00:00:00 sqlplus
oracle   65459 65458  0 May16 ?        00:00:00 oracleQPDEV (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
oracle   65518     1  0 Jun08 ?        00:00:00 oracleQPDEV (LOCAL=NO)
oracle   65520     1  0 Jun08 ?        00:00:02 oracleQPDEV (LOCAL=NO)
oracle   65534     1  0 Jun08 ?        00:00:00 oracleQPDEV (LOCAL=NO)

oracle@LINUX201:[~] $ kill -9 64785
oracle@LINUX201:[~] $ sqlplus "/as sysdba"

SQL*Plus: Release 11.2.0.4.0 Production on Mon Jun 13 10:26:25 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

ERROR:
ORA-00020: maximum number of processes (500) exceeded


Enter user-name: ^C

Killing a few more processes on Linux:

oracle@LINUX201:[~] $ kill -9 65192 65085 64785 64777 64655 64653 64637


oracle@LINUX201:[~] $ ps -ef | grep oracleQPDEV

.
.
.
.
oracle   50258     1  0 Jun07 ?        00:00:04 oracleQPDEV (LOCAL=NO)
oracle   50264     1  0 Jun07 ?        00:00:03 oracleQPDEV (LOCAL=NO)
oracle   50268     1  0 Jun07 ?        00:00:02 oracleQPDEV (LOCAL=NO)
.
.
.
oracle   64554     1  0 Jun08 ?        00:00:01 oracleQPDEV (LOCAL=NO)
oracle   64633     1  0 Jun08 ?        00:00:03 oracleQPDEV (LOCAL=NO)
oracle   65186     1  0 Jun08 ?        00:00:04 oracleQPDEV (LOCAL=NO)
oracle   65190     1  0 Jun08 ?        00:00:00 oracleQPDEV (LOCAL=NO)
oracle   65202     1  0 Jun08 ?        00:00:00 oracleQPDEV (LOCAL=NO)
oracle   65206     1  0 Jun08 ?        00:00:02 oracleQPDEV (LOCAL=NO)
.
.
.

oracle@LINUX201:[~] $ sqlplus "/as sysdba"

SQL*Plus: Release 11.2.0.4.0 Production on Mon Jun 13 10:30:07 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> show parameter process;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
aq_tm_processes                      integer     1
cell_offload_processing              boolean     TRUE
db_writer_processes                  integer     1
gcs_server_processes                 integer     0
global_txn_processes                 integer     1
job_queue_processes                  integer     1000
log_archive_max_processes            integer     4
processes                            integer     500
processor_group_name                 string


Now reset the max processes to a greater value:

SQL> alter system set processes=1200 scope=spfile;

System altered.

SQL>  show parameter process;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
aq_tm_processes                      integer     1
cell_offload_processing              boolean     TRUE
db_writer_processes                  integer     1
gcs_server_processes                 integer     0
global_txn_processes                 integer     1
job_queue_processes                  integer     1000
log_archive_max_processes            integer     4
processes                            integer     500
processor_group_name                 string
SQL> select name,open_mode from v$database;

NAME      OPEN_MODE
--------- --------------------
QPDEV     READ WRITE

This will need a restart to take effect.


SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup;
ORACLE instance started.

Total System Global Area 3206836224 bytes
Fixed Size                  2257520 bytes
Variable Size            1275071888 bytes
Database Buffers         1912602624 bytes
Redo Buffers               16904192 bytes
Database mounted.
Database opened.

SQL> show parameter process

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
aq_tm_processes                      integer     1
cell_offload_processing              boolean     TRUE
db_writer_processes                  integer     1
gcs_server_processes                 integer     0
global_txn_processes                 integer     1
job_queue_processes                  integer     1000
log_archive_max_processes            integer     4
processes                            integer     1200
processor_group_name                 string

SQL>
Categories: DBA Blogs

Oracle Partner Community - EPM BI Big Data Bulletin June 2016

When you prototype and pilot a Big Data Hadoop solution, you may start on the cloud or a cluster of commodity servers.  However, I have heard from many partners that the transition to production...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Links for 2016-06-15 [del.icio.us]

Categories: DBA Blogs

Services -- 1 : Services in non-RAC 12c MultiTenant

Hemant K Chitale - Tue, 2016-06-14 10:22
It is generally accepted that service definition is required in RAC environments.

However, the concept of Services was made available in 8i --- predating RAC.   Services can be defined in non-OPS / non-RAC / non-MultiTenant / MultiTenant environments.  A single PDB in a 12c MultiTenant database can host multiple services.

A quick start to the implementation.

Note : srvctl is to be used to create and manage services in a RAC environment.  srvctl registers the services with the Cluster Registry.  In a Non-RAC environment, use DBMS_SERVICE.

First, no services are running :

[grid@ora12102 ~]$ lsnrctl status

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 14-JUN-2016 23:14:48

Copyright (c) 1991, 2014, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 12.1.0.2.0 - Production
Start Date 14-JUN-2016 23:14:28
Uptime 0 days 0 hr. 0 min. 19 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Log File /u01/app/oracle/diag/tnslsnr/ora12102/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=ora12102)(PORT=1521)))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM", status READY, has 1 handler(s) for this service...
The command completed successfully
[grid@ora12102 ~]$


Next, I startup the MultiTenant CDB database.

[oracle@ora12102 ~]$ . oraenv
ORACLE_SID = [CDB1] ? CDB1
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@ora12102 ~]$ sqlplus '/ as sysdba'

SQL*Plus: Release 12.1.0.2.0 Production on Tue Jun 14 23:15:47 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area 1644167168 bytes
Fixed Size 2925024 bytes
Variable Size 1056968224 bytes
Database Buffers 570425344 bytes
Redo Buffers 13848576 bytes
Database mounted.
Database opened.
SQL> alter pluggable database pdb1 open;

Pluggable database altered.

SQL>


[grid@ora12102 ~]$ lsnrctl status

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 14-JUN-2016 23:17:23

Copyright (c) 1991, 2014, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 12.1.0.2.0 - Production
Start Date 14-JUN-2016 23:14:28
Uptime 0 days 0 hr. 2 min. 54 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Log File /u01/app/oracle/diag/tnslsnr/ora12102/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=ora12102)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=ora12102)(PORT=5501))(Security=(my_wallet_directory=/u01/app/oracle/admin/CDB1/xdb_wallet))(Presentation=HTTP)(Session=RAW))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM", status READY, has 1 handler(s) for this service...
Service "CDB1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "CDB1XDB" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "pdb1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
The command completed successfully
[grid@ora12102 ~]$


So, now I have the default pdb1 service for pluggable database PDB1 running.

Next, I manually create two new services and start them.

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options
[oracle@ora12102 ~]$ sqlplus 'sys/oracle@PDB1 as sysdba'

SQL*Plus: Release 12.1.0.2.0 Production on Tue Jun 14 23:20:30 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> show con_id

CON_ID
------------------------------
3
SQL> show con_name

CON_NAME
------------------------------
PDB1
SQL>
SQL> exec dbms_service.create_service(service_name=>'NEW_APP1',network_name=>'NEW_APP1');

PL/SQL procedure successfully completed.

SQL> exec dbms_service.create_service('NEW_APP2','NEW_APP2');

PL/SQL procedure successfully completed.

SQL> exec dbms_service.start_service('NEW_APP1');

PL/SQL procedure successfully completed.

SQL> exec dbms_service.start_service('NEW_APP2');

PL/SQL procedure successfully completed.

SQL>
[grid@ora12102 ~]$ lsnrctl status

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 14-JUN-2016 23:22:54

Copyright (c) 1991, 2014, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 12.1.0.2.0 - Production
Start Date 14-JUN-2016 23:14:28
Uptime 0 days 0 hr. 8 min. 26 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Log File /u01/app/oracle/diag/tnslsnr/ora12102/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=ora12102)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=ora12102)(PORT=5501))(Security=(my_wallet_directory=/u01/app/oracle/admin/CDB1/xdb_wallet))(Presentation=HTTP)(Session=RAW))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM", status READY, has 1 handler(s) for this service...
Service "CDB1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "CDB1XDB" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "NEW_APP1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "NEW_APP2" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "pdb1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
The command completed successfully
[grid@ora12102 ~]$



The two new services NEW_APP1 and NEW_APP2 that I created in PDB1 are now registered with the listener.  Remember that these services are in the Database Instance CDB1.

My next blog post will be about using these services.

.
.
.

Categories: DBA Blogs

The Cost of Doing Nothing

Kubilay Çilkara - Mon, 2016-06-13 12:18
For a business to become optimally successful, it absolutely must incorporate a quality life-cycle management system.  This begs the question:  Why do so many vendors miss the mark when it comes to providing the necessary updates and enhancements?  Developers and software companies should embrace their respective ALM systems as their staunch allies; and progressive IT organizations stay well ahead of the game by using progressive technology and best practices to ensure that high-quality products are on time and on budget while remaining fully compliant.  The goal of any ALM supplier should be to cater to its clients by properly supporting them by staying abreast of platform enhancements and being familiar with new languages, new devices, mobile demands, ever-changing compliance regulations and other real-time demands that must be continually addressed.

The bottom line remains:  in order for development leaders to not only survive, but thrive, they must make the transition to the most up-to-date ALM solution possible.  Surprisingly, however, development leaders can be hesitant to utilize a modern ALM solution; but the cost of doing nothing can be more expensive than one might imagine. 

There are a handful of misguided reasons why an updated ALM solution might not be employed.  A few of those fallacies can include the following:

A New ALM Solution Would Be Too Cost-Prohibitive

Being the lead dog and staying ahead of the pack in this competitive world is absolutely paramount which is why a vendor must provide the crucial components such as product enhancements, platform updates, etc.  Research reveals some unsettling data:

  • 84% of IT projects become overdue or over budget
  • 31% of IT projects never reach completion due to being canceled
  • Completed IT projects deliver, on average, only 42% of their expected benefits

Accuracy and efficiency become the name of the game as it applies to profit; but developers' profit margins will be sorely compromised without up-to-date functionality and access to current tools via an up-to-date ALM system.  Additionally, if no automated system is integrated, IT will be forced to spend a good deal of valuable time addressing compliance-related issues; and that can be costly. 

Your vendor's R&D department should certainly be acutely aware of new trends in the industry as well as responsive to customers’ requests.  A coveted ALM solution will incorporate: 1) on-board templates for compliance reporting; 2) compatibility and remote access with any mobile device; and 3) tools such as dashboards, real-time reports & analytics and automated work-flows –  all of which enable every team-member to stay up-to-date. 

The cost of doing nothing can take a titanic toll when one considers that missed app-release time-lines, opportunities lost in the shuffle, and valuable time spent addressing compliance concerns and/or audits all cost a business, big-time!  The question, then, becomes obvious:  You believe you can't afford the integration of a modern ALM solution – but can you afford NOT to??

Our Current ALM Solution Seems to be Working Fine
In order to effectively, efficiently and optimally monitor, manage and coordinate all the people, processes, changes and dynamics that are intricately involved in the application life-cycle, utilizing the most sophisticated ALM solution is key!  Development personnel feel the demands of deploying functionality and fixes, very quickly.  The IT setting is extremely complex; and in this environment, database servers, web servers, diverse clientele and every type of mobile device equate to sophisticated development and release processes.  All this must be intricately orchestrated without a hitch; and a modern ALM solution is what it takes to fully ensure a flawless and seamless operations in every department. 

With the most modern ALM solution, users can enjoy the ease at which systems and work-flows come together in addition to the minimization of production errors and the maximization of collaboration efforts.  Then, imagine all this coupled with data access from any mobile device, compliance reports with point-and-click ease and automation processes that are as easy as child's play.

Older ALM solutions are just that: 'old'.  With that comes the inability of an archaic solution to offer the newest technologies, which equates to lost time spent fixing bad code and dealing with coding errors, to name only a single example.  And then, of course, there is the lost revenue.  In the end, the growth of a company is stifled.  Again, a modern ALM solution keeps a business in position as the 'alpha' and leader of the competitive pack since the people and processes involved are all humming like a fine-tuned engine – no barricades, no inefficiency and virtually no errors.

Transitioning to a New ALM Would Be Too Time-Consuming

How one chooses a vendor can make the difference between reaping the benefits of a dedicated and seasoned professional with an unparalleled product that he or she is excited to share, versus a vendor whose interest in your goals and progress is marginal, at best.  Assuming the right vendor has been selected, the time required to get the system fully running will be minuscule.  Personnel can very quickly enjoy immediate visibility, coordination and management across distributed systems, teams and tools.  In the end, previously-lost revenue due to outdated ALM systems becomes a distant memory since teams will no longer contend with drawn-out, manual processes but will, now, have the updated abilities to very quickly communicate, collaborate, update etc. regarding any and all application projects. 

Not one single team-member needs to concern him or herself with transitioning into an updated system.  A committed vendor will make sure the necessary and expected support is entirely available for everyone involved.  Again, in the end, any time invested in becoming familiar with a new ALM solution will begin to immediately pay for itself due to optimized usability involving real-time visibility, flexibility, accuracy and automation.

Our Current ALM Serves Only Development

When a business chooses stagnation over progress, it can become the 'kiss of death' for the organization.  Because technology will never slow down or even reach an apex, a business absolutely must stay on track with innovative ideas, processes and insights.  An integrated ALM system ensures that users can take full advantage of managing, in real-time, every aspect of development and delivery.  A top-tier ALM solution will provide instantaneous updates on every component ranging from code to work-flow to dashboards and everything in-between and beyond.  Smarter, more-insightful decisions become common-place among everyone involved – whether development personnel, auditors, programmers, etc.  Since DevOps departments evolve and advance in the enterprise, so too, must the ALM system by functioning as the centralized collaborative arena where inter-department communications are available whenever and wherever required.

After it's all said and done, switching to a modern ALM solution will, realistically, save money over the long haul since time is being dramatically saved – and time is money!  Those few words serve as a cliché as well as a fact.  Whether one is speaking of departments collaborating on changes at any level, or enhanced visibility that maximizes work-flow or whether one is talking about users gaining advanced capabilities resulting in succinct, precise and quick decision-making, it all adds up, once again, to saving copious amounts of time which translates into saving impressive amounts of revenue.

A reliable vendor will provide the kind of support one would expect from a supplier that operates as a top-tier contender in the industry.  Vendor support should include:

  • Access to the most up-dated interfaces and devices
  • Assistance with any existing OS
  • Intervention for all platforms, on which, code is being developed
  • Mobile and web development
  • Out-of-the-box plug-ins to converge with other tools
  • Compliance-report templates
  • Delivery of single-screen visibility with all IT involvement
  • Adjustable point-and-click distribution and deployment and mobile functionality with everything

It is an ever-changing business climate where technology is king.  And...

                                            Adaptation equals growth and growth equals SUCCESS!    


About the author: Daniel Magid is Rocket’s IBM i solution leader and Director of the Rocket Application Lifecycle Management (ALM) and DevOps lab. Having started his career at IBM in 1981 in the midrange computer division, Daniel brings to Rocket Software more than 30 years of experience in the IBM midrange marketplace. Prior to coming to Rocket as part of the acquisition of Aldon in 2011, Daniel was Aldon’s CEO and Chief Product Strategist. Daniel led the growth of Aldon from a small 4 person consulting company to the largest provider of ALM and DevOps solutions in the IBM i market. Daniel is a recognized expert in application development and DevOps in the IBM i market and a well-known presence at IBM i conferences.




Categories: DBA Blogs

Data Recovery Advisor (11g)

Hemant K Chitale - Mon, 2016-06-13 03:13
Here's my YouTube video on using the 11g Data Recovery Advisor from the RMAN command line.

Other videos are in my YouTube Channel.

.
.
.

Categories: DBA Blogs

Links for 2016-06-12 [del.icio.us]

Categories: DBA Blogs

Create Temporary Tables in Oracle

Learn DB Concepts with me... - Fri, 2016-06-10 13:33

Global Temporary Tables in Oracle

Temporary tables are useful in applications where a result set is to be buffered, perhaps because it is constructed by running multiple DML operations. For example, consider the following:

A Web-based airlines reservations application allows a customer to create several optional itineraries. Each itinerary is represented by a row in a temporary table. The application updates the rows to reflect changes in the itineraries. When the customer decides which itinerary she wants to use, the application moves the row for that itinerary to a persistent table.

During the session, the itinerary data is private. At the end of the session, the optional itineraries are dropped.

The first example below creates a temporary table that is transaction-specific; the second creates one that is session-specific.

NOTE : Indexes can be created on temporary tables. They are also temporary and the data in the index has the same session or transaction scope as the data in the underlying table.

*********************************************************************************
Here is an example that creates a global temporary table with ON COMMIT DELETE ROWS :
*********************************************************************************

sql>  CREATE GLOBAL TEMPORARY TABLE admin_work_area
        (startdate DATE,
         enddate DATE,
         class CHAR(20))
      ON COMMIT DELETE ROWS;
      
      
    
sql>  insert into ADMIN_WORK_AREA values (sysdate, sysdate+1, 'A');

1 row inserted.

sql>  select * from ADMIN_WORK_AREA;

sql>  commit;

Commit complete.

sql>  select * from ADMIN_WORK_AREA;

no rows selected


NOTE: records in this temp table will be deleted upon commit. This is equivalent to truncating the table on commit.
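
As the note on indexes above says, an index on a global temporary table is itself temporary. A minimal sketch against the table created above (the index name is arbitrary):

sql>  CREATE INDEX admin_work_area_idx ON admin_work_area (startdate);

Index created.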

*********************************************************************************
Here is an example that creates a global temporary table with ON COMMIT PRESERVE ROWS :
*********************************************************************************


sql>  CREATE GLOBAL TEMPORARY TABLE admin_work_area
        (startdate DATE,
         enddate DATE,
         class CHAR(20))
      ON COMMIT PRESERVE ROWS;
      
            
sql>  insert into ADMIN_WORK_AREA values (sysdate, sysdate+1, 'A');

1 row inserted.

sql>  select * from ADMIN_WORK_AREA;

sql>  commit;

Commit complete.

sql>  select * from ADMIN_WORK_AREA;


Now exit the session, log back in, and select from the table again.

sql>  select * from ADMIN_WORK_AREA;

table is empty

NOTE: records (rows) in this temp table are deleted only upon session exit; as long as you are using the same session, you can see these rows. This is equivalent to truncating the table on session exit.
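
To confirm how a given global temporary table behaves, check the data dictionary (a minimal sketch; DURATION shows SYS$TRANSACTION for ON COMMIT DELETE ROWS and SYS$SESSION for ON COMMIT PRESERVE ROWS):

sql>  SELECT table_name, temporary, duration FROM user_tables WHERE table_name = 'ADMIN_WORK_AREA';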

Categories: DBA Blogs
