Skip navigation.

DBA Blogs

Simple Issue with ORA-00108

Surachart Opun - Wed, 2014-02-12 12:47
A friend asked me to check an error in the alert log file - "dispatcher 'D000' encountered error getting listening address". After checking, I found:
Wed Feb 12 09:46:27 2014
dispatcher 'D000' encountered error getting listening address
Wed Feb 12 09:46:27 2014
found dead dispatcher 'D000', pid = (17, 154)

I checked the trace file for the D000 process.
Trace file /u01/app/oracle/diag/rdbms/prod/PROD/trace/PROD_d000_31988.trc
Oracle Database 11g Release 11.2.0.1.0 - 64bit Production
ORACLE_HOME = /u01/app/oracle/product/11.2.0/dbhome_1
System name:    Linux
Node name:      linux-host
Release:        2.6.39-400.21.1.el6uek.x86_64
Version:        #1 SMP Thu Apr 4 03:49:00 PDT 2013
Machine:        x86_64
Instance name: PROD
Redo thread mounted by this instance: 1
Oracle process number: 17
Unix process pid: 31988, image: oracle@linux-host (D000)


*** 2014-02-12 09:57:35.577
*** CLIENT ID:() 2014-02-12 09:57:35.577
*** SERVICE NAME:() 2014-02-12 09:57:35.577
*** MODULE NAME:() 2014-02-12 09:57:35.577
*** ACTION NAME:() 2014-02-12 09:57:35.577

network error encountered getting listening address:
  NS Primary Error: TNS-12533: TNS:illegal ADDRESS parameters
  NS Secondary Error: TNS-12560: TNS:protocol adapter error
  NT Generic Error: TNS-00503: Illegal ADDRESS parameters
OPIRIP: Uncaught error 108. Error stack:
ORA-00108: failed to set up dispatcher to accept connection asynchronously

I tried to find out... but had no idea. So, I checked /etc/hosts:
[oracle@linux-host trace]$ cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

[oracle@linux-host trace]$ vi /etc/hosts
[oracle@linux-host trace]$ cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

172.30.6.10     linux-host

Issue fixed! Oh no... I had searched Oracle Support and the Internet without finding a ready-made solution, but the issue was simply the hostname missing from /etc/hosts.

Written By: Surachart Opun http://surachartopun.com
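If PMON does not respawn the dead dispatcher on its own after the fix, you can nudge it by re-issuing the DISPATCHERS parameter and then confirm the dispatcher has a valid listening address. A quick sketch (the DISPATCHERS value below is only the common default; use whatever your instance already has):

-- re-register the dispatcher configuration (example value only)
alter system set dispatchers = '(PROTOCOL=TCP)(DISPATCHERS=1)';

-- confirm the dispatcher is back up with a listening address
select name, status, network from v$dispatcher;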
Categories: DBA Blogs

login.sql does not require a login

Hemant K Chitale - Wed, 2014-02-12 07:55
Oracle's sqlplus can use a login.sql file to execute commands -- e.g. to set up session options.
This file is read and executed when you start sqlplus, even without having logged in to a database.
Here's a quick demo :
I start an sqlplus session without a login.sql
[oracle@localhost ~]$ pwd
/home/oracle
[oracle@localhost ~]$ ls -l login.sql
ls: login.sql: No such file or directory
[oracle@localhost ~]$ sqlplus hemant/hemant

SQL*Plus: Release 11.2.0.2.0 Production on Wed Feb 12 08:01:43 2014

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> show pagesize
pagesize 14
SQL> show linesize
linesize 80
SQL> show sqlprompt
sqlprompt "SQL> "
SQL>

Now, I create a login.sql and invoke sqlplus without logging in to the database.

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
[oracle@localhost ~]$ vi login.sql
[oracle@localhost ~]$ cat login.sql
set pagesize 60
set linesize 132
set sqlprompt 'HemantSQL>'
[oracle@localhost ~]$
[oracle@localhost ~]$ sqlplus /nolog

SQL*Plus: Release 11.2.0.2.0 Production on Wed Feb 12 08:05:24 2014

Copyright (c) 1982, 2010, Oracle. All rights reserved.

HemantSQL>show pagesize
pagesize 60
HemantSQL>show linesize
linesize 132
HemantSQL>show user
USER is ""
HemantSQL>

Without having connected to a database (and created a database session), the login.sql was executed.

I can also have it dynamically use a variable --- e.g. the sqlprompt changing based on my login username.

HemantSQL>exit
[oracle@localhost ~]$ vi login.sql
[oracle@localhost ~]$ cat login.sql
set pagesize 60
set linesize 132
set sqlprompt '_USER>'
[oracle@localhost ~]$
[oracle@localhost ~]$ sqlplus /nolog

SQL*Plus: Release 11.2.0.2.0 Production on Wed Feb 12 08:08:12 2014

Copyright (c) 1982, 2010, Oracle. All rights reserved.

>
>show user
USER is ""
>connect hemant/hemant
Connected.
HEMANT>show user
USER is "HEMANT"
HEMANT>connect hr/oracle
Connected.
HR>show user
USER is "HR"
HR>
HR>exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
[oracle@localhost ~]$

Notice how the sqlprompt was simply ">" when no user was logged in? On the "HEMANT" and "HR" logins, the prompt did change.
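A related note: SQL*Plus also runs the site-wide glogin.sql (under $ORACLE_HOME/sqlplus/admin) before login.sql, and it looks for login.sql in the current directory and then along SQLPATH. Other predefined substitution variables work in the prompt as well. Here is a sketch of a login.sql using the predefined _CONNECT_IDENTIFIER variable (the pagesize and linesize values are simply carried over from the demo above):

set pagesize 60
set linesize 132
set sqlprompt "_user'@'_connect_identifier> "

With this, connecting as HEMANT to the ORCL service would show a prompt like HEMANT@ORCL>.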

.
.
.
Categories: DBA Blogs

Library cache lock scripts for RAC

Bobby Durrett's DBA Blog - Tue, 2014-02-11 18:16

I’ve been having issues for a long time now with an Exadata RAC database that has user reports experiencing library cache lock waits.  The challenge is to figure out what is holding the library cache locks that the queries are waiting on.

My starting point on library cache locks has always been this Oracle support document:

How to Find which Session is Holding a Particular Library Cache Lock (Doc ID 122793.1)

But it doesn’t tell you how to find the session across nodes of a RAC database.

I also found this helpful blog post that briefly addresses finding the session across RAC nodes: Library cache lock and library cache pin waits

I’ve spent many hours over more than a year now dealing with these waits without a lot of success so I finally tried to build a script that I could run regularly to try to capture information about the sessions holding the library cache locks.

First, I knew from Oracle's document that the x$kgllk table could be used to find the blocking session on a single node, so I included queries against this table in my script and set it up to ssh into every node of the cluster and run queries like this against each node:

-- sessions on this instance that are waiting on
-- library cache lock waits
-- unioned with
-- sessions on this instance that are holding locks that other
-- sessions on this instance are waiting on.

insert into myuser1.library_cache_lock_waits
(
SOURCETABLE,
BLOCKER,
SAMPLE_TIME,
INST_ID,
SID,
USERNAME,
STATUS,
OSUSER,
MACHINE,
PROGRAM,
LOGON_TIME,
LAST_CALL_ET,
KGLLKHDL,
KGLLKREQ,
USER_NAME,
KGLNAOBJ,
sql_id,
SQL_FULLTEXT
)
(select
'X\$KGLLK',
'N',
sysdate,
(select INSTANCE_NUMBER from v\$instance),
s.SID,
s.USERNAME,
s.STATUS,
s.OSUSER,
s.MACHINE,
s.PROGRAM,
s.LOGON_TIME,
s.LAST_CALL_ET,
x.KGLLKHDL,
x.KGLLKREQ,
x.USER_NAME,
x.KGLNAOBJ,
s.sql_id,
q.SQL_FULLTEXT
from 
v\$session s, x\$kgllk x, v\$sql q
where
x.kgllkses=s.saddr and
s.sql_id=q.sql_id(+) and
s.event='library cache lock' and
x.KGLLKREQ > 0 and
q.child_number(+)=0)
union all
(select
'X\$KGLLK',
'Y',
sysdate,
(select INSTANCE_NUMBER from v\$instance),
s.SID,
s.USERNAME,
s.STATUS,
s.OSUSER,
s.MACHINE,
s.PROGRAM,
s.LOGON_TIME,
s.LAST_CALL_ET,
x.KGLLKHDL,
x.KGLLKREQ,
x.USER_NAME,
x.KGLNAOBJ,
s.sql_id,
q.SQL_FULLTEXT
from 
v\$session s, x\$kgllk x, v\$sql q,
x\$kgllk x2
where
x.kgllkses=s.saddr and
s.sql_id=q.sql_id(+) and
x.KGLLKREQ = 0 and
x2.KGLLKREQ > 0 and
x2.KGLLKHDL = x.KGLLKHDL and
q.child_number(+)=0);

commit;

The dollar signs are escaped with a backslash because these queries are part of a Unix shell script.  I picked a few columns from v$session that I thought would be helpful and joined to v$sql to get the text of the blocking and blocked SQL.  Note that I ran these queries as SYSDBA.  Here is an example of my test case where the blocker and blocked sessions are both on one node:

SOURCETABLE                    B Sample Time            INST_ID        SID USERNAME                       STATUS   OSUSER                         MACHINE                                                          PROGRAM                                          Logon Time          LAST_CALL_ET KGLLKHDL           KGLLKREQ USER_NAME                      KGLNAOBJ                                                     SQL_ID        RESOURCE_NAME1                 RESOURCE_NAME2                 SQL_FULLTEXT
------------------------------ - ------------------- ---------- ---------- ------------------------------ -------- ------------------------------ ---------------------------------------------------------------- ------------------------------------------------ ------------------- ------------ ---------------- ---------- ------------------------------ ------------------------------------------------------------ ------------- ------------------------------ ------------------------------ --------------------------------------
X$KGLLK                        N 2014-02-17 17:19:01          1       1183 MYUSER1                        ACTIVE   myuser2                        MYMACHINE                                                        sqlplus.exe                                      2014-02-17 17:18:57            5 00000005F9E7D148          2 MYUSER1                        TEST                                                         g4b4j3a8mms0z                                                               select sum(b) from test
X$KGLLK                        Y 2014-02-17 17:19:03          1        995 MYUSER1                        ACTIVE   myuser2                        MYMACHINE                                                        sqlplus.exe                                      2014-02-17 17:18:52           10 00000005F9E7D148          0 MYUSER1                        TEST                                                         gv7dyp7zvspqg                                                               alter table test modify (a char(100))

Next, I noticed on gv$session that, when a session was waiting on library cache lock waits, sometimes FINAL_BLOCKING_INSTANCE and FINAL_BLOCKING_SESSION were populated, and that might lead me to the session holding the lock.  Also, this query and the ones following can run from a less privileged account – you don't need SYSDBA.

drop table lcl_blockers;

create table lcl_blockers as
select distinct
s.INST_ID,
s.SID,
s.USERNAME,
s.STATUS,
s.OSUSER,
s.MACHINE,
s.PROGRAM,
s.LOGON_TIME,
s.LAST_CALL_ET,
s.sql_id
from
gv\$session s, 
gv\$session s2
where
s2.FINAL_BLOCKING_INSTANCE=s.INST_ID and
s2.FINAL_BLOCKING_SESSION=s.SID and
s2.event='library cache lock';

insert into myuser1.library_cache_lock_waits
(
SOURCETABLE,
BLOCKER,
SAMPLE_TIME,
INST_ID,
SID,
USERNAME,
STATUS,
OSUSER,
MACHINE,
PROGRAM,
LOGON_TIME,
LAST_CALL_ET,
sql_id,
SQL_FULLTEXT
)
select
'GV\$SESSION',
'Y',
sysdate,
s.INST_ID,
s.SID,
s.USERNAME,
s.STATUS,
s.OSUSER,
s.MACHINE,
s.PROGRAM,
s.LOGON_TIME,
s.LAST_CALL_ET,
s.sql_id,
q.SQL_FULLTEXT
from 
lcl_blockers s, gv\$sql q
where
s.sql_id=q.sql_id(+) and
s.INST_ID=q.INST_ID(+) and
q.child_number(+)=0
order by s.INST_ID,s.sid;

commit;

When this works – sporadically in my tests – it shows the same sort of information the previous queries do for same node locking.  Here is an example of these gv$session queries catching the blocker:

SOURCETABLE                    B Sample Time            INST_ID        SID USERNAME                       STATUS   OSUSER                         MACHINE                                                          PROGRAM                                          Logon Time          LAST_CALL_ET KGLLKHDL           KGLLKREQ USER_NAME                      KGLNAOBJ                                                     SQL_ID        RESOURCE_NAME1                 RESOURCE_NAME2                 SQL_FULLTEXT
------------------------------ - ------------------- ---------- ---------- ------------------------------ -------- ------------------------------ ---------------------------------------------------------------- ------------------------------------------------ ------------------- ------------ ---------------- ---------- ------------------------------ ------------------------------------------------------------ ------------- ------------------------------ ------------------------------ --------------------------------------
GV$SESSION                     Y 2014-02-17 17:19:05          1        995 MYUSER1                        ACTIVE   myuser2                        MYMACHINE                                                        sqlplus.exe                                      2014-02-17 17:18:52           12                                                                                                                         gv7dyp7zvspqg                                                               alter table test modify (a char(100))

Lastly, I got a cross-node query working that uses the view gv$ges_blocking_enqueue.  The key to making this query work was that the pid column in gv$ges_blocking_enqueue is the same as the spid column in gv$process.

-- join gv$ges_blocking_enqueue, gv$session, gv$process to show 
-- cross node library cache lock blockers.  Blocked session will 
-- have event=library cache lock.

drop table ges_blocked_blocker;

create table ges_blocked_blocker as
(select distinct
'N' blocker,
s.INST_ID,
s.SID,
s.USERNAME,
s.STATUS,
s.OSUSER,
s.MACHINE,
s.PROGRAM,
s.LOGON_TIME,
s.LAST_CALL_ET,
s.sql_id,
s.process,
p.spid,
e.RESOURCE_NAME1,
e.RESOURCE_NAME2
from
gv\$session s, 
gv\$process p,
gv\$ges_blocking_enqueue e
where
s.event='library cache lock' and 
s.inst_id=p.inst_id and
s.paddr=p.addr and
p.inst_id=e.inst_id and
p.spid=e.pid and
e.blocked > 0)
union
(select distinct
'Y',
s.INST_ID,
s.SID,
s.USERNAME,
s.STATUS,
s.OSUSER,
s.MACHINE,
s.PROGRAM,
s.LOGON_TIME,
s.LAST_CALL_ET,
s.sql_id,
s.process,
p.spid,
e.RESOURCE_NAME1,
e.RESOURCE_NAME2
from
gv\$session s, 
gv\$process p,
ges_blocked b,
gv\$ges_blocking_enqueue e
where
s.inst_id=p.inst_id and
s.paddr=p.addr and
p.inst_id=e.inst_id and
p.spid=e.pid and
b.RESOURCE_NAME1=e.RESOURCE_NAME1 and
b.RESOURCE_NAME2=e.RESOURCE_NAME2 and
e.blocker > 0);

insert into myuser1.library_cache_lock_waits
(
SOURCETABLE,
BLOCKER,
SAMPLE_TIME,
INST_ID,
SID,
USERNAME,
STATUS,
OSUSER,
MACHINE,
PROGRAM,
LOGON_TIME,
LAST_CALL_ET,
sql_id,
SQL_FULLTEXT,
RESOURCE_NAME1,
RESOURCE_NAME2
)
select
'GV\$GES_BLOCKING_ENQUEUE',
s.blocker,
sysdate,
s.INST_ID,
s.SID,
s.USERNAME,
s.STATUS,
s.OSUSER,
s.MACHINE,
s.PROGRAM,
s.LOGON_TIME,
s.LAST_CALL_ET,
s.sql_id,
q.SQL_FULLTEXT,
s.RESOURCE_NAME1,
s.RESOURCE_NAME2
from 
ges_blocked_blocker s, gv\$sql q
where
s.sql_id=q.sql_id(+) and
s.INST_ID=q.INST_ID(+) and
q.child_number(+)=0
order by s.INST_ID,s.sid;

commit;

Here is some example output from my gv$ges_blocking_enqueue script.  I edited my username, machine name, etc. to obscure these.

SOURCETABLE                    B Sample Time            INST_ID        SID USERNAME                       STATUS   OSUSER                         MACHINE                                                          PROGRAM                                          Logon Time          LAST_CALL_ET KGLLKHDL           KGLLKREQ USER_NAME                      KGLNAOBJ                                                     SQL_ID        RESOURCE_NAME1                 RESOURCE_NAME2                 SQL_FULLTEXT
------------------------------ - ------------------- ---------- ---------- ------------------------------ -------- ------------------------------ ---------------------------------------------------------------- ------------------------------------------------ ------------------- ------------ ---------------- ---------- ------------------------------ ------------------------------------------------------------ ------------- ------------------------------ ------------------------------ --------------------------------------
GV$GES_BLOCKING_ENQUEUE        N 2014-02-17 17:19:55          2        301 MYUSER1                        ACTIVE   myuser2                        MYMACHINE                                                        sqlplus.exe                                      2014-02-17 17:19:46            7                                                                                                                         g4b4j3a8mms0z [0x426d0373][0x224f1299],[LB][ 1114440563,575607449,LB        select sum(b) from test
GV$GES_BLOCKING_ENQUEUE        Y 2014-02-17 17:19:55          1        497 MYUSER1                        ACTIVE   myuser2                        MYMACHINE                                                        sqlplus.exe                                      2014-02-17 17:19:41           13                                                                                                                         gv7dyp7zvspqg [0x426d0373][0x224f1299],[LB][ 1114440563,575607449,LB        alter table test modify (a char(100))

The alter table command on node 1 is holding the lock while the select statement on node 2 is waiting on the library cache lock.

So, I've got this going in a script that runs every 15 minutes in production.  It worked great in my test case, but time will tell if it yields any useful information for our real problems.

- Bobby

p.s. I've uploaded a zip of my scripts: zip

Here is a description of the included files:

Testcase to create a library cache lock (sketched below):

create.sql - creates a table with one character first column CHAR(1)
alter.sql - alters table expanding CHAR column
query.sql - queries table - waits on library cache lock wait if run while alter.sql is running
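
Based on the SQL text captured in the outputs above, the three test scripts presumably look something like this (a sketch, not the exact contents of the zip):

-- create.sql: table with a one character first column
create table test (a char(1), b number);
insert into test values ('x', 1);
commit;

-- alter.sql: run in session 1; the DDL holds the library cache lock on TEST
alter table test modify (a char(100));

-- query.sql: run in session 2 while alter.sql is still running;
-- it waits on the "library cache lock" event
select sum(b) from test;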

all.sh - top level script - you will need to edit it to have the host names for your RAC cluster and your own userid and password

lcl.sh - x$ table script that is run on each node.  The only key thing is that our profile required a 1 to be entered to choose the first database from a list.  You may not need that line.

resultstable.sql - create table to save results

dumpresults.sql - dump out all results

dumpresultsnosql.sql - dump out all results except the sql text, so it is easier to read.

Here is the definition of the results table:

create table myuser1.library_cache_lock_waits
(
 SOURCETABLE    VARCHAR2(30),
 BLOCKER        VARCHAR2(1),
 SAMPLE_TIME    DATE,
 INST_ID        NUMBER,
 SID            NUMBER,
 USERNAME       VARCHAR2(30),
 STATUS         VARCHAR2(8),
 OSUSER         VARCHAR2(30),
 MACHINE        VARCHAR2(64),
 PROGRAM        VARCHAR2(48),
 LOGON_TIME     DATE,
 LAST_CALL_ET   NUMBER,
 KGLLKHDL       RAW(8),
 KGLLKREQ       NUMBER,
 USER_NAME      VARCHAR2(30),
 KGLNAOBJ       VARCHAR2(60),
 SQL_ID         VARCHAR2(13),
 RESOURCE_NAME1 VARCHAR2(30),
 RESOURCE_NAME2 VARCHAR2(30),
 SQL_FULLTEXT   CLOB
);

P.P.S. This was all tested only on Exadata running 11.2.0.2.

Oracle documentation on Library Cache:

The library cache is a shared pool memory structure that stores executable SQL and PL/SQL code. This cache contains the shared SQL and PL/SQL areas and control structures such as locks and library cache handles. In a shared server architecture, the library cache also contains private SQL areas.

Oracle 12c Concepts manual diagram with library cache

Categories: DBA Blogs

Kafka or Flume?

Chen Shapira - Tue, 2014-02-11 16:25

A question that keeps popping up is “Should we use Kafka or Flume to load data to Hadoop clusters?”

This question implies that Kafka and Flume are interchangeable components. It makes as much sense to me as “Should we use cars or umbrellas?”. Sure, you can hide from the rain in your car and you can use your umbrella when moving from place to place. But in general, these are different tools intended for different use-cases.

Flume’s main use-case is to ingest data into Hadoop. It is tightly integrated with Hadoop’s monitoring system, file system, file formats, and utilities such as Morphlines. A lot of the Flume development effort goes into maintaining compatibility with Hadoop. Sure, Flume’s design of sources, sinks and channels means that it can be used to move data between other systems flexibly, but the important feature is its Hadoop integration.

Kafka’s main use-case is a distributed publish-subscribe messaging system. Most of the development effort is involved with allowing subscribers to read exactly the messages they are interested in, and in making sure the distributed system is scalable and reliable under many different conditions. It was not written to stream data specifically for Hadoop, and using it to read and write data to Hadoop is significantly more challenging than it is in Flume.

To summarize:
Use Flume if you have non-relational data sources such as log files that you want to stream into Hadoop.
Use Kafka if you need a highly reliable and scalable enterprise messaging system to connect multiple systems, one of which is Hadoop.


Categories: DBA Blogs

Partner Webcast – Oracle ISV Application Modernization: It's All about the Business

Infographic: See How Cloud Empowers Innovation. Technology is changing the world in ways we haven’t seen before. Industries are evolving,...

We share our skills to maximize your revenue!
Categories: DBA Blogs

The MySQL Community Pay-Your-Own-Way Dinner

Pythian Group - Mon, 2014-02-10 10:26

Once again, Pythian is organizing an event that by now may be considered a tradition: The MySQL community dinner at Pedro’s! This dinner is open to all MySQL community members, as many of you will be in town for the MySQL Conference that week. Here are the details:

What: The MySQL Community Pay-Your-Own-Way Dinner

When: Wednesday April 2, 2014 – Meet us at 6:30 PM in the lobby of the Hyatt Santa Clara, or at 7 PM at Pedro’s (You are welcome to show up later, too!)

Cost: The meal will be $25 USD including tax and gratuities. Please bring cash. (See menu below)

Where: Pedro’s Restaurant and Cantina – 3935 Freedom Circle, Santa Clara, CA 95054

How: RSVP through Eventbrite

Please note: Due to the historically high attendance for this event, Pedro’s has asked that each person pay in cash to simplify billing. Pedro’s can handle large groups of people, but we would like an accurate idea of how many people are attending so that they can be adequately prepared.

Pythian attendees:

1. Paul Vallee

2. Wagner Bianchi

3. Danil Zburivsky

4. Alex Gorbachev

5. Derek Downey

6. Chad Scheiter

7. Add your name…

Looking forward to seeing you all at the event!

[Image: Pedro’s menu (menu_pedros)]
Categories: DBA Blogs

2013 Year in Review – Oracle E-Business Suite

Pythian Group - Mon, 2014-02-10 10:25

Here are the Top 5 things in Oracle E-Business Suite world that will have major impact in 2014 and beyond.

1. Oracle E-Business Suite 12.2 Now Available

2013 started on a low note in the Oracle E-Business Suite (EBS) world. Many people were expecting an announcement about the upcoming EBS release during OpenWorld 2012, but all they got was an extension of the support deadline for existing 11i EBS customers. Oracle finally announced EBS R12.2 a few days before OpenWorld 2013. This release packs exciting features like Online Patching, which elevates Oracle E-Business Suite's standing among ERP systems. Online Patching will enable large multi-national customers to consolidate their per-country ERP systems into one single global Oracle E-Business Suite instance, as it cuts the downtime required for a patching maintenance window to almost nil. This is a big plus for clients who cannot afford downtime because their user base is spread all over the world. 2014 will be a year of R12.2 upgrades for many clients.

2. 12.1.0.1 Database Certified with Oracle E-Business Suite

Around the same time as the R12.2 announcement, Oracle certified the 12c Database with Oracle EBS. The good news here is that they also certified Oracle 11i with the 12c Database. This gives EBS clients the option to get onto the newest version of the Oracle Database and take advantage of the new 12c features. The effort involved in upgrading the database is significantly less than upgrading to a newer version of EBS, so I believe many customers will take up the 12c database upgrade before the R12.2 EBS upgrade. Upgrading the database ahead of EBS will also save some hours in a future R12.2 upgrade downtime window.

3. E-Business Suite Support Timelines Updated at OpenWorld 2013

Oracle once again extended the support timelines for 11i customers. They named it Exception Support, and it ends in December 2015. During this Exception Support period, Oracle will primarily provide fixes for Severity 1 issues and security patches. This gives customers on 11i two additional years to migrate to the latest R12.2. With typical R12 upgrades taking around a year, the sooner you plan and start your R12.2 migration, the better.

4. No to Third-Party Tools to Modify Your EBS Database

Oracle Development officially warned in their blog about using third-party tools to modify, archive and purge data in Oracle E-Business Suite. Managing data growth in Oracle EBS is a known problem, and Oracle now wants customers to use Oracle Database technologies like ILM, Advanced Compression and Partitioning to archive the data instead of third-party utilities. Note that all these database features will cost customers additional licensing money. So get your bargaining hat on with your Oracle Account Manager and score some discounts using this Oracle Achilles heel, namely purging and archiving EBS data.

5. Sign E-Business Suite JAR Files Now

Do you remember the days when Oracle EBS moved from Oracle JInitiator to Sun JRE for Oracle Forms? Then be prepared for one more similar change around Oracle Forms. The stream of viruses and malware exploiting bugs in the Oracle/Sun JRE has made Oracle tighten security around the JRE. It is now required to sign Forms JAR files with a real certificate. In future releases of Oracle JRE 7, unsigned Oracle Forms will stop working completely, so customers caught unaware of this will be in for big trouble with user complaints.

Categories: DBA Blogs

Dancer In Chains

Pythian Group - Mon, 2014-02-10 08:46

Sometimes, you idly think about a problem, and an answer comes to you. And it has the simplicity and the elegance of a shodo brush-stroke. It is so exquisitely perfect, you have to wonder… Have you reached the next level of enlightenment, or did the part of your brain responsible for discernment suddenly call it quits?

I’ll let you be the judge of it.

Chains of Command

Something that Catalyst has and Dancer doesn’t is chained routes. A little while ago, I came up with a way to mimic chains using megasplats. People agreed: it was cute, but not quite on par with Catalyst’s offering.

Which brings us to my epiphany of yesterday…

Forging the Links

What if we did things just a tad differently than what Catalyst does? What if the bits making up the chain segments were defined outside of routes:


my $country = chain '/country/:country' => sub {
    # silly example. Typically much more work would 
    # go on in here
    var 'site' => param('country');
};

my $event = chain '/event/:event' => sub {
    var 'event' => param('event');
};

And what if we could put them together as we define the final routes:


# will match /country/usa/event/yapc
get chain $country, $event, '/schedule' => sub {
    return sprintf "schedule of %s in %s\n", map { var $_ } qw/ site event/;
};

Or we could forge some of those segments together as in-between steps too:


my $continent = chain '/continent/:continent' => sub {
    var 'site' => param('continent');
};

my $continent_event = chain $continent, $event;

# will match /continent/europe/event/yapc
get chain $continent_event, '/schedule' => sub {
    return sprintf "schedule of %s in %s\n", map { var $_ } qw/ event site /;
};

Or, heck, we could even insert special in-situ operations directly in the route when the interaction between two already-defined segments needs a little bit of fudging:


# will match /continent/asia/country/japan/event/yapc
# and will do special munging in-between!

get chain $continent, 
          sub { var temp => var 'site' },
          $country, 
          sub {
              var 'site' => join ', ', map { var $_ } qw/ site temp /
          },
          $event, 
          '/schedule' 
            => sub {
                return sprintf "schedule of %s in %s\n", map { var $_ } 
                               qw/ event site /;
          };

Wouldn’t that be something nice?

The Wind of Chains…ge

Here’s the shocker: all the code above is functional. Here’s the double-shocker: the code required to make it happen is ridiculously short.

First, I had to create a class that represents chain bits. The objects are simple things keeping track of the pieces of path and code chunks constituting the segment.


package Chain;

use Moose;

has "path_segments" => (
    traits => [ qw/ Array /],
    isa => 'ArrayRef',
    is => 'ro',
    default => sub { [] },
    handles => {
        add_to_path       => 'push',
        all_path_segments => 'elements'
    },
);

sub path {
    my $self = shift;
    return join '', $self->all_path_segments;
}

has code_blocks => (
    traits => [ qw/ Array /],
    isa => 'ArrayRef',
    is => 'ro',
    default => sub { [] },
    handles => {
        add_to_code     => 'push',
        all_code_blocks => 'elements'
    },
);

sub code {
    my $self = shift;

    my @code = $self->all_code_blocks;
    return sub {
        my $result;
        $result = $_->(@_) for @code;
        return $result;
    }
}

sub BUILD {
    my $self = shift;
    my @args = @{ $_[0]{args} };

    my $code;
    $code = pop @args if ref $args[-1] eq 'CODE';

    for my $segment ( @args ) {
        if ( ref $segment eq 'Chain' ) {
            $self->add_to_path( $segment->all_path_segments );
            $self->add_to_code( $segment->all_code_blocks );
        }
        elsif( ref $segment eq 'CODE' ) {
            $self->add_to_code($segment);
        } 
        else {
            $self->add_to_path( $segment );
        }
    }

    $self->add_to_code($code) if $code;
}

sub as_route {
    my $self = shift;

    return ( $self->path, $self->code );
}

Then, I had to write the chain() DSL keyword making the junction between the objects and the Dancer world:


sub chain(@) {
    my $chain = Chain->new( args => [ @_ ] );

    return wantarray ? $chain->as_route : $chain;
}

And then… Well, I was done.

What Happens Next

Obviously, there will be corner cases to consider. For example, the current code doesn’t deal with regex-based paths, and totally ignores prefixes. But I have the feeling it could go somewhere…

So, yeah, that’s something that should make it to CPAN relatively soon. Stay tuned!

Categories: DBA Blogs

Database Technology Index

Hemant K Chitale - Sat, 2014-02-08 08:41
A very useful page / link: Oracle Database Technology Index (not strictly 12c only).
.
.
Categories: DBA Blogs

speakers on the agenda for NEOOUG March 28 2014 meeting

Grumpy old DBA - Fri, 2014-02-07 11:32
We have two topics being presented and, as usual, great free food beginning at noon.  This should be a good mix of topics: Java-related content for developers/DBAs and Oracle database internals for DBAs/developers.

First up on the agenda is Scott Seighman, a Principal Sales Consultant from Oracle Corporation.  Scott will be talking about Java 8, which is just around the corner from being released.

Specifically his presentation is: Java.Next: An Overview of Java 8

The March release of Java 8 introduces a variety of new features, including lambda expressions, annotations, and a new date/time API. We'll review these and other notable additions to the Java platform, plus provide code samples and demonstrations of the new features of Java 8.

The other presentation will be given by me; it is the same one I am doing at Hotsos 2014 in early March in Dallas.

Three Approaches to Shared Pool Monitoring for Oracle Database Systems

The shared pool area in Oracle has become a huge memory area over the last ten years, and there is much more than SQL and execution plans held in the shared pool. This presentation will cover three approaches to gaining increased visibility into the contents of the shared pool: 1) using standard oracle views and diagnostics, 2) implementing an in-depth custom monitoring procedure, and 3) shared pool application SQL monitoring.

The approaches here were learned in the school of hard knocks and should be illuminating to many developers and DBAs. This presentation will include 12c-relevant content.




Categories: DBA Blogs

Log Buffer #358, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-02-07 08:36

Here in your hands, there is yet another fresh sizzling hot edition of the celebrated Log Buffer. Enjoy the collection of assorted gems from Oracle, SQL Server and MySQL.

Oracle:

Oracle Cloud—The Modern Cloud for Your Modern Business.

Oracle ADF Virtual Developer Day – On Demand in YouTube.

When Impala started, many of the early adopters were experts in HiveQL, so the Impala documentation assumed some familiarity with Hive SQL syntax and behavior.

Improve Your Customer Experience Through World-Class Data Quality.

There continues to be a disproportionate amount of hype around ‘NoSQL‘ data stores.

SQL Server:

What are your options for sending a variable number of choices in a parameter to a stored procedure? Alex Grinberg looks at three techniques you can use.

Window Functions in SQL greatly simplify a whole range of financial and statistical aggregations on sets of data.

When should you use a SQL CLR Aggregate? Lots of people have struggled with this one, but David Poole found a use, and has some interesting performance data analysis as well.

Free eBook: Understanding SQL Server Concurrency.

Fix SQL Server Log Shipping After a New Database File has been Added.

MySQL:

MariaDB 5.5.35 was recently released (it is the latest MariaDB 5.5), and is available for download.

jQuery and GIS distance in MariaDB.

Filesort optimization in 5.7.3: pack values in the sort buffer

Keith Hollman has written about –use-tts backup & restore.

Why delegating code to MySQL Stored Routines is poor engineering practice.

Categories: DBA Blogs

IORM Architecture in Exadata

Pakistan's First Oracle Blog - Thu, 2014-02-06 22:43



From the database nodes, I/O requests are sent to the cell nodes. These requests go through the intelligent iDB protocol, carrying pieces of information like the database name, category name, and consumer group. The I/O requests are placed in the CELLSRV I/O queues in the order they are received, then passed to IORM, which prioritizes them on the basis of the IORM plan and places them accordingly into the cell disk queues.
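
The category and consumer group names that ride along in the iDB messages are Database Resource Manager constructs defined inside the database. A minimal sketch of such a plan (all names here are made up for illustration):

begin
  dbms_resource_manager.create_pending_area;
  dbms_resource_manager.create_consumer_group('REPORTS', 'reporting sessions');
  dbms_resource_manager.create_plan('DAYTIME', 'plan whose directives feed intra-database IORM');
  dbms_resource_manager.create_plan_directive(
    plan             => 'DAYTIME',
    group_or_subplan => 'REPORTS',
    comment          => 'cap reporting I/O share',
    mgmt_p1          => 25);
  dbms_resource_manager.create_plan_directive(
    plan             => 'DAYTIME',
    group_or_subplan => 'OTHER_GROUPS',
    comment          => 'everything else',
    mgmt_p1          => 75);
  dbms_resource_manager.submit_pending_area;
end;
/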
Categories: DBA Blogs

The difference between SELECT ANY DICTIONARY and SELECT_CATALOG_ROLE

Hemant K Chitale - Thu, 2014-02-06 09:59
I've seen some DBAs confused about these two "privileges" or "roles".

SELECT ANY DICTIONARY is a System Privilege.

SELECT_CATALOG_ROLE is a Role you would see in DBA_ROLES.  However, querying DBA_SYS_PRIVS does NOT show what privileges are granted to this role.

SELECT_CATALOG_ROLE predates the SELECT ANY DICTIONARY privilege.

The SELECT ANY DICTIONARY privilege grants Read access on Data Dictionary tables owned by SYS.  The SELECT_CATALOG_ROLE role grants Read access to Data Dictionary (DBA_%) and Performance (V$%) views.

Here is a short demo :


SQL*Plus: Release 11.2.0.2.0 Production on Thu Feb 6 07:48:15 2014

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Enter user-name: / as sysdba

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> create user sad identified by sad;

User created.

SQL> grant create session, select any dictionary to sad;

Grant succeeded.

SQL> create user scr identified by scr;

User created.

SQL> grant create session, select_catalog_role to scr;

Grant succeeded.

SQL>
SQL> connect sad/sad
Connected.
SQL> select count(*) from sys.user$;

COUNT(*)
----------
115

SQL> select count(*) from dba_users;

COUNT(*)
----------
53

SQL> connect scr/scr
Connected.
SQL> select count(*) from sys.user$;
select count(*) from sys.user$
*
ERROR at line 1:
ORA-00942: table or view does not exist


SQL> select count(*) from dba_users;

COUNT(*)
----------
53

SQL>

If you needed to grant a new / junior DBA or a Consultant the privilege to query the Data Dictionary and Performance views, which would you grant?
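
Before deciding, you can inspect what each actually conveys. A quick sketch (run from a privileged account):

-- object grants bundled into the role
select privilege, count(*)
from role_tab_privs
where role = 'SELECT_CATALOG_ROLE'
group by privilege;

-- system privileges granted directly to the SAD user from the demo
select privilege
from dba_sys_privs
where grantee = 'SAD';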

.
.
.

Categories: DBA Blogs

How to Add Disk Storage to Oracle VirtualBox with Linux Guest OS?

VitalSoftTech - Wed, 2014-02-05 20:46

Are you looking to install Oracle software on your VirtualBox but are running low on disk space? Learn how to add disk storage to Oracle VirtualBox with Linux Guest OS.

The post How to Add Disk Storage to Oracle VirtualBox with Linux Guest OS? appeared first on VitalSoftTech.

Categories: DBA Blogs

Partner Webcast – Oracle WebLogic Server Management using Oracle Enterprise Manager 12c

Increasingly, applications running on application servers truly represent the customer experience. As a result, the traditional line between mission-critical and non-mission-critical applications is...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Untold Secrets of SQL Server: What if?

Pythian Group - Wed, 2014-02-05 10:45

It’s common practice for IT vendors to keep some of the commands and statements for their own use, and to avoid publishing or documenting these tools for two reasons:

  1. Some of these commands are unsafe for production environments and should be used carefully on non-production systems only – otherwise they may cause adverse effects.
  2. To maintain an edge over third parties, so the vendor can always be superior in analysis and quality of support.

Microsoft is not an exception. Today I’ll share one of the cool secrets that Microsoft did not publish, document or even support: it’s called the what-if statement.

In several cases, individuals confront a situation where they want to test the execution of a query or an application on different hardware – an upsize or a downsize. One solution is to bring in an actual server, perform the setup of the OS and SQL Server, and restore the database to the new hardware.

Another alternative is made possible by the what-if statement: simply run the query or application session after issuing a few statements that tell the optimizer to simulate the needed hardware. Of course, if you are planning to downsize, you will get the full benefit of the command, as the actual hardware in terms of CPUs and RAM is superior to the one being simulated. However, if you are planning to upgrade the server, such as when the actual server has 4 CPUs while the emulated one has 16, you still get a feel for which execution plans will be used. Unfortunately, you will never get the performance of 16 cores using only 4!

The syntax of the what-if statement goes like this:

DBCC Optimizer_WhatIf (1, 4) -- tells the optimizer that your server has 4 CPUs

go

DBCC Optimizer_WhatIf (2, 3327) -- tells the optimizer to emulate a machine with only ~3 GB of RAM

go

DBCC Optimizer_WhatIf (3, 64) -- sets your system to simulate a 64-bit system

go

-- insert your query here

DBCC Optimizer_WhatIf (1, 0) -- clear the CPU override

go

DBCC Optimizer_WhatIf (2, 0) -- clear the RAM override

go

DBCC Optimizer_WhatIf (3, 0) -- clear the 64-bit override

go
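
Reportedly, the same command also accepts a status property that prints the current overrides; this is as undocumented as the rest, so treat it as a sketch rather than a guarantee:

DBCC TRACEON(3604) -- route DBCC output to the client session

go

DBCC Optimizer_WhatIf (0) -- property 0: report the current what-if settings

go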

Although the virtualization of SQL Server might resolve the issue of testing applications with higher-end or lower-end hardware, there will still be a significant amount of time consumed during the installation and/or restore of the database. The what-if statement might give you the look and feel of how queries will perform on other hardware, without the hassle of actually installing it.

Nevertheless, it is important to emphasize that the what-if statement is good in the development and pre-testing stages – you still need to get servers (real or virtualized) for official testing, quality testing, user testing, stress testing, and of course going to production.

Categories: DBA Blogs

Even Posting in Forums Requires Etiquette

Pythian Group - Wed, 2014-02-05 09:40

Etiquette — as if we do not have enough rules for dining, dress, meetings, corporate events and opening doors for ladies!

From time to time, I browse forums which are mainly SQL Server related. I check on threads, and reply to some if I have something to add. I have to say that forums are an excellent way to enhance your knowledge about any product; people come with different and new problems and scenarios, and you have to read more about the subject or build a proof of concept to be able to contribute.

Forums aren’t always tightly regulated or moderated — sometimes they are, but in general people are free to post whatever they think as long as it is not overly offensive. Compared to some newspapers that moderate comments, forums (including MSDN) do not do much of that. In my personal experience, I have seen many times that a forum post can slip in a non-productive direction, and this can discourage some people from posting or even contributing to discussions.

I’ve included some points about forums, that I’ve learned through personal experience:

  • Forums are not intended for product support.

From time to time, someone poses a question and stresses how urgent the matter is — their business is down, and they need help. My advice to them is to simply open a case with your vendor if it is that urgent, as the time spent getting answers on forums can cost you much more than vendor support would. For critical issues, you do not want to get advice on forums but rather from experienced vendor support. Remember that the people who answer you do it for free, and they may be busy supporting their own environment or have personal arrangements. Do not pester them for a quicker reply!

  • Forums are not to answer your homework or interview questions.

Asking, “How can I do that?” without even trying is not appealing. Yes, we all struggle sometimes, but we have to try. I would find it much better if you outlined an approach or an idea and asked for advice.

Also, do not bring your interview questions to the forums and ask for answers. I have seen this personally because I interview people from time to time; we give them some questions and they go on forums asking for answers. Who do you think you are cheating? Sorry pal, just yourself!

  • Put in some effort, and do a little research before posting.

This may be relative but if you are asking, “What is the difference between a clustered and a non-clustered index?” then you may not have heard about search engines! Research your topic a bit and if you cannot find answers then you can ask.

In addition, it is common to find repliers quoting previous forum posts that deal with almost the exact same topic, so you should also search forum topics beforehand.

Same goes for the people who respond — Research your answer, too. If possible, support it with a vendor documentation, blog posts or a whitepaper.

  •  Avoid robotic responses.

Back in the day, MSDN forums would award points for every reply, even if you just said “Hello”, and some people abused that. It makes the thread longer with no added value.
I remember one reply specifically: someone would always jump in asking this

Please run this query and paste the answer

SELECT
SERVERPROPERTY('ProductVersion'),
SERVERPROPERTY('ProductLevel'),
SERVERPROPERTY('Edition'),
SERVERPROPERTY('EngineEdition'),
@@VERSION;

Many times, the poster would have already included the information in his original post, declaring he has SQL 200X, SPX, Edition X with X CPU cores and X GB of memory!

  • Not everyone is a Guru, so try to avoid getting personal.

This applies both to those who post and those who respond to questions. Regardless of whether the question looks novice, it is still worth answering. No matter how a reply may look, it is still a contribution. (Obviously, restrictions apply.)

On the MSDN T-SQL forum, for example, a known figure is famous for his bullish replies, criticizing posters and contributors for their questions, approach or answers; examples of some of his replies are: "your screwed up design", "you failed this basic IT standard", "Your code is also bad", "I would fire you for this coding", etc. It's vicious!

  • Avoid a very generic/vague title.

Try to avoid titles like, “Problem with SQL server,” “Someone help me with this issue,” “An urgent question,” or “TSQL query help.” Potential responders usually prefer to get a glimpse of the topic from the title, and someone browsing the forums may find an issue relevant to his/her experience if the title is more descriptive.

  •   Please – no more spaghetti posts.

How about some formatting? Posting a few hundred words all in one paragraph is a turn-off: it hurts the eyes, is hard to read, and is hard to come back to when I need more details.
Use spacing and group relevant information together. You can start with a brief about your topology, technology used, versions et al. Follow that with your situation, the problem you are facing, and your attempted resolutions, and then ask what we can do for you.

If your post has a HUGE code snippet, then it is preferred to attach it as a query file. Use a reputable fileshare provider like SkyDrive (soon to be OneDrive), Google Drive or DropBox.

In addition, please – CAPS makes me nervous!

  • Format your code

MSDN forums, for example, have a nice code formatter – use it! It makes the post easier to read. If the forum does not have a code formatter, then you can format it elsewhere. Here is a nice code formatter I use frequently: http://www.dpriver.com/pp/sqlformat.htm

  • Stop those cheeky edits.

If you post an answer, people add to it or discuss it, and then you go back and edit your earlier post to look better or more sophisticated, that is rather cheating. It makes subsequent answers look weird and irrelevant. As long as it is not rude, there is no harm in a less-than-optimal answer; just try harder next time.

  •  Include a disclaimer if you work for a certain company.

If someone asks for a recommendation for a third party tool, and you work for a vendor of one of those tools, it is OK to tout your product! But ensure that you disclose that you work for the vendor.

  • Follow up on your posts.

Dead posts clog the forum, and can send more valuable posts down the stack. It is quite common that people ask for more details, so you should follow up and reply. If an answer satisfies you, then mark it as the answer. It helps those who have a similar situation, and pays back those who contributed. If you found an answer on your own, post it and mark it as the answer.

How about you? If you could change ONE thing in forums, what would it be?

Categories: DBA Blogs

Oracle EMEA Partner Community Forums - April 2014, Prague, Czech Republic

We are delighted to invite you to the 2014 editions of the Exadata, Exalogic & Manageability and of the Servers and Storage Partner Community Forums for EMEA partners, which will take place...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Speaking at RMOUG TD 2014

DBASolved - Tue, 2014-02-04 14:22


Let the conference season begin! 

I will be presenting at RMOUG Training Days 2014 this month in Denver, CO.   The presentation is a revised version of my OOW 2013 session, Oracle Enterprise Manager 12c, Database 12c and You!  It will cover a few basics of Database 12c and how Oracle Enterprise Manager 12c can be used to monitor a Database 12c environment.

Being that this is my first RMOUG conference, I’m looking forward to an exciting 3 days of presentations and networking.   This is going to be fun!

Enjoy!

twitter: @dbasolved

blog: http://dbasolved.com


Filed under: General
Categories: DBA Blogs

Puppet and MySQL: A How-To Guide

Pythian Group - Tue, 2014-02-04 12:24

Today’s blog post is a how-to guide on setting up Puppet Master and Agents for streamlined server management. First off, you’ll have to configure each of your puppet agent nodes (i.e., clients).

Get the latest repo from Puppet Labs for your distro — these will have to be added both to the puppet master and to the puppet agent nodes.

Log in to each node and install/configure the puppet agent:

apt-get install puppet #[or yum install puppet for RHEL derivatives]

Edit your /etc/puppet/puppet.conf and add your puppet master servername under the [main] section:

[main]
server= # e.g. dev-laptop in my case
report=true

Also, edit your /etc/default/puppet file and set START to yes:

# Start puppet on boot?
START=yes

Now restart the puppet agent service on all nodes and enable the puppet service:

service puppet restart
puppet resource service puppet ensure=running enable=true

After running the above commands on each of your agent nodes, it’s now time to hop over onto your puppet master server node and configure your puppet master:

apt-get install puppetmaster #[or yum install puppet-server for RHEL]

sudo puppet resource service puppetmaster ensure=running enable=true

By running the following command you should now see the pending certificate requests made by your puppet agents:

sudo puppet cert list
"deb-box-1.lan" (09:00:F5:B9:CF:D0:E7:BF:C5:D3:8B:74:BC:3E:87:E2)
"deb-box-2.lan" (AC:B5:D0:BD:CF:AC:C8:9C:83:21:86:09:40:D3:ED:1E)
"deb-box-3.lan" (67:4E:81:48:18:73:47:16:2F:2C:3D:31:4D:4D:80:F8)

You will have to certify each request in order to enable the agent:

puppet cert sign "deb-box-1.lan"
notice: Signed certificate request for deb-box-1.lan
notice: Removing file Puppet::SSL::CertificateRequest deb-box-1.lan at '/var/lib/puppet/ssl/ca/requests/deb-box-1.lan.pem'

puppet cert sign "deb-box-2.lan"
notice: Signed certificate request for deb-box-2.lan
notice: Removing file Puppet::SSL::CertificateRequest deb-box-2.lan at '/var/lib/puppet/ssl/ca/requests/deb-box-2.lan.pem'

puppet cert sign "deb-box-3.lan"
notice: Signed certificate request for deb-box-3.lan
notice: Removing file Puppet::SSL::CertificateRequest deb-box-3.lan at '/var/lib/puppet/ssl/ca/requests/deb-box-3.lan.pem'

Finally, test the client puppet agent by running:

puppet agent --test

Now you are ready to create/install modules to manage various server components. Activities such as managing your MySQL databases can be performed by installing the puppetlabs/mysql module as follows:

puppet module install puppetlabs/mysql

Your module has been added! From here it is as simple as adding your requirements to your "site.pp" file in "/etc/puppet/manifests". For example, to (1.) ensure mysql-server is installed on the node <serverbox.lan>, with (2.) a root password of "root_pass_string" and (3.) max_connections = "1024", the following configuration will suffice (all options are configurable; just specify the section you would like modified and the variable, as in the example ~ see http://forge.puppetlabs.com/puppetlabs/mysql for more info):

# vi /etc/puppet/manifests/site.pp

node 'deb-box-1.lan' {
  class { '::mysql::server':
    root_password    => 'root_pass_string',
    override_options => { 'mysqld' => { 'max_connections' => '1024' } }
  }
}

node 'deb-box-2.lan' {
  class { '::mysql::server':
    root_password    => 'root_pass_string',
    override_options => { 'mysqld' => { 'max_connections' => '1024' } }
  }
}

node 'deb-box-3.lan' {
  class { '::mysql::server':
    root_password    => 'root_pass_string',
    override_options => { 'mysqld' => { 'max_connections' => '1024' } }
  }
}

Since site.pp is configured run the following command on the puppet agent node to update the configuration:

puppet agent -t

Voilà! Your MySQL server is now under new management =)

Optionally you can also install puppet dashboard by performing the following steps:

apt-get install puppet-dashboard # RHEL: yum install puppet-dashboard

Edit your database.yml (using standard YAML notation) adding the dashboard database name, username and password – it is recommended to use the same database for production/development/testing environments:

cd /usr/share/puppet-dashboard/config;
vi database.yml # add the database name, username & password you intend to setup in the database (see the following step for details)

Connect to your mysql instance (hopefully installed already) and create the puppet-dashboard database & users:

CREATE DATABASE dashboard CHARACTER SET utf8;

CREATE USER 'dashboard'@'localhost' IDENTIFIED BY 'my_password';

GRANT ALL PRIVILEGES ON dashboard.* TO 'dashboard'@'localhost';

Now time to populate the database using rake by running:

rake RAILS_ENV=production db:migrate

rake db:migrate db:test:prepare

Your dashboard is now set up – you can run it with the built-in webserver WEBrick:

sudo -u puppet-dashboard ./script/server -e production

Here are some useful Puppet commands that will come in handy:

# Apply changes with noop
puppet apply --noop /etc/puppet/modules/testit/tests/init.pp

# Full apply changes
puppet apply /etc/puppet/modules/testit/tests/init.pp

# Print a puppet agent’s full configuration
puppet --configprint all

Categories: DBA Blogs