
Feed aggregator

I’m Graham Smith and this is how I work

Duncan Davies - Mon, 2015-08-31 09:00

I’m genuinely delighted that Graham Smith has added his profile to our ‘How I Work’ series. Not just because he’s one of the world’s best-known PeopleSoft experts, or just because he’s recently joined Cedar’s family of PeopleSoft consultants, but mainly because I was sure he’d give some really interesting answers.

For those who don’t know Graham, he led the PeopleSoft Technical Team at Oxfam for longer than he’d want me to reveal, before his recent move to Cedar where many more customers can now benefit from his talents. He’s also an Oracle ACE Director, Chair of the UKOUG PeopleSoft Tech Committee, multi-occasion OOW speaker (including this year), runs a well-known PeopleSoft blog and is a thoroughly nice chap.

Graham Smith

Name: Graham Smith

Occupation: I work for Cedar Consulting specialising in PeopleSoft.  I get involved in design, development, infrastructure, integration, project management and dreaming up new ideas and solutions.
Location: I live and work in the wonderful, university city of Oxford in the United Kingdom.
Current computer: Dell E5550 but a lot of my work is done on cloud based machines.
Current mobile devices: Samsung Galaxy S6 running Android Lollipop (5.0.2) and a Moto G – Android (5.0.2)
I work: Enthusiastically

What apps/software/tools can’t you live without?
1) ClipCache – This is my number 1 application and I’ve come to rely heavily on it. This simple PC tool remembers EVERYTHING I copy into the Windows clipboard. I can search and organise every clip that’s ever hit my clipboard.

2) Beyond Compare – One of the best folder and file compare tools around. Since v3, Beyond Compare has supported 3 way compare. Great for comparing old-demo with new-patched-demo alongside your customised code. Also indispensable for keeping web, app and batch server file systems in synch.

3) UltraEdit – Simply the best text editor I know. Has a PeopleCode and SQR syntax highlighter and can be easily programmed to be aware of app and batch server cfg files. There are free editors (like Notepad++) but none of them come close to this superbly engineered editor.

4) SQL formatter – For those monster SQL statements.

5) soapUI – Superb web service testing tool.

6) Fiddler – http proxy to help discover what your browser is sending and receiving.

Besides your phone and computer, what gadget can’t you live without?
I don’t need gadgets… unless you include my glasses (can’t read without them).

What’s your workspace like?
When I’m working at home my office is an old converted shed at the bottom of my garden. This is my commute to work:

My desk is a £20 refurb from Emmaus and has on it the following items….. a toy train, a model of an ancient cedar tree from Lebanon, an old PeopleSoft clock, a ruler from the 1980s that can measure dots-per-inch, a Hot Wheels car, a copy of Jim Marion’s PeopleTools: Tips & Techniques, an OTN members badge, a signed copy of PeopleSoft for the Oracle DBA by Dave Kurtz, a photo of my wife and kids, 2 bags of Fair Trade coffee, a fan and a Netgear hub. I hear every train that passes by as the main line to London is only 20 metres away.

What do you listen to while you work?
Quite happy with silence…. but I often switch on Radio 2 or UCB Radio later in the day. When I worked at Oxfam I sat opposite my friend Steve who has an alarming 120db sneeze although the Class 43 diesels doing 125mph close to my home office are a good substitute.

What PeopleSoft-related productivity apps do you use?
1) I write and debug a lot of SQL and find SQL Pretty Printer from Gudu Software very helpful. There’s a standalone version that works with SQL from most platforms and one that’s very nicely integrated into Microsoft SQL Server Management Studio. Essential for navigating long complex statements.

2) I also think the Google Chrome PS Utilities extension from Uffe Graakjae is a useful tool for developers and analysts.

3) The Application Designer Debugger is a very useful tool, often overlooked and underused. Don’t wait until you need this before you set up the infrastructure to support it.

4) TraceMagic from Oracle Support Team is very useful for navigating heavily nested code in trace files.

Do you have a 2-line tip that some others might not know?
My tip is more of an observation.  “All system failures are caused by the things that we put in place to protect us from system failures”.  Test this …. and you’ll find it’s true – most of the time.

What SQL/Code do you find yourself writing most often?
I find myself querying PeopleTools meta-data most often.  This is at the heart of what makes PeopleSoft applications so successful, agile and extensible.  Genius software engineering!

What would be the one item you’d add to PeopleSoft if you could?
Code completion for PeopleCode in Application Designer.  It’s partially there in 8.54; it just needs to support Application Package Classes and Methods. Also, how about adding a visual object explorer into Application Designer for delivered and custom objects and app packages?  At a glance, you could see what methods, properties, functions and variables were defined in a code window.

What everyday thing are you better at than anyone else?
Oh my. Hard question. I can balance on one leg on an empty coke can.  Does that count?

What’s the best advice you’ve ever received?
Love the Lord God with all your heart and soul and love your neighbour as yourself.


A Startup is Like Making a Trip to Each Continent

Bradley Brown - Sun, 2015-08-30 21:43
Maybe you realize how difficult it is to go from an idea to a successful company.  Maybe you've attempted one (or more) yourself.  Is the American dream a startup?  It is for some, but not all.  If you start a company and you're the only employee and you don't get paid, is your business a startup?  Is it successful?  Beauty and success are in the eye of the beholder.  You might hear people suggest you set your sights and expectations lower so you guarantee success.  In fact, you might hear a lot of different conflicting things over your life.  In Founders Institute we call this "mentor whiplash."  One mentor says yes, another says no.  One says go and another says stop.  There's no right way to do most things in the startup world.  Set your own goals - think about your definition of success and strive for it...every day.

One thing I do know is that MOST people need a partner or partners to "complete them."  In other words, we all have gaps (and strengths) in our personality.  Your partners should fill your gaps.  If they don't, your company will have a gap.  If you're a perfectionist, you're going to want to find a partner who isn't.  If you're a detailed person, you're going to want to find a partner who isn't.  If you don't have any gaps, congrats!  I have many!

So how is a startup similar to making a trip to every continent?  First, when I say startup, I'm referring to a funded startup.  If you're trying to build a lifestyle company (i.e. one that provides a nice lifestyle for you), don't take someone else's money to do this - they will NOT be happy.  If you can build a business on your own and without any money, that's a dream come true for many.  That's not my dream.  So like I said, the basic premise here is that you're building a business that is going to require capital (i.e. money) to get it going.

When do you raise your first dollar?  My preference is to raise money as soon as you can.  In other words, don't spend any more of your own money to start the business than you absolutely need to.  Even if you have money.  Why?  Because in my view, if you can't get someone else to believe in your idea, it probably isn't a fundable idea.

So let's say you start your trip in a rich continent such as North America.  You can get to a lot of places by driving around.  Some might be safe, some might not.  In fact, you can probably get to your second continent without hopping on a plane - i.e. South America.  We could say that's similar to your "self funding" stage of your business.  At some point you're going to have to take off on a plane to get to the next continent.  In the startup world, continents are like "fund raising series."

The first round of funding (continent 1 - North America) is your own cash.  Make this your smallest and quickest continent to get behind you.  The second round (continent 2 - South America) is your seed round.  This is sometimes called the friends and family round.  Get people believing in your idea ASAP.  Get your seed funding ASAP.  The 3rd round, your Series A round (continent 3 - Australia) is a stretch.  It's a long and big flight.  Most companies frankly never get to this phase.  Maybe you're able to go to Europe instead and the flight is shorter.  It's still a long way and you better have the fuel you need to get to that next round.

So yes, if you start in North America, using your own money, you might have a lot of money.  You might have a nice vehicle to get to your next continent.  But at some point, to get to continent 3 of 7, you're going to have to take off.  When are you going to fuel up next?  Are you going to attempt to fly around the entire world without refueling?  Of course not.  That would be similar to trying to raise $1B in your first round of funding.  It will never happen.

Seed round valuations tend to range from $1M to $3M.  If you're a proven startup person and 100s of other "ifs," you're probably not reading my blog, but yes, your seed round valuation would be higher.  If you only have an idea, your initial valuation may be $0 and it could be less than $1M.

If you try to fuel up in the middle of the ocean, you're going to run out of fuel and crash.  If you stop in a dangerous place (i.e. the wrong investors, investors who run out of money, investors with the wrong expectations, etc), you could also be extorted for money.  Depending on where you stop, fuel could be reasonable or it could be expensive.  The more you need the fuel, the more expensive it is.  All of this is true in the startup world too.  If you don't make it to the next continent (funding stage) with your fuel (funding), you're done.  If you stop in the wrong place (i.e. you don't have the right metrics in place by the time you get there), your next funding round might not be fatal, but it could be VERY expensive (i.e. a low valuation equals a high percentage of stock you give up for very little money).


It's important to plan your trip (i.e. startup).  It's important that you can make it to the next continent safely.  Where you start is important.  Focus is important.  If you deliver a message like "we'll do that" for everything a potential investor brings up, they will not want to give you the fuel (cash) you need.  What if you pulled into a gas station (called an FBO in the flight world) with $1000 on you?  Let's say the FBO has a casino in it.  On your walk to the casino you notice a bar, so you stop in there.  You go into the casino and bet $500 on black.  What would your crew (your employees) have to say about your behavior?  A bit confused about the goal?  A lack of focus on the goal here?  What would your investors (i.e. the FBO employees) feel about your ability to pay for the fuel?  Have a mission and live for it.  Know what you need to do and do it.

Whatever you do, bet the farm on your focus.  One thing, not 2, not 3.  Don't try to be more than one thing.  Don't fuel up or raise money in the wrong location (i.e. bad investors) or at the wrong time (i.e. middle of the ocean).  Keep in mind that you MUST get to the next continent with whatever you have from your last funding round.  In other words, DO NOT try to raise money $30k at a time.  You'll spend all of your money raising money and you'll end up in the middle of the ocean without any more fuel.  Each round needs to get you to the next round - or...it's going to be costly or deadly.


Prepare for your trip.  Be ready.  Your investors are your lifeline to the next continent.  Respect them and do what they expect you to do to get to the next round.  If you don't, you both lose - unless they get your company and then they do something good with it.  But that's not what they want.  That's not what you want.  Good luck and safe travels!

Using DBMS_OUTPUT with Node.js and node-oracledb

Christopher Jones - Sun, 2015-08-30 19:20

The DBMS_OUTPUT package is the standard way to "print" output from PL/SQL. The way DBMS_OUTPUT works is like a buffer. Your Node.js application code turns on DBMS_OUTPUT buffering, calls some PL/SQL code that puts text into the buffer, and then later fetches from that buffer. Note: any PL/SQL code that calls DBMS_OUTPUT runs to completion before any output is available to the user. Also, other database connections cannot access your buffer.

A basic way to fetch DBMS_OUTPUT with node-oracledb is to bind an output string when calling the PL/SQL dbms_output.get_line() procedure, print the string, and then repeat until there is no more output. Another way that I like is to wrap the dbms_output.get_line() call into a pipelined function and fetch the DBMS_OUTPUT using a SQL query.

The following code shows both methods.

/*
  NAME
    dbmsoutput.js

  DESCRIPTION
    Shows two methods of displaying PL/SQL DBMS_OUTPUT in node-oracledb.
    The second method depends on these PL/SQL objects:

      create or replace type dorow as table of varchar2(32767);
      /
      show errors

      create or replace function mydofetch return dorow pipelined is
        line varchar2(32767);
        status integer;
        begin loop
          dbms_output.get_line(line, status); 
          exit when status = 1;
          pipe row (line);
        end loop;
      return; end;
      /
      show errors

*/

'use strict';

var async = require('async');
var oracledb = require('oracledb');
var dbconfig = require('./dbconfig.js');

oracledb.createPool(
  dbconfig,
  function(err, pool) {
    if (err)
      console.error(err.message)
    else
      doit(pool);
  });

var doit = function(pool) {
  async.waterfall(
    [
      function(cb) {
        pool.getConnection(cb);
      },

      // Tell the DB to buffer DBMS_OUTPUT
      enableDbmsOutput,

      // Method 1: Fetch a line of DBMS_OUTPUT at a time
      createDbmsOutput,
      fetchDbmsOutputLine,

      // Method 2: Use a pipelined query to get DBMS_OUTPUT 
      createDbmsOutput,
      function(conn, cb) {
        executeSql(
          conn,
          "select * from table(mydofetch())", [], { resultSet: true}, cb);
      },
      printQueryResults
    ],
    function (err, conn) {
      if (err) { console.error("In waterfall error cb: ==>", err, "<=="); }
      conn.release(function (err) { if (err) console.error(err.message); });
    }
  )
};

var enableDbmsOutput = function (conn, cb) {
  conn.execute(
    "begin dbms_output.enable(null); end;",
    function(err) { return cb(err, conn) });
}

var createDbmsOutput = function (conn, cb) {
  conn.execute(
    "begin "
     + "dbms_output.put_line('Hello, Oracle!');"
     + "dbms_output.put_line('Hello, Node!');"
     + "end;",
    function(err) { return cb(err, conn) });
}

var fetchDbmsOutputLine = function (conn, cb) {
  conn.execute(
    "begin dbms_output.get_line(:ln, :st); end;",
    { ln: { dir: oracledb.BIND_OUT, type:oracledb.STRING, maxSize: 32767 },
      st: { dir: oracledb.BIND_OUT, type:oracledb.NUMBER } },
    function(err, result) {
      if (err) {
        return cb(err, conn);
      } else if (result.outBinds.st == 1) {
        return cb(null, conn);  // no more output
      } else {
        console.log(result.outBinds.ln);
        return fetchDbmsOutputLine(conn, cb);
      }
    });
  }
               
var executeSql = function (conn, sql, binds, options, cb) {
  conn.execute(
    sql, binds, options,
    function (err, result) {
      if (err)
        cb(err, conn)
      else
        cb(null, conn, result);
    });
}

var printQueryResults = function(conn, result, cb) {
  if (result.resultSet) {
    fetchOneRowFromRS(conn, result.resultSet, cb);
  } else if (result.rows && result.rows.length > 0) {
    console.log(result.rows);
    return cb(null, conn);
  } else {
    console.log("No results");
    return cb(null, conn);
  }
}

function fetchOneRowFromRS(conn, resultSet, cb) {
  resultSet.getRow(  // note: getRows would be more efficient
    function (err, row) {
      if (err) {
        cb(err, conn);
      } else if (row) {
        console.log(row);
        fetchOneRowFromRS(conn, resultSet, cb);
      } else {
        cb(null, conn);
      }
    });
}

The output is:

Hello, Oracle!
Hello, Node!
[ 'Hello, Oracle!' ]
[ 'Hello, Node!' ]

I used resultSet.getRow() for simplicity, but you will probably want to use resultSet.getRows() for efficiency. If you want to buffer all the output in the Node.js application, Bruno Jouhier has a nice implementation to build up an array of query output in his GitHub gist query-all.js.

RMAN -- 8 : Using a Recovery Catalog Schema

Hemant K Chitale - Sun, 2015-08-30 06:21
Besides retaining information about backups in the controlfile, Oracle allows the use of an (external) Recovery Catalog schema.  This schema is queryable via SQL in the same manner as querying any user / application schema.

Let's start with a database that already has backups present but created without a Recovery Catalog Schema.

[oracle@localhost ~]$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Sun Aug 30 19:48:30 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SYS>select to_char(completion_time,'DD-MON HH24:MI') Completed_at, datafile_blocks, blocks_read
2 from v$backup_datafile
3 where file#=1
4 order by completion_time;

COMPLETED_AT DATAFILE_BLOCKS BLOCKS_READ
--------------------- --------------- -----------
01-AUG 22:10 107648 107648
10-AUG 15:14 107648 107648
10-AUG 19:58 107648 107648
30-AUG 16:59 107648 107648

SYS>

I now create a Catalog Schema and register this database into that schema.  There are 4 steps to this.  The first is to create the database that will hold the Catalog Schema (here, we presume that the database has been created before today's steps).  The next is to create the database account for the Catalog Schema (I create the account RCAT_OWNER).  The third step is to log in to the Catalog Schema with RMAN and run the CREATE CATALOG command.  The final step is to create an RMAN connection from the TARGET database to the Catalog and REGISTER the database.

[oracle@localhost ~]$ sqlplus system/oracle@rcat

SQL*Plus: Release 11.2.0.2.0 Production on Sun Aug 30 19:52:15 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SYSTEM>create user rcat_owner identified by rcat_owner
2 default tablespace users quota unlimited on users;

User created.

SYSTEM>grant create session to rcat_owner;

Grant succeeded.

SYSTEM>grant recovery_catalog_owner to rcat_owner;

Grant succeeded.

SYSTEM>
SYSTEM>exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
[oracle@localhost ~]$ rman catalog rcat_owner@rcat

Recovery Manager: Release 11.2.0.2.0 - Production on Sun Aug 30 19:53:26 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

recovery catalog database Password:
connected to recovery catalog database

RMAN> create catalog;

recovery catalog created

RMAN> exit


Recovery Manager complete.
[oracle@localhost ~]$ rman target / catalog rcat_owner/rcat_owner@rcat

Recovery Manager: Release 11.2.0.2.0 - Production on Sun Aug 30 19:55:59 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCL (DBID=1229390655)
connected to recovery catalog database

RMAN> register database;

database registered in recovery catalog
starting full resync of recovery catalog
full resync complete

RMAN>

When my (TARGET) database ("ORCL") is registered, Oracle automatically does a RESYNC CATALOG. Can I now see my backups in the Catalog ?

RMAN> list backup of datafile 1;


List of Backup Sets
===================


BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
381 Full 863.89M DISK 00:03:38 10-AUG-15
BP Key: 391 Status: AVAILABLE Compressed: YES Tag: TAG20150810T195515
Piece Name: /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_08_10/o1_mf_nnndf_TAG20150810T195515_bwk493jn_.bkp
List of Datafiles in backup set 381
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
1 Full 14186110 10-AUG-15 /home/oracle/app/oracle/oradata/orcl/system01.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
385 Full 238.12M DISK 00:01:09 30-AUG-15
BP Key: 395 Status: AVAILABLE Compressed: YES Tag: TAG20150830T165804
Piece Name: /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_08_30/o1_mf_nnndf_TAG20150830T165804_by5kdwrb_.bkp
List of Datafiles in backup set 385
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
1 Full 14198051 30-AUG-15 /home/oracle/app/oracle/oradata/orcl/system01.dbf

RMAN>

Why does it show only the latest two backups ? What about the preceding two backups that I could see in V$BACKUP_DATAFILE ? Let me check those backups without a CATALOG connection.

RMAN> exit


Recovery Manager complete.
[oracle@localhost ~]$ rman target /

Recovery Manager: Release 11.2.0.2.0 - Production on Sun Aug 30 19:59:59 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCL (DBID=1229390655)

RMAN> list backup of datafile 1;

using target database control file instead of recovery catalog

List of Backup Sets
===================


BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
254 Full 733.27M DISK 00:04:51 01-AUG-15
BP Key: 266 Status: AVAILABLE Compressed: YES Tag: TAG20150801T220612
Piece Name: /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_08_01/o1_mf_nnndf_TAG20150801T220612_bvsnlnpn_.bkp
List of Datafiles in backup set 254
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
1 Full 14157609 01-AUG-15 /home/oracle/app/oracle/oradata/orcl/system01.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
262 Full 733.23M DISK 00:03:17 10-AUG-15
BP Key: 274 Status: AVAILABLE Compressed: YES Tag: TAG20150810T151144
Piece Name: /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_08_10/o1_mf_nnndf_TAG20150810T151144_bwjmojs6_.bkp
List of Datafiles in backup set 262
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
1 Full 14158847 10-AUG-15 /home/oracle/app/oracle/oradata/orcl/system01.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
271 Full 863.89M DISK 00:03:38 10-AUG-15
BP Key: 283 Status: AVAILABLE Compressed: YES Tag: TAG20150810T195515
Piece Name: /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_08_10/o1_mf_nnndf_TAG20150810T195515_bwk493jn_.bkp
List of Datafiles in backup set 271
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
1 Full 14186110 10-AUG-15 /home/oracle/app/oracle/oradata/orcl/system01.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
275 Full 238.12M DISK 00:01:09 30-AUG-15
BP Key: 287 Status: AVAILABLE Compressed: YES Tag: TAG20150830T165804
Piece Name: /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_08_30/o1_mf_nnndf_TAG20150830T165804_by5kdwrb_.bkp
List of Datafiles in backup set 275
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
1 Full 14198051 30-AUG-15 /home/oracle/app/oracle/oradata/orcl/system01.dbf

RMAN>

When I disconnect from the Catalog Schema and do a local only (TARGET) connection, I can see 4 backups of the datafile.  So, what gives ?  Let me try an SQL query on V$BACKUP_DATAFILE.
(As an aside : note above how there can be a discrepancy between the listings shown by LIST BACKUP in the two scenarios (a) without a Recovery Catalog connection  and  (b) with a Recovery Catalog that was created after the last RESETLOGS.)  This is something to remember.

RMAN> exit


Recovery Manager complete.
[oracle@localhost ~]$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Sun Aug 30 20:01:54 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SYS>select to_char(completion_time,'DD-MON HH24:MI') Completed_at, datafile_blocks, blocks_read,
2 resetlogs_change#, resetlogs_time
3 from v$backup_datafile
4 where file#=1
5 order by completion_time
6 /

COMPLETED_AT DATAFILE_BLOCKS BLOCKS_READ RESETLOGS_CHANGE# RESETLOGS
--------------------- --------------- ----------- ----------------- ---------
01-AUG 22:10 107648 107648 14082620 04-JUL-15
10-AUG 15:14 107648 107648 14082620 04-JUL-15
10-AUG 19:58 107648 107648 14185666 10-AUG-15
30-AUG 16:59 107648 107648 14185666 10-AUG-15

SYS>

Notice that I have added two columns (RESETLOGS_CHANGE# and RESETLOGS_TIME) in the query.  Now, I see that the two older backups were from an *older* incarnation of the database.  They have a different RESETLOGS_CHANGE# / RESETLOGS_TIME.  So, those backups of the older incarnation are not cataloged into the Catalog Schema !

Can I do anything about this ?  It seems that the full RESYNC doesn't resync for backups of previous incarnations.   Can I reset my RETENTION POLICY and then do a RESYNC ?

SYS>exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
[oracle@localhost ~]$ rman target / catalog rcat_owner/rcat_owner@rcat

Recovery Manager: Release 11.2.0.2.0 - Production on Sun Aug 30 20:08:06 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCL (DBID=1229390655)
connected to recovery catalog database

RMAN> show all;

RMAN configuration parameters for database with db_unique_name HEMANTDB are:
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 4 DAYS;
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO COMPRESSED BACKUPSET;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO DISK;
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/home/oracle/app/oracle/product/11.2.0/dbhome_2/dbs/snapcf_orcl.f'; # default

RMAN> configure retention policy to recovery window of 36 days;

old RMAN configuration parameters:
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 4 DAYS;
new RMAN configuration parameters:
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 36 DAYS;
new RMAN configuration parameters are successfully stored
starting full resync of recovery catalog
full resync complete

RMAN> resync catalog;

starting full resync of recovery catalog
full resync complete

RMAN> list backup of datafile 1;


List of Backup Sets
===================


BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
381 Full 863.89M DISK 00:03:38 10-AUG-15
BP Key: 391 Status: AVAILABLE Compressed: YES Tag: TAG20150810T195515
Piece Name: /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_08_10/o1_mf_nnndf_TAG20150810T195515_bwk493jn_.bkp
List of Datafiles in backup set 381
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
1 Full 14186110 10-AUG-15 /home/oracle/app/oracle/oradata/orcl/system01.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
385 Full 238.12M DISK 00:01:09 30-AUG-15
BP Key: 395 Status: AVAILABLE Compressed: YES Tag: TAG20150830T165804
Piece Name: /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_08_30/o1_mf_nnndf_TAG20150830T165804_by5kdwrb_.bkp
List of Datafiles in backup set 385
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
1 Full 14198051 30-AUG-15 /home/oracle/app/oracle/oradata/orcl/system01.dbf

RMAN>

No, extending the Recovery Window still doesn't help.  Can I try something else ?  What about the CATALOG command ?

[oracle@localhost ~]$ ls -l /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_08_01/
total 894280
-rw-rw---- 1 oracle oracle 7786496 Aug 1 22:06 o1_mf_annnn_TAG20150801T220605_bvsnlfrc_.bkp
-rw-rw---- 1 oracle oracle 2421248 Aug 1 22:06 o1_mf_annnn_TAG20150801T220605_bvsnlk3q_.bkp
-rw-rw---- 1 oracle oracle 56320 Aug 1 22:06 o1_mf_annnn_TAG20150801T220605_bvsnll9n_.bkp
-rw-rw---- 1 oracle oracle 3595776 Aug 1 22:06 o1_mf_annnn_TAG20150801T220605_bvsnlmdx_.bkp
-rw-rw---- 1 oracle oracle 165888 Aug 1 22:12 o1_mf_annnn_TAG20150801T221209_bvsnxs75_.bkp
-rw-rw---- 1 oracle oracle 16896 Aug 1 22:14 o1_mf_annnn_TAG20150801T221404_bvso1f28_.bkp
-rw-rw---- 1 oracle oracle 768901120 Aug 1 22:11 o1_mf_nnndf_TAG20150801T220612_bvsnlnpn_.bkp
-rw-rw---- 1 oracle oracle 110075904 Aug 1 22:11 o1_mf_nnndf_TAG20150801T220612_bvsnvwjj_.bkp
-rw-rw---- 1 oracle oracle 21782528 Aug 1 22:12 o1_mf_nnndf_TAG20150801T220612_bvsnx9v4_.bkp
[oracle@localhost ~]$ ls -l /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_08_10/
total 1771844
-rw-rw---- 1 oracle oracle 786944 Aug 10 15:11 o1_mf_annnn_TAG20150810T151143_bwjmohc7_.bkp
-rw-rw---- 1 oracle oracle 28672 Aug 10 15:16 o1_mf_annnn_TAG20150810T151601_bwjmxk5v_.bkp
-rw-rw---- 1 oracle oracle 4140032 Aug 10 19:55 o1_mf_annnn_TAG20150810T195509_bwk48xxv_.bkp
-rw-rw---- 1 oracle oracle 526336 Aug 10 19:55 o1_mf_annnn_TAG20150810T195509_bwk4911j_.bkp
-rw-rw---- 1 oracle oracle 311296 Aug 10 19:55 o1_mf_annnn_TAG20150810T195509_bwk4923s_.bkp
-rw-rw---- 1 oracle oracle 32256 Aug 10 19:59 o1_mf_annnn_TAG20150810T195901_bwk4j5lt_.bkp
-rw-rw---- 1 oracle oracle 768851968 Aug 10 15:15 o1_mf_nnndf_TAG20150810T151144_bwjmojs6_.bkp
-rw-rw---- 1 oracle oracle 110075904 Aug 10 15:15 o1_mf_nnndf_TAG20150810T151144_bwjmvycy_.bkp
-rw-rw---- 1 oracle oracle 21938176 Aug 10 15:15 o1_mf_nnndf_TAG20150810T151144_bwjmx1sv_.bkp
-rw-rw---- 1 oracle oracle 905863168 Aug 10 19:58 o1_mf_nnndf_TAG20150810T195515_bwk493jn_.bkp
[oracle@localhost ~]$


RMAN> catalog start with '/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_08_01/';

searching for all files that match the pattern /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_08_01/
no files found to be unknown to the database

RMAN> list backup of datafile 1;


List of Backup Sets
===================


BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
381 Full 863.89M DISK 00:03:38 10-AUG-15
BP Key: 391 Status: AVAILABLE Compressed: YES Tag: TAG20150810T195515
Piece Name: /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_08_10/o1_mf_nnndf_TAG20150810T195515_bwk493jn_.bkp
List of Datafiles in backup set 381
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
1 Full 14186110 10-AUG-15 /home/oracle/app/oracle/oradata/orcl/system01.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
385 Full 238.12M DISK 00:01:09 30-AUG-15
BP Key: 395 Status: AVAILABLE Compressed: YES Tag: TAG20150830T165804
Piece Name: /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_08_30/o1_mf_nnndf_TAG20150830T165804_by5kdwrb_.bkp
List of Datafiles in backup set 385
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
1 Full 14198051 30-AUG-15 /home/oracle/app/oracle/oradata/orcl/system01.dbf

RMAN>
RMAN> catalog start with '/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_08_10/';

searching for all files that match the pattern /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_08_10/
no files found to be unknown to the database

Well, apparently, even the CATALOG command refuses to catalog backup pieces from an older incarnation !

So, it seems that :
1.  If you've done a RESETLOGS recently, then even though the controlfile may show previous backups, a Catalog Schema created after the RESETLOGS will not show the backups of the preceding incarnation  (disconnecting from the Catalog does allow you to view the previous backups in RMAN !)
2.  The CATALOG command also will not include the previous backups; it will only accept backups of the current incarnation.

This testing has been done with 11.2.0.2.  Has the behaviour changed in 11.2.0.4 / 12.1.0.1 / 12.1.0.2 ?



Categories: DBA Blogs

Personalized Learning is Hard

Michael Feldstein - Sat, 2015-08-29 10:04

By Michael Feldstein

Paul Fain has written a really good, nuanced article at IHE covering the update that Essex County College gave of their developmental math adaptive learning pilot at a recent conference in Washington, DC. We did a two-part case study on ECC in our e-Literate TV series. The headline results are as follows:

  • In the first year, the pass rate was worse than  in the traditional classes. (The first semester was “disastrous.”)
  • This year—the second year—the pass rate is coming closer to the traditional class but is still underperforming.
  • The article seems to imply that students who earn a C in the personalized learning class do better than students who earn a C in the traditional class, but the article is not explicit about that.

There is no magic pill. As Phil and I have been saying all along—most recently in my last post, which mentioned ECC’s use of adaptive learning—the software is, at best, an enabler. It’s the work that the students and teachers do around the software that makes the difference. Or not. In ECC’s case, they are trying to implement a pretty radical change in pedagogy with an at-risk population. It’s worth digging into the details.

Let’s start by reviewing the basics of their situation:

  • ECC has a 50% pass rate in their lowest level developmental math class, and a 50% pass rate in the next developmental math class up. Since a substantial majority of ECC students place into developmental math, a big part of ECC’s college completion problem can be traced to students failing developmental math.
  • ECC believes that a big reason they have a high failure rate is that students come into that class with an incredibly wide range of prior skills and knowledge—wide enough that a traditional lecture-based class would not address the needs of a majority of the students.
  • They decided to try a radical change in the way the developmental math course was structured.
    • Students would work self-paced on a mastery learning curriculum in labs using McGraw Hill’s ALEKS adaptive learning software. Students could ask each other or the roving instructor for help.
    • Students also met with a teacher each week, separately from the lab sessions, to report their progress of the week, assess the success or failure of their learning strategies, and set new strategies and goals for the next week.

So why does ECC think that they are not getting the results that they hoped for? Doug Walcerz, ECC’s Vice President for Planning, Research, and Assessment, offered a few observations. From the article:

  • “[A]daptive courses provide less “accountability.” That’s because students move through content at different paces and it’s harder to make sure they master concepts by a certain point. ‘There is no classwide mile post.'”
  • “[T]he college leaned heavily on graduate students from nearby Rutgers University at Newark and the New Jersey Institute of Technology to teach parts of the adaptive courses during the first year.”
  • “’We underestimated the skill that you would need as a teacher to deliver that content,’ he said.”
  • “Faculty buy-in has also been a challenge. In adaptive courses, instructors do not give lectures or teach in the traditional format. Instead, they circulate among students who are working on computer-based courseware, offering help when needed, much like tutors. That feels like a job ‘below faculty status’ for some instructors, Walcerz said.”

Putting this all together, here is what I see:

  • ECC is starting with an at-risk population, a large portion of which probably has not been taught good meta-cognitive skills or help-seeking behaviors.
  • They are putting those students into a curriculum which, whatever its other virtues may be, puts a higher demand on those meta-cognitive and help-seeking behaviors than a traditional class would.
  • The burden of addressing that weakness in the course design falls on the faculty. But ECC has been working with untrained and inexperienced adjuncts—in fact, graduate students—as well as some faculty who may be hostile to the project. (ECC has since moved away from using graduate students, according to the article.)

There may or may not also be problems with the software. For what it’s worth, Walcerz seems to think highly of the software and doesn’t believe that it is contributing to the poor results. Personally, I think the problems with the match between the student skills and the course design are sufficient to explain the problem. The kind of burden that a self-paced program like this puts on these students is somewhat analogous to the burden that an online course puts on them. We know that the type of population that would be enrolled in a developmental math course in a community college in Newark, NJ typically does not do well in online courses. The difference is that, in ECC’s design, there actually are faculty there to intervene and coach the students personally. It stands to reason that the quality of that coaching would be a critical success factor.

Does this mean that ECC’s approach was a bad idea? I don’t think so. Differentiated instruction is a logical pedagogical response to a heterogeneous class problem. But it can only work in their environment if they have appropriately skilled, trained, and motivated faculty. ECC made substantial investments in software and facilities, but this result highlights the fact that the critical success factors in many cases will be making a substantial investment in providing faculty with appropriate professional development and a motivating compensation and promotion plan. It sounds like they have come to realize that and are taking some steps in that direction.

Truly effective innovation in education is hard. As Phil likes to stress, it takes both brutal honesty regarding the results and a commitment to iterate when the results are almost inevitably not what we hoped for in the first try. A while back, I blogged about an interesting case study at MSU where they did exactly that with a psychology class. If you read the comments thread in the follow-on post, you’ll see that Mike Caulfield brought up a potentially new insight that the course’s DWF pattern may be related to interactions between the course’s absence policy and the blended format. Course problems (and course successes) can be subtle and hard to tease out.

There. Is. No. Magic. Pill.

The post Personalized Learning is Hard appeared first on e-Literate.

Count Selected Rows

Denes Kubicek - Sat, 2015-08-29 05:44
This small example shows how to make sure at least one row in your tabular form has been selected before proceeding:

https://apex.oracle.com/pls/apex/f?p=93000:806

Categories: Development

Index Usage – 2

Jonathan Lewis - Sat, 2015-08-29 04:33

I’ve been a little slow in the follow-up to my previous posting on possibly redundant indexes. Before going into the slightly more complex stuff, there’s another peripheral point (but a very important one) that’s worth raising about how clever the optimizer can be. Here’s some code for 11.2.0.4 to demonstrate the point:

create table t1
nologging
as
with generator as (
	select	--+ materialize
		rownum id 
	from dual 
	connect by 
		level <= 1e4
)
select
	rownum					id,
	trunc(sysdate,'MM') + (rownum-1)/1440	date_time,
	rpad('x',100)				padding
from
	generator	v1,
	generator	v2
where
	rownum <= 1e5
;


alter table t1 
add (
        date_only
	generated always as (trunc(date_time)) virtual 
)
;

begin
	dbms_stats.gather_table_stats(
		ownname		 => user,
		tabname		 =>'T1',
		method_opt	 => 'for all columns size 1'
	);
end;
/

create index t1_i1 on t1(date_only) nologging;

So, in a two-step process, I’ve got an indexed virtual column that holds the value of the date_time column truncated to just the date. Would you expect the optimizer to use the index to execute the following query efficiently:


select
        max(id)
from
        t1
where
        date_time between sysdate-1 and sysdate
;

Note that the query references the real date_time column not the virtual column date_only, and it’s not using the expression that defines the index – yet the plan reads as follows:


-----------------------------------------------------------------------------------------------
| Id  | Operation                             | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |       |     1 |    21 |    86   (2)| 00:00:01 |
|   1 |  SORT AGGREGATE                       |       |     1 |    21 |            |          |
|*  2 |   FILTER                              |       |       |       |            |          |
|*  3 |    TABLE ACCESS BY INDEX ROWID BATCHED| T1    |  1442 | 30282 |    86   (2)| 00:00:01 |
|*  4 |     INDEX RANGE SCAN                  | T1_I1 |  4306 |       |    13   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(SYSDATE@!>=SYSDATE@!-1)
   3 - filter("DATE_TIME"<=SYSDATE@! AND "DATE_TIME">=SYSDATE@!-1)
   4 - access("T1"."DATE_ONLY">=TRUNC(SYSDATE@!-1) AND
              "T1"."DATE_ONLY"<=TRUNC(SYSDATE@!))

It’s a little odd that even though the optimizer in the newer versions of Oracle treats many simple expressions on sysdate as constants it still checks (operation 2) that “sysdate >= sysdate - 1” but perhaps that’s just a case of a piece of generic code that isn’t worth the risk or effort of changing.

The key point, of course, is that Oracle has managed to generate some extra predicates that allow it to use the “wrong” index to get a first approximation of the result set fairly efficiently, and then used the original predicate to reduce the approximation down to the correct result set.

If you want a quick sanity check on the access predicates used for operation 4:

  • If date_time >= sysdate-1, then trunc(date_time) >= trunc(sysdate-1)
  • If date_time <= sysdate, then trunc(date_time) <= trunc(sysdate)

This style of predicate manipulation also works with numeric data types, but I think its greatest benefit (or convenience) is likely to come from date data types where the data has been created with a time component but there are frequent “date-only” queries. The days of creating two indexes as a workaround for handling generated code that wants to deal with both date_time and trunc(date_time) predicates should be numbered.

Footnote:

This enhancement probably appeared in 11.2.0.2, and I first saw it described in October 2013 in this blog note by Mohamed Houri; but 12c offers a delightful little enhancement – here’s what my table looks like in the 12c version of the code:


SQL> desc t1
 Name                          Null?    Type
 ----------------------------- -------- --------------------
 ID                                     NUMBER
 DATE_TIME                              DATE
 PADDING                                VARCHAR2(100)

SQL> 

Where’s the virtual column ? The 12c version of my code had a slightly different definition for it:


alter table t1  
add (
        date_only
        invisible
        generated always as (trunc(date_time)) virtual
)
;

The transformation still works even when the virtual column is invisible. So (subject to searching for anomalies, boundary conditions and bugs) it looks as if you can change the table definition, and get the benefits of two indexes for the price of one without the application realising that anything has changed.


Switch Panes in Page Designer

Denes Kubicek - Sat, 2015-08-29 03:36
This is definitely worth mentioning. Go here and download this plugin for Firefox or Chrome. You will like it. It switches the panes in the Page Designer so you can access the properties of the selected elements much more easily. Congrats Filip. This is a cool feature. Thanks.

Categories: Development

Three Easy Ways to Stream Twitter Data into ElasticSearch

Rittman Mead Consulting - Sat, 2015-08-29 00:46

For the past few months a friend has been driving me crazy with all his praise for Splunk. He was going on about how easy it is to install, integrate different sources and build reports. I eventually started playing around to see if it could be used for a personal project I’m working on. In no time at all I understood what he was on about and I could see the value and ease of use of the product. Unfortunately the price of such a product means it is not a solution for everyone so I started looking around for alternatives and ElasticSearch caught my eye as a good option.

In this post we will focus on how we can stream Twitter data into ElasticSearch and explore the different options for doing so. Storing data in ElasticSearch is just the first step but you only gain real value when you start analysing this data. In the next post we will add sentiment analysis to our Twitter messages and see how we can analyse this data by building Kibana dashboards. But for now we will dig a bit deeper into the following three configuration options:

  • logstash
  • Twitter River Plugin
  • Tweepy

We will look at the installation and configuration of each of these and see how we can subscribe to twitter using the Twitter API. Data will then get processed, if required, and sent to Elasticsearch.

(Figure: Twitter to Elasticsearch data flow)

Why Use Elasticsearch

Elasticsearch has the ability to store large quantities of semi-structured (JSON) data and provides the ability to quickly and easily query this data. This makes it a good option for storing Twitter data which is delivered as JSON and a perfect candidate for the project I’m working on.

Prerequisites

You will need a server to host all the required components. I used an AWS free tier (t2.micro) instance running Amazon Linux 64-bit. This post assumes you already have an elasticsearch cluster up and running and that you have a basic understanding of elasticsearch. There are some good blog posts, written by Robin Moffatt, which were very useful during the installation and configuration stages.

Twitter Stream API

In order to access the Twitter Streaming API, you need to register an application at http://apps.twitter.com. Once created, you should be redirected to your app’s page, where you can get the consumer key and consumer secret and create an access token under the “Keys and Access Tokens” tab. These values will be used as part of the configuration for all the sample configurations to follow.

The API allows two types of subscriptions. Either subscribe to specific keywords or to a user timeline (similar to what you see as a twitter user).

logstash

We'll start with logstash as this is probably the easiest one to configure and seems to be the recommended approach for integrating sources with elasticsearch in recent versions. At the time of writing this post, logstash only supported streaming based on keywords which meant it was not suitable for my needs but it’s still a very useful option to cover.

logstash installation

To install logstash you need to download the correct archive based on the version of elasticsearch you are running.

curl -O https://download.elasticsearch.org/logstash/logstash/logstash-x.x.x.tar.gz

Extract the archived file and move the extracted folder to a location of your choice

tar zxf logstash-x.x.x.tar.gz
mv logstash-x.x.x /usr/share/logstash/

logstash configuration

To configure logstash we need to provide input, output and filter elements. For our example we will only specify input (twitter) and output (elasticsearch) elements as we will be storing the full twitter message.

For a full list of logstash twitter input settings see the official documentation.

Using your favourite text editor, create a file called twitter_logstash.conf and copy the below text. Update the consumer_key, consumer_secret, oauth_token and oauth_token_secret values with the values from your Twitter Stream App created earlier.

input {
    twitter {
        # add your data
        consumer_key => "CONSUMER_KEY_GOES_HERE"
        consumer_secret => "CONSUMER_SECRET_GOES_HERE"
        oauth_token => "ACCESS_TOKEN_GOES_HERE"
        oauth_token_secret => "ACCESS_TOKEN_SECRET_GOES_HERE"
        keywords => ["obiee","oracle"]
        full_tweet => true
    }
}
output {
    elasticsearch_http {
        host => "localhost"
        index => "idx_ls"
        index_type => "tweet_ls"
    }
}

This configuration will receive all tweets tagged with obiee or oracle and store them to an index called idx_ls in elasticsearch.

To run logstash, execute the following command from the installed location

bin/logstash -f twitter_logstash.conf

If you subscribed to active twitter tags you should see data within a few seconds. To confirm if your data is flowing you can navigate to http://server_ip:9200/_cat/indices?v which will show you a list of indices with some relevant information.

(Figure: Elasticsearch index list)

With this easy configuration you can get Twitter data flowing in no time at all.

Twitter River Plugin

Next we will look at using the River Plugins to stream Twitter data. The only reason to use this approach over logstash is if you want to subscribe to a user timeline. Using this feature will show the same information as the Twitter application or viewing your timeline online.

Twitter River Plugin installation

Before installing the plugin you need to determine which version is compatible with your version of elasticsearch. You can confirm this at https://github.com/elasticsearch/elasticsearch-river-twitter, where you can select the correct one.

To install you need to use the elasticsearch plugin installation script. From the elasticsearch installation directory, execute:

bin/plugin -install elasticsearch/elasticsearch-river-twitter/x.x.x

Then restart your Elasticsearch service.

Twitter River Plugin configuration

To configure the twitter subscriber we will again create a .conf file with the necessary configuration elements. Create a new file called twitter_river.conf and copy the following text. As with logstash, update the required fields with the values from the twitter app created earlier.

{
  "type": "twitter",
  "twitter" : {
        "oauth" : {
            "consumer_key" : "CONSUMER_KEY_GOES_HERE",
            "consumer_secret" : "CONSUMER_SECRET_GOES_HERE",
            "access_token" : "ACCESS_TOKEN_GOES_HERE",
            "access_token_secret" : "ACCESS_TOKEN_SECRET_GOES_HERE"
        },
        "filter" : {
            "tracks" : ["obiee", "oracle"]
        },
        "raw" : true,
        "geo_as_array" : true
  },
  "index": {
    "index": "idx_rvr",
    "type": "tweet_rvr",
    "bulk_size": 100,
    "flush_interval": "5s"
  }
}

This configuration is identical to the logstash configuration and will receive the same tweets from twitter. To subscribe to a user timeline instead of keywords, replace the filter configuration element:

"filter" : {
      "tracks" : ["obiee", "oracle"],
},

with a user type element

"type" : "user",

To start the plugin you need to execute the following from a terminal window.

curl -XPUT localhost:9200/_river/idx_rvr/_meta -d @twitter_river.conf

Depending on how active your subscribed tags are you should see data within a few seconds in elasticsearch. You can again navigate to http://server_ip:9200/_cat/indices?v to confirm if your data is flowing. Note this time that you should see two new rows, one index called _river and the other idx_rvr. idx_rvr is where your twitter data will be stored.

To stop the plugin (or change between keywords and user timeline), execute the following from a terminal window:

curl -XDELETE 'localhost:9200/_river/idx_rvr';

Tweepy

Finally we will look at the most flexible solution of them all. It is a bit more complicated to install and configure but, given what you gain, the small amount of extra time spent is well worth the effort. Once you have Tweepy working you will be able to write your own Python code to manipulate the data as you see fit.

Tweepy installation

As Tweepy is a Python package we will use pip to install the required packages. If you don't have pip installed, execute one of the following, depending on your Linux distribution.

yum -y install python-pip

or

apt-get install python-pip

Next we will install the Tweepy and elasticsearch packages

pip install tweepy
pip install elasticsearch

Tweepy configuration

Create a new file called twitter_tweepy.py and copy the following text to the file

import tweepy
import sys
import json
from textwrap import TextWrapper
from datetime import datetime
from elasticsearch import Elasticsearch


consumer_key="CONSUMER_KEY_GOES_HERE"
consumer_secret="CONSUMER_SECRET_GOES_HERE"

access_token="ACCESS_TOKEN_GOES_HERE"
access_token_secret="ACCESS_TOKEN_SECRET_GOES_HERE"

auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)

es = Elasticsearch()

class StreamListener(tweepy.StreamListener):
    status_wrapper = TextWrapper(width=60, initial_indent='    ', subsequent_indent='    ')

    def on_status(self, status):
        try:
            #print '\n%s %s' % (status.author.screen_name, status.created_at)

            json_data = status._json
            #print json_data['text']

            es.create(index="idx_twp",
                      doc_type="twitter_twp",
                      body=json_data
                     )

        except Exception, e:
            print e
            pass

streamer = tweepy.Stream(auth=auth, listener=StreamListener(), timeout=3000000000 )

#Fill with your own Keywords bellow
terms = ['obiee','oracle']

streamer.filter(None,terms)
#streamer.userstream(None)

As with the river plugin you can subscribe to the user timeline by changing the subscription type. To do this replace the last line in the script with

streamer.userstream(None)

To start the listener you need to execute the python file

python twitter_tweepy.py

Navigate to the elasticsearch index list again to ensure you are receiving data.
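
Since the elasticsearch Python package is already installed for this example, you can also do a quick check from Python itself. The snippet below is only a sanity-check sketch: it assumes the idx_twp index used above and a cluster listening on the default localhost:9200.

from elasticsearch import Elasticsearch

es = Elasticsearch()

# how many tweets have been stored so far?
print es.count(index="idx_twp")['count']

# peek at the text of a few stored messages matching one of our keywords
results = es.search(index="idx_twp", q="text:oracle", size=5)
for hit in results['hits']['hits']:
    print hit['_source'].get('text')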

Conclusion

Getting Twitter data into Elasticsearch is actually pretty simple. Logstash is by far the easiest one to configure and if subscribing to keywords is your only requirement it should be the preferred solution. Now that we have the foundation in place, in the next post we will have a look at how we can enhance this data by adding sentiment analysis and how we can use this data to make decisions.
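
As a small taste of that enrichment step, here is a minimal sketch that tags the stored tweets with a polarity score. It assumes the idx_twp index from the Tweepy example and uses the TextBlob library purely for illustration; the follow-up post may well take a different approach.

from elasticsearch import Elasticsearch
from textblob import TextBlob

es = Elasticsearch()

# score a batch of stored tweets and write the result back as a new field
hits = es.search(index="idx_twp", size=100)['hits']['hits']
for hit in hits:
    text = hit['_source'].get('text', '')
    polarity = TextBlob(text).sentiment.polarity  # -1.0 (negative) to +1.0 (positive)
    es.update(index="idx_twp",
              doc_type=hit['_type'],
              id=hit['_id'],
              body={"doc": {"sentiment": polarity}})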

Categories: BI & Warehousing

Amazon Dash — It’s Dinner Time!

Oracle AppsLab - Fri, 2015-08-28 17:44

Yesterday I received an Amazon Dash for ordering IZZE juice.


I think it is a great device, not that I would order tons of IZZE from Amazon, but at $5 it has a wifi module + a micro controller + an LED light + a battery + a nice enclosure, and it's usually in deep sleep, which means the device can last for years. That's a bargain – a similar device would cost $20 – $40, at least before ESP8266 became available.

The first thing I tried was to re-purpose it to toggle the poor man's Nest screen on and off. The PiTFT screen is quite bright and gets warm, so I wanted to be able to turn it off without unplugging the cord, and turn it back on instantly when needed. And here it is. Note how the IZZE sticker color coordinates well with the Nest's warm yellow color :)

[Photos: the IZZE Dash button next to the poor man's Nest display]

The signal cycle goes through PubNub, which has MQTT at its core, and the response time is less than 0.5 seconds. So it is remotely controlled: if I leave the poor man's Nest in the office, I can push the button at home to turn it off.
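He doesn't show the plumbing code here, so purely to illustrate the same publish/subscribe pattern, below is a minimal sketch using the paho-mqtt client instead of PubNub; the broker address, topic and payload are all placeholders:

# pip install paho-mqtt
import paho.mqtt.client as mqtt

BROKER = "broker.example.com"    # placeholder MQTT broker
TOPIC = "home/nest-screen"       # placeholder topic

def on_message(client, userdata, msg):
    # this is where the PiTFT screen would be toggled on or off
    print("received %s on %s" % (msg.payload, msg.topic))

listener = mqtt.Client()
listener.on_message = on_message
listener.connect(BROKER)
listener.subscribe(TOPIC)
listener.loop_start()            # listen in the background

# the button side simply publishes a message to the same topic
sender = mqtt.Client()
sender.connect(BROKER)
sender.publish(TOPIC, "toggle")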

While my daughter was excited about the IZZE toggle button controlling the PiTFT screen, my wife asked me to do something more meaningful :) So I made a second attempt and turned the IZZE button into a "dinner time" call button.

Every time dinner is ready, I have to shout upstairs to get my kids, and more often than not I can't get them because they have headphones on.

So I modified the code a little to listen for the IZZE button waking up and trying to connect to my router, then use that signal to ask the Philips Hue lights to blink 3 times.
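The post doesn't include that code, but the widely used trick is to sniff for the button's ARP probe when it wakes up and joins wifi, then drive the lights. A rough sketch of that idea with scapy and phue follows; the MAC address, bridge IP and light IDs are placeholders, and sniffing requires root:

# pip install scapy phue
import time
from scapy.all import sniff, ARP
from phue import Bridge

DASH_MAC = "aa:bb:cc:dd:ee:ff"     # placeholder: the IZZE button's MAC address
bridge = Bridge("192.168.1.10")    # placeholder: Philips Hue bridge IP
KIDS_LIGHTS = [1, 2]               # placeholder: light IDs at the kids' desks

def dinner_time(pkt):
    # the Dash button sends an ARP probe when it wakes up and joins wifi
    if pkt.haslayer(ARP) and pkt[ARP].hwsrc.lower() == DASH_MAC:
        for _ in range(3):         # blink three times
            bridge.set_light(KIDS_LIGHTS, 'alert', 'select')
            time.sleep(1)

sniff(prn=dinner_time, filter="arp", store=0)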

Now my wife can just press the IZZE button in the kitchen, the Hue lights at my kids' desks start to blink, and that's the dinner time call.

I guess that is more meaningful; at least I don't have to shout upstairs again :)

asmcmd> a better “du”

Pythian Group - Fri, 2015-08-28 14:28

I discovered ASM with a 10.1.0.3 RAC running on Linux Itanium, and that was a big adventure. At that time there was no asmcmd. In 2005, Oracle released Oracle 10gR2, asmcmd came into the picture, and we figured out how to make it work with a 10gR1 ASM. We were very excited to have a command line for ASM until... we tried it! Let's call a spade a spade: it was very poor.

Ten years later, Oracle has released 11gR1, 11gR2 and 12cR1, and asmcmd has been improved, but the "ASM shell" remains very weak, especially the "du" command:

ASMCMD> du
Used_MB Mirror_used_MB
 556178 556178
ASMCMD> du .
Used_MB Mirror_used_MB
 556178 556178
ASMCMD> du *
Used_MB Mirror_used_MB
 556265 556265
ASMCMD> ls
ASM_CONFIG/
DATA/
FRA/
LOG/
ASMCMD>

Why doesn't "du *" behave as it does in any Unix shell? How do I know the size of each subdirectory in my current directory?

 

Nowadays, we often have dozens of instances running on the same server, sharing the same ASM:


[oracle@higgins ~]$ ps -ef | grep pmon | wc -l
30
[oracle@higgins ~]$

So should I run one "du" per database (directory) to know the size used by each database? What if I keep one month of archivelogs in my FRA? Should I wait for the month of February so that there are only 28 "du" commands to run if I want to know the size of the archivelogs generated each day (and only in a non-leap year!)?

 

This is why I wrote this piece of code: a "du" for ASM that makes my life easier every day:

[oracle@higgins ~]$ cat asmdu.sh
#!/bin/bash
#
# du of each subdirectory in a directory for ASM
#
D=$1

if [[ -z $D ]]
then
 echo "Please provide a directory !"
 exit 1
fi

(for DIR in `asmcmd ls ${D}`
 do
     echo ${DIR} `asmcmd du ${D}/${DIR} | tail -1`
 done) | awk -v D="$D" ' BEGIN {  printf("\n\t\t%40s\n\n", D " subdirectories size")           ;
                                  printf("%25s%16s%16s\n", "Subdir", "Used MB", "Mirror MB")   ;
                                  printf("%25s%16s%16s\n", "------", "-------", "---------")   ;}
                               {
                                  printf("%25s%16s%16s\n", $1, $2, $3)                         ;
                                  use += $2                                                    ;
                                  mir += $3                                                    ;
                               }
                         END   { printf("\n\n%25s%16s%16s\n", "------", "-------", "---------");
                                 printf("%25s%16s%16s\n\n", "Total", use, mir)                 ;} '
[oracle@higgins ~]$
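If you would rather drive asmcmd from Python than from bash/awk, the same idea (loop over asmcmd ls, run asmcmd du on each entry, and total it up) can be sketched roughly like this, assuming asmcmd is on the PATH and the ASM environment is already set:

#!/usr/bin/env python
# rough Python sketch of the same "du per subdirectory" idea
import subprocess
import sys

def asmcmd(*args):
    return subprocess.check_output(["asmcmd"] + list(args)).decode().splitlines()

if len(sys.argv) != 2:
    sys.exit("Please provide a directory !")

top = sys.argv[1]
total_used = total_mirror = 0
print("%25s%16s%16s" % ("Subdir", "Used MB", "Mirror MB"))
for subdir in asmcmd("ls", top):
    subdir = subdir.strip()
    used, mirror = asmcmd("du", "%s/%s" % (top, subdir))[-1].split()[:2]
    print("%25s%16s%16s" % (subdir, used, mirror))
    total_used += int(used)
    total_mirror += int(mirror)
print("%25s%16s%16s" % ("Total", total_used, total_mirror))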
Let's see the shell version in action with some real-life examples:

[oracle@higgins ~]$ . oraenv
ORACLE_SID = [+ASM] ? +ASM
The Oracle base remains unchanged with value /oracle
[oracle@higgins ~]$ ./asmdu.sh DATA

DATA subdirectories size

Subdir  Used MB Mirror MB
------  ------- --------
DB01/    2423    2423
DB02/    2642    2642
DB03/    321201  321201
DB04/    39491   39491
DB05/    180753  180753
DB06/    4672    4672
DB07/    1431    1431
DB08/    2653    2653
DB09/    70942   70942
DB10/    96001   96001
DB11/    57322   57322
DB12/    70989   70989
DB13/    4639    4639
DB14/    40800   40800
DB15/    13397   13397
DB16/    15279   15279
DB17/    19020   19020
DB18/    8886    8886
DB19/    4671    4671
DB20/    14994   14994
DB21/    502245  502245
DB22/    4839    4839
DB23/    10169   10169
DB24/    7772    7772
DB25/    7828    7828
DB26/    112109  112109
DB27/    5564    5564
DB28/    16895   16895
------  ------- ---------
Total   1639627 1639627
[oracle@higgins ~]$

 

Another one, with many archivelog directories:
[oracle@higgins ~]$./asmdu.sh FRA/THE_DB/ARCHIVELOG/

 FRA/THE_DB/ARCHIVELOG/ subdirectories size

 Subdir       Used MB Mirror MB
 ------        ------ ---------
 2015_02_19/    114   114
 2015_02_20/    147   147
 2015_02_21/    112   112
 2015_02_22/    137   137
 2015_02_23/    150   150
 2015_02_24/    126   126
 2015_02_25/    135   135
 2015_02_26/    130   130
 2015_02_27/    129   129
 2015_02_28/    119   119
 2015_03_01/    146   146
 2015_03_02/    150   150
 2015_03_03/    128   128
 2015_03_04/    134   134
 2015_03_05/    44    44
 2015_05_27/    28    28
 2015_05_28/    95    95
 2015_05_29/    76    76
 2015_05_30/    187   187
 2015_05_31/    78    78
 2015_06_01/    111   111
 2015_06_02/    105   105
 2015_06_03/    43    43
 2015_06_04/    142   142
 2015_06_05/    42    42
 2015_06_06/    84    84
 2015_06_07/    70    70
 2015_06_08/    134   134
 2015_06_09/    77    77
 2015_06_10/    143   143
 2015_06_11/    2     2
 2015_06_21/    14    14
 2015_06_22/   14918 14918
 ------       ------- ---------
 Total         18250   18250

[oracle@higgins ~]$

This example is a very nice one, as it shows us that 2015 is not a leap year and that some archivelogs are still on disk even though they probably shouldn't be. That's good information, as v$log_history no longer contains that history:

SQL> select trunc(FIRST_TIME), count(*) from v$log_history group by trunc(FIRST_TIME) order by 1 ;

TRUNC(FIR COUNT(*)
--------- ----------
22-JUN-15 402

SQL>

Hope it also makes your life easier,

Have a good day :)

Categories: DBA Blogs

Simplify Oracle Tracing with Creative Scripting

Pythian Group - Fri, 2015-08-28 14:26

Running a SQL trace is something that all DBAs do to varying degrees. Let’s say you are working on optimizing a SQL statement, and experimenting with some different hints for indexes and optimizer directives. This kind of effort typically goes something like this:

  • modify the SQL statement
  • enable tracing
  • run the statement
  • disable tracing
  • disconnect
  • retrieve the trace file
  • use a profiler to process the trace file
    this might be Method-R mrskew, Oracle tkprof, or something of your own.
  • delete the trace file if no longer needed

That process is OK if all you need to do is look at a couple of trace files, but quickly becomes tedious for any serious optimization effort as there will be many iterations of this process.  This is the kind of job that just cries out for some simple automation.

Let's walk through automating much of this process using Sqlplus, ssh and some profiling tools.

First let’s consider the environment:

  • Oracle 11.2 database on a remote server
  • Workstation has 11.2 client software installed
  • ssh is setup for connecting to the oracle user on the database server
  • some profiling tools are available

Let’s get started with the script that is the subject of our ‘tuning’ effort.

-- sql2trace.sql
select * from dual;

As you can see there is not really going to be any tuning done in this article; it is all about the process.

The following script, tracefile_identifier_demo.sql, is used to set up the trace environment by collecting some information about the database host and the process owner, and then setting the tracefile_identifier parameter. The values for these are then used to set sqlplus define variables.

-- tracefile_identifier_demo.sql

-- column variables to capture host, owner and tracefile name
col tracehost new_value tracehost noprint
col traceowner new_value traceowner noprint
col tracefile new_value tracefile noprint

set term off head off feed off

-- get oracle owner
select username traceowner from v$process where pname = 'PMON';

-- get host name
select host_name tracehost from v$instance;

-- set tracefile identifier
alter session set tracefile_identifier = 'MYTRACEFILE';

select value tracefile from v$diag_info where name = 'Default Trace File';

set term on head on feed on

-- do your tracing here
alter session set events '10046 trace name context forever, level 12';

-- run your SQL here
@@sql2trace

alter session set events '10046 trace name context off';

-- disconnect to ensure all trace data flushed
-- the disconnect must be done in the called script
-- otherwise the values of the defined vars are lost

-- now get the trace file, or other processing
--@@mrskew '&&traceowner@&&tracehost' '&&tracefile'
@@tkprof '&&traceowner@&&tracehost' '&&tracefile'

This article began as an idea to write about tracefile_identifier, hence the script name.

Most of this script is quite straightforward:

  • set column command initiated define variables to capture host, process owner and tracefile name
  • collect the data
  • enable tracing
  • run the target script
  • disable tracing
  • call the tkprof.sql script to run tkprof

The interesting bit is found in tkprof.sql.

-- tkprof.sql

col ssh_target new_value ssh_target noprint
col scp_filename new_value scp_filename noprint

set term off feed off verify off echo off

select '&&1' ssh_target from dual;
select '&&2' scp_filename from dual;

set feed on term on verify on
disconnect

host ssh &&ssh_target 'cat &&scp_filename' | tkprof /dev/stdin ./tkprof.out sort=exeqry sys=no
host cat ./tkprof.out

There are a couple of things to take notice of in tkprof.sql. Did you notice the disconnect statement? There are a couple of points of interest about that. Prior to 11g it was necessary to disconnect from Oracle to ensure that all cursors were closed and all STAT and row source operation rows were written to the trace file. Disconnecting the session is not necessary in Oracle 11g+.

Another interesting bit about this disconnect statement is its placement. At first the disconnect statement was in the main script. The problem was that the define variables would all lose their values prior to calling the tkprof.sql script, and so the call would fail; this is why the disconnect command lives in the called script.

Finally, the trace output is retrieved via ssh and piped to tkprof. Notice that there is no need to actually copy the file; rather, the contents of the file are simply sent to STDOUT and piped to tkprof.

The tkprof command does not read from STDIN. If, for instance, you try this: cat somefile | tkprof - ./tkprof.out sort=exeqry, tkprof will exit with an error that an input file is needed. That problem is circumvented by using the file /dev/stdin.

Put it all together and it looks like this:

11:34:11 JKSTILL@oravm > @tracefile_identifier_demo

Session altered.

Elapsed: 00:00:00.00

D
-
X

1 row selected.

Elapsed: 00:00:00.00

Session altered.

Elapsed: 00:00:00.00

TKPROF: Release 11.2.0.3.0 - Development on Thu Aug 27 11:34:18 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Host key fingerprint is de:ad:be:ed:a2:d6:63:4b:rx:77:fd:1c:e1:36:2b:88
+--[ RSA 2048]----+
|                 |
|                 |
|                 |
|         .  .    |
|        S  +.    |
|        ..ox.o   |
|       o+.F.* o  |
|      99+o.o.= . |
|     . ..+y.ooo  |
+-----------------+

TKPROF: Release 11.2.0.3.0 - Development on Thu Aug 27 11:34:18 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Trace file: /dev/stdin
Sort options: exeqry
********************************************************************************
count    = number of times OCI procedure was executed
cpu      = cpu time in seconds executing
elapsed  = elapsed time in seconds executing
disk     = number of physical reads of buffers from disk
query    = number of buffers gotten for consistent read
current  = number of buffers gotten in current mode (usually for update)
rows     = number of rows processed by the fetch or execute call
********************************************************************************

SQL ID: a5ks9fhw2v9s1 Plan Hash: 272002086

select *
from
 dual


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.00          0          2          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.00       0.00          0          2          0           1

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 90
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  TABLE ACCESS FULL DUAL (cr=2 pr=0 pw=0 time=22 us cost=2 size=2 card=1)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                       2        0.00          0.00
  log file sync                                   1        0.00          0.00
  SQL*Net message from client                     2        0.00          0.00
********************************************************************************

SQL ID: 06nvwn223659v Plan Hash: 0

alter session set events '10046 trace name context off'


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.00       0.00          0          0          0           0

Misses in library cache during parse: 0
Parsing user id: 90

********************************************************************************
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        2      0.00       0.00          0          0          0           0
Execute      2      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.00          0          2          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        6      0.00       0.00          0          2          0           1

Misses in library cache during parse: 0

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                       3        0.00          0.00
  SQL*Net message from client                     3        0.00          0.00
  log file sync                                   1        0.00          0.00


OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        0      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          3           1
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        1      0.00       0.00          0          0          3           1

Misses in library cache during parse: 0

    2  user  SQL statements in session.
    1  internal SQL statements in session.
    3  SQL statements in session.
********************************************************************************
Trace file: /dev/stdin
Trace file compatibility: 11.1.0.7
Sort options: exeqry
       1  session in tracefile.
       2  user  SQL statements in trace file.
       1  internal SQL statements in trace file.
       3  SQL statements in trace file.
       3  unique SQL statements in trace file.
     218  lines in trace file.
       0  elapsed seconds in trace file.

The same process was used to run the trace data through the Method-R mrskew command:

-- mrskew.sql

col ssh_target new_value ssh_target noprint
col scp_filename new_value scp_filename noprint

set term off feed off verify off echo off

select '&&1' ssh_target from dual;
select '&&2' scp_filename from dual;

set feed on term on verify on
--disconnect
host ssh &&ssh_target 'cat &&scp_filename' | mrskew

The results of calling mrskew.sql  rather than tkprof.sql:

CALL-NAME                    DURATION       %  CALLS      MEAN       MIN       MAX
---------------------------  --------  ------  -----  --------  --------  --------
SQL*Net message from client  0.003733   74.1%      3  0.001244  0.001004  0.001663
log file sync                0.001300   25.8%      1  0.001300  0.001300  0.001300
SQL*Net message to client    0.000008    0.2%      3  0.000003  0.000002  0.000003
PARSE                        0.000000    0.0%      2  0.000000  0.000000  0.000000
FETCH                        0.000000    0.0%      2  0.000000  0.000000  0.000000
CLOSE                        0.000000    0.0%      2  0.000000  0.000000  0.000000
EXEC                         0.000000    0.0%      2  0.000000  0.000000  0.000000
---------------------------  --------  ------  -----  --------  --------  --------
TOTAL (7)                    0.005041  100.0%     15  0.000336  0.000000  0.001663

These scripts can all be found at https://github.com/jkstill/tracefile_identifier

If you have ideas about how to improve these, please feel free to clone the repo, make some changes and issue a pull request.

If you don’t know what all of that means, might I suggest this article?  Git for Beginners

The next time you have some tracing to do, why not give this method a try?  Doing so will save you time and make you more productive.

 

Categories: DBA Blogs

Pillars of PowerShell: SQL Server – Part 2

Pythian Group - Fri, 2015-08-28 14:24
Introduction

This is the seventh and final post in the series on the Pillars of PowerShell. The previous posts in the series are:

  1. Interacting
  2. Commanding
  3. Debugging
  4. Profiling
  5. Windows OS
  6. SQL Server – Part 1

In this final post I am going to touch on SQL Server Management Objects (SMO) with PowerShell. SMO is one of the most widely used methods and, to me, offers the most versatile way of working with SQL Server. It can be a bit tedious to work with, since you are dealing with raw .NET objects instead of cmdlets, but it offers so much more compared to SQLPS. In this post I am just going to touch on the basics of loading SMO and how you can connect to an instance of SQL Server (or several). I will end by showing you a function I published a few years ago and still use fairly frequently to this day.

Loading SMO

As with SQLPS, you have to load SMO into your PowerShell session before you can utilize it. SMO is what is referred to as an "assembly": basically a collection of types and other objects that form a logical unit of functionality for interacting with various parts of SQL Server. With SQL Server 2012 and above, you can import the SQLPS module and it will automatically import the associated version of SMO. However, because SQLPS loads more than just SMO, it can take time for that to complete before your script continues. You can shave off some time by loading SMO directly, without all the overhead of the SQLPS module. You will commonly see the following line of code used to load SMO into your session:

[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO')

[Screenshot: SMO loaded via LoadWithPartialName, reporting version 13.0.0.0]

Generally this command is going to load the highest version registered in the GAC on your machine. In the screenshot you may see the version is "13.0.0"; this is from the SQL Server Management Studio preview (July 2015) that is installed on my machine. With PowerShell, things change over time, and using LoadWithPartialName is actually the version 1 method of loading SMO. This method is no longer supported, but still works for now. In PowerShell 2.0 a cmdlet was added to do this for you, called Add-Type. If you were to just type in Add-Type 'Microsoft.SqlServer.Smo' when you have multiple versions installed, you are going to get an error similar to this:

[Screenshot: Add-Type error when multiple SMO versions are registered]

In this situation you have to specify the assembly you want to load, so there is a bit more to doing this with SMO. You can load an assembly by specifying the file itself or by the assembly name along with 4 bits of information:

  1. Name
  2. Version
  3. Culture
  4. PublicKeyToken

To date, Microsoft always uses the same Culture and PublicKeyToken on almost all of their assemblies that come out of Redmond. So the only thing lacking is the version, which is going to be in the format of a 4-part version number, 0.0.0.0. If you have worked with SQL Server and you are familiar with the build numbers, you simply need to know that “10” is SQL Server 2008, “11” is SQL Server 2012, “12” is SQL Server 2014, and “13” is going to be SQL Server 2016. So, if I want to load the SQL Server 2012 SMO into my session I simply use this command:

Add-Type -AssemblyName "Microsoft.SqlServer.Smo, Version=11.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91"
The first connection…

To connect to a single instance of SQL Server with Windows Authentication you can use the following:

$srvObject = New-Object Microsoft.SqlServer.Management.Smo.Server "MyServer"

Once you hit enter, it will make a connection to your instance, and the variable $srvObject will then contain properties and methods that you can use to manipulate the server-level objects of your instance. If you recall from the previous pillars in this series, this is where Get-Member comes in really handy for exploring. As an example, let's say you wanted to get information similar to what SELECT @@VERSION returns in T-SQL. You simply need to know the properties that hold this information and pipe the object to select:

 
$srvObject | select Product, VersionString, Edition, OSVersion 

In PowerShell it is good to start out with the mindset "if I write it for one server, I might as well write it to handle multiple". What I mean by this is that you get to the point of developing a script into a tool. If I wanted to turn the above bit of code into something I can reuse, and run against one instance or 50 instances, it just takes a bit of extra work and you are there before you know it:

function Get-SqlVersion {
    [cmdletbinding()]
    param (
        [string[]]$server
    )

    $allServers = @()
    $props = @{ServerName="";Product="";Version="";Edition="";OSVersion=""}
    foreach ($s in $server) {
        $srvObject = New-Object Microsoft.SqlServer.Management.Smo.Server $s

        $cserver = New-Object psobject -Property $props
        $cserver.ServerName = $s
        $cserver.Product = $srvObject.Product
        $cserver.Version = $srvObject.VersionString
        $cserver.Edition = $srvObject.Edition
        $cserver.OSversion = $srvObject.OSVersion
        $allServers += $cserver
    }

    $allServers
}

Now, don't let this scare you; it looks more complicated than it is. You could just put two lines inside the foreach loop that create your server object and then select the properties, and you would be done. When you start to write functions, though, it is best for the output of your function to be an object. That is the only additional step I take, using New-Object psobject to create a PowerShell object with the properties ServerName, Product, Version, Edition, and OSVersion. If you expand on this function in the future and want to pipe its output to another cmdlet or a custom bit of code, it will be in a more formal object type for you to work against.

Golden Nugget

One of the things I got annoyed with fairly quickly when troubleshooting an instance of SQL Server was having to search through the error log(s). You could be dealing with the default of 6 logs for an instance or up to 99 of them. There is some T-SQL code out there from people that iterates through each log for you, but I just prefer to use PowerShell. I published this code on my personal blog back in December of 2014. You can find the write-up and code here: Search-SqlErrorLog. It will be good practice for you to try and understand it on your own, but I include help information just in case.

This is one of the few times I wrote a function that only works with one server at a time. You can do some one-liner tricks with the pipeline to easily call it for multiple servers:

"server1","server2" | foreach {Search-SqlErrorLog -server $_ -all -value "^backup"}

The output of this function provides the number of the log it was found in, the date, the process (if noted in the log), and the text found matching the value you provided (which accepts regular expressions; the "^" means the start of the string):

[Screenshot: sample Search-SqlErrorLog output]

The End

I hope you learned something new in this series on PowerShell, and good scripting to you all.

Categories: DBA Blogs

Migration of Oracle Database to Amazon RDS using Golden Gate

Pythian Group - Fri, 2015-08-28 14:15

Amazon RDS is a web service used to manage databases, like Oracle, in the cloud. Small- and medium-sized enterprises with databases of normal load, volume, and SLA, can certainly leverage the ease and cost efficiency Amazon RDS offers.

Two methods that are widely used to migrate databases with minimal downtime are Oracle Data Guard and Oracle GoldenGate. AWS RDS doesn't support Data Guard, but luckily it does support Oracle GoldenGate. There are some version constraints, though.

The following components are involved when migrating a database from on-premises to AWS RDS:

— Source database on premises
— Oracle GoldenGate Hub on EC2 instance
— Target database on AWS RDS

Now there could be different topologies for the above 3 components, but we are just using this topology for simplicity. For details on this topology, refer to this very fine and simple Appendix: Using Oracle GoldenGate with Amazon RDS.

Generally and roughly, the steps used to migrate databases from on-premises Oracle database to AWS RDS could be as follows:

— Create target database targetdb in AWS RDS with same parameters as that of the source database sourcedb.

— Create same tablespaces on targetdb in AWS RDS as they exist in source database sourcedb.

— Create same non default users on targetdb in AWS RDS as they exist in source database sourcedb.

— Create same non default roles on targetdb in AWS RDS as they exist in source database sourcedb and assign these roles to users on targetdb.

— Export data/objects from the non-default schemas of the sourcedb database as of a specific SCN

— Import data/objects into targetdb database

— Configure GoldenGate extract process on sourcedb , for configuration see this

— Configure GoldenGate replicate processes on targetdb , for configuration see this

— Set up Oracle GoldenGate (GG) Hub on EC2 , for configuration see this

— Start GG extract process on sourcedb

— Start the GG replicat process on targetdb from that SCN onward, until it catches up with all changes generated on the sourcedb database during the export/import window.

— Then plan the cut-off time for applications to switch to new AWS RDS database after stopping replicat process at targetdb.

— Cleanup of sourcedb.

These are just skeleton steps and need refining and proper planning. It's always good to test such action plans thoroughly first. But as you can see, Oracle GoldenGate is a viable tool for migrating databases to AWS RDS. Pythian has a full range of skills, experience, and capabilities to oversee such migrations, as using GoldenGate for migrations is part of our daily routine. And yes, even if AWS RDS is a cloud service, you still need a DBA :)

Categories: DBA Blogs

Controlling NodeBox from an Apple Watch

Oracle AppsLab - Fri, 2015-08-28 13:39

We are always on the hunt for interesting new uses of the Apple Watch, so when my colleague Ben Bendig alerted me to AstroPad’s new iPhone/Apple Watch app, I downloaded it immediately.

The app, AstroPad Mini, is intended to let you use your iPhone as a graphics tablet and controls Photoshop nicely right out of the box. But it will work with any Mac app; it lets you map any area of your Mac screen to the iPhone and map up to eight keyboard commands to buttons in an Apple Watch app. I reprogrammed it to work with NodeBox.

Although you can zoom and pan the Mac screen from your iPhone, this seems awkward for precision work (the iPad app would work better for that). It was more useful to map a small control area of the screen to the iPhone instead. For Photoshop you could arrange palettes (tools, layers, history) and dialogs (e.g. color picker) into a corner somewhere (maybe on a second monitor), map the iPhone to that, and use the iPhone as an auxiliary screen so you don’t have to keep moving your mouse back and forth. This worked particularly well for the color picker.

iPhone display when controlling a typical NodeBox node

For NodeBox I mapped the node pane, a small area which displays properties of the currently selected node. I could then select any node on my ginormous screen using a mouse or trackpad and then scrub its properties from the phone (without having to relocate the mouse).

Some of the Apple Watch buttons I use to control NodeBox

Even more fun: I mapped common actions to Apple Watch buttons: Save, Full Screen, Escape, New Node, Undo, Redo, Play, and Rewind. When creating animations, it's pleasant to lean back in my chair, put the display in full screen, and play and rewind to my heart's content, all from my watch.

I was also able to focus the iPhone on the slider of my transforming table (running as a web app) and could then stand back from the display and move the slider back and forth from my phone. You could do the same thing by just running the table app on the phone and mirroring it via AirPlay, but AstroPad let me focus the entire iPhone screen on the slider so that it was easier to manipulate.

The app did occasionally lose its wifi connection for a few moments, but otherwise worked fine.

I think with a little thought and practice this setup could speed up my workflow somewhat. The benefits are marginal, though, not revolutionary. One tip: if you use the Apple Watch, be sure to set “Activate on Wrist Raise” to “Resume Previous Activity” instead of “Show Watch Face” so that you don’t have to keep relaunching the AstroPad app.

We could conceivably use this app in some of the concept demos our group does. It would be a quick and dirty way of controlling some features from an iPhone or Apple Watch without having to write any special code. The catch is that the demo would have to run on a Mac. One advantage: they have an option for controlling the Mac via USB cable instead of wifi, a handy workaround at HQ or demo grounds when sharing a local wifi router is problematic.

Hmmm. I wonder if I could aim and fire my USB Rocket Launcher from my watch. Now THAT might be a killer app.

Three Hidden Azure SQL Database Gotchas

Pythian Group - Fri, 2015-08-28 13:35

Azure SQL Database is Microsoft’s Database as a Service (DBaaS) platform offering. It allows end users to leverage the power of SQL Server in the cloud without the expense and complexity of building a private infrastructure. Additionally, this offering simplifies database maintenance tasks while providing seamless high availability and disaster recovery capabilities.

Although DBaaS offerings are still crawling out of their infancy, with the correct planning and use cases, implementing an Azure SQL Database solution can be a relatively straightforward process. However, as this platform continues to mature, you can expect to encounter some “Ghosts in the Machine”. Hopefully this post will allow you to avoid some of these unexpected behaviors.

  1. What’s in a name?

Azure SQL Servers all share the same public domain, database.windows.net and access is controlled through IP white-lists and user credentials. Until recently, Azure SQL Database dynamically allocated server names comprised of long random strings for security purposes and because each Azure server name must be unique globally. However, recently Microsoft provided the ability to allocate specific server names specified by the end user, i.e. MyServerName.database.windows.net.

This feature is more than a welcome addition, particularly for organizations who wish to pre-configure connection strings for cloud implementations.

The hidden gotcha resides in the implementation of this feature. Once you create a server with a user defined name, the Azure cloud reserves that name for you within the Azure fabric. If for any reason you remove the server you will be unable to recreate the server using the same name for at least 5 days. When you attempt to recreate the server, you will receive the message “Specified server name is already used” as depicted below:

[Screenshot: "Specified server name is already used" error message]

Microsoft is aware of this limitation, however, at this time, the only way to correct the situation is to contact Microsoft Support and have them remove the Azure fabric metadata manually.

Additionally, it should be noted that you can only specify a specific Azure SQL Database Server name in the preview portal. This feature is not available in the standard portal or via the New-AzureSqlDatabaseServer Cmdlet in PowerShell.

2. You can change the performance tier at any time, unless you can’t.

One of the fantastic benefits of leveraging Azure SQL Database is the ability to switch service tiers at any time, without service disruption in order to leverage pay per minute costing efficiencies.

Unfortunately, another hidden gotcha may rear its ugly head during the switching process. Organizations that utilize BCP processes against an Azure SQL instance need to be wary when performing a service level switch. BCP operations often simply “Hang” when switching between service levels. The only resolution for this issue is to terminate the process and re-initiate once the tier switch has been completed.

3. I know you’re there, but I can’t see you.

Just like all cloud offerings, Azure SQL Database continues to mature and improve. However, you need to be prepared for some management inconsistencies. The preview portal is aptly named, and although some functions are only available within the preview portal, you may need to frequently revert to the standard portal for a more consistent experience.

As an example, I have a client who switched databases between the standard and premium tiers and vice versa. These databases no longer display in the preview portal at all. However, they do appear correctly in the standard portal, as shown in the CIA-level redacted screen captures below.

[Screenshots: the same databases shown in the standard portal but missing from the preview portal (redacted)]

Categories: DBA Blogs

Trust and confidence from Pythian

Pythian Group - Fri, 2015-08-28 13:25

Recently I “inherited” some new responsibilities at work. It’s not the first time in the 11 or so of the last 16 years I have spent at Pythian. Throughout my time at Pythian, I have continually been given new titles based on the new roles I have taken on. For me, besides the enjoyment I have been lucky to have at Pythian, this trust and confidence are two of the biggest contributors to one’s longevity with a company.

For Pythian and me, it all started one spring afternoon in about 1998. Paul and Steve had been doing the Pythian-thing for a year or more, and were looking for assistance getting “off-the-ground” so to speak. That endeavour was part of the reason for our new association and it’s been a magic carpet ride since. I did leave at one point for almost 6 years, but returned in early 2011. Between 1998 and 2011, the size of the company changed, but it was still the same old company.

I now manage the day-to-day operations of the consulting group and take pride in the work I do. Touché all you people out there in Pythian-land.

Categories: DBA Blogs

Creating an Oracle Database Cloud Service

Pythian Group - Fri, 2015-08-28 13:10

Back in late June of 2015, Larry Ellison launched several public cloud services, and one of those services was the public DBaaS. Today, I had the opportunity to try out this new service. This blog post will examine how to create it and how to connect to it with sqlcli. As with any cloud service, it all happens in the background, saving you from doing tedious configuration steps to start using your service.


In my case, it took about 30 mins from when I clicked on create service to start using my database.

So the first thing that you have to do, obviously, is access the Oracle Cloud My Services application.  If you do not currently have access, speak with your sales rep or cloud administrator, but remember that this application is not free. Once you have access, click on the Oracle Database Cloud Service link and the following page will come up. Click on “Create Service” :

Once you have done that, we need to choose the type of service we want and the billing frequency. As I have talked about in previous posts, it all depends on your business needs and abilities. The difference between choosing a “Cloud Service” and a “Cloud Service – Virtual Image” is that with the first option, the database and the database instance are created for you, whereas with the “Virtual Image” you will need to create them yourself, so choose carefully. One of the good things that comes with the first option is cloud patching, whereas with the “Virtual Image” you have to do this yourself.

As of the writing of this post, Oracle offers two database versions – 11.2.0.4 and 12.1.0.2. I chose the latter.


In the Edition section, we get to choose the type of service we will get when choosing the Cloud Software Edition. Unlike the previous one, here we will choose the bells and whistles that you will be licensed to use in this database. I won’t include the differences between the two here, but you can view them in cloud.oracle.com in the PaaS section, under Database. In my case, I just chose the regular Enterprise Edition :

In the details section, we can set the characteristics of the database service. It is important to select the “Compute Shape” correctly, as this is critical to your usage billing. It is also good to know that one OCPU (Oracle CPU) is equivalent to a 3.0 GHz 2012 Intel Xeon with HyperThreading enabled. You will also have to add a public SSH key to access your compute node (you can learn how to create one here). This is where you will also set the usable storage, your system or administrator password for the database, the name of the SID, the version (in this case, 12.1.0.2), and the name of the PDB. Last, but not least, you will choose your backup destination. In my case, I chose local backups, but you can choose the Oracle Database Backup Service if you have one.

Last, but not least, you will get a confirmation of the service you are about to create. I didn’t copy this particular screenshot when I created it, but here is a similar one, so you get the gist.

 

Once you click on create, you can select the service and see the details of the creation process, as well as some others, like the Public IP, Port, etc.

Once the DB and VM are allocated, you need to go back to the Oracle Cloud My Services application  and go to the Oracle Compute Cloud Service console. This is to enable the security rule that will allow us to connect to port 1521 for this DB.

 


In the page that comes up, go to the Network section, and you will see a set of Security Rules, which you will find disabled.

In my case, I enabled the “dbaas/test-orcl/db/ora_p2_dblistener” rule.


In this particular case – and I want to emphasize this – I am not concerned with security, so I also enabled the Security List for Inbound/Outbound Policy traffic.


Once I had done this, I was ready to connect to my DB via sqlcli, just as I would connect to any other DB:

Renes-iMac:bin Rene$ ./sql system@***.***.****.****:1521:ORCL

SQLcl: Release 4.2.0.15.177.0246 RC on Fri Aug 21 11:41:42 2015

Copyright (c) 1982, 2015, Oracle. All rights reserved.

Password? (**********?) ************
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Oracle Label Security option

SQL> select name from v$database;

NAME
---------
ORCL

SQL> set lines 200 pages 9999
SQL> COLUMN PDB_NAME FORMAT A15
SQL> SELECT PDB_ID, PDB_NAME, STATUS FROM CDB_PDBS ORDER BY PDB_ID;

    PDB_ID PDB_NAME        STATUS
---------- --------------- ---------
         2 PDB$SEED        NORMAL
         3 PDB1            NORMAL

SQL> alter session set container=PDB1;

Session altered.
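And since port 1521 is now open, you are not limited to sqlcli; as a rough illustration, connecting from Python with cx_Oracle would look something like this (the host and password are placeholders, and the SID is the one chosen when creating the service):

# pip install cx_Oracle (an Oracle client installation is also required)
import cx_Oracle

dsn = cx_Oracle.makedsn("PUBLIC_IP_GOES_HERE", 1521, "ORCL")   # placeholder host, SID from the service
conn = cx_Oracle.connect("system", "PASSWORD_GOES_HERE", dsn)  # placeholder credentials

cur = conn.cursor()
cur.execute("select name from v$database")
print(cur.fetchone())

cur.close()
conn.close()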

Conclusion

As you can see, it is quite easy to request a database service and start using it. You will have to build your case for using the public cloud, but once you do, you will see that using a cloud database is no different from using an on-premises one.

Note: This was originally published on rene-ace.com

Categories: DBA Blogs

Log Buffer #438: A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2015-08-28 12:08

This Log Buffer Edition covers Oracle, MySQL, and SQL Server blog posts from the last week.

Oracle:

Integrating Telstra Public SMS API into Bluemix

Adaptive Query Optimization in Oracle 12c : Ongoing Updates

First flight into the Oracle Mobile Cloud Service

Oracle 12C Problem with datapatch. Part 2, the “fix”

oracle applications r12 auto start on linux

SQL Server:

Email Formatted HTML Table with T-SQL

SQL Server 2016 – Introduction to Stretch Database

Soundex – Experiments with SQLCLR Part 3

An Introduction to Real-Time Communication with SignalR

Strange Filtered Index Problem

MySQL:

Announcing Galera Cluster 5.5.42 and 5.6.25 with Galera 3.12

doing nothing on modern CPUs

Single-threaded linkbench performance for MySQL 5.7, 5.6, WebScale and MyRocks

Identifying Insecure Connections

MyOraDump, Oracle dump utility, version 1.2

Categories: DBA Blogs