Feed aggregator

Using Data Relationship Management to Maintain Hierarchies for BI Apps (1)

Dylan's BI Notes - Wed, 2015-09-02 11:52
DRM is a generic data management application. It provides a web-based application that allows the deploying company to maintain the data. It is a collaboration tool that lets you define validations, set up data security duties, and share the maintenance work. The tool was originally designed to maintain account information.  However, […]
Categories: BI & Warehousing

job_name cannot be null

Laurent Schneider - Wed, 2015-09-02 02:23


exec dbms_scheduler.create_job(job_name=>null,job_type=>'PLSQL_BLOCK',job_action=>'BEGIN NULL; END;')
ORA-27451: JOB_NAME cannot be NULL
ORA-06512: at "SYS.DBMS_ISCHED", line 146
ORA-06512: at "SYS.DBMS_SCHEDULER", line 288
ORA-06512: at line 1

This sounds like a proper error message. A bit less obvious is the drop_job message:


SQL> exec dbms_scheduler.drop_job(job_name=>null)
ORA-20001: comma-separated list invalid near
ORA-06512: at "SYS.DBMS_UTILITY", line 236
ORA-06512: at "SYS.DBMS_UTILITY", line 272
ORA-06512: at "SYS.DBMS_SCHEDULER", line 743
ORA-06512: at line 1

comma-separated list invalid near what?

Ok, why would you create an empty job? Obviously you wouldn’t. But remember job_name could be a very long expression that won’t fit in your VARCHAR2(30) variable.


SQL> begin 
  dbms_scheduler.create_job(job_name=>
'                  "SCOTT"                    '||
'                     .                       '||
'             "JOB10000000000000000000001"    ',
    job_type=>'PLSQL_BLOCK',
    job_action=>'BEGIN NULL; END;');
end;
/

PL/SQL procedure successfully completed.

SQL> exec dbms_scheduler.drop_job('scott.job10000000000000000000001')

PL/SQL procedure successfully completed.
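As an aside (my own check, not from the original post): between the create and the drop, the standard USER_SCHEDULER_JOBS view would show how the scheduler normalized that whitespace-padded, quoted name:

SQL> select job_name from user_scheduler_jobs;
-- connected as SCOTT, expect: JOB10000000000000000000001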

If you call drop_job in an exception handler without catching the exception that drop_job itself may raise, a NULL job name could surface as this confusing ORA-20001 instead of the original error

For exception handling, we could improve


BEGIN
  CREATE JOB 
  RUN JOB
  DROP JOB
EXCEPTION
  WHEN OTHERS THEN
    DROP JOB
    output message
    RAISE
END

into

BEGIN
  CREATE JOB 
  RUN JOB
  DROP JOB
EXCEPTION
  WHEN OTHERS THEN
    LOOP
      BEGIN
        DROP JOB
        EXIT
      EXCEPTION 
        WHEN IS_RUNNING
           sleep
        WHEN OTHERS
           output message
           EXIT
      END
    END LOOP
    output message
    RAISE
END
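Here is a concrete sketch of that pattern (my own illustration, not from the original post; the job name, the ORA-27478 mapping for a running job and the one-second sleep are assumptions to check in your environment):

declare
  job_running exception;
  pragma exception_init(job_running, -27478); -- assumed: ORA-27478 "job is running"
  l_err varchar2(4000);
begin
  dbms_scheduler.create_job(job_name=>'MYJOB', job_type=>'PLSQL_BLOCK',
    job_action=>'BEGIN NULL; END;', auto_drop=>false);
  dbms_scheduler.run_job('MYJOB');
  dbms_scheduler.drop_job('MYJOB');
exception
  when others then
    l_err := sqlerrm;           -- remember the original error
    loop
      begin
        dbms_scheduler.drop_job('MYJOB');
        exit;
      exception
        when job_running then
          dbms_lock.sleep(1);   -- wait and retry; needs EXECUTE on DBMS_LOCK
        when others then
          dbms_output.put_line('drop_job failed: '||sqlerrm);
          exit;                 -- e.g. the job never got created
      end;
    end loop;
    dbms_output.put_line(l_err);
    raise;                      -- re-raise the original exception
end;
/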

Loose Rules Sink Fools

Bradley Brown - Tue, 2015-09-01 22:11
You've probably heard someone talk about how awesome it is that they get to bring their dog to work.  Or how there is NO dress code.  Or they have unlimited vacation.

Many corporate cultures have what I would call loose rules.  Most people do not like confrontation - in fact, many people will do anything to avoid it.  Which means they will not tell you when you're in the gray area of the corporate culture - but believe me, they take note.

Loose rules range from dress code to bringing dogs to work, and many others.  Does dress code matter for the job you're doing?  What if you're in sales and NEVER leave the office?  What if you RARELY leave the office, but sometimes you do?  What if the CEO stops by and says "hey, can you run to this meeting with me (or for me) tonight?"  Then they look at how you're dressed and say "maybe next time?"  Yes, you're within the "rules," but it WILL affect your career - and not usually in a positive way.

Bring your dog to work.  Where's the line?  What if someone gets a puppy, which they bring to work every day?  Yes, puppies are cute, REALLY cute.  What if you're the person that brought the puppy in every day?  You spend an hour taking the puppy for a walk, taking it outside (while it has you trained rather than vice versa).  Or worse yet, every time your boss walks by, someone is distracted by petting your puppy?  They're cute!  Too cute to pass up!  You're negatively affecting productivity.  Again, this is not likely to work well for your career.

Many companies have gone to unlimited vacation.  What if 2 weeks after you start, you take a 2 month vacation?  According to policy that's OK, but I wouldn't expect you to have a job when you return.  Where's the line?  What's gray?   What's acceptable?

Are you within the rules in each of the above examples?  Absolutely, but loose rules have unspoken rules - rules nobody will actually admit to, quite often.  Startups go through a number of stages in a very short period of time.  People are evaluated regularly based on the current company stage.  Some people survive from one stage to the next and others do not.  Those who are not performing, or are viewed as not performing (e.g. taking care of their puppy all day), do not.

Don't let the loose rules sink your career!

Integrating Telstra Public WIFI API into Bluemix

Pas Apicella - Tue, 2015-09-01 19:51
I previously blogged about Integrating Telstra Public SMS Api as shown below.

http://theblasfrompas.blogspot.co.nz/2015/08/integrating-telstra-public-sms-api-into.html

Here I show how I integrated the Telstra Public WIFI API into IBM Bluemix. This API from Telstra is documented at the link below. You need to register on http://t.dev to get the credentials to use their API, which I had previously done; that enables me to integrate it onto Bluemix.

https://dev.telstra.com/content/wifi-api

Once again, here is the API within the Bluemix catalog. These screenshots show the API has been added to the Bluemix catalog, where it can then be consumed as a service.



Finally, here is a web-based application (built with Bootstrap, so it renders quite well on mobile devices) which allows you to enter your latitude, longitude and radius to find Telstra WiFi hotspots using the Telstra WIFI API on IBM Bluemix:

http://pas-telstawifiapi.mybluemix.net/


More Information

Visit http://bluemix.net to get started
Categories: Fusion Middleware

What is the most dangerous food at Olive Garden?

Nilesh Jethwa - Mon, 2015-08-31 22:24

Olive Garden is one of the favorite destinations for Italian food, and today we got hold of the entire Olive Garden menu along with its nutrition data.

A typical meal at Olive Garden starts with a drink, appetizers [free bread sticks], a main dish and finally the dessert.

So, going in the same sequence, let's see what the menu data for wine and beer has to offer.

Amount of Carbs per serving in your favorite Wine at Olive Garden

Read more at: http://www.infocaptor.com/dashboard/what-is-the-most-dangerous-food-at-olive-garden

Search plugins: Search Oracle docs from your browser search bar

RDBMS Insight - Mon, 2015-08-31 13:42

Tired of navigating to the SQL documentation every time you need to look up syntax? I created a search plugin so that you can search the SQL documentation directly from your browser’s search bar:

[screenshot: search]

If you’re going to be doing a lot of looking-up, you can make this your default search engine. Click on “Change search settings” in the search bar dropdown, or go to Preferences > Search and select it:

[screenshot: change default search]

I also created a search plugin for the 12c database documentation as a whole.

To install either or both plugins in Firefox, go to this page and click on the “Install Search Plugin” button.

Tested in Firefox. OpenSearch plugins reportedly work in IE too. Use IE? Try it and let me know if it works for you.

UPDATE: I added these plugins to mycroftproject.com as well. Thanks, Uwe, for pointing me to it! Also, check out Uwe’s OERR search plugin in the comments below.

Categories: DBA Blogs

The Golden Path

Floyd Teter - Mon, 2015-08-31 12:07
Geek Warning:  The Golden Path is a term in Frank Herbert's fictional Dune universe referring to Leto II Atreides's strategy to prevent humanity's ultimate destruction.

Just back from a little "stay-cation".  My batteries were running a little low, so it was good to recharge for a bit.  The #Beat39 theme continued to roll around in my brain and I want to share a predominant line of thinking from that.

Back in the olden days when Oracle was first developing Fusion Applications, they made a big effort to discover common threads of business practices across a range of industries and organizations. Processing invoices, controlling inventory, managing employee performance reviews, completing projects, billing customers...it's a long list of common business practices and common activities.

The result of that effort was a set of common "best practices", by industry, that were baked into Fusion Applications.  That collection of best practices became known as the Oracle Business Process Model ("Oracle BPM").  You can see an example for the Project Portfolio Management Suite here.  As Fusion Applications have evolved into Oracle Cloud Application Services (Oracle's SaaS offerings), Oracle BPM has evolved right along with it.  You'll find the latest Oracle BPM in Oracle SaaS.

Back in the really olden days, customers and their implementation partners would generally follow a three-step implementation strategy:  1) understand the customer's current business process; 2) design the customer's future business process; 3) implement enterprise software to model the customer's future business process as closely as possible.

With today's SaaS applications, customers may be better served by following a different strategy: 1) configure a SaaS zone and test the "baked in" business processes with an eye toward utilizing those processes in your own organization; 2) address and resolve any business process gaps; 3) test and go live.  In short, maximize your use of enterprise software in the way the software was designed to be used, business processes and all.  Being open to business process change is the "Golden Path" to a successful SaaS implementation.

While this idea is nothing new, it's a pretty fundamental shift in perspective.  Thoughts?  Comments welcome.

A Startup is Like Making a Trip to Each Continent

Bradley Brown - Sun, 2015-08-30 22:43
Maybe you realize how difficult it is to go from an idea to a successful company.  Maybe you've attempted one (or more) yourself.  Is the American dream a startup?  It is for some, but not all.  If you start a company and you're the only employee and you don't get paid, is your business a startup?  Is it successful?  Beauty and success are in the eye of the beholder.  You might hear people suggest you set your sights and expectations lower so you guarantee success.  In fact, you might hear a lot of different conflicting things over your life.  In Founders Institute we call this "mentor whiplash."  One mentor says yes, another says no.  One says go and another says stop.  There's no right way to do most things in the startup world.  Set your own goals - think about your definition of success and strive for it...every day.

One thing I do know is that MOST people need a partner or partners to "complete them."  In other words, we all have gaps (and strengths) in our personality.  Your partners should fill your gaps.  If they don't, your company will have a gap.  If you're a perfectionist, you're going to want to find a partner who isn't.  If you're a detailed person, you're going to want to find a partner who isn't.  If you don't have any gaps, congrats!  I have many!

So how is a startup similar to making a trip to every continent?  First, when I say startup, I'm referring to a funded startup.  If you're trying to build a lifestyle company (i.e. one that provides a nice lifestyle for you), don't take someone else's money to do this - they will NOT be happy.  If you can build a business on your own and without any money, that's a dream come true for many.  That's not my dream.  So like I said, the basic premise here is that you're building a business that is going to require capital (i.e. money) to get it going.

When do you raise your first dollar?  My preference is to raise money as soon as you can.  In other words, don't spend any more of your own money to start the business than you absolutely need to.  Even if you have money.  Why?  Because in my view, if you can't get someone else to believe in your idea, it probably isn't a fundable idea.

So let's say you start your trip in a rich continent such as North America.  You can get to a lot of places by driving around.  Some might be safe, some might not.  In fact, you can probably get to your second continent without hopping on a plane - i.e. South America.  We could say that's similar to your "self funding" stage of your business.  At some point you're going to have to take off on a plane to get to the next continent.  In the startup world, continents are like "fund raising series."

The first round of funding (continent 1 - North America) is your own cash.  Make this your smallest and quickest continent to get behind you.  The second round (continent 2 - South America) is your seed round.  This is sometimes called the friends and family round.  Get people believing in your idea ASAP.  Get your seed funding ASAP.  The 3rd round, your Series A round (continent 3 - Australia) is a stretch.  It's a long and big flight.  Most companies frankly never get to this phase.  Maybe you're able to go to Europe instead and the flight is shorter.  It's still a long way and you better have the fuel you need to get to that next round.

So yes, if you start in North America, using your own money, you might have a lot of money.  You might have a nice vehicle to get to your next continent.  But at some point, to get to continent 3 of 7, you're going to have to take off.  When are you going to fuel up next?  Are you going to attempt to fly around the entire world without refueling?  Of course not.  That would be similar to trying to raise $1B in your first round of funding.  It will never happen.

Seed round valuations tend to range from $1M to $3M.  If you're a proven startup person and 100s of other "ifs," you're probably not reading my blog, but yes, your seed round valuation would be higher.  If you only have an idea, your initial valuation may be $0 and it could be less than $1M.

If you try to fuel up in the middle of the ocean, you're going to run out of fuel and crash.  If you stop in a dangerous place (i.e. the wrong investors, investors who run out of money, investors with the wrong expectations, etc.), you could also be extorted for money.  Depending on where you stop, fuel could be reasonable or it could be expensive.  The more you need the fuel, the more expensive it is.  All of this is true in the startup world too.  If you don't make it to the next continent (funding stage) with your fuel (funding), you're done.  If you stop in the wrong place (i.e. you don't have the right metrics in place by the time you get there), your next funding round might not be fatal, but it could be VERY expensive (i.e. a low valuation equals a high percentage of stock you give up for very little money).


It's important to plan your trip (i.e. startup).  It's important that you can make it to the next continent safely.  Where you start is important.  Focus is important.  If you deliver a message like "we'll do that" for everything a potential investor brings up, they will not want to give you the fuel (cash) you need.  What if you pulled into a gas station (called an FBO in the flight world) with $1000 on you.  Let's say the FBO has a casino in it.  On your walk to the casino you notice a bar, so you stop in there.  You go into the casino and bet $500 on black.  What would your crew (your employees) have to say about your behavior?  A bit confused about the goal?  A lack of focus on the goal here?  What would your investors (i.e. the FBO employees) feel about your ability to pay for the fuel?  Have a mission and live for it.  Know what you need to do and do it.

Whatever you do, bet the farm on your focus.  One thing, not 2, not 3.  Don't try to be more than one thing.  Don't fuel up or raise money in the wrong location (i.e. bad investors) or at the wrong time (i.e. middle of the ocean).  Keep in mind that you MUST get to the next continent with whatever you have from your last funding round.  In other words, DO NOT try to raise money $30k at a time.  You'll spend all of your money raising money and you'll end up in the middle of the ocean without any more fuel.  Each round needs to get you to the next round - or...it's going to be costly or deadly.


Prepare for your trip.  Be ready.  Your investors are your lifeline to the next continent.  Respect them and do what they expect you to do to get to the next round.  If you don't, you both lose - unless they get your company and then they do something good with it.  But that's not what they want.  That's not what you want.  Good luck and safe travels!

Thank you to Daniel Feher for the use of his amazing maps.  You can find more great maps on his website: www.freeworldmaps.net.

Using DBMS_OUTPUT with Node.js and node-oracledb

Christopher Jones - Sun, 2015-08-30 20:20

The DBMS_OUTPUT package is the standard way to "print" output from PL/SQL. The way DBMS_OUTPUT works is like a buffer. Your Node.js application code turns on DBMS_OUTPUT buffering, calls some PL/SQL code that puts text into the buffer, and then later fetches from that buffer. Note: any PL/SQL code that calls DBMS_OUTPUT runs to completion before any output is available to the user. Also, other database connections cannot access your buffer.
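In PL/SQL terms, the round-trip that the Node.js code below drives looks like this (a minimal sketch of the two DBMS_OUTPUT calls involved):

begin
  dbms_output.enable(null);               -- a NULL buffer size means unlimited
  dbms_output.put_line('buffered text');  -- text goes into the session's buffer
end;
/

declare
  l_line   varchar2(32767);
  l_status integer;
begin
  dbms_output.get_line(l_line, l_status); -- status = 1 once the buffer is empty
end;
/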

A basic way to fetch DBMS_OUTPUT with node-oracledb is to bind an output string when calling the PL/SQL dbms_output.get_line() procedure, print the string, and then repeat until there is no more output. Another way that I like is to wrap the dbms_output.get_line() call into a pipelined function and fetch the DBMS_OUTPUT using a SQL query.

The following code shows both methods.

/*
  NAME
    dbmsoutput.js

  DESCRIPTION
    Shows two methods of displaying PL/SQL DBMS_OUTPUT in node-oracledb.
    The second method depends on these PL/SQL objects:

      create or replace type dorow as table of varchar2(32767);
      /
      show errors

      create or replace function mydofetch return dorow pipelined is
        line varchar2(32767);
        status integer;
        begin loop
          dbms_output.get_line(line, status); 
          exit when status = 1;
          pipe row (line);
        end loop;
      return; end;
      /
      show errors

*/

'use strict';

var async = require('async');
var oracledb = require('oracledb');
var dbconfig = require('./dbconfig.js');

oracledb.createPool(
  dbconfig,
  function(err, pool) {
    if (err)
      console.error(err.message)
    else
      doit(pool);
  });

var doit = function(pool) {
  async.waterfall(
    [
      function(cb) {
        pool.getConnection(cb);
      },

      // Tell the DB to buffer DBMS_OUTPUT
      enableDbmsOutput,

      // Method 1: Fetch a line of DBMS_OUTPUT at a time
      createDbmsOutput,
      fetchDbmsOutputLine,

      // Method 2: Use a pipelined query to get DBMS_OUTPUT 
      createDbmsOutput,
      function(conn, cb) {
        executeSql(
          conn,
          "select * from table(mydofetch())", [], { resultSet: true}, cb);
      },
      printQueryResults
    ],
    function (err, conn) {
      if (err) { console.error("In waterfall error cb: ==>", err, "<=="); }
      conn.release(function (err) { if (err) console.error(err.message); });
    }
  )
};

var enableDbmsOutput = function (conn, cb) {
  conn.execute(
    "begin dbms_output.enable(null); end;",
    function(err) { return cb(err, conn) });
}

var createDbmsOutput = function (conn, cb) {
  conn.execute(
    "begin "
     + "dbms_output.put_line('Hello, Oracle!');"
     + "dbms_output.put_line('Hello, Node!');"
     + "end;",
    function(err) { return cb(err, conn) });
}

var fetchDbmsOutputLine = function (conn, cb) {
  conn.execute(
    "begin dbms_output.get_line(:ln, :st); end;",
    { ln: { dir: oracledb.BIND_OUT, type:oracledb.STRING, maxSize: 32767 },
      st: { dir: oracledb.BIND_OUT, type:oracledb.NUMBER } },
    function(err, result) {
      if (err) {
        return cb(err, conn);
      } else if (result.outBinds.st == 1) {
        return cb(null, conn);  // no more output
      } else {
        console.log(result.outBinds.ln);
        return fetchDbmsOutputLine(conn, cb);
      }
    });
  }
               
var executeSql = function (conn, sql, binds, options, cb) {
  conn.execute(
    sql, binds, options,
    function (err, result) {
      if (err)
        cb(err, conn)
      else
        cb(null, conn, result);
    });
}

var printQueryResults = function(conn, result, cb) {
  if (result.resultSet) {
    fetchOneRowFromRS(conn, result.resultSet, cb);
  } else if (result.rows && result.rows.length > 0) {
    console.log(result.rows);
    return cb(null, conn);
  } else {
    console.log("No results");
    return cb(null, conn);
  }
}

function fetchOneRowFromRS(conn, resultSet, cb) {
  resultSet.getRow(  // note: getRows would be more efficient
    function (err, row) {
      if (err) {
        cb(err, conn);
      } else if (row) {
        console.log(row);
        fetchOneRowFromRS(conn, resultSet, cb);
      } else {
        cb(null, conn);
      }
    });
}

The output is:

Hello, Oracle!
Hello, Node!
[ 'Hello, Oracle!' ]
[ 'Hello, Node!' ]

I used resultSet.getRow() for simplicity, but you will probably want to use resultSet.getRows() for efficiency. If you want to buffer all the output in the Node.js application, Bruno Jouhier has a nice implementation to build up an array of query output in his GitHub gist query-all.js.

Count Selected Rows

Denes Kubicek - Sat, 2015-08-29 06:44
This small example shows how to make sure at least one row in your tabular form has been selected prior to proceeding:

https://apex.oracle.com/pls/apex/f?p=93000:806
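For context, such a check typically boils down to a short PL/SQL validation that counts the submitted checkbox array. A minimal sketch, assuming the tabular form's row selector posts into apex_application.g_f01 (the demo's actual code isn't shown):

-- APEX validation of type "PL/SQL Function Returning Boolean".
-- Checkbox arrays submit only the checked rows, so counting suffices.
begin
  return apex_application.g_f01.count > 0;
end;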

Categories: Development

Switch Panes in Page Designer

Denes Kubicek - Sat, 2015-08-29 04:36
This is definitely worth mentioning. Go here and download this plugin for Firefox or Chrome. You will like it. It switches the panes in the Page Designer so you can access the properties of the selected elements much more easily. Congrats Filip. This is a cool feature. Thanks.

Categories: Development

Creating User Schema Table and Projections in Vertica

Pakistan's First Oracle Blog - Fri, 2015-08-28 02:25
Vertica is an exciting database with some really nifty features. Projections are a groundbreaking, unique feature of Vertica which dramatically increases performance in terms of querying and saves space through compression.



The following test commands are from an impromptu session in which a user is created, then a schema is created and that user is authorized on that schema. Then a table is created with a default superprojection, then a projection is created, and then we see its usage.

Create a new Vertica database user, create a schema and authorize that user on that schema. Create a 4-column table and insert data.

select user_name from v_catalog.users;

vtest=> create user mytest identified by 'user123';
CREATE USER
vtest=>

vtest=> \du
      List of users
 User name | Is Superuser
-----------+--------------
 dbadmin   | t
 mytest    | f
(2 rows)

vtest=> \dn
         List of schemas
     Name     |  Owner  | Comment
--------------+---------+---------
 v_internal   | dbadmin |
 v_catalog    | dbadmin |
 v_monitor    | dbadmin |
 public       | dbadmin |
 TxtIndex     | dbadmin |
 store        | dbadmin |
 online_sales | dbadmin |
(7 rows)


vtest=> \q
[dbadmin@vtest1 root]$ /opt/vertica/bin/vsql -U mytest -w user123 -h 0.0.0.0 -p 5433 -d vtest
Welcome to vsql, the Vertica Analytic Database interactive terminal.

Type:  \h or \? for help with vsql commands
       \g or terminate with semicolon to execute query
       \q to quit


vtest=> create table testtab (col1 integer,col2 integer, col3 varchar2(78), col4 varchar2(90));
ROLLBACK 4367:  Permission denied for schema public

[dbadmin@vtest1 root]$ /opt/vertica/bin/vsql -U dbadmin -w vtest -h 0.0.0.0 -p 5433 -d vtest
Welcome to vsql, the Vertica Analytic Database interactive terminal.

Type:  \h or \? for help with vsql commands
       \g or terminate with semicolon to execute query
       \q to quit

vtest=> \du
      List of users
 User name | Is Superuser
-----------+--------------
 dbadmin   | t
 mytest    | f
(2 rows)

vtest=> create schema mytest authorization mytest;
CREATE SCHEMA
vtest=> select current_user();
 current_user
--------------
 dbadmin
(1 row)

vtest=>

vtest=> \q
[dbadmin@vtest1 root]$ /opt/vertica/bin/vsql -U mytest -w user123 -h 0.0.0.0 -p 5433 -d vtest
Welcome to vsql, the Vertica Analytic Database interactive terminal.

Type:  \h or \? for help with vsql commands
       \g or terminate with semicolon to execute query
       \q to quit

vtest=> create table testtab (col1 integer,col2 integer, col3 varchar2(78), col4 varchar2(90));
CREATE TABLE
vtest=> select current_user();
 current_user
--------------
 mytest
(1 row)

vtest=>

vtest=> \dt
               List of tables
 Schema |  Name   | Kind  | Owner  | Comment
--------+---------+-------+--------+---------
 mytest | testtab | table | mytest |
(1 row)

vtest=> insert into testtab values (1,2,'test1','test2');
 OUTPUT
--------
      1
(1 row)

vtest=> insert into testtab values (2,2,'test2','test3');
 OUTPUT
--------
      1
(1 row)

vtest=> insert into testtab values (3,2,'test2','test3');
 OUTPUT
--------
      1
(1 row)

vtest=> insert into testtab values (4,2,'test4','tesrt3');
 OUTPUT
--------
      1
(1 row)

vtest=> insert into testtab values (4,2,'test4','tesrt3');
 OUTPUT
--------
      1
(1 row)

vtest=> insert into testtab values (4,2,'test4','tesrt3');
 OUTPUT
--------
      1
(1 row)

vtest=> insert into testtab values (4,2,'test4','tesrt3');
 OUTPUT
--------
      1
(1 row)

vtest=> commit;
COMMIT
vtest=>


Create a projection on 2 columns.

Superprojection exists already:

vtest=> select anchor_table_name,projection_name,is_super_projection from projections;
 anchor_table_name | projection_name | is_super_projection
-------------------+-----------------+---------------------
 testtab           | testtab_super   | t
(1 row)

vtest=>


vtest=> \d testtab
                                    List of Fields by Tables
 Schema |  Table  | Column |    Type     | Size | Default | Not Null | Primary Key | Foreign Key
--------+---------+--------+-------------+------+---------+----------+-------------+-------------
 mytest | testtab | col1   | int         |    8 |         | f        | f           |
 mytest | testtab | col2   | int         |    8 |         | f        | f           |
 mytest | testtab | col3   | varchar(78) |   78 |         | f        | f           |
 mytest | testtab | col4   | varchar(90) |   90 |         | f        | f           |
(4 rows)

vtest=>
vtest=> create projection ptest (col1,col2) as select col1,col2 from testtab;
WARNING 4468:  Projection is not available for query processing. Execute the select start_refresh() function to copy data into this projection.
          The projection must have a sufficient number of buddy projections and all nodes must be up before starting a refresh
CREATE PROJECTION
vtest=>


vtest=> select anchor_table_name,projection_name,is_super_projection from projections;
 anchor_table_name | projection_name | is_super_projection
-------------------+-----------------+---------------------
 testtab           | testtab_super   | t
 testtab           | ptest           | f
(2 rows)


vtest=> select * from ptest;
ERROR 3586:  Insufficient projections to answer query
DETAIL:  No projections eligible to answer query
HINT:  Projection ptest not used in the plan because the projection is not up to date.
vtest=>

vtest=> select start_refresh();
             start_refresh
----------------------------------------
 Starting refresh background process.

(1 row)

vtest=> select * from ptest;
 col1 | col2
------+------
    1 |    2
    2 |    2
    3 |    2
    4 |    2
    4 |    2
    4 |    2
    4 |    2
(7 rows)

vtest=>
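The usage report below appears in the original session without the query that produced it. Something along these lines against the projection catalog and usage views could generate it; the view and column names are assumptions to verify against your Vertica version's v_catalog/v_monitor:

-- hedged sketch: flag projections that have (or have not) answered a query yet
select p.projection_basename,
       case when max(pu.query_start_timestamp) is null
            then 'UNUSED' else 'USED' end as "USED/UNUSED",
       max(pu.query_start_timestamp) as last_used
  from v_catalog.projections p
  left join v_monitor.projection_usage pu
    on pu.projection_id = p.projection_id
 group by p.projection_basename;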


 projection_basename | USED/UNUSED |           last_used
---------------------+-------------+-------------------------------
 testtab             | UNUSED      | 1970-01-01 00:00:00-05
 ptest               | USED        | 2015-08-28 07:14:49.877814-04
(2 rows)

vtest=> select * from testtab;
 col1 | col2 | col3  |  col4
------+------+-------+--------
    1 |    2 | test1 | test2
    3 |    2 | test2 | test3
    2 |    2 | test2 | test3
    4 |    2 | test4 | tesrt3
    4 |    2 | test4 | tesrt3
    4 |    2 | test4 | tesrt3
    4 |    2 | test4 | tesrt3
(7 rows)

projection_basename | USED/UNUSED |           last_used
---------------------+-------------+-------------------------------
 ptest               | USED        | 2015-08-28 07:14:49.877814-04
 testtab             | USED        | 2015-08-28 07:16:10.155434-04
(2 rows)
Categories: DBA Blogs

List of acquisitions by Microsoft: a data journey

Nilesh Jethwa - Thu, 2015-08-27 22:21

If we look into the SEC data for Microsoft and other tech companies, Microsoft spends the most on Research and Development [by dollar].


Read more at: http://www.infocaptor.com/dashboard/list-of-acquisitions-by-microsoft-a-data-journey

Integrating Telstra Public SMS API into Bluemix

Pas Apicella - Thu, 2015-08-27 21:16
In the post below I will show how I integrated the Telstra public SMS API into my Bluemix catalog to be consumed as a service. This was all done from public Bluemix using the Cloud Integration service.

Step 1 - Create a T.DEV account

To get started you need to create an account on http://dev.telstra.com in order to be granted access to the SMS API. Once access is granted you need to create an application, which enables you to add/manage Telstra API keys as shown below.

1.1. Create an account at http://dev.telstra.com

1.2. Once done, you should have something as follows; approval can take up to 24 hours, as shown by the "approved" icon


Step 2 - Test the SMS Telstra API

At this point we want to test the Telstra SMS API using a script; this ensures it's working before we proceed to integrating it onto Bluemix.

2.1. Create a script called setup.sh as follows

#Obtain these keys from the Telstra Developer Portal
APP_KEY="yyyy-key"
APP_SECRET="yyyy-secret"

curl "https://api.telstra.com/v1/oauth/token?client_id=$APP_KEY&client_secret=$APP_SECRET&grant_type=client_credentials&scope=SMS"

2.2. Edit the script above to use your APP_KEY and APP_SECRET values from the Telstra Developer Portal

2.3. Run as shown below


pas@Pass-MBP:~/ibm/customers/telstra/telstra-apis/test$ ./setup.sh
{ "access_token": "xadMkPqSAE0VG6pSGEi6rHA5vqYi", "expires_in": "3599" }

2.4. Make a note of the token key returned, you will need this to send an SMS message
2.5. Create a script called "sendsms.sh" as shown below.


# * Recipient number should be in the format of "04xxxxxxxx" where x is a digit
# * Authorization header value should be in the format of "Bearer xxx" where xxx is access token returned
# from a previous GET https://api.telstra.com/v1/oauth/token request.
RECIPIENT_NUMBER=0411151350
TOKEN=token-key

curl -H "Content-Type: application/json" \
-H "Authorization: Bearer $TOKEN" \
-d "{\"to\":\"$RECIPIENT_NUMBER\", \"body\":\"Hello, pas sent this message from telstra SMS api!\"}" \
"https://api.telstra.com/v1/sms/messages"

2.6. Replace the token key with what was returned at step 2.4 above
2.7. Replace the RECIPIENT_NUMBER with your own mobile number to test the SMS API.
2.8. Run as shown below.


pas@Pass-MBP:~/ibm/customers/telstra/telstra-apis/test$ ./sendsms.sh
{"messageId":"1370CAB677B59C226705337B95945CD6"}

Step 3 - Creating a REST based service to Call Telstra SMS API

At this point we can now integrate the Telstra SMS API into Bluemix. To do that I created a simple Spring Boot application which exposes a RESTful method to call the Telstra SMS API using Spring's RestTemplate class. I did this because two calls are needed to use the Telstra SMS API: a REST call to get an ACCESS_TOKEN, followed by a call to actually send an SMS message. Creating a Spring Boot application to wrap both in one single call makes the API easy to consume and add to the Bluemix catalog as a service.

More information on the Cloud Integration service can be found here. Cloud Integration allows us to expose RESTful methods from Bluemix applications onto the catalog via one simple screen. We could also use the Bluemix API Management service.

https://www.ng.bluemix.net/docs/services/CloudIntegration/index.html

Below shows the application being pushed into Bluemix which will then be used to add Telstra SMS API service into the Bluemix catalog.



pas@192-168-1-4:~/ibm/DemoProjects/spring-starter/jazzhub/TelstraSMSAPIDemo$ cf push
Using manifest file /Users/pas/ibm/DemoProjects/spring-starter/jazzhub/TelstraSMSAPIDemo/manifest.yml

Creating app pas-telstrasmsapi in org pasapi@au1.ibm.com / space apple as pasapi@au1.ibm.com...
OK

Using route pas-telstrasmsapi.mybluemix.net
Binding pas-telstrasmsapi.mybluemix.net to pas-telstrasmsapi...
OK

Uploading pas-telstrasmsapi...
Uploading app files from: /Users/pas/ibm/DemoProjects/spring-starter/jazzhub/TelstraSMSAPIDemo/target/TelstraSMSAPI-1.0-SNAPSHOT.jar
Uploading 752.3K, 98 files
Done uploading
OK

Starting app pas-telstrasmsapi in org pasapi@au1.ibm.com / space apple as pasapi@au1.ibm.com...
-----> Downloaded app package (15M)
-----> Liberty Buildpack Version: v1.19.1-20150622-1509
-----> Retrieving IBM 1.8.0_20150617 JRE (ibm-java-jre-8.0-1.0-pxa6480sr1ifx-20150617_03-cloud.tgz) ... (0.0s)
         Expanding JRE to .java ... (1.4s)
-----> Retrieving App Management 1.5.0_20150608-1243 (app-mgmt_v1.5-20150608-1243.zip) ... (0.0s)
         Expanding App Management to .app-management (0.9s)
-----> Downloading Auto Reconfiguration 1.7.0_RELEASE from https://download.run.pivotal.io/auto-reconfiguration/auto-reconfiguration-1.7.0_RELEASE.jar (0.1s)
-----> Liberty buildpack is done creating the droplet

-----> Uploading droplet (90M)

0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
1 of 1 instances running

App started


OK

App pas-telstrasmsapi was started using this command `$PWD/.java/jre/bin/java -Xtune:virtualized -Xmx384M -Xdump:none -Xdump:heap:defaults:file=./../dumps/heapdump.%Y%m%d.%H%M%S.%pid.%seq.phd -Xdump:java:defaults:file=./../dumps/javacore.%Y%m%d.%H%M%S.%pid.%seq.txt -Xdump:snap:defaults:file=./../dumps/Snap.%Y%m%d.%H%M%S.%pid.%seq.trc -Xdump:heap+java+snap:events=user -Xdump:tool:events=systhrow,filter=java/lang/OutOfMemoryError,request=serial+exclusive,exec=./.buildpack-diagnostics/killjava.sh $JVM_ARGS org.springframework.boot.loader.JarLauncher --server.port=$PORT`

Showing health and status for app pas-telstrasmsapi in org pasapi@au1.ibm.com / space apple as pasapi@au1.ibm.com...
OK

requested state: started
instances: 1/1
usage: 512M x 1 instances
urls: pas-telstrasmsapi.mybluemix.net
last uploaded: Fri Jul 17 11:26:58 UTC 2015

     state     since                    cpu    memory           disk           details
#0   running   2015-07-17 09:28:28 PM   1.0%   150.6M of 512M   148.9M of 1G

Step 4 - Add the RESTful method to IBM Bluemix catalog to invoke the Telstra SMS API

4.1 To expose our RESTful method we simply define the end point using the Cloud Integration service as shown below.


The image showing the Cloud Integration service with the Telstra API exposed and available to be consumed as a Service on Bluemix.





This was created using the Bluemix Dashboard but can also be done using the Cloud Foundry command line "cf create-service ..."

Step 5 - Create a Application client which will invoke the Telstra SMS service

At this point we are going to push a client application onto Bluemix which consumes the Telstra SMS API service and then uses it within the application. We do this to verify the service works, creating a simple HTML-based application which invokes the service; its manifest.yml file indicates it wants to consume the service which is now exposed on the catalog within Bluemix as per above.

5.1. The manifest.yml consumes the service created from the API in the catalog


applications:
- name: pas-telstrasmsapi-client
  memory: 512M
  instances: 1
  host: pas-telstrasmsapi-client
  domain: mybluemix.net
  path: ./target/TelstraSMSApiClient-0.0.1-SNAPSHOT.jar
  env:
   JBP_CONFIG_IBMJDK: "version: 1.8.+"
  services:
    - TelstraSMS-service

5.2. Push the application as shown below.


pas@Pass-MacBook-Pro:~/ibm/DemoProjects/spring-starter/jazzhub/TelstraSMSApiClient$ cf push
Using manifest file /Users/pas/ibm/DemoProjects/spring-starter/jazzhub/TelstraSMSApiClient/manifest.yml

Updating app pas-telstrasmsapi-client in org pasapi@au1.ibm.com / space apple as pasapi@au1.ibm.com...
OK

Using route pas-telstrasmsapi-client.mybluemix.net
Uploading pas-telstrasmsapi-client...
Uploading app files from: /Users/pas/ibm/DemoProjects/spring-starter/jazzhub/TelstraSMSApiClient/target/TelstraSMSApiClient-0.0.1-SNAPSHOT.jar
Uploading 806.8K, 121 files
Done uploading
OK
Binding service TelstraSMS-service to app pas-telstrasmsapi-client in org pasapi@au1.ibm.com / space apple as pasapi@au1.ibm.com...
OK

Stopping app pas-telstrasmsapi-client in org pasapi@au1.ibm.com / space apple as pasapi@au1.ibm.com...
OK

Starting app pas-telstrasmsapi-client in org pasapi@au1.ibm.com / space apple as pasapi@au1.ibm.com...
-----> Downloaded app package (16M)
-----> Downloaded app buildpack cache (1.2M)
-----> Liberty Buildpack Version: v1.19.1-20150622-1509
-----> Retrieving IBM 1.8.0_20150617 JRE (ibm-java-jre-8.0-1.0-pxa6480sr1ifx-20150617_03-cloud.tgz) ... (0.0s)
         Expanding JRE to .java ... (1.5s)
-----> Retrieving App Management 1.5.0_20150608-1243 (app-mgmt_v1.5-20150608-1243.zip) ... (0.0s)
         Expanding App Management to .app-management (0.9s)
-----> Downloading Auto Reconfiguration 1.7.0_RELEASE from https://download.run.pivotal.io/auto-reconfiguration/auto-reconfiguration-1.7.0_RELEASE.jar (0.0s)
-----> Liberty buildpack is done creating the droplet

-----> Uploading droplet (90M)

0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
1 of 1 instances running

App started


OK

App pas-telstrasmsapi-client was started using this command `$PWD/.java/jre/bin/java -Xtune:virtualized -Xmx384M -Xdump:none -Xdump:heap:defaults:file=./../dumps/heapdump.%Y%m%d.%H%M%S.%pid.%seq.phd -Xdump:java:defaults:file=./../dumps/javacore.%Y%m%d.%H%M%S.%pid.%seq.txt -Xdump:snap:defaults:file=./../dumps/Snap.%Y%m%d.%H%M%S.%pid.%seq.trc -Xdump:heap+java+snap:events=user -Xdump:tool:events=systhrow,filter=java/lang/OutOfMemoryError,request=serial+exclusive,exec=./.buildpack-diagnostics/killjava.sh $JVM_ARGS org.springframework.boot.loader.JarLauncher --server.port=$PORT`

Showing health and status for app pas-telstrasmsapi-client in org pasapi@au1.ibm.com / space apple as pasapi@au1.ibm.com...
OK

requested state: started
instances: 1/1
usage: 512M x 1 instances
urls: pas-telstrasmsapi-client.mybluemix.net
last uploaded: Sun Jul 19 15:22:26 UTC 2015

     state     since                    cpu    memory           disk           details
#0   running   2015-07-19 11:23:57 PM   0.8%   144.9M of 512M   149.8M of 1G


Step 6 - Send SMS using Telstra SMS Api from Bluemix Application using the Service

6.1. Navigate to the URL below and send an SMS using the form below.

http://pas-telstrasmsapi-client.mybluemix.net/


6.2. Verify it has sent a message to the phone number entered in the text field as shown below.



More Information 

Getting started with Bluemix is easy, navigate to http://bluemix.net to sign up and get going.
Categories: Fusion Middleware

Oracle Priority Support Infogram for 27-AUG-2015

Oracle Infogram - Thu, 2015-08-27 15:57

RDBMS


Migration IBM AIX ==> SPARC Solaris with Data Guard, from Upgrade your Database - NOW!


Coding Oracle


WebLogic


Java

9 tools to help you with Java Performance Tuning, from WebLogic Partner Community EMEA.




And from the same source:


BI Publisher

Page Borders and Title Underlines, from the Oracle BI Publisher blog.

BPM

Getting Started with BPM: Free Oracle University Video Tutorial, from the SOA & BPM Partner Community Blog.

Mobile Computing


Ops Center

How Many Systems Can Ops Center Manage?, from the Ops Center blog.

Demantra


EBS

From the Oracle E-Business Suite Support blog:


From the Oracle E-Business Suite Technology blog:




Page Borders and Title Underlines

Tim Dexter - Wed, 2015-08-26 16:32

I have taken to recording screen grabs to help some folks out on 'how do I' scenarios. Sometimes a 3 minute video saves a couple of thousand words and several screen shots.

So, per chance you need to know:

1. How to add a page border to your output and/or

2. How to add an underline that runs across the page

Watch this!   https://www.youtube.com/watch?v=3UcXHeSF0BM

If you need the template, sample data and output, get them here.

I'm taking requests if you have them.

Categories: BI & Warehousing


On docker, Ubuntu and Oracle RDBMS

Marcelo Ochoa - Tue, 2015-08-25 20:17
I have had Oracle RDBMS working on Ubuntu for a long time (12.04 and 14.04), with RDBMS versions 10g, 11g and 12c and some tweaks to get it working.
Apart from the effort to get it working, it sometimes requires Makefile modifications; these configurations are not supported by Oracle and for sure you couldn't report any bug.
An easy way around this is to get VirtualBox working and download a pre-built VM, but that requires a lot of hardware resources :(
Happily, Docker comes into action; I followed this great post by Frits Hoogland for Installing Oracle Database in Docker.
First we need Docker running on Ubuntu; there are many guides about that, including simply installing from the apt-get repository, as in my case.
Following Frits's guide I changed the default repository destination, using a USB external disk formatted with btrfs:
# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        15G  7,7G  6,5G  55% /
none            4,0K     0  4,0K   0% /sys/fs/cgroup
udev            3,9G   12K  3,9G   1% /dev
tmpfs           789M  1,4M  788M   1% /run
none            5,0M     0  5,0M   0% /run/lock
none            3,9G  344M  3,6G   9% /run/shm
none            100M   56K  100M   1% /run/user
/dev/sda6       442G  384G   36G  92% /home
/dev/sdb        597G   18G  577G   3% /var/lib/docker
I also created the directory for the Docker template, but with some changes; here are the files:
root@local:/# cd /var/lib/docker/dockerfiles/build-oracle-12102/
root@local:/var/lib/docker/dockerfiles/build-oracle-12102# ls -l
total 2625148
-rw-r--r-- 1 root root      10976 ago 23 10:07 db_install.dbt
-rw-r--r-- 1 root root      10931 ago 25 16:30 db_install-full.dbt
-rw-r--r-- 1 root root      10972 ago 25 16:30 db_install-simple.dbt
-rw-r--r-- 1 root root       1168 ago 25 11:09 Dockerfile
-rw-r--r-- 1 root root 1673544724 ago 22 20:36 linuxamd64_12102_database_1of2.zip
-rw-r--r-- 1 root root 1014530602 ago 22 20:36 linuxamd64_12102_database_2of2.zip
-rwxr-xr-x 1 root root       1729 ago 25 10:11 manage-oracle.sh
-rw-r--r-- 1 root root      24542 ago 24 20:42 responsefile_oracle12102.rsp
The content of Dockerfile is:
FROM    oraclelinux:6
MAINTAINER marcelo.ochoa@gmail.com
RUN groupadd -g 54321 oinstall
RUN groupadd -g 54322 dba
RUN useradd -m -g oinstall -G oinstall,dba -u 54321 oracle
RUN yum -y install oracle-rdbms-server-12cR1-preinstall perl wget unzip
RUN mkdir /u01
RUN chown oracle:oinstall /u01
USER    oracle
WORKDIR /home/oracle
COPY linuxamd64_12102_database_1of2.zip /home/oracle/
COPY linuxamd64_12102_database_2of2.zip /home/oracle/
COPY responsefile_oracle12102.rsp /home/oracle/
RUN unzip linuxamd64_12102_database_1of2.zip
RUN unzip linuxamd64_12102_database_2of2.zip
RUN rm linuxamd64_12102_database_1of2.zip linuxamd64_12102_database_2of2.zip
RUN /home/oracle/database/runInstaller -silent -force -waitforcompletion -responsefile /home/oracle/responsefile_oracle12102.rsp -ignoresysprereqs -ignoreprereq
USER    root
RUN /u01/app/oraInventory/orainstRoot.sh
RUN /u01/app/oracle/product/12.1.0.2/dbhome_1/root.sh -silent
RUN rm -rf /home/oracle/responsefile_oracle12102.rsp /home/oracle/database
USER    oracle
WORKDIR /home/oracle
RUN     mkdir -p /u01/app/oracle/data
COPY    manage-oracle.sh /home/oracle/
EXPOSE  1521
CMD sh -c /home/oracle/manage-oracle.sh
The remarked lines differ from the original post by Frits: instead of downloading everything from the web using wget, I downloaded the 12c binary distribution from the OTN web site and copied the two zip files into the directory where the Dockerfile resides. I also downloaded Frits's files responsefile_oracle12102.rsp, manage-oracle.sh and db_install.dbt.
The file responsible for creating/starting/stopping the DB was also modified, to use db_install.dbt from the host machine; here is the modified version of manage-oracle.sh:
#!/bin/bash
PERSISTENT_DATA=/u01/app/oracle/data
export ORACLE_HOME=/u01/app/oracle/product/12.1.0.2/dbhome_1
export ORACLE_SID=$(hostname)
stop_database() {
$ORACLE_HOME/bin/sqlplus / as sysdba << EOF
shutdown abort
exit
EOF
exit
}
start_database() {
$ORACLE_HOME/bin/sqlplus / as sysdba << EOF
startup
exit
EOF
}
create_pfile() {
$ORACLE_HOME/bin/sqlplus -S / as sysdba << EOF
set echo off pages 0 lines 200 feed off head off sqlblanklines off trimspool on trimout on
spool $PERSISTENT_DATA/init_$(hostname).ora
select 'spfile="'||value||'"' from v\$parameter where name = 'spfile';
spool off
exit
EOF
}
trap stop_database SIGTERM
printf "LISTENER=(DESCRIPTION_LIST=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=$(hostname))(PORT=1521))(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1521))))\n" > $ORACLE_HOME/network/admin/listener.ora
$ORACLE_HOME/bin/lsnrctl start
if [ ! -f ${PERSISTENT_DATA}/DATABASE_IS_SETUP ]; then
sed -i "s/{{ db_create_file_dest }}/\/u01\/app\/oracle\/data\/$(hostname)/" $PERSISTENT_DATA/db_install.dbt
sed -i "s/{{ oracle_base }}/\/u01\/app\/oracle/" $PERSISTENT_DATA/db_install.dbt
sed -i "s/{{ database_name }}/$(hostname)/" $PERSISTENT_DATA/db_install.dbt
$ORACLE_HOME/bin/dbca -silent -createdatabase -templatename $PERSISTENT_DATA/db_install.dbt -gdbname $(hostname) -sid $(hostname) -syspassword oracle -systempassword oracle -dbsnmppassword oracle
create_pfile
if [ $? -eq 0 ]; then
touch ${PERSISTENT_DATA}/DATABASE_IS_SETUP
fi
else
mkdir -p /u01/app/oracle/admin/$(hostname)/adump
$ORACLE_HOME/bin/sqlplus / as sysdba << EOF
startup pfile=$PERSISTENT_DATA/init_$(hostname).ora
exit
EOF
fi
tail -f /u01/app/oracle/diag/rdbms/$(hostname)/*/trace/alert_$(hostname).log &
wait
I do this to quickly create different databases with particular Oracle options enabled or not; for example, the db_install.dbt file for OLS searching will have these enabled:
option name="JSERVER" value="true"
option name="SPATIAL" value="true"
option name="IMEDIA" value="true"
option name="XDB_PROTOCOLS" value="true"
option name="ORACLE_TEXT" value="true"
.....
Creating the Docker template is similar to the original post:
# cd /var/lib/docker/dockerfiles/build-oracle-12102
# docker build -t "oracle-12102" .
# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
oracle-12102        latest              24687eeab73c        8 hours ago         12.26 GB
oraclelinux         6                   cfc75fa9f295        3 weeks ago         156.2 MB
Finally, to create a full-featured, Java-enabled 12c database, I created a directory with the following content:
# mkdir -p /var/lib/docker/db/ols
# cp /var/lib/docker/dockerfiles/build-oracle-12102/db_install-full.dbt  /var/lib/docker/db/ols/db_install.dbt
# chown -R 54321:54321 /var/lib/docker/db/ols
and executed:
# docker run --ipc=host --volume=/var/lib/docker/db/ols:/u01/app/oracle/data --name ols --hostname ols --detach=true oracle-12102
25efb5d26aad31e7b06a8e2707af7c25943e2e42ec5c432dc9fa55f0da0bdaef
The container starts and the database creation works perfectly; here is the output:
# docker logs -f ols
LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 25-AUG-2015 16:35:18
Copyright (c) 1991, 2014, Oracle.  All rights reserved.
Starting /u01/app/oracle/product/12.1.0.2/dbhome_1/bin/tnslsnr: please wait...
TNSLSNR for Linux: Version 12.1.0.2.0 - Production
System parameter file is /u01/app/oracle/product/12.1.0.2/dbhome_1/network/admin/listener.ora
Log messages written to /u01/app/oracle/diag/tnslsnr/ols/listener/alert/log.xml
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=ols)(PORT=1521)))
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=ols)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 12.1.0.2.0 - Production
Start Date                25-AUG-2015 16:35:19
Uptime                    0 days 0 hr. 0 min. 1 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/oracle/product/12.1.0.2/dbhome_1/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/ols/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=ols)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
The listener supports no services
The command completed successfully
Creating and starting Oracle instance
1% complete
.....
25% complete
Adding Oracle JVM
32% complete
....
Adding Oracle Text
50% complete
....
Adding Oracle Multimedia
55% complete
....
Adding Oracle Spatial
69% complete
....
Adding Oracle Application Express
82% complete
87% complete
Completing Database Creation
Next step: installing Scotas OLS for testing, and a happy Docker/Oracle combination :)
Update about shared memory usage (31/08): after installing Scotas OLS, which is a heavy Java application, I found these messages in the .trc file associated with the JIT compiler slave process:
ORA-04035: unable to allocate 4096 bytes of shared memory in shared object cache "JOXSHM" of size "1073741824"
joez: Failed loading machine code: Unable to allocate code space
peshmmap_Create_Memory_Map:
Map_Length = 4096
Map_Protection = 7
Flags = 1
File_Offset = 0
mmap failed with error 1
error message:Operation not permitted
After checking that there was no problem with available memory, I sent an email to my friend Kuassi to get some clarification from the JVM dev team, and they sent me a quick test that found the solution. The problem is the mount options of the tmpfs file system; by default Docker does:
# docker run --privileged=true -i -t oraclelinux:6 /bin/bash
[root@d1ac66be54fb /]# cat /proc/mounts|grep shm
shm /dev/shm tmpfs rw,nosuid,nodev,noexec,relatime,size=65536k 0 0
Note the noexec flag!! To solve that it is necessary to start the container with the --privileged=true flag and execute as root:
[root@d1ac66be54fb /]# mount -o remount,exec /dev/shm
warning: can't open /etc/fstab: No such file or directory
[root@d1ac66be54fb /]# cat /proc/mounts|grep shm
shm /dev/shm tmpfs rw,nosuid,nodev,relatime,size=65536k 0 0
and that's all: the message "..unable to allocate 4096 bytes.." disappears and the Oracle internal Java Virtual Machine works perfectly.
Finally, the task now is to modify the Dockerfile and manage-oracle.sh scripts to run as root and to execute the remount operation before starting the Oracle RDBMS.
The new Dockerfile basically changes the last lines to start manage-oracle.sh as root:
USER root
RUN /u01/app/oraInventory/orainstRoot.sh
RUN /u01/app/oracle/product/12.1.0.2/dbhome_1/root.sh -silent
RUN rm -rf /home/oracle/responsefile_oracle12102.rsp /home/oracle/database
WORKDIR /home/oracle
RUN mkdir -p /u01/app/oracle/data
RUN chown oracle:oinstall /u01/app/oracle/data
COPY manage-oracle.sh /home/oracle/
EXPOSE 1521
CMD [ "sh" , "-c" ,  "/home/oracle/manage-oracle.sh" ]
and manage-oracle.sh executes all Oracle-related scripts using su - oracle -c calls, for example:
stop_database() {
su - oracle -c "$ORACLE_HOME/bin/sqlplus / as sysdba" << EOF
shutdown abort
exit
EOF
exit
}
Obviously there is a remount operation prior to starting the RDBMS:
mount -o remount,exec /dev/shm
su - oracle -c "$ORACLE_HOME/bin/lsnrctl start"
Remember to start your docker image using --privileged=true, for example:
docker run --privileged=true --ipc=host --volume=/var/lib/docker/db/ols:/u01/app/oracle/data --name ols --hostname ols --detach=true --publish=1521:1521 --publish=9099:9099 oracle-12102
HTH, Marcelo.



Autonomous transaction to the rescue

Bar Solutions - Tue, 2015-08-25 11:10

Today, at my current project, I came across an issue where autonomous transactions came in handy.

The situation: I need to create a query to perform an export. A couple of the fields to be selected come from a global temporary table; nothing fancy so far, except this global temporary table is filled by a (rather complex) procedure. Another complication is that this table is emptied for every row, i.e. it will contain only one row at a time. ‘Just build a wrapper table function for this procedure and have that function call the procedure’ was my first idea.

I created a script that shows the situation

CREATE GLOBAL TEMPORARY TABLE empdate
(
  empno NUMBER(4)
, hiredate DATE
)
ON COMMIT DELETE ROWS
/
CREATE OR REPLACE PROCEDURE getthehiredate(empno_in IN NUMBER) IS
BEGIN
  DELETE FROM empdate;
  INSERT INTO empdate
    (empno
    ,hiredate)
    (SELECT empno
           ,hiredate
       FROM emp
      WHERE empno = empno_in);
END getthehiredate;
/

Then I set out to build a pipelined table function that accepts a cursor as one of its parameters. This function then loops over all the values in the cursor, calls the procedure, reads the data from the global temporary table and pipes out the resulting record; nothing really fancy so far.

CREATE TYPE empdate_t AS OBJECT
(
  empno    NUMBER(4),
  hiredate DATE
)
/
CREATE TYPE empdate_tab IS TABLE OF empdate_t
/
CREATE OR REPLACE FUNCTION getallhiredates(empnos_in IN SYS_REFCURSOR) RETURN empdate_tab
  PIPELINED IS
  l_empno       NUMBER(4);
  l_returnvalue empdate_t;
BEGIN
  FETCH empnos_in
    INTO l_empno;
  WHILE empnos_in%FOUND LOOP
    getthehiredate(empno_in => l_empno);
    SELECT empdate_t(ed.empno, ed.hiredate)
      INTO l_returnvalue
      FROM empdate ed
     WHERE 1 = 1
       AND ed.empno = l_empno;
    PIPE ROW(l_returnvalue);
    FETCH empnos_in
      INTO l_empno;
  END LOOP;
  RETURN;
END getallhiredates;
/

But when I ran a query against this function:

SELECT *
FROM TABLE(getallhiredates(CURSOR (SELECT empno
FROM emp)))
/

I ran into an error:

ORA-14551: cannot perform a DML operation inside a query 

So, had all the work I had done so far been for nothing? Time wasted? I don’t think so. If there is anything I have learned over the years, it is that Oracle tries to stop you doing certain things but at the same time supplies you the tools to create a work-around.

There is such a thing as an autonomous transaction that might help me in this case, so I changed the code for the function a bit:

CREATE OR REPLACE FUNCTION getallhiredates(empnos_in IN SYS_REFCURSOR) RETURN empdate_tab
  PIPELINED IS
  PRAGMA AUTONOMOUS_TRANSACTION;
  l_empno       NUMBER(4);
  l_returnvalue empdate_t;
BEGIN
  FETCH empnos_in
    INTO l_empno;
  WHILE empnos_in%FOUND LOOP
    getthehiredate(empno_in => l_empno);
    SELECT empdate_t(ed.empno, ed.hiredate)
      INTO l_returnvalue
      FROM empdate ed
     WHERE 1 = 1
       AND ed.empno = l_empno;
    PIPE ROW(l_returnvalue);
    FETCH empnos_in
      INTO l_empno;
  END LOOP;
  COMMIT;
  RETURN;
END getallhiredates;
/

But when I ran the query:

SELECT *
FROM TABLE(getallhiredates(CURSOR (SELECT empno
FROM emp)))
/

I ran into a different error:

ORA-06519: active autonomous transaction detected and rolled back

So this doesn’t work... or does it? Pipelined table functions ‘exit’ the function multiple times: whenever a row is piped out, and ORA-06519 is raised at that point if the autonomous transaction is still open. So, I tried to put the COMMIT just before the PIPE ROW command:

CREATE OR REPLACE FUNCTION getallhiredates(empnos_in IN SYS_REFCURSOR) RETURN empdate_tab
  PIPELINED IS
  PRAGMA AUTONOMOUS_TRANSACTION;
  l_empno       NUMBER(4);
  l_returnvalue empdate_t;
BEGIN
  FETCH empnos_in
    INTO l_empno;
  WHILE empnos_in%FOUND LOOP
    getthehiredate(empno_in => l_empno);
    SELECT empdate_t(ed.empno, ed.hiredate)
      INTO l_returnvalue
      FROM empdate ed
     WHERE 1 = 1
       AND ed.empno = l_empno;
    COMMIT;
    PIPE ROW(l_returnvalue);
    FETCH empnos_in
      INTO l_empno;
  END LOOP;
  RETURN;
END getallhiredates;
/

And when I ran my statement again:

SELECT *
FROM TABLE(getallhiredates(CURSOR (SELECT empno
FROM emp)))
/

It worked as I hoped for.

As you can see, I have tried to mimic the situation using the EMP table. I think this is a nice little trick, but it should be used with caution. It is not for no reason that Oracle prevents you from running DML inside a query, but in this case I can bypass this restriction.

