Feed aggregator

Displaying PDF files stored in the Database

Tom Kyte - Thu, 2017-06-15 01:46
Tom, How can we display PDF files stored in the database, using htp.p (OAS PL/SQL cartridge) calls and dbms_lob? We are developing a document archival and display app. Using ctxload, I can upload the files. Now, I would like to display the file...
Categories: DBA Blogs

Notes From Orlando

Floyd Teter - Wed, 2017-06-14 17:13
I thought y'all would appreciate some notes from last week's OHUG conference in Orlando, Florida.  So, in no particular order, my observations...
  • The sessions were pretty evenly divided between Oracle E-Business, PeopleSoft and HCM Cloud.  Right around 1/3 of total sessions for each track.
  • The mix of attendees, from what I could tell, was about half HCM Cloud users and half on either EBS or PeopleSoft.  And out of those on EBS or PeopleSoft, about half were exploring the transformation of moving to HCM Cloud.
  • Attendance for this year's conference seems a little light; maybe down 10 or 15 percent from prior years.  I'm guessing that was caused by a combination of following so closely on the heels of Oracle HCM Cloud World and the fact that it's always tough for a user group conference to get a big draw in Orlando (I don't know why, just know from experience that it's generally true).
  • I did not run into many decision makers this year...very few upper management executives.  But tons of influencers: people who implement and use the products.  I suspect most decision makers are going to Oracle HCM Cloud World while most of those responsible for executing those decisions attend OHUG.
  • A follow-on to the prior point.  Attendees were focused on the fundamentals of implementation and use; "how do I do...".  Not many strategic discussions.
  • You couldn't walk more than 10 feet without encountering a Taleo session or a hallway discussion about Taleo.  Maybe the top topic of the conference.
  • I tried several times to catch Oracle's Justin Challenger, who ran the conference Report Jam this year.  But every time I tried, he was heads down with a group of users designing reports. So I have to think that the Report Jam was a big hit.
  • Likewise, the UX Lab was abuzz with activity whenever I stopped by there.
  • When the folks in Orlando say they're going to get rain, they're not messing around.  It rained hard...and usually more than once...every day I was there.
  • There may not be anyone who understands global implementations of HCM Cloud better than Oracle's Jon McGoy.  The breadth and depth of the material he presented, plus his off-the-cuff answers to questions, was pretty amazing.
So, overall, I think the OHUG conference is in the midst of a transition.  First, it's becoming more cloud-centric.  You can see it in both the session tracks and the focus of the attendees.  Second, it's become a "how to" type of conference.  More emphasis on using, integrating, and extending the applications.  Less emphasis on strategic decisions.   Third, the type of attendee is changing.  More influencers and users, fewer decision makers (hot tip:  some folks think that's a good thing).

I'm already looking forward to next year's OHUG conference.  Can't wait to see how the transition continues to shake out.

Bash: The most useless command (3)

Dietrich Schroff - Wed, 2017-06-14 12:48
The blog statistics show that many people are reading the posts about useless commands. Here is the next candidate, suggested by an anonymous comment: sl.

This is my most hated program on a shell. Why?

NAME
sl − display animations aimed to correct users who accidentally enter sl instead of ls.

And this program is not interruptible by ctrl-c. It shows a train running from left to right and blocks your shell for at least 2-3 seconds (depending on the width of your shell window):
$ time sl
real 0m3.347s

Nice Trick to Get ADF LOV Description Text

Andrejus Baranovski - Wed, 2017-06-14 12:29
I will tell you about a nice trick to display the LOV description. Usually you would create a separate attribute in the VO for the LOV description and base the LOV on that attribute (read about it in my previous post - Defining the LOV on a Reference Attribute in Oracle ADF 11g). But there is one more way - it is much faster to define the LOV on the description, but you should be aware of the additional SQL statement executed to fetch the description text.

You could set a converter for the ADF UI LOV, and then the LOV component would display the description by itself, without any additional configuration.

It is important to set the correct order for the LOV display attributes. Make sure the description attribute is first in the list for the converter approach to work:


Go to the ADF UI LOV component and set the converter property. This must point to the binding object's converter expression:


What you get - the LOV field displays the description, and the converter masks the ID value behind it:


It offers a nice usability option - you can start typing the description, press Tab, and the value will be auto-completed (if autoSubmit is set to true):


Behind the scenes it executes the LOV SQL to get the description text (this SQL is executed on page load too, which is not ideal when you have many LOV fields - in such a situation it is better to use a separate attribute for the description in the same VO):
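
The lookup runs against the LOV view object's query. A sketch of its shape (with hypothetical Jobs/JobId/JobTitle names, not the ones from the sample application) would be:

-- hypothetical names, for illustration only
SELECT JobId, JobTitle
FROM   Jobs
WHERE  JobId = :Bind_JobId;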


When the LOV value is changed and the changes are saved, ADF BC updates the ID value only (as expected):


Download the sample application - ADFLovDescriptionApp.zip.

Register Now for Oracle Code Online - June 20, 2017

OTN TechBlog - Wed, 2017-06-14 12:09

If you don't live in or near one of the 20 cities that have hosted, or will host, an Oracle Code event, no worries! Next week Oracle Code Online brings the event to you with 15 expert technical sessions in five tracks, covering database, server-side development, mobile development, full-stack web development, and DevOps and systems. Each session features live Q&A with the session presenters.

Kicking off the event is a special keynote session, Building and Deploying Chatbots and Microservices with Oracle Cloud Platform, by Siddhartha Agarwal, Oracle's VP of Product Management and Strategy.

Register now for this special event:

Full Stack Web Sessions
  • Making RESTful Web Services the Easy Way with Node.js
    by Dan McGhan, Developer Advocate, Oracle
  • Visual JavaScript Development for Everyone
    by Shay Shmeltzer, Director of Product Management, Mobile and Development Tools, Oracle
  • JavaScript Toolkit for Enterprise Applications
    by Oracle ACE Director Andreas Koop, CEO and IT Architect, enpit GmbH and Co. with Janis Krasemann, Developer, enpit GmbH and Co.
Mobile Development Sessions
  • Introduction to Platform Agnostic Hybrid App Development
    by Oracle ACE John Sim, Consultant, Fishbowl Solutions
  • Building Hybrid Mobile Apps with JavaScript and Cordova
    by Chris Muir, Senior Principal Mobility and Cloud Development Tools Product Manager, Oracle
  • The Rise of ChatBots in Enterprise Mobile Application Development
    by Frank Nimphius, Senior Principal Product Manager, Oracle Mobility and Development Tools Group
Register now!

Webcast: Testing E-Business Suite with Flow Builder

Steven Chan - Wed, 2017-06-14 12:00

Oracle University has a large number of free recorded webcasts that EBS sysadmins might find useful.  Here's another good webcast on our testing tools:

Mazen Arar, Senior Director EBS Quality Assurance, provides an overview of how the Quality Assurance team tests Oracle E-Business Suite. It covers the main areas you should consider during functional testing, approaches for new-feature and regression testing, how to reduce test script generation and execution time, experiences capturing and presenting metrics to showcase the ROI of your testing investment, leveraging automation for testing Oracle E-Business Suite applications, and more. This material was presented at Oracle OpenWorld 2016.


Categories: APPS Blogs

Single-Tenant over bequeath connections

Laurent Schneider - Wed, 2017-06-14 10:46

If you follow Oracle's recommendation to use a SingleTenant CDB instead of a non-CDB, then a lot of things will break.

I won’t go into details, but basically, in this new architecture, you have one core container, called CDB$ROOT, which is the default container you connect to if you connect locally:


sqlplus / as sysdba
SQL> select cdb from v$database;
CDB
---
YES
SQL> select sys_context('USERENV','CON_NAME') from dual;
SYS_CONTEXT('USERENV','CON_NAME')
---------------------------------
CDB$ROOT

SQL> select con_id, NAME from V$CONTAINERS;

    CON_ID NAME
---------- ----------
         1 CDB$ROOT
         2 PDB$SEED
         3 ST01

Then you’ll soon realise you can no longer do what you used to do:


SQL> create user u identified by u;
create user u identified by u
            *
ERROR at line 1:
ORA-65096: invalid common user or role name

Some scripts still run fine in the root container: SHUTDOWN ABORT, ALTER SYSTEM SWITCH LOGFILE. Doing a full backup or clone probably won’t hurt you much. Relatively...

But now let’s imagine I have a very simple and very old script to lock scott


vintage.sh:
  echo "alter user scott account lock;"|
    sqlplus -s / as sysdba

This won’t work. I need to lock scott in the SingleTenant container ST01.

I could do this in the container


SQL> alter session set container=ST01;
  Session altered.
SQL> alter user scott account lock;
  User altered.

So far, so good. Now: how do I make this work without changing the script?

Remember, non-CDB databases, as they were used in Oracle 11, 10, 9, 8, 7…, are now deprecated. Remember, CDB is recommended. Now face it: it’ll break your DBA scripts.

As a production dba, I don’t want to rewrite all the existing scripts. Some are ultra-old and used by people who did not write them.

One method for my script would be to change the container in a login script.


echo "alter session set container=ST01;" > /tmp/login.sql
export ORACLE_PATH=/tmp
vintage.sh
  Session altered.
  User altered.  

(ORACLE_PATH in the latest 12.1 and in 12.2, SQL_PATH in older releases)

However, if my script must work with both CDB and non-CDB databases, I need to set the container only in the CDB case.

In my login.sql, I first tried to implement some PL/SQL logic, but alter session set container does not work (or works only with limitations) with execute immediate.
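
A minimal sketch of that kind of attempt (assuming the ST01 container from above) looks like this:

declare
  l_cdb varchar2(3);
begin
  -- v$database.cdb only exists from 12c onwards, so this block already breaks on 11g
  select cdb into l_cdb from v$database;
  if l_cdb = 'YES' then
    -- and this is where the limitation bites: alter session set container
    -- through execute immediate does not behave as needed from login.sql
    execute immediate 'alter session set container = ST01';
  end if;
end;
/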

Also, I don’t want my script to break Oracle 11.

So I decided to do some sqlplus magic with defined variables.


set ver off feed off
-- 1) check if the column v$database.cdb exists
col column_name new_v cdb nopri
def cdb=null
select column_name from dba_tab_columns 
where owner='SYS' and 
table_name='V_$DATABASE' and column_name='CDB';
-- 2) if cdb is YES, then select a dynamic statement using v$pdbs
col pdb new_v pdb nopri
def pdb="null stmt from dual where 1=0"
select 
'''set container="''||name||''"'' stmt from v$pdbs where name!=''PDB$SEED'''
  pdb 
from v$database 
where &cdb='YES';
-- 3) get a dynamic alter session statement. I use a dummy flagger for non-cdb
col stmt new_val stmt nopri
def stmt="SET FLAGGER=OFF"
select &pdb;
-- 4) alter session
alter session &stmt;
set feed 6
col column_name clear
col stmt clear
col pdb clear
undef cdb
undef stmt
undef pdb
del

Now I run my script


11g: ./vintage.sh
User altered.

12c-non-cdb: ./vintage.sh
User altered.

12cR2-single-tenant: ./vintage.sh
User altered.

DISCLAIMER: you shouldn’t use a global login.sql, and you should know that secretly fixing old scripts may have side effects. Test, test and retest your code.
DISCLAIMER 2: my frequent readers surely wonder whether this statement generating a statement generating a statement is for an obfuscation contest...

Unpivot

Jonathan Lewis - Wed, 2017-06-14 09:46

An interesting observation appeared recently as a side-channel on a question on the OTN database forum – how does Oracle execute an unpivot() operation. Here’s an example of such a query:

rem
rem     Script:         unpivot_2.sql
rem     Author:         Jonathan Lewis
rem     Dated:          June 2017
rem

create table t1( id, col1, col2, col3, col4, col5, padding )
cache
pctfree 95 pctused 5
-- compress for query low
as
select
        1, 100 , 200 , 300 , 400 , 500,rpad('x',100)
from
        all_objects
where
        rownum <= 50000
;

execute dbms_stats.gather_table_stats(user,'t1',method_opt=>'for all columns size 1')

select 
        /*+ gather_plan_statistics find this */
        id, max(val) as high_val
from
        t1
unpivot include nulls (
        val for source in (col1, col2, col3, col4, col5)
)
group by id
order by id
; 

I’ve created a table with 50,000 rows (all_objects is quite large in 12.1.0.2 and 12.2.0.1), but with lots of free space per block so that I get three rows per block for a total of roughly 16,667 blocks which is going to make it fairly easy to spot any interesting session statistics. Then I’ve used an unpivot() call that has the effect of turning one row with five columns into five rows with one column.
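
Just to visualise that effect, a quick query of the following shape (not part of the test itself) shows what a single input row turns into after the unpivot, with no guarantee about row order:

select id, source, val
from   t1
unpivot include nulls (
        val for source in (col1, col2, col3, col4, col5)
)
where  rownum <= 5
;

-- ID  SOURCE  VAL
-- --  ------  ---
--  1  COL1    100
--  1  COL2    200
--  1  COL3    300
--  1  COL4    400
--  1  COL5    500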

Here’s the basic execution plan for the query (as pulled from memory):


-------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation            | Name | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |      1 |        | 10695 (100)|      1 |00:00:00.18 |   16671 |       |       |          |
|   1 |  SORT GROUP BY       |      |      1 |      1 | 10695   (2)|      1 |00:00:00.18 |   16671 |  2048 |  2048 | 2048  (0)|
|   2 |   VIEW               |      |      1 |    250K| 10653   (2)|    250K|00:00:01.41 |   16671 |       |       |          |
|   3 |    UNPIVOT           |      |      1 |        |            |    250K|00:00:00.52 |   16671 |       |       |          |
|   4 |     TABLE ACCESS FULL| T1   |      1 |  50000 |  2131   (2)|  50000 |00:00:00.12 |   16671 |       |       |          |
-------------------------------------------------------------------------------------------------------------------------------

According to the plan Oracle has done one full tablescan on the data (Starts = 1), but the unpivot operation has used this to produce 250,000 rows of output from 50,000 rows of input (see the A-Rows column). Note, however, that the cost of the view operation is 5 times the cost of the tablescan but, on the other hand, the number of buffers visited is 16,671 (which matches the size of the table). So have we done 5 tablescans with a union all, or have we done one tablescan?

The next step is to look at the 10053 (optimizer) trace file, specifically for the “unparsed query” which (despite the final plan table showing the plan we’ve just seen above) looked like this – after applying a few purely cosmetic changes:

Final query after transformations:******* UNPARSED QUERY IS *******
SELECT "from$_subquery$_002"."ID" "ID",MAX("from$_subquery$_002"."VAL") "HIGH_VAL" 
FROM    ( 
                   (SELECT "T1"."ID" "ID","T1"."PADDING" "PADDING",'COL1' "SOURCE","T1"."COL1" "VAL" FROM "TEST_USER"."T1" "T1") 
        UNION ALL  (SELECT "T1"."ID" "ID","T1"."PADDING" "PADDING",'COL2' "SOURCE","T1"."COL2" "VAL" FROM "TEST_USER"."T1" "T1") 
        UNION ALL  (SELECT "T1"."ID" "ID","T1"."PADDING" "PADDING",'COL3' "SOURCE","T1"."COL3" "VAL" FROM "TEST_USER"."T1" "T1") 
        UNION ALL  (SELECT "T1"."ID" "ID","T1"."PADDING" "PADDING",'COL4' "SOURCE","T1"."COL4" "VAL" FROM "TEST_USER"."T1" "T1") 
        UNION ALL  (SELECT "T1"."ID" "ID","T1"."PADDING" "PADDING",'COL5' "SOURCE","T1"."COL5" "VAL" FROM "TEST_USER"."T1" "T1")
        ) "from$_subquery$_002" 
GROUP BY "from$_subquery$_002"."ID" 
ORDER BY "from$_subquery$_002"."ID"
;

And then there’s the outline (which I can see in the 10053 trace, or in the plan pulled from memory by a call to dbms_xplan.display_cursor()).


Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('12.1.0.2')
      DB_VERSION('12.1.0.2')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$1")
      OUTLINE_LEAF(@"SEL$2")
      OUTLINE_LEAF(@"SEL$3")
      OUTLINE_LEAF(@"SEL$4")
      OUTLINE_LEAF(@"SEL$5")
      OUTLINE_LEAF(@"SET$1")
      OUTLINE_LEAF(@"SEL$6")
      NO_ACCESS(@"SEL$6" "from$_subquery$_002"@"SEL$6")
      FULL(@"SEL$5" "T1"@"SEL$5")
      FULL(@"SEL$4" "T1"@"SEL$4")
      FULL(@"SEL$3" "T1"@"SEL$3")
      FULL(@"SEL$2" "T1"@"SEL$2")
      FULL(@"SEL$1" "T1"@"SEL$1")
      END_OUTLINE_DATA
  */

And if I also pull the alias (query block and fully qualified table name) information from memory:


Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$6
   2 - SET$1 / from$_subquery$_002@SEL$6
   3 - SET$1 
   4 - SEL$5 / T1@SEL$5

Then there’s the alternative (though not necessarily correct) way to find an “unparsed” version of the query – dbms_sql2.expand_sql_text() in 11g, dbms_utility.expand_sql_text() in 12c:


variable m_sql_out clob

declare
        m_sql_in    clob :=
                'select /*+ gather_plan_statistics find this */ id, max(val) as high_val
                from    t1
                unpivot include nulls ( val for source in (col1, col2, col3, col4, col5) )
                group by id
                order by id
                '
        ;
begin

--      dbms_sql2.expand_sql_text(        -- 11g
        dbms_utility.expand_sql_text(     -- 12c
                m_sql_in,
                :m_sql_out
        );

end;
/

set long 20000
print m_sql_out

M_SQL_OUT
--------------------------------------------------------------------------------
SELECT "A1"."ID" "ID",MAX("A1"."VAL") "HIGH_VAL" FROM  ( (SELECT "A3"."ID" "ID",
"A3"."PADDING" "PADDING",'COL1' "SOURCE","A3"."COL1" "VAL" FROM "TEST_USER"."T1"
 "A3") UNION ALL  (SELECT "A4"."ID" "ID","A4"."PADDING" "PADDING",'COL2' "SOURCE
","A4"."COL2" "VAL" FROM "TEST_USER"."T1" "A4") UNION ALL  (SELECT "A5"."ID" "ID
","A5"."PADDING" "PADDING",'COL3' "SOURCE","A5"."COL3" "VAL" FROM "TEST_USER"."T
1" "A5") UNION ALL  (SELECT "A6"."ID" "ID","A6"."PADDING" "PADDING",'COL4' "SOUR
CE","A6"."COL4" "VAL" FROM "TEST_USER"."T1" "A6") UNION ALL  (SELECT "A7"."ID" "
ID","A7"."PADDING" "PADDING",'COL5' "SOURCE","A7"."COL5" "VAL" FROM "TEST_USER".
"T1" "A7")) "A1" GROUP BY "A1"."ID" ORDER BY "A1"."ID"

So at some level the optimizer does think it’s doing 5 tablescans – and the t1 reported in the unpivot plan is from the fifth (i.e. last) copy of the table in the union all. But the execution statistics and the session statistics say I’ve done just one tablescan – and this persists even when I make the table so large that it (a) won’t fit the buffer cache, and/or (b) uses direct path reads, and/or (c) runs under Exadata with Hybrid columnar compression.
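
For reference, the session statistics check is just a query against v$mystat; a sketch along these lines, run in the same session before and after the test query, shows the tablescan-related figures:

select  sn.name, ms.value
from    v$mystat ms, v$statname sn
where   sn.statistic# = ms.statistic#
and     sn.name in (
                'table scans (long tables)',
                'table scans (short tables)',
                'table scan blocks gotten'
        )
;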

So what’s (probably) happening?

I think Oracle has a code path that says it’s doing a union all of tablescans (at least for this example) but tells it that the union all is there as an expansion of an unpivot. So (sticking with an example that does a tablescan into the buffer cache) Oracle reads the number of blocks dictated by the current multiblock read count into the cache, pins that one batch of blocks, scans the batch 5 times (or as required), unpins the batch and then reads the next batch. So the session does five tablescans but does them in a way that lets you see only one tablescan in the statistics.

Footnote

There was a change in the results when I tested this on 12.2.0.1; the unparsed query reported only a two-part union all subquery, and the table alias information in the plan identified the referenced table as the copy of the table from the second subquery in the union all. More significantly the cost of the VIEW operation was a close match to the cost of a single tablescan, rather than being a multiple thereof:


select * from table(dbms_xplan.display_cursor('1k077bzp0t6mn',null,'outline alias cost allstats last'));

----------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation            | Name | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |      1 |        |  1117 (100)|      1 |00:00:00.13 |    8350 |   8334 |       |       |          |
|   1 |  SORT GROUP BY       |      |      1 |    250K|  1117   (6)|      1 |00:00:00.13 |    8350 |   8334 |  2048 |  2048 | 2048  (0)|
|   2 |   VIEW               |      |      1 |    250K|  1076   (2)|    250K|00:00:00.13 |    8350 |   8334 |       |       |          |
|   3 |    UNPIVOT           |      |      1 |        |            |    250K|00:00:00.09 |    8350 |   8334 |       |       |          |
|   4 |     TABLE ACCESS FULL| T1   |      1 |  50000 |  1074   (2)|  50000 |00:00:00.07 |    8350 |   8334 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$3
   2 - SET$1 / from$_subquery$_002@SEL$3
   3 - SET$1
   4 - SEL$2 / T1@SEL$2

Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('12.2.0.1')
      DB_VERSION('12.2.0.1')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$1")
      OUTLINE_LEAF(@"SEL$2")
      OUTLINE_LEAF(@"SET$1")
      OUTLINE_LEAF(@"SEL$3")
      NO_ACCESS(@"SEL$3" "from$_subquery$_002"@"SEL$3")
      FULL(@"SEL$2" "T1"@"SEL$2")
      END_OUTLINE_DATA
  */


M_SQL_OUT
--------------------------------------------------------------------------------
SELECT "A1"."ID" "ID",MAX("A1"."VAL") "HIGH_VAL" FROM  (SELECT "A3"."ID" "ID","A
3"."PADDING" "PADDING",'COL1' "SOURCE","A3"."COL1" "VAL" FROM "TEST_USER"."T1" "
A3" UNION ALL SELECT "A3"."ID" "ID","A3"."PADDING" "PADDING",'COL2' "SOURCE","A3
"."COL2" "VAL" FROM "TEST_USER"."T1" "A3" UNION ALL SELECT "A3"."ID" "ID","A3"."
PADDING" "PADDING",'COL3' "SOURCE","A3"."COL3" "VAL" FROM "TEST_USER"."T1" "A3"
UNION ALL SELECT "A3"."ID" "ID","A3"."PADDING" "PADDING",'COL4' "SOURCE","A3"."C
OL4" "VAL" FROM "TEST_USER"."T1" "A3" UNION ALL SELECT "A3"."ID" "ID","A3"."PADD
ING" "PADDING",'COL5' "SOURCE","A3"."COL5" "VAL" FROM "TEST_USER"."T1" "A3") "A1
" GROUP BY "A1"."ID" ORDER BY "A1"."ID"

It used to be hard enough working out what the run-time engine was doing when the optimizer and the run-time engine claimed that they were working on the same (transformed version of the) SQL; but now we can get the impression that the optimizer is directing the run-time engine to execute a plan that couldn’t possibly match the (unparsed) query that the optimizer had been considering.


The data security mess

DBMS2 - Wed, 2017-06-14 08:21

A large fraction of my briefings this year have included a focus on data security. This is the first year in the past 35 that that’s been true.* I believe that reasons for this trend include:

  • Security is an important aspect of being “enterprise-grade”. Other important checkboxes have been largely filled in. Now it’s security’s turn.
  • A major platform shift, namely to the cloud, is underway or at least being planned for. Security is an important thing to think about as that happens.
  • The cloud even aside, technology trends have created new ways to lose data, which security technology needs to address.
  • Traditionally paranoid industries are still paranoid.
  • Other industries are newly (and rightfully) terrified of exposing customer data.
  • My clients at Cloudera thought they had a chance to get significant messaging leverage from emphasizing security. So far, it seems that they were correct.

*Not really an exception: I did once make it a project to learn about classic network security, including firewall appliances and so on.

Certain security requirements, desires or features keep coming up. These include (and as in many of my lists, these overlap):

  • Easy, comprehensive access control. More on this below.
  • Encryption. If other forms of security were perfect, encryption would never be needed. But they’re not.
  • Auditing. Ideally, auditing can alert you to trouble before (much) damage is done. If not, then it can at least help you do proactive damage control in the face of breach.
  • Whatever regulators mandate.
  • Whatever is generally regarded as best practices. Security “best practices” generally keep enterprises out of legal and regulatory trouble, or at least minimize same. They also keep employees out of legal and career trouble, or minimize same. Hopefully, they even keep data safe.
  • Whatever the government is known to use. This is a common proxy for “best practices”.

More specific or extreme requirements include: 

I don’t know how widely these latter kinds of requirements will spread.

The most confusing part of all this may be access control.

  • Security has a concept called AAA, standing for Authentication, Authorization and Accounting/Auditing/Other things that start with "A". Yes — even the core acronym in this area is ill-defined.
  • The new standard for authentication is Kerberos. Or maybe it’s SAML (Security Assertion Markup Language). But SAML is actually an old, now-fragmented standard. But it’s also particularly popular in new, cloud use cases. And Kerberos is actually even older than SAML.
  • Suppose we want to deny somebody authorization to access certain raw data, but let them see certain aggregated or derived information. How can we be sure they can’t really see the forbidden underlying data, except through a case-by-case analysis? And if that case-by-case analysis is needed, how can the authorization rules ever be simple?

Further confusing matters, it is an extremely common analytic practice to extract data from somewhere and put it somewhere else to be analyzed. Such extracts are an obvious vector for data breaches, especially when the target system is managed by an individual or IT-weak department. Excel-on-laptops is probably the worst case, but even fat-client BI — both QlikView and Tableau are commonly used with local in-memory data staging — can present substantial security risks. To limit such risks, IT departments are trying to impose new standards and controls on departmental analytics. But IT has been fighting that war for many decades, and it hasn’t won yet.

And that’s all when data is controlled by a single enterprise. Inter-enterprise data sharing confuses things even more. For example, national security breaches in the US tend to come from government contractors more than government employees. (Ed Snowden is the most famous example. Chelsea Manning is the most famous exception.) And as was already acknowledged above, even putting your data under control of a SaaS vendor opens hard-to-plug security holes.

Data security is a real mess.

Categories: Other

Light-touch managed services

DBMS2 - Wed, 2017-06-14 08:14

Cloudera recently introduced Cloudera Altus, a Hadoop-in-the-cloud offering with an interesting processing model:

  • Altus manages jobs for you.
  • But you actually run them on your own cluster, and so you never have to put your data under Altus’ control.

Thus, you avoid a potential security risk (shipping your data to Cloudera’s service). I’ve tentatively named this strategy light-touch managed services, and am interested in exploring how broadly applicable it might or might not be.

For light-touch to be a good approach, there should be (sufficiently) little downside in performance, reliability and so on from having your service not actually control the data. That assumption is trivially satisfied in the case of Cloudera Altus, because it’s not an ordinary kind of app; rather, its whole function is to improve the job-running part of your stack. Most kinds of apps, however, want to operate on your data directly. For those, it is more challenging to meet acceptable SLAs (Service-Level Agreements) on a light-touch basis.

Let’s back up and consider what “light-touch” for data-interacting apps (i.e., almost all apps) would actually mean. The basics are: 

  • The user has some kind of environment that manages data and executes programs.
  • The light-touch service, running outside this environment, spawns one or more app processes inside it.
  • Useful work ensues …
  • … with acceptable reliability and performance.
  • The environment’s security guarantees ensure that data doesn’t leak out.

Cases where that doesn’t even make sense include but are not limited to:

  • Transaction-processing applications that are carefully tuned for efficient database access.
  • Applications that need to be carefully installed on or in connection with a particular server, DBMS, app server or whatever.

On the other hand:

  • A light-touch service is at least somewhat reasonable in connection with analytics-oriented data-management-plus-processing environments such as Hadoop/Spark clusters.
  • There are many workloads over Hadoop clusters that don’t need efficient database access. (Otherwise Hive use would not be so prevalent.)
  • Light-touch efforts seem more likely to be helped than hurt by abstraction environments such as the public cloud.

So we can imagine some kind of outside service that spawns analytic jobs to be run on your preferred — perhaps cloudy — Hadoop/Spark cluster. That could be a safe way to get analytics done over data that really, really, really shouldn’t be allowed to leak.

But before we anoint light-touch managed services as the NBT (Next Big Thing/Newest Bright Thought), there’s one more hurdle for it to overcome — why bother at all? What would a light-touch managed service provide that you wouldn’t also get from installing packaged software onto your cluster and running it in the usual way? The simplest answer is “The benefits of SaaS (Software as a Service)”, and so we can rephrase the challenge as “Which benefits of SaaS still apply in the light-touch managed service scenario?”

The vendor perspective might start, with special cases such as Cloudera Altus excepted:

  • The cost-saving benefits of multi-tenancy mostly don’t apply. Each instance winds up running on a separate cluster, namely the customer’s own. (But that’s likely to be SaaS/cloud itself.)
  • The benefits of controlling your execution environment apply at best in part. You may be able to assume the customer’s core cluster is through some cloud service, but you don’t get to run the operation yourself.
  • The benefits of a SaaS-like product release cycle do mainly apply.
    • Only having to support the current version(s) of the product is a little limited when you don’t wholly control your execution environment.
    • Light-touch doesn’t seem to interfere with the traditional SaaS approach of a rapid, incremental product release cycle.

When we flip to the user perspective, however, the idea looks a little better.

Bottom line: Light-touch managed services are well worth thinking about. But they’re not likely to be a big deal soon.

Categories: Other

Cloudera Altus

DBMS2 - Wed, 2017-06-14 08:12

I talked with Cloudera before the recent release of Altus. In simplest terms, Cloudera’s cloud strategy aspires to:

  • Provide all the important advantages of on-premises Cloudera.
  • Provide all the important advantages of native cloud offerings such as Amazon EMR (Elastic MapReduce), or at least come sufficiently close to that goal.
  • Benefit from customers’ desire to have on-premises and cloud deployments that work:
    • Alike in any case.
    • Together, to the extent that that makes use-case sense.

In other words, Cloudera is porting its software to an important new platform.* And this port isn’t complete yet, in that Altus is geared only for certain workloads. Specifically, Altus is focused on “data pipelines”, aka data transformation, aka “data processing”, aka new-age ETL (Extract/Transform/Load). (Other kinds of workload are on the roadmap, including several different styles of Impala use.) So what about that is particularly interesting? Well, let’s drill down.

*Or, if you prefer, improving on early versions of the port.

Since so much of the Hadoop and Spark stacks is open source, competition often isn’t based on core product architecture or features, but rather on factors such as:

  • Ease of management. This one is nuanced in the case of cloud/Altus. For starters:
    • One of Cloudera’s main areas of differentiation has always been Cloudera Manager.
    • Cloudera Director was Cloudera’s first foray into cloud-specific management.
    • Cloudera Altus features easier/simpler management than Cloudera Director, meant to be analogous to native Amazon management tools, and good-enough for use cases that don’t require strenuous optimization.
    • Cloudera Altus also includes an optional workload analyzer, in slight conflict with other parts of the Altus story. More on that below.
  • Ease of development. Frankly, this rarely seems to come up as a differentiator in the Hadoop/Spark world, various “notebook” offerings such as Databricks’ or Cloudera’s notwithstanding.
  • Price. When price is the major determinant, Cloudera is sad.
  • Open source purity. Ditto. But at most enterprises — at least those with hefty IT budgets — emphasis on open source purity either is a proxy for price shopping, or else boils down to largely bogus concerns about vendor lock-in.

Of course, “core” kinds of considerations are present to some extent too, including:

  • Performance, concurrency, etc. I no longer hear many allegations of differences in across-the-board Hadoop performance. But the subject does arise in specific areas, most obviously in analytic SQL processing. It arises in the case of Altus as well, in that Cloudera improved in a couple of areas that it concedes were previously Amazon EMR advantages, namely:
    • Interacting with S3 data stores.
    • Spinning instances up and down.
  • Reliability and data safety. Cloudera mentioned that it did some work so as to be comfortable with S3’s eventual consistency model.

Recently, Cloudera has succeeded at blowing security up into a major competitive consideration. Of course, they’re trying that with Altus as well. Much of the Cloudera Altus story is the usual — rah-rah Cloudera security, Sentry, Kerberos everywhere, etc. But there’s one aspect that I find to be simple yet really interesting:

  • Cloudera Altus doesn’t manage data for you.
  • Rather, it launches and manages jobs on a separate Hadoop cluster.

Thus, there are very few new security risks to running Cloudera Altus, beyond whatever risks are inherent to running any version of Hadoop in the public cloud.

Where things get a bit more complicated is some features for workload analysis.

  • Cloudera recently introduced some capabilities for on-the-fly trouble-shooting. That’s fine.
  • Cloudera has also now announced an offline workload analyzer, which compares actual metrics computed from your log files to “normal” ones from well-running jobs. For that, you really do have to ship information to a separate cluster managed by Cloudera.

The information shipped is logs rather than actual query results or raw data. In theory, an attacker who had all those logs could conceivably make inferences about the data itself; but in practice, that doesn’t seem like an important security risk at all.

So is this an odd situation where that strategy works, or could what we might call light-touch managed services turn out to be widespread and important? That’s a good question to address in a separate post.

Categories: Other

Oracle Code: The Microservices Beat Goes On

OTN TechBlog - Wed, 2017-06-14 08:07

As the Oracle Code tour makes its way around the world -- San Francisco, Austin, New York City, Washington DC, Toronto, London, Berlin, Prague, Moscow, Brussels, New Delhi, and Tokyo completed; Atlanta, Sao Paulo, Mexico City, Rishon LeZion, Sydney, Beijing, Bangalore, and Seoul still to come -- the microservices drumbeat continues.

Microservice development is a key theme across all of the Oracle Code events, and session speakers at the various events have come at microservices from a variety of technical and conceptual angles. The videos listed below are among the most-watched microservices sessions and interviews from various cities on the tour.


Hothead Games Fuels Business Growth with NetSuite OneWorld

Oracle Press Releases - Wed, 2017-06-14 07:55
Press Release
Hothead Games Fuels Business Growth with NetSuite OneWorld
Rapid OneWorld Implementation Gives Canadian Mobile Game Developer Operational Efficiency and Scalability

San Mateo, Calif.—Jun 14, 2017

Oracle NetSuite Global Business Unit, one of the world’s leading providers of cloud-based financials / ERP, HR, Professional Services Automation (PSA) and omnichannel commerce software suites, today announced that Canada-based Hothead Games, one of the top 100 mobile game developers in the world, implemented NetSuite OneWorld to manage its rapidly growing business. Hothead Games uses NetSuite OneWorld to manage mission-critical business processes including financial consolidation, accounting, purchasing, revenue recognition, multi-currency transactions and multi-country tax compliance. Since going live on NetSuite OneWorld in February 2016 after a rapid two-month implementation, Hothead Games has increased operational efficiency and gained the scalability needed to grow its business.

Based in Vancouver, British Columbia, Hothead Games is one of Canada's most successful digital entertainment companies, creating game titles that include Kill Shot, Rivals at War, and Big Win Hockey, which have attracted more than 250 million mobile downloads. In response to strong demand and new market opportunities, Hothead Games recently expanded its team by 50 percent, which made the company more complex to operate. To successfully manage this growth and reduce complexity, Hothead Games needed to replace its previous software system as it did not support enough simultaneous users to process month-end results in a timely manner and made it difficult for Hothead Games to understand its assets and liabilities in real-time, requiring a manual reporting process to reconcile different currencies. In addition, the previous accounting system was slow to generate forecasts and required manual processes to compare actuals against forecast.

“We were always looking for a way to bridge the gap between the analytics in US dollars and the financial results in Canadian dollars,” said Tim Holland, Hothead Games' Director of Finance.

NetSuite OneWorld supports 190 currencies, 20 languages, automated tax calculation and reporting in more than 100 countries, and transactions in more than 200 countries. NetSuite OneWorld delivers the following benefits to Hothead Games:

  • Rapid implementation. The company was up and running on NetSuite OneWorld within two months of purchase, speeding time to value.
  • Scalability for growth. NetSuite OneWorld’s single cloud solution allows Hothead Games to expand seamlessly as the business grows.
  • Real-time visibility. NetSuite OneWorld’s unified platform gives Hothead Games visibility across its operations with one unified financial system of record to provide visibility into standard costs with detailed breakdowns. NetSuite OneWorld also gives Hothead Games access to the insights it needs to refine strategy and spending in a highly competitive global market where tastes and technology can change overnight.
  • Data-driven decisions. With a unified platform, the company has real-time access to a single source of data across its entire enterprise to make more informed decisions.
  • Faster, more accurate reporting. NetSuite OneWorld's reporting capabilities have given Hothead Games real-time insight into payroll, marketing expenditures and spending at a location- and project-based level, enabling faster financial close. NetSuite OneWorld also makes it easier for Hothead Games to track and report financials for the Canadian Scientific Research and Experimental Development (SR&ED) Program, an incentive that helps offset certain wage expenses.
  • A broad range of robust international capabilities. NetSuite OneWorld provides a highly scalable system for growth with the ability to quickly and easily add global functionality as Hothead Games expands into new markets.

“Our business is always changing, so we are always refining forecasts, and NetSuite OneWorld ensures we are all on the same page when making key financial decisions,” Holland added.

Contact Info
Christine Allen
Public Relations, Oracle NetSuite Global Business Unit
603-743-4534
PR@netsuite.com
About Oracle NetSuite Global Business Unit

Oracle NetSuite Global Business Unit pioneered the Cloud Computing revolution in 1998, establishing the world's first company dedicated to delivering business applications over the internet. Today, Oracle NetSuite Global Business Unit provides a suite of cloud-based financials / Enterprise Resource Planning (ERP), HR and omnichannel commerce software that runs the business of companies in more than 100 countries. For more information, please visit http://www.netsuite.com.

Follow Oracle NetSuite Global Business Unit’s Cloud blog, Facebook page and @NetSuite Twitter handle for real-time updates.

About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Christine Allen

  • 603-743-4534

Data Guard physical standby database redo apply lag time about one month

Tom Kyte - Wed, 2017-06-14 07:26
SYS@standby28> select database_role,open_mode from v$database; DATABASE_ROLE OPEN_MODE -------------------------------- ---------------------------------------- PHYSICAL STANDBY READ ONLY WITH APPLY ...
Categories: DBA Blogs

PLS-00307

Tom Kyte - Wed, 2017-06-14 07:26
Hello! I have the following error: G := GRADO_TYP(21, 30, 50); * ERROR at line 9: ORA-06550: line 9, column 10: PLS-00307: too many declarations of 'GRADO_TYP' match this call ORA-06550: line 9, column 5: PL/SQL: Statement...
Categories: DBA Blogs

Close cursors, reduce session_cached_cursors value

Tom Kyte - Wed, 2017-06-14 07:26
Hi Chris/Connor Can you please help to understand below issue and steps to follow to identify root cause. 04025, 00000, "maximum allowed library object lock allocated for %s%s%s%s%s" // *Cause: Too many active locks for the object has been all...
Categories: DBA Blogs

Index

Tom Kyte - Wed, 2017-06-14 07:26
Hi Tom, this is Naveen Pathuri, a fresher who is new to his role as a PL/SQL developer! First of all, sorry for asking a question which you might have answered 1000 times. But I'm still unsure about indexes and the way they perform an efficient tab...
Categories: DBA Blogs

Oracle consuming too much disk resource! 100%.

Tom Kyte - Wed, 2017-06-14 07:26
My Oracle database is consuming too many resources from my disk. No matter the query, even ones that are not very complex, it kills my system. I really need some light to solve it. Thank you. Using this script: SELECT * FROM ( SELECT ROUND ( ( (cpu...
Categories: DBA Blogs

How to hide the password needed to connect to the Oracle database as a particular user in a shell script file?

Tom Kyte - Wed, 2017-06-14 07:26
;;; Export: Release 11.2.0.4.0 - Production on Tue Jun 13 04:00:02 2017 Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved. ;;; Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Pr...
Categories: DBA Blogs

Oracle 12.2 – How to rotate the 12.2 listener log (DIAG_ADR_ENABLED_LISTENER = OFF) – Follow Up

Yann Neuhaus - Wed, 2017-06-14 02:14

My blog post about listener rotation caused some discussion, which is great. :-) It sounds like easy stuff, but in the case of listener logging it isn't.

https://blog.dbi-services.com/oracle-12-2-how-to-rotate-the-12-2-listener-log-diag_adr_enabled_listener-off/

Many ways exist to rotate the listener log, but I was trying to point out some issues, because there are a few, e.g.:

  1. What happens when the log file reaches 4G?
  2. What are the performance implications?
  3. What happens if I move the listener.log while the listener is running (Open file descriptor)?
  4. And how to rotate the listener log with minimal impact?

The first two points have already been discussed in the previous post, so I’m not going deeper into those. Let’s take a look at the other ones, starting with the file descriptor issue. In the output below, you can see that the listener has an open file descriptor “3w” pointing to “/u01/app/oracle/network/log/listener.log”. It can be quite easily identified by using the lsof utility (list open files).

oracle@dbidg01:/u01/app/oracle/ [DBIT122] ps -ef | grep tnslsnr | grep -v grep
oracle    4686     1  0 07:59 ?        00:00:00 /u01/app/oracle/product/12.2.0/dbhome_1/bin/tnslsnr LISTENER -inherit

oracle@dbidg01:/u01/app/oracle/ [DBIT122] lsof -p 4686 | grep listener.log
COMMAND  PID   USER   FD      TYPE             DEVICE SIZE/OFF      NODE NAME
tnslsnr 4686 oracle    3w      REG              251,4    55239 141607653 /u01/app/oracle/network/log/listener.log

So... what happens if I move the listener log while the listener is running?

oracle@dbidg01:/u01/app/oracle/network/log/ [DBIT122] mv listener.log listener.log.1
oracle@dbidg01:/u01/app/oracle/network/log/ [DBIT122]

And to have some data, I’m doing some sqlplus sessions here to generate more log entries …

oracle@dbidg01:/u01/app/oracle/network/log/ [DBIT122] sqlplus hr/hr@DBIT122_SITE1
... 

oracle@dbidg01:/u01/app/oracle/network/log/ [DBIT122] sqlplus hr/hr@DBIT122_SITE1
... 

oracle@dbidg01:/u01/app/oracle/network/log/ [DBIT122] ls -l listener.log
ls: cannot access listener.log: No such file or directory

oracle@dbidg01:/u01/app/oracle/network/log/ [DBIT122] lsof -p 4686 | grep listener.log
COMMAND  PID   USER   FD      TYPE             DEVICE SIZE/OFF      NODE NAME
tnslsnr 4686 oracle    3w      REG              251,4    56143 141607653 /u01/app/oracle/network/log/listener.log.1

As you can see, a new listener.log is not automatically created, and the file descriptor “3w” now points to /u01/app/oracle/network/log/listener.log.1.

That’s why it is not a good idea to move the listener log without stopping listener logging first. And please don’t try to send a hangup signal to the listener. The listener does not understand the hangup signal (kill -HUP) and you would kill it immediately. ;-)

OK, let’s get back to the question: how do we rotate the listener log with minimal impact?

If you have activated listener logging, then there is a reason for doing so and you don’t want to lose data from that log file. If losing any listener log data is not acceptable, you have to stop the listener, rotate the log and start the listener again. There is no way around that. Of course this has the disadvantage that new sessions cannot be established during that time.

However, if you want the listener to stay up and running, and rotate the listener log with an absolute minimum of log data loss (we are talking about milliseconds here), then I would use the approach which I have already described in my previous post.

I have quickly written a listener rotate script which demonstrates what this can look like. I know that the script is not foolproof, and a lot of extra checks and tweaks could be built in, but you will get an idea of how it can look. The script takes two parameters: the first is the listener name, and the second is the number of days the listener logs should be kept on disk. Everything older than that will be removed.

oracle@dbidg01:/home/oracle/ [DBIT122] ./rotate_listener.sh LISTENER 7
INFO: Check if Listener LISTENER is running
INFO: Start Listener Rotating for LISTENER
INFO: Current Listener Logging for LISTENER is: ON
INFO: Turning Listener Logging for LISTENER OFF
INFO: Current Listener Logging for LISTENER is: OFF
INFO: Rotating Listener Log /u01/app/oracle/network/log/listener.log to /u01/app/oracle/network/log/listener.log.1497363899
INFO: Turning on Listener Logging for LISTENER
INFO: Current Listener Logging for LISTENER is: ON
INFO: Rotated successfully Listener Log for LISTENER
INFO: Starting cleanup of old listener log files
INFO: Will delete the following log files
/u01/app/oracle/network/log/listener.log.1497354123
/u01/app/oracle/network/log/listener.log.1497354122
/u01/app/oracle/network/log/listener.log.1497354121
INFO: Finished cleanup

And here are the contents of the script.

oracle@dbidg01:/home/oracle/ [DBIT122] cat rotate_listener.sh
#!/bin/bash
#
#-- Listener Name => parameter 1
#-- Delete listener log files older than number of days => parameter 2
#

#-- set -x

ListenerName=$1
NumberOfDays=$2
#-- %s  seconds since 1970-01-01 00:00:00 UTC
Date=`date +%s`

#-- Check if variable $1 and $2 are empty
if [ -z ${ListenerName} ]; then
        echo ""
        echo "INFO: Please specify the listener name"
        echo "INFO: e.g. ./rotate_listener.sh LISTENER 7"
        echo ""
        exit
fi

if [ -z ${NumberOfDays} ]; then
        echo ""
        echo "INFO: Please specify the number of days"
        echo "INFO: e.g. ./rotate_listener.sh LISTENER 7"
        echo ""
        exit
fi

echo "INFO: Check if Listener ${ListenerName} is running"

ps -ef | grep "tnslsnr ${ListenerName} -inherit" | grep -v grep >/dev/null 2>&1
if [ $? != 0 ]; then
        echo "INFO: Listener ${ListenerName} is not running ... will exit here"
        exit
fi

#-- Set the listener log file
ListenerLogFile=`lsnrctl status ${ListenerName} | grep "Listener Log File" | awk '{ print $4 }'`


echo "INFO: Start Listener Rotating for ${ListenerName}"

#-- Check listener log status
ListenerLogStatus=`lsnrctl <<-EOF  | grep log_status | awk '{ print $6 }'
set displaymode normal
set current_listener ${ListenerName}
show log_status
EOF`

if [ ${ListenerLogStatus} = "ON" ]; then
echo "INFO: Current Listener Logging for ${ListenerName} is: ${ListenerLogStatus}"
echo "INFO: Tuning Listener Logging for ${ListenerName} OFF"

ListenerLogStatus=`lsnrctl <<-EOF | grep log_status | awk '{ print $6 }'
set displaymode normal
set current_listener ${ListenerName}
set log_status off
EOF`

echo "INFO: Current Listener Logging for ${ListenerName} is: ${ListenerLogStatus}"
 if [ ${ListenerLogStatus} = "OFF" ]; then
   echo "INFO: Rotating Listener Log ${ListenerLogFile} to ${ListenerLogFile}.${Date}"
        mv ${ListenerLogFile} ${ListenerLogFile}.${Date}
    echo "INFO: Turning on Listener Logging for ${ListenerName}"

ListenerLogStatus=`lsnrctl <<-EOF | grep log_status | awk '{ print $6 }'
set displaymode normal
set current_listener ${ListenerName}
set log_status on
EOF`

echo "INFO: Current Listener Logging for ${ListenerName} is: ${ListenerLogStatus}"
echo "INFO: Rotated successfully Listener Log for ${ListenerName}"
 fi
fi

echo "INFO: Starting cleanup of old listener log files"
echo "INFO: Will delete the following log files"
ListenerLogDirectory=`dirname $ListenerLogFile`
find ${ListenerLogDirectory} -name "listener.log.*" -mtime +${2} -print
find ${ListenerLogDirectory} -name "listener.log.*" -mtime +${2} -print | xargs rm -f

echo "INFO: Finished cleanup"

#-- EOF
Conclusion

If you run that rotate listener log script at a time (e.g. in the middle of the night) when you expect minimal activity on the DB, you can minimize the chance of losing listener log entries even further.

 

This article, Oracle 12.2 – How to rotate the 12.2 listener log (DIAG_ADR_ENABLED_LISTENER = OFF) – Follow Up, appeared first on Blog dbi services.
