
Feed aggregator

"Why Not Oracle Cloud?" for Fast-Growing, Mid-Sized Organizations

Linda Fishman Hoyle - Mon, 2014-11-17 15:11

In the past, mid-size and smaller companies have had to settle for “lightweight or scoped down ERP solutions.” Generally, these tier-2 solutions don’t have the functionality that a company needs to grow and operate globally.

In this article in Profit magazine, Oracle’s Rondy Ng, Senior Vice President of Applications Development, advises companies to choose wisely when looking at cloud-based ERP solutions to avoid expense, risk, and disruption down the road.

Ng asserts that Oracle ERP Cloud is the ERP solution for Fortune 500 companies, as well as for those who don’t have any designs to be one. There’s no need to settle. He makes a great case for choosing cloud and choosing Oracle.

New Interaction Hub Data Sheet Available

PeopleSoft Technology Blog - Mon, 2014-11-17 14:55

In support of the recently released Revision 3 of the PeopleSoft Interaction Hub, we've just produced the latest data sheet for the Hub, which can be found here.  This paper covers the highlights of the new release and describes our overall direction for the product.  The prime topics we cover are as follows:

  • Setting up and running a cluster of PeopleSoft applications using the Interaction Hub
  • Content Management
  • Branding and the User Experience
  • Using the Hub with the new Fluid User Interface
There is much more collateral about the Interaction Hub on My Oracle Support.

Visualization of how undergraduate tuition has increased over the years

Nilesh Jethwa - Mon, 2014-11-17 12:54

Average undergraduate tuition and fees and room and board rates

Source: http://nces.ed.gov/

[chart: average undergraduate tuition and fees and room and board rates]

These figures are inflation adjusted; look at how much the tuition fees have increased compared to the dorm and board rates.

Now let's compare the rate increases for the 2-year program.

[chart: rate increases for the 2-year program]

So for the 2-year program, the board rates have remained at roughly the same level compared to the dorm rates.

Now check out the interesting graph for the 4-year program below.

[chart: rate increases for the 4-year program]

 

Comparing the slope of the 2-year board rates to the 4-year board rates, the 4-year rates show a significantly larger increase.

[chart: 2-year vs. 4-year board rates]

If the price of meals is the same for both programs, then the 4-year and 2-year programs should have the same slope. So why is the 4-year slope different from the 2-year slope?

Now, let's look at the dorm rates.

[chart: 2-year vs. 4-year dorm rates]

 

And finally, the 4-year vs. 2-year tuition rates.

[chart: 2-year vs. 4-year tuition rates]

Here is the data table for the above visualization

Musings on Samsung Developer Conference 2014

Oracle AppsLab - Mon, 2014-11-17 11:18

This year some of us at the AppsLab attended the Samsung Developer Conference, aka #SDC2014. Last year was Samsung's first attempt and we were also there. The quality and caliber of the presentations increased tenfold from last year. Frankly, Samsung is making it really hard to resist joining their ecosystem.


Here are some of the trends I observed:

Wearables and Health:

There was a huge emphasis on Samsung's commitment to wearable technology. They released a new Tizen-based smartwatch (Samsung Gear S) as well as a biometric reference design, hardware and software, called SIMBAND. Along with their wearable strategy they also released S.A.M.I, a cloud repository to store all this data. All this ties together with their vision of "Voice of the Body."

Voice of the Body from Samsung on Vimeo.

During the second day keynote we got to hear from Mounir Zok, Senior Sports Technologist of the United States Olympic Committee. He told us how wearable technology is changing the way Olympic athletes train. It was only a couple of years ago that athletes still had to go to a lab and "fake" actual activities to get feedback. Now they can get real data on the field thanks to wearable technology.

Virtual Reality:

Samsung released the Gear VR in partnership with Oculus. These goggles only work with a Galaxy Note 4 mounted in the front. The gaming experiences with this VR device are amazing, but they are also exploring other use cases like virtual tourism and virtual movie experiences. They also released a 3D, 360-degree spherical-view camera called "Project Beyond."

IoT – Home Automation:

Samsung is betting big on IoT and home automation, and they are putting their money where their mouth is by acquiring SmartThings. The SmartThings platform is open source and has the ability to integrate with a myriad of other home automation products. They showcased a smart home powered by the SmartThings platform.

Mobile Innovation: 

I actually really like their new Galaxy Note Edge phablet. Samsung is showing true innovation here with the "edge" part of the device. It has its own SDK and it feels great in the hand!

Overall I'm pretty impressed with what Samsung is doing. It seems like their spaghetti-on-the-wall approach (throwing a bunch of spaghetti and seeing what sticks) is starting to pay off.  Their whole UX across devices looks seamless. And in my humble opinion they are getting ready to take off on their own without having to use Android for their mobile devices. Tizen keeps maturing, but I shall leave that for another post!

Please feel free to share your experience with Samsung devices as well!

Asteroid Hackathon – The Winning Team

Oracle AppsLab - Mon, 2014-11-17 09:57

Editorial Note: This is a guest post by friend of the ‘Lab and colleague DJ Ursal. Also be sure to check out our Hackathon entry here:


EchoUser (@EchoUser), in partnership with SpaceGAMBIT, Maui Makers, the Minor Planet Center, NASA, the SETI Institute, and Further by Design, hosted an Asteroid Hackathon. The event was in response to the NASA Grand Challenge, “focused on finding all asteroid threats to human populations and knowing what to do about them.”

I had a wonderful opportunity to participate in the Asteroid Hackathon last week. My team name was NOVA. Our team comprised 4 members: DJ Ursal, Kris Robison, Daniel Schwartz, and Raj Krishnamurthy.

We were given live data from NASA and the Minor Planet Center site and had literally just 5 hours to put together a working prototype and solution to the asteroid big data problem.  We created a web application (it works not only on your Mac or PC but also on your iPad and the latest Nexus 7 Android devices) to help scientists, astronomers, and anyone who is interested in asteroids discover, learn, and share information in a fun and interactive way.


Our main theme was Finding Asteroids Before They Find Us. The goal was to help discover, learn, and share asteroid information to increase awareness within the community.  We created an interactive web app that allowed users to use chart filters to find out about the risk of possible future impact with Earth, as well as the distance of each asteroid to Earth, its absolute brightness, and its rotation. It allowed users to click and drag on any chart to filter, so that they could combine the filters in a multidimensional way to explore, discover interesting facts, and share data on asteroids with friends and the community. We made use of Major Tom, the astronaut referenced in David Bowie's song "Space Oddity," which depicts an astronaut who casually slips the bonds of the world to journey beyond the stars. Users could post questions to Major Tom and could also play his song.

The single most important element in winning this hackathon was team composition: having a team that works together effectively. Collaboration and communication were the two most critical personal skills demanded of all members, as time was limited and communication and coordination were of the utmost importance.

Winning Team NOVA: DJ Ursal, Kris Robison, Daniel Schwartz, Raj Krishnamurthy

November 20: ICA Fluor Taleo Reference Forum (Spanish)

Linda Fishman Hoyle - Mon, 2014-11-17 09:49

Join us for an Oracle Taleo Customer Reference Forum on Thursday, November 20, 2014, at 2:00 p.m. CDT with ICA Fluor, presented for Spanish-speaking audiences. ICA Fluor is a joint venture between the Mexican construction and engineering company ICA and Fluor Corporation, one of the world's largest engineering, procurement, and construction companies.

Since 1993, ICA Fluor has had exclusive rights for both companies for the development of industrial projects in Mexico, Central America, and the Caribbean.

In this session, Mr. Jorge Luis Becerril Sanchez, ICA Fluor’s  HR Manager, shares why they chose to implement Oracle Taleo Recruiting.

Invite your customers and prospects. You can register now to attend the live Spanish Forum session on Thursday, November 20 at 2:00 p.m. CDT and learn more from ICA Fluor directly.

Asteroid Hackathon

Oracle AppsLab - Mon, 2014-11-17 09:49

A couple weeks ago Jeremy Ashley (@jrwashley), Bill Kraus, Raymond Xie and I participated in the Asteroid Hackathon hosted by @EchoUser. The main focus was “to engage astronomers, other space nerds, and the general public, with information, not just data.”


As you might already know, we here at the AppsLab are big fans of hackathons as well as ShipIt days or FedEx days. The ability to get together, put our collective minds to work, and create something in a short amount of time is truly amazing. It also helps to keep us on our toes, technically and creatively.

Our team built what we called "The Daily Asteroid." The idea behind our project was to highlight the asteroid profile for the current date's closest approach to Earth, using near-Earth object (NEO) data. In other words, it shows which asteroid is closest to Earth today. A user could "favorite" today's asteroid and start a conversation with other users about it, using a social network like Twitter.


We also added the ability to change the asteroid properties (size, type, velocity, angle) and play a scenario to see what damage it could cause if it hit the Earth. And to finish up, we created an Asteroid Hotline using Twilio (@twilio) that you can call to get the latest NEO info using your phone!

We were lucky to be awarded 3rd place, or "Best Engagement," and we had a blast doing it. Considering the small amount of time we had, we came out really proud of our results.

Oracle locks: Identifying blocking sessions

Yann Neuhaus - Mon, 2014-11-17 09:17

When you have sessions blocked on locks, you probably have all the information about the waiters (they call you and, anyway, their waiting session is visible in V$SESSION or ASH). But you usually need to get enough information to help identify the blocker.

Here is a query I use to get that quickly, based on V$WAIT_CHAINS

Here is the result I want to get:

session                 wait event                                      minutes USER PRO
----------------------- ----------------------------------------------- ------- ---- ---
 ABCLBP1 '831,54109@1'  SQL*Net message from client                        13.5 SYS  sql
  ABCLBP4 '395,21891@4' enq: TX - row lock contention on TABLE             13.2 SYS  SQL
                          "SYS"."TEST_FRANCK" on rowid AAC0aCAAnAAABSCAAA

I have information about the blocking session, the waiting session, the type of lock (here TX - row lock) and, because it is a row lock, I want to know the ROWID of the locked row.

Here is the query I used to get it:

column "wait event" format a50 word_wrap
column "session" format a25
column "minutes" format 9999D9
column CHAIN_ID noprint
column N noprint
column l noprint
with w as (
select
 chain_id,rownum n,level l
 ,lpad(' ',level,' ')||(select instance_name from gv$instance where inst_id=w.instance)||' '''||w.sid||','||w.sess_serial#||'@'||w.instance||'''' "session"
 ,lpad(' ',level,' ')||w.wait_event_text ||
   case
   when w.wait_event_text like 'enq: TM%' then
    ' mode '||decode(w.p1 ,1414332418,'Row-S' ,1414332419,'Row-X' ,1414332420,'Share' ,1414332421,'Share RX' ,1414332422,'eXclusive')
     ||( select ' on '||object_type||' "'||owner||'"."'||object_name||'" ' from all_objects where object_id=w.p2 )
   when w.wait_event_text like 'enq: TX%' then
    (
     select ' on '||object_type||' "'||owner||'"."'||object_name||'" on rowid '
     ||dbms_rowid.rowid_create(1,data_object_id,relative_fno,w.row_wait_block#,w.row_wait_row#)
     from all_objects ,dba_data_files where object_id=w.row_wait_obj# and w.row_wait_file#=file_id
    )
   end "wait event"
 , w.in_wait_secs/60 "minutes"
 , s.username , s.program
 from v$wait_chains w join gv$session s on (s.sid=w.sid and s.serial#=w.sess_serial# and s.inst_id=w.instance)
 connect by prior w.sid=w.blocker_sid and prior w.sess_serial#=w.blocker_sess_serial# and prior w.instance = w.blocker_instance
 start with w.blocker_sid is null
)
select * from w where chain_id in (select chain_id from w group by chain_id having max("minutes") >= 1 and max(l)>1 )
order by n
/

This query retrieves the wait chains where a session is waiting for more than one minute on a table lock (TM) or row lock (TX).

When it is a table lock (TM), I get the locked object_id from the P2 parameter, in order to know the table name.

When it is a row lock, I get the table and rowid from V$SESSION. Note that I have to join with dba_data_files in order to convert the absolute file_id to a relative one, and to join to all_objects in order to convert the object_id to the data_object_id - in order to build the ROWID.

More information about ROWID, relative file number and data object id in my previous post: From 8.0 extended rowid to 12c pluggable db: Why Oracle Database is still a great software
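
Once the blocker is identified, a common next step is to kill its session. Here is a minimal sketch, using the hypothetical SID, serial# and instance number taken from the sample output above ('831,54109@1'); substitute the values returned by the query:

-- the identifier is 'sid,serial#,@inst_id' - the values here are only illustrative
alter system kill session '831,54109,@1' immediate;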

 

Think Stats, 2nd Edition Exploratory Data Analysis By Allen B. Downey; O'Reilly Media

Surachart Opun - Mon, 2014-11-17 08:15
There are lots of Python books on data analysis. This might be a good one to help readers perform statistical analysis with programs written in Python: Think Stats, 2nd Edition Exploratory Data Analysis by Allen B. Downey (@allendowney).
This second edition of Think Stats includes the chapters from the first edition, many of them substantially revised, and new chapters on regression, time series analysis, survival analysis, and analytic methods. Additionally, it uses pandas, SciPy, and StatsModels in Python. The author developed this book using Anaconda from Continuum Analytics, and readers should use it as well, as that will make things easier for them. Anyway, I tested on Ubuntu and installed the pandas, NumPy, SciPy, StatsModels, and matplotlib packages. This book has 14 chapters that follow the processes the author works through with a dataset. It's for intermediate readers, so readers should know how to program (the book uses Python) and have some mathematical and statistical skills.
Each chapter includes exercises that readers can practice with to deepen their understanding. Free Sampler
  • Develop an understanding of probability and statistics by writing and testing code.
  • Run experiments to test statistical behavior, such as generating samples from several distributions.
  • Use simulations to understand concepts that are hard to grasp mathematically.
  • Import data from most sources with Python, rather than rely on data that’s cleaned and formatted for statistics tools.
  • Use statistical inference to answer questions about real-world data.
surachart@surachart:~/ThinkStats2/code$ pwd
/home/surachart/ThinkStats2/code
surachart@surachart:~/ThinkStats2/code$ ipython notebook  --ip=0.0.0.0 --pylab=inline &
[1] 11324
surachart@surachart:~/ThinkStats2/code$ 2014-11-17 19:39:43.201 [NotebookApp] Using existing profile dir: u'/home/surachart/.config/ipython/profile_default'
2014-11-17 19:39:43.210 [NotebookApp] Using system MathJax
2014-11-17 19:39:43.234 [NotebookApp] Serving notebooks from local directory: /home/surachart/ThinkStats2/code
2014-11-17 19:39:43.235 [NotebookApp] The IPython Notebook is running at: http://0.0.0.0:8888/
2014-11-17 19:39:43.236 [NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
2014-11-17 19:39:43.236 [NotebookApp] WARNING | No web browser found: could not locate runnable browser.
2014-11-17 19:39:56.120 [NotebookApp] Connecting to: tcp://127.0.0.1:38872
2014-11-17 19:39:56.127 [NotebookApp] Kernel started: f24554a8-539f-426e-9010-cb3aa3386613
2014-11-17 19:39:56.506 [NotebookApp] Connecting to: tcp://127.0.0.1:43369
2014-11-17 19:39:56.512 [NotebookApp] Connecting to: tcp://127.0.0.1:33239
2014-11-17 19:39:56.516 [NotebookApp] Connecting to: tcp://127.0.0.1:54395
Book: Think Stats, 2nd Edition Exploratory Data Analysis
Author: Allen B. Downey (@allendowney)
Written by: Surachart Opun http://surachartopun.com
Categories: DBA Blogs

Plan puzzle

Jonathan Lewis - Mon, 2014-11-17 07:43

I was in Munich a few weeks ago running a course on Designing Optimal SQL and Troubleshooting and Tuning, but just before I flew in to Munich one of the attendees emailed me with an example of a statement that behaved a little strangely and asked me if we could look at it during the course.  It displays an odd little feature, and I thought it might be interesting to write up what I did to find out what was going on. We’ll start with the problem query and execution plan:


select     s section_size,
           max(program_id) ,avg(program_id)
from       fullclones_test
cross join  (select distinct section_size s from fullclones_test)
where      section_size = (select min(section_size) from fullclones_test)
and        clone_size >= s
group by   s
order by   1; 

Since I found this a little difficult to follow (with repetitions of the same table name, and column aliases switching back and forth) I did a little cosmetic modification; all I’ve done is add table aliases and query block names, then arranged the text for my visual benefit. The aliases and query block names can help when dissecting the anomaly.


select
	/*+ qb_name(main) */
	max(ft1.program_id),
	avg(ft1.program_id),
	ftv.s	section_size
from
	fullclones_test	ft1
cross join
	(
	select	/*+ qb_name(inline) */
		distinct ft2.section_size s
	from	fullclones_test	ft2
	)	ftv
where	ft1.section_size = (
		select 	/*+ qb_name(subq) */
			min(section_size)
		from	fullclones_test	ft3
	)
and	ft1.clone_size >= ftv.s
group by
	ftv.s
order by
	ftv.s
; 

This query ran reasonably quickly (about half a second), producing the following execution plan:


--------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation              | Name            | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |                 |      1 |        |   404 (100)|      4 |00:00:00.05 |    1116 |       |       |          |
|   1 |  SORT GROUP BY NOSORT  |                 |      1 |      5 |   404   (6)|      4 |00:00:00.05 |    1116 |       |       |          |
|   2 |   MERGE JOIN           |                 |      1 |  48637 |   299   (7)|  50361 |00:00:00.58 |    1116 |       |       |          |
|   3 |    SORT JOIN           |                 |      1 |      5 |   114  (11)|      5 |00:00:00.02 |     372 |  2048 |  2048 | 2048  (0)|
|   4 |     VIEW               |                 |      1 |      5 |   114  (11)|      5 |00:00:00.02 |     372 |       |       |          |
|   5 |      HASH UNIQUE       |                 |      1 |      5 |   114  (11)|      5 |00:00:00.02 |     372 |  5894K|  3254K|  884K (0)|
|   6 |       TABLE ACCESS FULL| FULLCLONES_TEST |      1 |  99999 |   105   (3)|  99999 |00:00:00.31 |     372 |       |       |          |
|*  7 |    SORT JOIN           |                 |      5 |  20000 |   185   (4)|  50361 |00:00:00.16 |     744 |   619K|   472K|  550K (0)|
|*  8 |     TABLE ACCESS FULL  | FULLCLONES_TEST |      1 |  20000 |   106   (4)|  20076 |00:00:00.09 |     744 |       |       |          |
|   9 |      SORT AGGREGATE    |                 |      1 |      1 |            |      1 |00:00:00.01 |     372 |       |       |          |
|  10 |       TABLE ACCESS FULL| FULLCLONES_TEST |      1 |  99999 |   105   (3)|  99999 |00:00:00.29 |     372 |       |       |          |
--------------------------------------------------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$071BB01A
   4 - INLINE       / FTV@SEL$1
   5 - INLINE   
   6 - INLINE       / FT2@INLINE   
   8 - SEL$071BB01A / FT1@SEL$1
   9 - SUBQ 
  10 - SUBQ         / FT3@SUBQ

Predicate Information (identified by operation id):
---------------------------------------------------
   7 - access("FT1"."CLONE_SIZE">="FTV"."S")
       filter("FT1"."CLONE_SIZE">="FTV"."S")
   8 - filter("FT1"."SECTION_SIZE"=)

As you can see by comparing the block name / object alias information, we can identify a single full tablescan being executed at line 9 to produce the min(section_size) in the subquery.

We can also see that the “select distinct” executes at lines 5/6 to produce 5 rows which are then joined with a merge join to the first full tablescan of t1.

If you’re wondering about the appearance of a sel$1 despite my efforts to name every query block, that’s the (almost) inevitable side effect of using ANSI syntax – virtually every join after the first two tables will introduce a new (unnameable) query block to introduce the next table.

Now here’s the anomaly: if we eliminate the avg() from the select list we’re going to produce a result that ought to require less work – but look what happens:

--------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation              | Name            | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |                 |      1 |        | 10802 (100)|      4 |00:02:48.83 |    1116 |       |       |          |
|   1 |  SORT GROUP BY NOSORT  |                 |      1 |      5 | 10802  (94)|      4 |00:02:48.83 |    1116 |       |       |          |
|   2 |   MERGE JOIN           |                 |      1 |    972M| 10697  (95)|   1007M|03:21:28.41 |    1116 |       |       |          |
|   3 |    SORT JOIN           |                 |      1 |  99999 |   380   (4)|  80042 |00:00:00.39 |     372 |  2037K|   674K| 1810K (0)|
|   4 |     TABLE ACCESS FULL  | FULLCLONES_TEST |      1 |  99999 |   105   (3)|  99999 |00:00:00.26 |     372 |       |       |          |
|*  5 |    SORT JOIN           |                 |  80042 |  20000 |   185   (4)|   1007M|00:57:11.13 |     744 |   619K|   472K|  550K (0)|
|*  6 |     TABLE ACCESS FULL  | FULLCLONES_TEST |      1 |  20000 |   106   (4)|  20076 |00:00:00.11 |     744 |       |       |          |
|   7 |      SORT AGGREGATE    |                 |      1 |      1 |            |      1 |00:00:00.01 |     372 |       |       |          |
|   8 |       TABLE ACCESS FULL| FULLCLONES_TEST |      1 |  99999 |   105   (3)|  99999 |00:00:00.28 |     372 |       |       |          |
--------------------------------------------------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$6B65F52B
   4 - SEL$6B65F52B / FT2@INLINE
   6 - SEL$6B65F52B / FT1@SEL$1
   7 - SUBQ
   8 - SUBQ         / FT3@SUBQ

Predicate Information (identified by operation id):
---------------------------------------------------
   5 - access("FT1"."CLONE_SIZE">="FT2"."SECTION_SIZE")
       filter("FT1"."CLONE_SIZE">="FT2"."SECTION_SIZE")
   6 - filter("FT1"."SECTION_SIZE"=)

Ignore the timings from lines 2 and 5 – I was using the hint gather_plan_statistics to collect the rowsource execution stats, and those lines are showing a massive sampling error. The query took about 7 minutes 30 seconds to run. The key difference is that line 4 shows that the “select distinct” is NOT aggregated early – the optimizer has used complex view merging to “join then aggregate” rather than “aggregate then join”. As you can see, this was a bad choice and the join has produced over a billion (US) rows at line 2 which then have to be aggregated down to just 4 rows in line 1.

The question then is why? If I put a /*+ no_merge */ hint in the query block named “inline” the optimizer accepts the hint and goes back to the plan that aggregates early and runs very quickly – so it’s not a question of the optimizer bypassing some mechanism to avoid getting the wrong answer. I think the only option available to us for further investigation at this point is to examine the 10053 (optimizer) trace file to see what’s going on.
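
For reference, here is a sketch of the max()-only query with that no_merge hint added to the inline query block (the hint is the only thing added here relative to the original text):

select
        /*+ qb_name(main) */
        max(ft1.program_id),
        ftv.s   section_size
from
        fullclones_test ft1
cross join
        (
        select  /*+ qb_name(inline) no_merge */
                distinct ft2.section_size s
        from    fullclones_test ft2
        )       ftv
where   ft1.section_size = (
                select  /*+ qb_name(subq) */
                        min(section_size)
                from    fullclones_test ft3
        )
and     ft1.clone_size >= ftv.s
group by
        ftv.s
order by
        ftv.s
;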

From the (12c)  trace file where we select the avg() we see the following lines:

OJE: Begin: find best directive for query block INLINE (#0)
OJE: End: finding best directive for query block INLINE (#0)
CVM: CBQT Marking query block INLINE (#0) as valid for CVM.
CVM:   Not Merging INLINE (#0) into SEL$1 (#0) due to CBQT directive.

From the equivalent position in the trace file where we select only the max() we see the lines:

OJE: Begin: find best directive for query block INLINE (#0)
OJE: End: finding best directive for query block INLINE (#0)
CVM:   Merging SPJ view INLINE (#0) into SEL$1 (#0)

It’s always hard to decide exactly WHY things happen – but it looks as if the optimizer merges the view heuristically in the max() case “because it can”, but has a heuristic (internal directive) that stops it from merging in the avg() case. What a good job we have hints !

Footnote:

In cases like this it’s always possible that there’s a generic case that might produce wrong results even though the specific case won’t – so it’s worth spending a little time thinking about how the wrong results might appear. It’s also worth keeping hold of the SQL as a test case because if there’s a generic block in place to handle specific cases you may find that future enhancements allow the block to be lifted for some cases, and it’s nice to be able to check for such cases as you upgrade.

On the other hand, you can get back to the efficient plan if you change the inline view to be:


        (
        select  /*+
                        qb_name(inline)
                */
                ft2.section_size s , count(*) ct
        from    fullclones_test ft2
        group by
                ft2.section_size
        )       ftv


That’s just a little bit surprising – but I’m not going to pursue this one any further, beyond noting that there are some interesting anomalies available with inline aggregate views, even in 12.1.0.2.

Footnote 2:

If you want to experiment further, here’s the SQL to generate the data set:

create table fullclones_test (
        program_id      number(5,0),
        start_line      number,
        clone_id        number(5,0),
        clone_line      number,
        clone_size      number,
        range_start     number,
        section_size    number(4,0)
)
;

insert into fullclones_test (
        program_id, start_line, clone_id, clone_line,
        clone_size, range_start, section_size
)
Select
        1,
        trunc(dbms_random.value(1,1000)),
        trunc(dbms_random.value(1,10)),
        trunc(dbms_random.value(1,1000)),
        trunc(dbms_random.value(20,100)),
        0,
        20*trunc(dbms_random.value(1,6))
from
        dual
connect by
        level <100000
;

commit;

exec dbms_stats.gather_table_stats(user,'fullclones_test')

Finally, for consistently reproducible results I had engineered my system stats as follows:


        begin
                dbms_stats.set_system_stats('MBRC',16);
                dbms_stats.set_system_stats('MREADTIM',10);
                dbms_stats.set_system_stats('SREADTIM',5);
                dbms_stats.set_system_stats('CPUSPEED',1000);
        end;
/


What Mode of Oracle 12c Unified Auditing Are You Using and Default Auditing Policies?

Continuing our blog series on Oracle 12c Unified Auditing, how do you know what mode of Unified Auditing you are using? Use the following SQL –

SELECT VALUE FROM V$OPTION WHERE PARAMETER = 'Unified Auditing';

The result will be TRUE or FALSE.  If TRUE, the database is using PURE Unified Auditing.  If FALSE, the database is using Mixed Mode, which is the Oracle 12c default.  Remember that V$OPTION shows what database options are installed, and V$PARAMETER shows the startup parameters for the options which have been installed.  Unified Auditing is enabled by being installed and not by being configured in V$PARAMETER.

Unified Auditing is configured through policies.  If Oracle 12c tenant databases (PDBs) are being used, these policies can be applied to common objects in all PDBs or to individual PDBs.  The table below shows the policies installed and/or enabled by default –

Unified Audit Policies Installed With Oracle 12c

Policy Name              Default Enabled   Description
-----------------------  ----------------  ----------------------------------------------------------------
ORA_SECURECONFIG         Yes               Secure configuration audit options
ORA_RAS_POLICY_MGMT      No                Oracle Real Application Security administrative actions on
                                           application users, roles, and policies
ORA_RAS_SESSION_MGMT     No                Run-time Oracle Real Application Security session actions and
                                           namespace actions
ORA_ACCOUNT_MGMT         No                Commonly used user account and privilege settings for create
                                           user, role, and privilege grants
ORA_DATABASE_PARAMETER   No                Audits commonly used Oracle Database parameter settings, e.g.,
                                           the initialization file (spfile) changes

To query what policies have been defined you may use –

SELECT * FROM SYS.AUDIT_UNIFIED_POLICIES

To query what policies have been enabled you may use –

SELECT * FROM SYS.AUDIT_UNIFIED_ENABLED_POLICIES
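
Beyond the defaults, custom policies can be created and enabled with a couple of statements. A minimal sketch (the policy name and audited table here are illustrative, not from this post):

-- Hypothetical policy auditing changes to HR.EMPLOYEES
CREATE AUDIT POLICY hr_emp_changes_pol
  ACTIONS UPDATE ON hr.employees, DELETE ON hr.employees;

-- Enable the policy for all users
AUDIT POLICY hr_emp_changes_pol;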

If you have questions, please contact us at info@integrigy.com

Reference

More information on Unified Auditing can be found here:

Tags: Auditing, Oracle Database
Categories: APPS Blogs, Security Blogs

12.1.0.2 on AIX

Laurent Schneider - Mon, 2014-11-17 04:22

just released today http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html

only in Enterprise Edition at the moment, and now available on HPUX, zLinux and AIX

#oracle12c #db12102 is out for a bunch of platform #aix #os390 #hpux #zlinux

— laurentsch (@laurentsch) November 17, 2014

This is the first and last patchset for 12cR1

#oracle 12.1.0.2 is the last patch set for Release 12.1.

— laurentsch (@laurentsch) October 9, 2014

My planning for DOAG 2014

Yann Neuhaus - Mon, 2014-11-17 01:53

I'm quickly checking the planning for DOAG these 3 days and here is the list of sessions I would like to attend.

There are still a lot of interesting ones even if my choice is limited by the language (I would like to understand German, but I'm limited to sessions in English). And I still have some concurrency issues to solve because I cannot be in two places at the same time.

PeopleSoft and Docker's value proposition

Javier Delgado - Sun, 2014-11-16 12:58
If you haven't heard about Docker and/or container technologies yet, you soon will. Docker has made one of the biggest impacts in the IT industry in 2014. Since the release of its 1.0 version this past June, it has captured the attention of many big IT vendors, including Google, Microsoft and Amazon. As far as I'm aware, Oracle has not announced any initiative with Docker, except for the Oracle Linux container. Still, Docker can be used with PeopleSoft, and it can actually simplify your PeopleSoft system administration. Let's see how.

What is Container Technology?
Docker is an open platform to build, ship, and run distributed applications. Docker enables apps to be quickly assembled from components and eliminates the friction between development, QA, and production environments. As a result, IT can ship faster and run the same app, unchanged, on laptops, data center VMs, and any cloud.

In a way, it is similar to virtualization technologies like VMWare or Virtualbox where you can get an image of a machine and run it anywhere you have the player installed. Docker is similar except that it just virtualizes the application and its dependencies, not the full machine.

Docker virtual machines are called containers. They run as isolated processes in userspace on the host operating system, sharing the kernel with other containers. Thus, they enjoy the resource isolation and allocation benefits of VMs but are much more portable and efficient.

Docker uses a layered file system for its containers, in a way that they can be updated by just including the changes since the last update. This greatly reduces the volume of information that needs to be shipped to deliver an update.

How can it be used with PeopleSoft?
As we have seen, Docker containers are much easier to deploy than an entire virtual machine. This means that activities such as installations can be greatly simplified. All you need is to have Docker installed and then download the PeopleSoft container. Of course, this requires that you first do an installation within a Docker container, but this is no more complex than doing a usual installation; it just requires some Docker knowledge in order to take advantage of all its features. From my point of view, if you are doing a new installation, you should seriously consider Docker. At BNB we have prepared containers with the latest PeopleSoft HCM and FSCM installations so we can quickly deploy them to our customers.

Also, when you make a change to a Docker container, just the incremental changes are applied to existing running instances. This is a great advantage when you apply a patch or run a PeopleTools upgrade. If you want to apply the patches to new environments, you just need to make sure that you apply the latest container changes on all the servers running the environment.

Isolation between running instances is also a major advantage when you have multiple environments on the same server. Suppose you want to apply the latest Tuxedo patch just to the Development environment, which coexists with other environments on the same server. Unless you had one Tuxedo installation for each environment (which is possible but normally unlikely), you would need to go ahead and hope the patch did not break anything (to be honest, this happens very rarely with Tuxedo, but some other product patches are not so reliable). If you have a separate container for the Development environment you can apply the patch just to it and later deploy the changes to the rest of the environments.

Last but not least, the reduced size of Docker containers compared to an entire virtual machine greatly simplifies the distribution to and from the cloud. Docker is of great help if you want to move your on premise infrastructure to the cloud (or the other way around). This is even applicable when you want to keep a contingency system in the cloud, as delivering the incremental container changes made to your on premise system requires less time than using other methods.

Not only that, Docker can be hosted on most operating systems. This means that moving a container from one public cloud facility to another is significantly easier than it was with previous technologies. Exporting a virtual machine from Amazon EC2 to Google Cloud was quite complex (and under some circumstances not even possible).


Limitations
But like any other technology, Docker is no panacea. It has some limitations that may restrict its adoption for your PeopleSoft installation. The main ones I can think of are:

  • Currently there is no support for containers using Windows as a guest operating system. This is not surprising, as Docker is intimately linked to Unix/Linux capabilities. Still, Microsoft has announced a partnership with Docker that will hopefully help to overcome this limitation. For the moment, you will not be able to use Docker for certain PeopleSoft components, such as the PSNT Process Scheduler, which is bad news if you are still using Crystal Reports or Winword reports. Also, if you are using Microsoft SQL Server as your database, this may be a major limitation.


  • Docker is most useful when used for applications, but not data. Logs, traces and databases should normally be kept out of the Docker container.


Conclusions
Although container technology is still in its early stages, significant benefits are evident for maintaining and deploying applications, PeopleSoft included. Surely enough, the innovation coming in this area will have a big impact on the way PeopleSoft systems are administered.

PS: I would like to thank Nicolás Zocco for his invaluable research on this topic, particularly in installing the proof of concept using PeopleSoft and Docker.

12c SQL Plan Directive: state has changed between 12.1.0.1 and 12.1.0.2

Yann Neuhaus - Sun, 2014-11-16 09:38

SQL Plan Directives have been introduced in 12c. If you have scripts that check their states (and I have that, as I prefer to monitor closely the features that are new) you have probably seen a difference when going from 12.1.0.1 - the first release of 12c and the only one available yet in Standard Edition - to 12.1.0.2 - the first patchset. I'll explain here what the SQL Plan Directive states are and how they changed.

When a SQL Plan Directive is created, its state is NEW. It means that a misestimate has been encountered but the reason has not yet been determined. Remember that the directive is created only at execution time, so very limited information is available, except the fact that A-Rows is different from E-Rows.

On a subsequent optimization (same or different statement that uses the same table and same column predicates) the optimizer sees the SQL Plan Directive and can update it with more information. If the reason for the misestimation is that some statistics are missing, then the state of the directive is changed from NEW to MISSING_STATS.

MISSING_STATS directives lead to short term and long term solutions:

  • each new query will solve missing stats by gathering more statistics with Dynamic Sampling
  • the next dbms_stats gathering will gather extended statistics to definitively fix the issue (a quick way to check this is sketched below)
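
As a quick way to verify the second point, the column-group extension created by dbms_stats is visible in the dictionary. A hedged sketch, using the DEMO.DEMO_TABLE that appears in the output later in this post:

-- lists the extended statistics (column groups) defined on the table
select extension_name, extension
from dba_stat_extensions
where owner='DEMO' and table_name='DEMO_TABLE';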

Once the latter is done, we can expect to have good estimations without Dynamic Sampling anymore, thanks to the extended statistics. But that has to be checked. So the next query optimization will check it and update the SQL plan Directive accordingly:

  • HAS_STATS to show that statistics (extended statistics here) are now sufficient to get correct estimations
  • PERMANENT to show that the misestimation is still there and extended statistics have to be ignored because they were calculated for an issue that they do not solve.
Finally, the HAS_STATS SQL Plan Directives are purged after some weeks as they are not needed anymore - the issue being solved definitively.

So what has changed in 12.1.0.2? There are only two states now:
  • 'USABLE' that covers the 'NEW', 'MISSING_STATS' and 'PERMANENT' which means that a directive is there to be evaluated by the optimizer, but the issue is not solved yet.
  • 'SUPERSEDED' when it has been solved (the 'HAS_STATS') or it is redundant with another directive, which means that the issue is solved somewhere else.
This is a simplification, but if you want to have the same level of detail that you had in 12.1.0.1, then you can get it from the 'internal state', which is exposed in XML in the NOTES column.

Here is an example of two USABLE states:
SQL> select directive_id,type,state,reason,notes from dba_sql_plan_directives where directive_id in(select directive_id from dba_sql_plan_dir_objects where owner='DEMO' );

        DIRECTIVE_ID TYPE              STATE         REASON
-------------------- ---------------- ---------- ------------------------------------
NOTES
--------------------------------------------------------------------------------
 1350980939868665098 DYNAMIC_SAMPLING USABLE         SINGLE TABLE CARDINALITY MISESTIMATE

  NEW
  NO
  {EC(DEMO.DEMO_TABLE)[A, B, C, D]}

which is 'NEW' and  
        DIRECTIVE_ID TYPE              STATE         REASON
-------------------- ---------------- ---------- ------------------------------------
NOTES
--------------------------------------------------------------------------------
 1350980939868665098 DYNAMIC_SAMPLING USABLE         SINGLE TABLE CARDINALITY MISESTIMATE

  MISSING_STATS
  NO
  {EC(DEMO.DEMO_TABLE)[A, B, C, D]}

which is 'MISSING_STATS'.

And here is a 'SUPERSEDED' one, once dbms_stats has run:
        DIRECTIVE_ID TYPE              STATE         REASON
-------------------- ---------------- ---------- ------------------------------------
NOTES
--------------------------------------------------------------------------------
 1350980939868665098 DYNAMIC_SAMPLING SUPERSEDED SINGLE TABLE CARDINALITY MISESTIMATE

  HAS_STATS
  NO
  {EC(DEMO.DEMO_TABLE)[A, B, C, D]}

which is 'HAS_STATS'
Note that the xml tags were eaten by my blog editor. They are: internal_state,redundant,spd_text
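If you want to extract that internal state directly, it can be pulled out of the NOTES column with an XPath expression. A hedged sketch (I am assuming the root element is spd_note; adjust the path to match the tags in your own output):

select d.directive_id, d.state,
       extractvalue(d.notes,'/spd_note/internal_state') internal_state
from dba_sql_plan_directives d
where d.directive_id in (
  select directive_id from dba_sql_plan_dir_objects where owner='DEMO'
);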
We do full demos of SQL Plan Directives in our 'Oracle 12c new features workshop' and 'Oracle performance tuning workshop'. It's a great feature that brings the CBO to another level of intelligence. And there are some misconceptions about them. Some people think that they store statistics. But that's wrong. Statistics come from cardinality feedback, dynamic sampling, or object statistics. There is no need for another component to store them. The only thing that is stored by SQL Plan Directives is their state. Which makes the state a very important piece of information - and the reason for this blog post.

 

 

Interstellar Madness

FeuerThoughts - Sun, 2014-11-16 09:19
Saw Interstellar last night. Only had to wait through TWENTY MINUTES of trailers. Had to put fingers in my ears for much of it. So loud, so invasive, so manipulative. Anyway....

I don't watch TV anymore, rarely watch a movie or read a novel. So when I do subject myself to high-resolution artificial input to my brain, it is a jarring experience.
And enjoyable. I haven't stopped watching TV because I don't like it. I have stopped watching TV because I can't help but "like" it, be drawn to it. I am a product of millions of years of evolution, and both Madison Ave (marketeers) and Hollywood know it, and take advantage of it.
Anyway....
I enjoyed watching Interstellar, with its time-traveling plot ridiculousnesses and plenty of engaging human drama. 
But one line really ticked me off. The movie is, to a large extent, a propaganda campaign to get Americans excited about being "explorers and pioneers" again. 
Cooper (McConaughey) complains that "Now we're a generation of caretakers." and asserts that:
"Mankind was born on earth. It was never meant to die here."
That is the worst sort of human species-ism. It is a statement of incredible arrogance. And it is an encouragement to humans to continue to despoil this planet, because don't worry! 
Science and technology can and will save us! Right? 'Cause it sure has done the trick so far. We are feeding more people, clothing more people, putting more people in cars and inside homes with air conditioners, getting iPhones in the hands of more and more humans. 
Go, science, go!
And if we can't figure out how to grow food for 10 billion and then 20 billion people, if we totally exhaust this planet trying to keep every human alive and healthy into old age, not to worry! There are lots of other planets out there and, statistically, lots and lots of them should be able to support human life. Just have to find them and, oh, right, get there.
But there's no way to get there without a drastic acceleration of consumption of resources of our own planet. Traveling to space is, shall we say, resource-intensive.
Where and how did we (the self-aware sliver of human organisms) go so wrong? 
I think it goes back to the development of recorded knowledge (writing, essentially or, more broadly, culture). As long as humans were constrained by the ability to transmit information only orally, the damage we could do was relatively limited, though still quite destructive.
Once, however, we could write down what we knew, then we could build upon that knowledge, generation after generation, never losing anything but a sense of responsibility about how best to use that knowledge.
That sense of responsibility might also be termed "wisdom", and unfortunately wisdom is something that humans acquire through experience in the world, not by reading a book or a webpage. 
Mankind was born on earth and there is no reason at all to think that we - the entire species - shouldn't live and die right here on earth. Especially if we recognize that the price to be paid for leaving earth is the destruction of large swaths of earth and our co-inhabitants and....
Being the moral creatures that we like to think we are, we decide that this price is unacceptable.

Categories: Development

APEX and Privileges Granted through Roles

The Anti-Kyte - Sat, 2014-11-15 15:33

The mystery has finally been solved. England’s surrendering of the Ashes last winter was nothing to do with Australia being a much better cricket team. Thanks to Kevin Pietersen’s recently published Autobiography, we now know that the problem was that there were rather too many silly points in the England dressing room.
Moving swiftly on from that weak pun, the subject at hand can also be rather mystifying at first glance.

In a “traditional” Oracle Forms application, you would have one database user per application user.
Connections via the Application to the database would be done as the individual users.
It’s quite likely that database roles would be used to grant the appropriate privileges.

For applications using other web technologies, the application may interact with the database via a single account, often that of the Application Owner. Whether or not this is a good idea is probably a discussion for another time.

For now though, the question we’re asking is, how does an APEX application connect to the database?
On the face of it, it would seem that it’s pretty similar to the second of the two approaches above. APEX connects as the Parsing Schema (usually the application owner).
As Kevin will tell you, appearances can be deceiving…

The Environment

For the purposes of this post, I’ll be using a simple APEX application that’s been created in its own workspace.
The application is called NEW_HR and uses the default APEX Authentication Scheme.
The parsing schema is defined as HR.
At this point the application consists of a login screen and a blank Home Page.
I’ve also created a Workspace Admin user called…well…let’s call it Kevin.
The database version is Oracle 11g Express Edition and the APEX version is 4.2.
This environment uses the embedded PL/SQL Gateway to manage database connections from APEX. This is the default setup on Oracle 11g XE.

Who am I ? No, really

Now, I know that there is no user called KEVIN in my database….

select count(*) 
from dba_users 
where username = 'KEVIN'
/
  COUNT(*)
----------
         0

SQL> 

…so I’d like to know who the database thinks I am when I log in through my APEX app. I’d also like to check who APEX itself thinks I am.

The first step then, is to add a couple of fields to the application Home Page…

First of all, I’ve added an HTML Region called whoami. Apart from the name I’ve just accepted the defaults.

Now to add a field to display the Application User – i.e. who APEX thinks I am.

This is a Display Only Item called P1_APEX_USER in the whoami region.
The source settings for this item are the defaults except for :

Source Used : Always, replacing any existing value in session state
Source value or expression : APP_USER

[screenshot: apex_user_source]

Next up is to add a field to display the database user.

The field is defined in the same way as P1_APEX_USER, except for :

Source Type : SQL Query (return single value)

and the source itself which is the following query :

select user from dual

[screenshot: db_user_source]

Now, if we connect as Kevin….

[screenshot: login]

…we can start to resolve our identity crisis….

[screenshot: whoami]

So, as expected, APEX knows that Kevin is the Application user. However, the database user is not HR, rather it’s something called ANONYMOUS.

NOTE – If you’re using the Embedded PL/SQL Gateway ( the default setup for Express Edition) then you’ll be connected as ANONYMOUS. If you have the APEX Listener setup then, unless you’ve changed the default, you’ll be connected as APEX_PUBLIC_USER.
For our current purposes we can treat these accounts as synonymous from a database standpoint.
I’ll continue to refer to ANONYMOUS from here on because (a) I’m running this on XE and (b) the name has slightly more comedic potential.

Let’s find out a bit more about this user whilst trying not to worry that our application has been visited by hacktivists.
Hmmm, maybe not so much comedic potential.

The ANONYMOUS User

Looking in the database, we can confirm that ANONYMOUS is indeed a database user :

select account_status, profile, authentication_type
from dba_users
where username = 'ANONYMOUS'
/

ACCOUNT_STATUS                   PROFILE                        AUTHENTI
-------------------------------- ------------------------------ --------
OPEN                             DEFAULT                        PASSWORD

Doesn’t seem to be anything out of the ordinary there.
Now let’s see what has been granted to ANONYMOUS. For good measure, we can see what objects it owns (if any).
The query looks like this :

select 'SYSTEM PRIVILEGE' as priv_type,
    null as db_object,
    privilege
from dba_sys_privs
where grantee = 'ANONYMOUS'
union
select 'ROLE GRANTED' as priv_type,
    granted_role as db_object,
    null as privilege
from dba_role_privs
where grantee = 'ANONYMOUS'
union
select 'OBJECT PRIVILEGE' as priv_type,
    owner||'.'||table_name as db_object,
    privilege
from dba_tab_privs
where grantee = 'ANONYMOUS'
union
select 'OWNED OBJECT' as priv_type,
    object_name as db_object,
    null as privilege
from dba_objects
where owner = 'ANONYMOUS'
order by 1,2
/ 

When we run it we get variations on the theme of :

PRIV_TYPE            DB_OBJECT                                                    PRIVILEGE
-------------------- ------------------------------------------------------------ ------------------------------
OBJECT PRIVILEGE     APEX_040000.WWV_FLOW_EPG_INCLUDE_MODULES                     EXECUTE
OBJECT PRIVILEGE     APEX_040200.WWV_FLOW_EPG_INCLUDE_MODULES                     EXECUTE
OBJECT PRIVILEGE     FLOWS_FILES.WWV_FLOW_FILE_OBJECTS$                           ALTER
OBJECT PRIVILEGE     FLOWS_FILES.WWV_FLOW_FILE_OBJECTS$                           DELETE
OBJECT PRIVILEGE     FLOWS_FILES.WWV_FLOW_FILE_OBJECTS$                           FLASHBACK
OBJECT PRIVILEGE     FLOWS_FILES.WWV_FLOW_FILE_OBJECTS$                           INDEX
OBJECT PRIVILEGE     FLOWS_FILES.WWV_FLOW_FILE_OBJECTS$                           INSERT
OBJECT PRIVILEGE     FLOWS_FILES.WWV_FLOW_FILE_OBJECTS$                           ON COMMIT REFRESH
OBJECT PRIVILEGE     FLOWS_FILES.WWV_FLOW_FILE_OBJECTS$                           QUERY REWRITE
OBJECT PRIVILEGE     FLOWS_FILES.WWV_FLOW_FILE_OBJECTS$                           REFERENCES
OBJECT PRIVILEGE     FLOWS_FILES.WWV_FLOW_FILE_OBJECTS$                           SELECT
OBJECT PRIVILEGE     FLOWS_FILES.WWV_FLOW_FILE_OBJECTS$                           UPDATE
SYSTEM PRIVILEGE                                                                  CREATE SESSION

Now, the Object Privileges listed here are probably the result of some of the sample APEX applications I’ve installed.
By default, the only thing granted to ANONYMOUS is the CREATE SESSION privilege.

More pertinent here though is that it has no permissions at all on any objects owned by HR. This begs the question as to how our APEX application will work. Remember, our parsing schema (essentially the Application Owner) is HR. Therefore, it’s reasonable to assume that we’ll want to interact with the tables in that schema.

NOTE – at this point I should add that, of course, ANONYMOUS does have additional privileges – i.e. everything granted to PUBLIC in the database. Whilst this is not strictly relevant to the matter at hand, it’s probably worth bearing in mind when you look at how you implement security around this user.

Anyway, let’s put it to the test…

The Regions Report

In our application we’re going to create a new page – a Report on the HR.REGIONS table so…

In the Application Builder, click on Create Page :

[screenshot: create_page]

Select Report and click Next

Select Interactive Report and click Next

Accept the defaults for Page Region Attribute and click Next

In Tab Options choose Use an existing tab set and create a new tab within the existing set
New Tab Label is Regions :

[screenshot: tab_options]

Click Next

For the SQL Query :

select region_id, region_name
from regions

Note – we’re not specifying the table owner in this query, even though ANONYMOUS does not have a synonym on the HR.REGIONS table ( let alone any privileges)

[screenshot: query]

Click Next

…and click Create

[screenshot: create]

When we now connect to the application as Kevin and click on the Regions tab….

[screenshot: regions]

So, the report has worked without error, despite the lack of privileges and synonyms. So what’s happening ?

Session Privileges in APEX

To answer this, we’ll need to tweak our earlier privileges query. This time, we’ll use the USER_ version of the views.
We can then add it to the Application Home Page in a new reports region to see what ANONYMOUS can actually do when connected via APEX.

First, the new query, using USER_ versions of the views and without the order by clause.

select 'SYSTEM PRIVILEGE' as priv_type,
    null as db_object,
    privilege
from user_sys_privs
union
select 'ROLE GRANTED' as priv_type,
    granted_role as db_object,
    null as privilege
from user_role_privs
union
select 'OBJECT PRIVILEGE' as priv_type,
    owner||'.'||table_name as db_object,
    privilege
from user_tab_privs
union
select 'OWNED OBJECT' as priv_type,
    object_name as db_object,
    null as privilege
from user_objects
where object_type != 'INDEX'
/

Spoiler Alert – the reason I’m not using the SESSION_PRIVS view here is because it will list privileges granted via roles. The distinction between these and directly granted privileges will shortly become apparent.

We now simply create a new interactive reports region called User Privileges on the Home Page, using the above query.
If we now filter on PRIV_TYPE = ‘OWNED OBJECT’, we can see that we’ve magically acquired ownership of all the HR objects…

[screenshot: owned_objects]

If we filter on PRIV_TYPE = ‘SYSTEM PRIVILEGE’, we can see that we also seem to have inherited HR’s System Privileges…

[screenshot: sys_privs]

So, we can infer from this that, although the database connection from APEX is as the ANONYMOUS user, the session will inherit all of the objects and privileges of the parsing schema.
A reasonable assumption, given the evidence, and a correct one…mostly.

Objects not owned by the parsing schema

I’ve created a simple function in my own schema :

create or replace function name_scandal_fn( i_basename varchar2)
    return varchar2
as
begin
    return i_basename||'gate';
end;
/

Next we’re going to create a role and then grant execute on this function to that role. Finally, we’re going to grant the role to hr :

create role hr_role
/

grant execute on name_scandal_fn to hr_role
/

grant hr_role to hr
/

First off, we’ll test this in SQL*Plus. Connect as HR and …

select mike.name_scandal_fn('Twitter') from dual
/

MIKE.NAME_SCANDAL_FN('TWITTER')
--------------------------------------------------------------------------------
Twittergate

SQL> 

So, we should have no problem invoking this function from our application then.

Let’s create a page with a Display Only field that is populated by a call to this function :

Blank Page :

[screenshot: fn_page1]

Called Scandal

[screenshot: fn_page2]

…With an HTML Region…

[screenshot: fn_page3]

…on a new tab…

[screenshot: fn_page4]

…and confirm…

[screenshot: fn_page5]

Now, add the Item…

[screenshot: fn_item1]

…called P3_SCANDAL_NAME…

[screenshot: fn_item2]

Accept the defaults for the Item Attributes settings, and Settings…

… and change the Source settings to :

Source Used : Always, replacing any existing value in session state
Source Type : SQL Query (return single value)
Item Source Value – here we put in our call to the function :

select mike.name_scandal_fn('Twitter') from dual

[screenshot: fn_item3]

Finally, hit the create button.

No problems so far. Now, let’s try running the page…

[screenshot: fn_err]

Hmmm, not quite what we were expecting.

Looking at the error stack, a possible source of the problem emerges.
In the background, it looks as if APEX is calling a package called WWV_FLOW_FORMS, which in turn calls WWV_FLOW_DYNAMIC_EXEC.
Whilst the source for both of these packages is wrapped, there are some notes available on the next package in the call stack, WWV_DBMS_SQL.

Putting it all together and looking at the package headers, it would seem reasonable to assume that, rather than running the SQL statement directly, APEX does this via a series of package calls which then run the statement as dynamic SQL.
The effect of calling a (presumably) Definer’s Rights package is that any privileges granted via roles are ignored.
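
The underlying role behaviour is easy to demonstrate outside of APEX. Here is a stand-alone sketch (ROLE_DEMO_PR is just a made-up name for illustration). With execute on the function granted to HR only via HR_ROLE, the call works in an anonymous block, where roles are in effect, but the identical call fails to compile inside a definer's rights stored procedure, where role-granted privileges are ignored :

-- connected as HR, with execute on mike.name_scandal_fn granted via hr_role only

-- this works, because roles are enabled in anonymous blocks...
begin
    dbms_output.put_line(mike.name_scandal_fn('Water'));
end;
/

-- ...but this fails to compile with
-- PLS-00201: identifier 'MIKE.NAME_SCANDAL_FN' must be declared
create or replace procedure role_demo_pr
as
begin
    dbms_output.put_line(mike.name_scandal_fn('Water'));
end;
/
show errors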

To test this theory, we can revoke the role from HR and instead grant execute on the function directly.
So, connected to SQL*Plus as the function owner (in my case MIKE) :

revoke hr_role from hr
/

grant execute on name_scandal_fn to hr
/

Now a quick sanity check to make sure that HR can see the function.
Connect as HR and :

SQL> select mike.name_scandal_fn('Twitter') from dual
  2  /

MIKE.NAME_SCANDAL_FN('TWITTER')
--------------------------------------------------------------------------------
Twittergate

SQL> 

Now let’s see what APEX makes of this.
Re-run the page and we can see…

[screenshot: itworks]

There you have it. APEX, like Kevin, is just a little bit different.


Filed under: APEX, Oracle, SQL Tagged: apex anonymous user, apex_public_user, granting privileges via roles, parsing schema

Repair Replicat after mount point name change

DBASolved - Sat, 2014-11-15 11:06

Working with Oracle GoldenGate can be an interesting adventure. Case in point: I have been doing some migration work for a client. Halfway through the migration, the target system ran out of the resources needed to create the tablespaces and to store the export and trail files (i.e. disk space – a story for another time). The impact to the migration was that everything had to stop until more resources were allocated.

Part of the allocation of resources was a change to the mount point name. If you know anything about Oracle GoldenGate replicats, you know that hard-coding a mount point in the trail file path is not the best approach (this slipped my mind at the time); nevertheless, I made this mistake. When the mount point name changed, all the replicats broke because they could no longer locate the trail files at the specified path.

Initial:

When I initially set up the replicat, I used a static mount point. Let's take a look at the ADD REPLICAT statement I used:

--Add Replicat Process
ADD REPLICAT REPM01, EXTTRAIL /orabackup/ggate/trail/ars/ra, DESC "Replicat process for a schema"
START REPLICAT REPM01, ATCSN

As you can see, the replicat is looking for the "ra" trail files on the "/orabackup" mount point.

Problem:

During the allocation of space the mount point "/orabackup" was changed to "/orabkup". How does this affect the replicat? Simple: the replicat will throw an OGG-01091 error stating that it couldn't find the trail file.

ERROR OGG-01091 Unable to open file "/orabackup/ggate/trail/ars/ra000000" (error 2, No such file or directory).

Solution:

The solution to this problem is to capture the last completed CSN from the checkpoint table.

SQL> select group_name, rba, seqno, log_cmplt_csn from checkpoint where group_name = 'REPM01';

GROUP_NA        RBA      SEQNO LOG_CMPLT_CSN
-------- ---------- ---------- -----------------------------------
REPM01       544013          1 11108080706671
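
As an aside, the checkpoint table name and owner vary with how the instance was configured (the authoritative place to look is the CHECKPOINTTABLE entry in the GLOBALS file). If you need to hunt for it from the database side, something along these lines, run as a suitably privileged user, is one way to find likely candidates:

-- list candidate GoldenGate checkpoint tables; names vary by installation
select owner, table_name
from dba_tables
where table_name like '%CHECKPOINT%'
order by owner, table_name;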

Once the last completed CSN has been identified, the replicat can be dropped, recreated with the new path to the trail file, and restarted from that CSN.

GGSCI> dblogin userid ggate password
GGSCI> delete replicat REPM01
GGSCI> add replicat REPM01, EXTTRAIL /orabkup/ggate/trail/ars/ra, DESC "Replicat process for a schema"
GGSCI> start replicat REPM01, atcsn 11108080706671
GGSCI> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
REPLICAT    RUNNING     REPM01      00:00:00      00:00:06

Lesson Learned:

When setting up locations for your trail files, make sure they are not static locations; relative locations should be used. In most Oracle GoldenGate architectures the "dirdat" directory under $OGG_HOME is used for trail files; however, if you need more space, the "dirdat" directory can be symbolically linked to a directory on a larger mount point. This keeps the replicat definition consistent for trail file purposes and insulates it from any future changes to the mount point name.

Enjoy!

about.me: http://about.me/dbasolved


Filed under: Golden Gate
Categories: DBA Blogs

Walled Gardens, #GamerGate, and Open Education

Michael Feldstein - Sat, 2014-11-15 08:41

There were a number of interesting responses to my recent LMS rant. I’m going to address a couple of them in short posts, starting with this comment:

…The training wheels aren’t just for the faculty, they’re for the students, as well. The idea that the internet is a place for free and open discourse is nice, of course, but anyone who pays attention knows that to be a polite fiction. The public internet is a relatively safe place for straight, white, American males, but freedom of discourse is a privilege that only a small minority of our students (and faculty, for that matter) truly enjoy. If people didn’t understand that before, #notallmen/#yesallmen and GamerGate should certainly have driven that home.

As faculty and administrators we have an obligation–legal, and more importantly moral–to help our students understand the mechanisms, and unfortunately, often the consequences, of public discourse, including online communications. This is particularly true for the teenagers who make up the bulk of the undergrad population. Part of transformative teaching is giving people a safe space to become vulnerable and open to change. For those of us who think still of the “‘net” in terms of it’s early manifestations that were substantially open and inclusive research networks and BBS of largely like-minded people (someone else mentioned The Well, although The Well, of course, has always been a walled garden), open access seems tempting. But today’s internet is rarely that safe space for growth and learning. Just because students can put everything on the internet (YikYak, anyone?) doesn’t mean that they should.

In many, if not most, situations, A default stance of of walled garden with easy-to-implement open access options for chosen and curated content makes a great deal of sense….

There are lots of legitimate reasons why students might not want to post on the public internet. A few years back, when I was helping my wife with a summer program that exposed ESL high schoolers to college and encouraged them to feel like it could be something for them, we had a couple of students who did not want to blog. We didn’t put them on the spot by asking why, but we suspected that their families were undocumented and that they were afraid of getting in trouble.

This certainly doesn’t mean that everybody has to use an LMS or lock everything behind a login, but it does mean that faculty teaching open courses need to think about how to accommodate students who won’t or can’t work on the open web. I don’t think this sort of accommodation in any way compromises the ethic of open education. To the contrary, ensuring access for everyone is part of what open education is all about.

The post Walled Gardens, #GamerGate, and Open Education appeared first on e-Literate.

Technical differentiation

DBMS2 - Sat, 2014-11-15 06:00

I commonly write about real or apparent technical differentiation, in a broad variety of domains. But actually, computers only do a couple of kinds of things:

  • Accept instructions.
  • Execute them.

And hence almost all IT product differentiation fits into two buckets:

  • Easier instruction-giving, whether that’s in the form of a user interface, a language, or an API.
  • Better execution, where “better” usually boils down to “faster”, “more reliable” or “more reliably fast”.

As examples of this reductionism, please consider:

  • Application development is of course a matter of giving instructions to a computer.
  • Database management systems accept and execute data manipulation instructions.
  • Data integration tools accept and execute data integration instructions.
  • System management software accepts and executes system management instructions.
  • Business intelligence tools accept and execute instructions for data retrieval, navigation, aggregation and display.

Similar stories are true about application software, or about anything that has an API (Application Programming Interface) or SDK (Software Development Kit).

Yes, all my examples are in software. That’s what I focus on. If I wanted to be more balanced in including hardware or data centers, I might phrase the discussion a little differently — but the core points would still remain true.

What I’ve said so far should make more sense if we combine it with the observation that differentiation is usually restricted to particular domains. I mean several different things by that last bit. First, most software only purports to do a limited class of things — manage data, display query results, optimize analytic models, manage a cluster, run a payroll, whatever. Even beyond that, any inherent superiority is usually restricted to a subset of potential use cases. For example:

  • Relational DBMS presuppose that data fits well (enough) into tabular structures. Further, most RDBMS differentiation is restricted to a further subset of such cases; there are many applications that don’t require — for example — columnar query selectivity or declarative referential integrity or Oracle’s elite set of security certifications.
  • Some BI tools are great for ad-hoc navigation. Some excel at high-volume report displays, perhaps with a particular flair for mobile devices. Some are learning how to query non-tabular data.
  • Hadoop, especially in its early days, presupposed data volumes big enough to cluster and application models that fit well with MapReduce.
  • A lot of distributed computing aids presuppose particular kinds of topologies.

A third reason for technical superiority to be domain-specific is that advantages are commonly coupled with drawbacks. Common causes of that include:

  • Many otherwise-advantageous choices strain hardware budgets. Examples include:
    • Robust data protection features (most famously RAID and two-phase commit)
    • Various kinds of translation or interpretation overhead.
  • Yet other choices are good for some purposes but bad for others. It’s fastest to write data in the exact way it comes in, but then it would be slow to retrieve later on.
  • Innovative technical strategies are likely to be found in new products that haven’t had time to become mature yet.

And that brings us to the main message of this post: Your spiffy innovation is important in fewer situations than you would like to believe. Many, many other smart organizations are solving the same kinds of problems as you; their solutions just happen to be effective in somewhat different scenarios than yours. This is especially true when your product and company are young. You may eventually grow to cover a broad variety of use cases, but to get there you’ll have to more or less match the effects of many other innovations that have come along before yours.

When advising vendors, I tend to think in terms of the layered messaging model, and ask the questions:

  • Which of your architectural features gives you sustainable advantages in features or performance?
  • Which of your sustainable advantages in features or performance provides substantial business value in which use cases?

Closely connected are the questions:

  • What lingering disadvantages, if any, does your architecture create?
  • What maturity advantages do your competitors have, and when (if ever) will you be able to catch up with them?
  • In which use cases are your disadvantages important?

Buyers and analysts should think in such terms as well.

Related links

Daniel Abadi, who is now connected to Teradata via their acquisition of Hadapt, put up a post promoting some interesting new features of theirs. Then he tweeted that this was an example of what I call Bottleneck Whack-A-Mole. He’s right. But since much of his theme was general praise of Teradata’s mature DBMS technology, it would also have been accurate to reference my post about The Cardinal Rules of DBMS Development.

Categories: Other