
Development

And so it goes

Greg Pavlik - Sat, 2014-04-19 17:28
Between just being flat out busy and frankly finding Twitter a much lazier way to share basic information, this blog has been dormant for too long. In May, it will get a lot more attention - too much is going on in the Big Data space and Hadoop area more specifically to keep so quiet about it. Time to speak up a bit...

Dynamic ADF Forms with the new Dynamic Component (and synch with DB)

Shay Shmeltzer - Fri, 2014-04-11 17:22

I wrote a couple of blog posts in the past about creating dynamic UIs based on a model layer that changes (example1, example2). Well, in 12c there is a new ADF Faces component, af:dynamicComponent, that makes dynamic forms even more powerful. This component can be rendered as various UI components at runtime, which allows us to create forms and tables with full functionality in a dynamic way.

In fact, we use this component when you create either a form or a table in your JSF page by dragging over a data control. You can now avoid specifying each field in your UI and instead simply say that you want to show all the fields in the data control.

In the demo below I show you how this is done, and then review how your UI automatically updates when you add fields in your model layer. For example, if your DB changed and you used "Synchronize with DB" to add the new field to the VO - that's it; there is no need to go to every page and add the new field.

Check it out:


Categories: Development

Install latest patch of APEX 4.2 (4.2.5)

Dimitri Gielis - Fri, 2014-04-11 02:34
A few days ago Oracle released a new patch for APEX 4.2. This will be the last version of this build; the next version of APEX will be 5.0.
If you already have APEX 4.2.x installed you can download a patch from support.oracle.com, the patch number is 17966818.
If you have an earlier version of APEX you can download the full version of APEX and install that.
This patch set is no different from the others: it includes some bug fixes, updates to the packaged apps, and the introduction of some new apps. You can find the full patch set notes here.
Installing the patch in my APEX 4.2.4 environment took less than 15 minutes and everything went fine. 

I recommend that everybody move to the latest version, as this is the final build of APEX 4.2.

Update 16-APR-2014: We actually hit one issue, which was fixed by Oracle today, so I would install this additional patch too. On support.oracle.com, search for Patch 18609856: APEX_WEB_SERVICE.CLOBBASE642BLOB CONVERTS INCORRECTLY.
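
After applying a patch like this, a quick sanity check is to confirm the version APEX itself reports. This is a minimal sketch, assuming you query the standard APEX_RELEASE dictionary view from the patched instance:

```sql
-- Confirm the installed APEX version after patching
-- (should report 4.2.5.x once the patch is applied).
select version_no
from   apex_release;
```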
Categories: Development

To_char, Infinity and NaN

XTended Oracle SQL - Mon, 2014-03-31 15:23

It is funny that Oracle can easily cast 'nan', 'inf', 'infinity', '-inf', and '-infinity' to the corresponding BINARY_FLOAT_INFINITY/BINARY_DOUBLE_NAN values, but there is no format model for to_char(binary_float_infinity, format) or to_binary_***(text_expr, format) that can produce the same output as to_char(binary_float_infinity)/to_binary_float('inf') without a format parameter:

If a BINARY_FLOAT or BINARY_DOUBLE value is converted to CHAR or NCHAR, and the input is either infinity or NaN (not a number), then Oracle always returns the pound signs to replace the value.

A little example:

SQL> select to_binary_float('inf') from dual;

TO_BINARY_FLOAT('INF')
----------------------
                   Inf

SQL> select to_binary_float('inf','9999') from dual;
select to_binary_float('inf','9999') from dual
                       *
ERROR at line 1:
ORA-01722: invalid number

SQL> select
  2     to_char(binary_float_infinity)         without_format
  3    ,to_char(binary_float_infinity,'99999') with_format
  4    ,to_char(1e6d,'99999')                  too_large
  5  from dual;

WITHOUT_F WITH_FORMAT        TOO_LARGE
--------- ------------------ ------------------
Inf       ######             ######

SQL> select to_char(0/0f) without_format, to_char(0/0f,'tme') with_format from dual;

WITHOUT_F WITH_FORMAT
--------- --------------------------------------------------------------------------
Nan       ################################################################
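
If you need formatted output that still shows the special values, one possible workaround (just a sketch, using Oracle's floating-point conditions IS NAN and IS INFINITE) is to handle them explicitly before falling back to the format model:

```sql
-- A sketch of a workaround: map the special IEEE values to readable
-- text, and apply the format model only to ordinary numbers.
select case
         when x is nan                then 'NaN'
         when x is infinite and x > 0 then 'Inf'
         when x is infinite           then '-Inf'
         else to_char(x, '99990.99')
       end as formatted
from (select binary_float_infinity as x from dual);
```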

P.S. This is just a cross-post from my old blog.

Categories: Development

Deterministic functions, result_cache and operators

XTended Oracle SQL - Sun, 2014-03-30 16:51

In previous posts about the caching mechanism of deterministic functions, I wrote that cached results are kept only between fetch calls, but there is one exception to this rule: if all function parameters are literals, the cached result will not be flushed on every fetch call.
A little example showing the difference:

SQL> create or replace function f_deterministic(p varchar2)
  2     return varchar2
  3     deterministic
  4  as
  5  begin
  6     dbms_output.put_line(p);
  7     return p;
  8  end;
  9  /
SQL> set arrays 2 feed on;
SQL> set serverout on;
SQL> select
  2     f_deterministic(x) a
  3    ,f_deterministic('literal') b
  4  from (select 'not literal' x
  5        from dual
  6        connect by level<=10
  7       );

A                              B
------------------------------ ------------------------------
not literal                    literal
not literal                    literal
not literal                    literal
not literal                    literal
not literal                    literal
not literal                    literal
not literal                    literal
not literal                    literal
not literal                    literal
not literal                    literal

10 rows selected.

not literal
literal
not literal
not literal
not literal
not literal
not literal

As you can see, 'literal' was printed once, but 'not literal' was printed 6 times, so it was returned from the cache 4 times.

Also, I want to show the differences in consistency between:
1. Calling a function declared with deterministic and result_cache;
2. Calling an operator bound to a function with result_cache;
3. Calling an operator bound to a function with deterministic and result_cache.

In this example I will do updates in autonomous transactions to emulate updates from another session during query execution.
Tables and procedures with updates:

drop table t1 purge;
drop table t2 purge;
drop table t3 purge;

create table t1 as select 1 id from dual;
create table t2 as select 1 id from dual;
create table t3 as select 1 id from dual;

create or replace procedure p1_update as
  pragma autonomous_transaction;
begin
   update t1 set id=id+1;
   commit;
end;
/
create or replace procedure p2_update as
  pragma autonomous_transaction;
begin
   update t2 set id=id+1;
   commit;
end;
/
create or replace procedure p3_update as
  pragma autonomous_transaction;
begin
   update t3 set id=id+1;
   commit;
end;
/


Variant 1:

create or replace function f1(x varchar2) return number result_cache deterministic
as
  r number;
begin
   select id into r from t1;
   p1_update;
   return r;
end;
/


Variant 2:

create or replace function f2(x varchar2) return number result_cache
as
  r number;
begin
   select id into r from t2;
   p2_update;
   return r;
end;
/
create or replace operator o2
binding(varchar2)
return number
using f2
/


Variant 3:

create or replace function f3(x varchar2) return number result_cache deterministic
as
  r number;
begin
   select id into r from t3;
   p3_update;
   return r;
end;
/
create or replace operator o3
binding(varchar2)
return number
using f3
/


Test:

SQL> set arrays 2;
SQL> select
  2     f1(dummy) variant1
  3    ,o2(dummy) variant2
  4    ,o3(dummy) variant3
  5  from dual
  6  connect by level<=10;

  VARIANT1   VARIANT2   VARIANT3
---------- ---------- ----------
         1          1          1
         2          1          1
         2          1          1
         3          1          1
         3          1          1
         4          1          1
         4          1          1
         5          1          1
         5          1          1
         6          1          1

10 rows selected.

SQL> /

  VARIANT1   VARIANT2   VARIANT3
---------- ---------- ----------
         7         11         11
         8         11         11
         8         11         11
         9         11         11
         9         11         11
        10         11         11
        10         11         11
        11         11         11
        11         11         11
        12         11         11

10 rows selected.

We can see that function F1 returns the same result for every 2 executions - this equals the fetch size ("set arraysize 2").
Operators O2 and O3 return the same result for all rows in the first query execution, but in the second execution we can see that the values are incremented by 10 - this equals the number of rows.
What we can learn from this:
1. Calling function F1, declared with result_cache and deterministic, reduces the number of function executions, but the function results are inconsistent within the query;
2. Operator O2 returns consistent results, but the function is always executed, because the result_cache is invalidated on every execution;
3. Operator O3 behaves the same as operator O2, regardless of the function being deterministic.

All tests scripts: tests.zip

Categories: Development

OGh APEX Conference

Denes Kubicek - Sat, 2014-03-29 04:32
Last week I was presenting at OGh (ORACLE GEBRUIKERSCLUB HOLLAND) APEX World. My topic was "APEX 4.2 Application Deployment and Application Management". I can only recommend this conference to all the APEX users in Europe. This is definitely the biggest APEX conference on our continent. If you don't travel to ODTUG, then this is something you shouldn't miss. They have an international track where you can listen to well-known APEX developers and book authors. This time it featured Dan McGhan, Martin Giffy D'Souza, Joel Kallman, Dietmar Aust, Roel Hartman, and Peter Raganitsch. For the tracks in Dutch, the speakers are also willing to switch to English at any time if there are visitors who don't understand Dutch. All together, the Dutch are open-minded, and I admire their knack for organizing such events - they definitely know how to do it.

Categories: Development

I (re) Join Oracle Corporation!

FeuerThoughts - Wed, 2014-03-26 09:01
On March 17, 2014, I became an employee of Oracle Corporation for the second time. My first round with Oracle started in August 1987. My second son, Eli, was less than a year old. I'd been incredibly bored with my consulting gig, which consisted of babysitting a reporting system on a DEC10 "mainframe", based on a flat-file database – but a database.

So I checked the Help Wanted pages (no Internet, no smartphones, no LinkedIn) and came across an ad from Oracle Corporation. It contained the word "database", so I figured: "Why not?"

I was hired, even though I was completely ignorant of relational databases. OK, not completely. I'd read an article by Codd and memorized the "Twelve Rules of Relational Databases." But no one ever asked me about relational theory. Instead, the key question seemed to be: "Are you comfortable talking in front of groups, large and small?" I was, after all, interviewing for a "pre-sales" (sales consultant) position.

Fortunately (?), I'd been very active for the past several years organizing Americans to protest facets of our government's policies in Central America, and yes I'd spoken often to groups large and small. My manager-to-be at Oracle seemed pleased enough with this, and I got the job. I never thought my political activity would help me land a software job, but that's exactly what happened.

Looking back on that moment, I see now that it foreshadowed a significant, but not widely recognized, characteristic of my career: the popularity of my books and trainings stems as much from my communication skills (the delivery) as from what I am communicating (the content).

I'll get back to that in a moment. Well, joining Oracle changed my life. For one thing, I had to go out and not only buy some suits, but wear them every day. And then after five years with the company, I left to do some consulting, and a few years later ended up publishing Oracle PL/SQL Programming (O'Reilly Media) in 1995. Now that really changed my life!

For the next almost-19 years, I focused almost exclusively on the Oracle PL/SQL language. I wrote nine more books on the language (probably about 4 too many, actually), of which over 400,000 copies have been sold. I traveled to dozens of countries to share my obsession (expertise) with PL/SQL in trainings and presentations. I built and designed PL/SQL testing tools, code generators, code libraries, and more. I wrote lots of articles for Oracle Magazine and other publications. I attended many, many Kaleidoscopes and Collaborates and International Oracle User Weeks and Oracle Open Worlds and... my wife got really tired of my traveling. Sigh... and that is why I have pledged that in Round 2 with Oracle, I will not start living on airplanes again.

For much of those 19 years, I worked for Quest Software and then Dell as a PL/SQL Evangelist. Quest and Dell helped strengthen the PL/SQL community not only by offering such amazing tools as Toad for Oracle, but also by funding my position and giving me a tremendous amount of freedom to continue learning about, writing in, and writing about PL/SQL.

But I decided last year that I wanted to close out my career as a software professional (I will, after all, be 56 in September 2014) with the company that created the programming language that transformed my life: Oracle Corporation.

Wasn't I lucky that the head of all product development at Oracle, Thomas Kurian, was also a former PL/SQL product manager! Otherwise, Oracle might not have been interested in having me back. ☺

So what will I be doing at Oracle Corporation?

My title continues to be PL/SQL Evangelist, and PL/SQL will continue to be my main focus, of course. I will help promote the language, add to the collateral available for PL/SQL, write articles for Oracle Magazine and post content on Oracle Technology Network, present at the key Oracle developer-related conferences. In other words, all the usual stuff.

But I see my evangelism as a two way street: I want to make sure that developers around the world take the fullest possible advantage of PL/SQL, yet I also want to make sure that Oracle generally and the PL/SQL development team in particular recognize the importance of the PL/SQL community, and leverage it fully.

Ever since 2010 I have been writing daily quizzes (and more) on the PL/SQL Challenge. I have been amazed at the enthusiasm of hundreds of developers to test their knowledge on this site. And it has been fantastic to see many PL/SQL experts who might otherwise never be known or recognized by their peers step forward to share their expertise. This was one of my "hidden" goals of the PL/SQL Challenge.

You see, I have never been entirely comfortable with being (one of) the "go to guys" on PL/SQL. I know very well that for all of my depth and focus on PL/SQL, I am really not very strong technically. I am no Tom Kyte, no Bryn Llewellyn. I only took three computer programming courses in college, all 101 level. I mostly got lucky - and fell into programming at a time when a degree in computer science simply wasn't a requirement (1979!).

It turns out that my main strength, the main reason (I believe) that my books and presentations became so popular, is that I am good at communicating ideas, techniques, etc. in a way that people find accessible. I never learned how to write and think like a computer scientist, so people can actually understand - and enjoy - what I write. Because of the limitations of my formal training, I often have to think my way step by step to an understanding of how things work (I can't just know things from my university days). I then share that step-by-step process with my readers, which helps them understand. Finally, I seem to find it impossible to keep my sense of humor out of what I say and write - and boy did my readers appreciate that! :-)

Bottom line: it makes me a little nervous when so many people look to me for "all the answers" to their PL/SQL-related problems. I don't have all the answers. But I am pretty sure that if I do not, there is someone out there, some Oracle technologist who has worked with PL/SQL for years, who has a computer science degree, who has faced different challenges than me, who might just have the answer you need, a code sample to save you hours of work, a piece of advice that can save several bangs of the head against the wall.

But how to get the question to the person who can answer it? Of course the OTN discussion forums and places like Stackoverflow provide a way to expose this expertise and make it available to many. I hope to complement those kinds of efforts with new initiatives at Oracle.  You will see announcements over the next year regarding this community building effort. But in the meantime if you have any ideas for me on this topic, please do not hesitate to send me an email.

The Two Me's Online

I have, for years, offered my thoughts (some might say "rants") on my Feuerthoughts blog and @stevefeuerstein twitter account. Going forward, I will cleanly separate my Oracle-related posts from my personal content. So here's a quick guide to the sites and accounts I will be using.

Oracle
Blog - stevenfeuersteinonplsql.blogspot.com
Twitter - @SFonPLSQL
LinkedIn - www.linkedin.com/pub/steven-feuerstein/0/61/51b/

Personal
Home - www.stevenfeuerstein.com
Blog - feuerthoughts.blogspot.com
Twitter - @stevefeuerstein
Facebook - Steven Feuerstein

If you follow my @stevefeuerstein twitter account, I urge you (if an Oracle technologist and not my mom) to also follow me on @sfonplsql. I will soon ramp up with daily PL/SQL tips and more.

Time to Get to Work!

Lots to do, lots to do. Including coming up to speed on a MacBook. I am making the switch after 30 years with DOS and Windows. Fun, scary, frustrating, liberating. More on that, too, to follow.
Categories: Development

When v$sesstat statistics are updated

XTended Oracle SQL - Thu, 2014-03-20 18:41

Craig Shallahamer wrote an excellent article, "When is v$sesstat really updated?".
Today's post is just a small addition and correction about the difference in when the 'DB time' and 'CPU used by this session' statistics are updated.

Test #1

In this test I want to show that the statistics are updated after every fetch call.
I have set arraysize=2, so SQL*Plus will fetch 2 rows at a time:
(full script)

-- Result will be fetched by 2 rows:
set arraysize 2;
-- this query generates CPU consumption 
-- in the scalar subquery on fetch phase,
-- so CPU consumption will be separated 
-- into several periods between fetch calls:
with gen as (
            select/*+ materialize */
               level n, lpad(level,400) padding
            from dual
            connect by level<=200
            )
    ,stat as (
            select/*+ inline */
               sid,name,value 
            from v$mystat st, v$statname sn
            where st.statistic#=sn.statistic#
              and sn.name in ('DB time'
                             ,'CPU used by this session'
                             ,'user calls'
                             ,'recursive calls')
            )
-- first row is just to avoid the SQL*Plus effect of fetching 1 row at the start;
-- the others will be fetched in "arraysize"-row batches:
select null rn,null cnt,null dbtime,null cpu,null user_calls, null recursive_calls from dual
union all -- main query:
select
   rownum rn
  ,(select count(*) from gen g1, gen g2, gen g3 where g1.n>g2.n and g1.n*0=main.n*0) cnt
  ,(select value from stat where sid*0=n*0 and name = 'DB time'                    ) dbtime
  ,(select value from stat where sid*0=n*0 and name = 'CPU used by this session'   ) cpu
  ,(select value from stat where sid*0=n*0 and name = 'user calls'                 ) user_calls
  ,(select value from stat where sid*0=n*0 and name = 'recursive calls'            ) recursive_calls
from gen main
where rownum<=10;
set arraysize 15;

Test results:

SQL> @tests/dbtime

        RN        CNT     DBTIME        CPU USER_CALLS RECURSIVE_CALLS
---------- ---------- ---------- ---------- ---------- ---------------

         1    3980000      12021      11989        200             472
         2    3980000      12021      11989        200             472
         3    3980000      12121      12089        201             472
         4    3980000      12121      12089        201             472
         5    3980000      12220      12186        202             472
         6    3980000      12220      12186        202             472
         7    3980000      12317      12283        203             472
         8    3980000      12317      12283        203             472
         9    3980000      12417      12383        204             472
        10    3980000      12417      12383        204             472

As you can see the statistics are updated after every fetch call.

Test #2

Now that we have tested a simple SQL query, I want to do a slightly more complicated test with PL/SQL.
I'm going to write a single PL/SQL block with the following algorithm:
1. Saving stats
2. Executing some pl/sql code with CPU consumption
3. Getting statistics difference
4. Starting query from first test
5. Fetch 10 rows
6. Getting statistics difference
7. Fetch next 10 rows
8. Getting statistics difference
9. Fetch next 10 rows
10. Getting statistics difference
After executing this block, I want to check the statistics.

Full script:

set feed off;

-- saving previous values
column st_dbtime      new_value prev_dbtime      noprint;
column st_cpu_time    new_value prev_cputime     noprint;
column st_user_calls  new_value prev_user_calls  noprint;
column st_recur_calls new_value prev_recur_calls noprint;

select   max(decode(sn.NAME,'DB time'                  ,st.value))*10 st_dbtime
        ,max(decode(sn.NAME,'CPU used by this session' ,st.value))*10 st_cpu_time
        ,max(decode(sn.NAME,'user calls'               ,st.value))    st_user_calls
        ,max(decode(sn.NAME,'recursive calls'          ,st.value))    st_recur_calls
from v$mystat st, v$statname sn
where st.statistic#=sn.statistic# 
  and sn.name in ('DB time','CPU used by this session'
                 ,'user calls','recursive calls'
                 )
/
-- variable for output from pl/sql block: 
var output varchar2(4000);

prompt Executing test...;
----- main test:
declare
   cnt int;
   st_dbtime      number; 
   st_cpu_time    number; 
   st_user_calls  number; 
   st_recur_calls number; 
   cursor c is 
      with gen as (select/*+ materialize */
                     level n, lpad(level,400) padding
                   from dual
                   connect by level<=200)
      select
          rownum rn
        , (select count(*) from gen g1, gen g2, gen g3 where g1.n>g2.n and g1.n*0=main.n*0) cnt
      from gen main
      where rownum<=60;
   
   type ctype is table of c%rowtype;
   c_array ctype;
   
   procedure SnapStats(descr varchar2:=null)
   is
      st_new_dbtime      number;
      st_new_cpu_time    number;
      st_new_user_calls  number;
      st_new_recur_calls number;
   begin
      select   max(decode(sn.NAME,'DB time'                 ,st.value))*10 st_dbtime
              ,max(decode(sn.NAME,'CPU used by this session',st.value))*10 st_cpu_time
              ,max(decode(sn.NAME,'user calls'              ,st.value))    st_user_calls
              ,max(decode(sn.NAME,'recursive calls'         ,st.value))    st_recur_calls
          into st_new_dbtime,st_new_cpu_time,st_new_user_calls,st_new_recur_calls
      from v$mystat st, v$statname sn
      where st.statistic#=sn.statistic#
        and sn.name in ('DB time','CPU used by this session'
                       ,'user calls','recursive calls'
                       );
      if descr is not null then
         :output:= :output || descr ||':'||chr(10)
                || 'sesstat dbtime:     ' || (st_new_dbtime      - st_dbtime      )||chr(10)
                || 'sesstat cputime:    ' || (st_new_cpu_time    - st_cpu_time    )||chr(10)
                || 'sesstat user calls: ' || (st_new_user_calls  - st_user_calls  )||chr(10)
                || 'sesstat recur calls:' || (st_new_recur_calls - st_recur_calls )||chr(10)
                || '======================================'||chr(10);
      end if;
      st_dbtime      := st_new_dbtime     ;
      st_cpu_time    := st_new_cpu_time   ;
      st_user_calls  := st_new_user_calls ;
      st_recur_calls := st_new_recur_calls;
   end;
   
begin
   -- saving previous stats:
   SnapStats;

   -- generating cpu load:
   for i in 1..1e7 loop
      cnt:=cnt**2+cnt**1.3-cnt**1.2;
   end loop;
   -- getting new stats:
   SnapStats('After pl/sql loop');
   
   open c;
   SnapStats('After "open c"');
   fetch c bulk collect into c_array limit 10;
   SnapStats('After fetch 10 rows');
   fetch c bulk collect into c_array limit 10;
   SnapStats('After fetch 20 rows');
   fetch c bulk collect into c_array limit 10;
   SnapStats('After fetch 30 rows');
   close c;
   SnapStats('After close c');
end;
/ 

prompt 'Delta stats after statement(ms):';
select   max(decode(sn.NAME,'DB time'                 ,st.value))*10
          - &&prev_dbtime      as delta_dbtime
        ,max(decode(sn.NAME,'CPU used by this session',st.value))*10
          - &&prev_cputime     as delta_cpu_time
        ,max(decode(sn.NAME,'user calls'              ,st.value))  
          - &&prev_user_calls  as delta_user_calls
        ,max(decode(sn.NAME,'recursive calls'         ,st.value))  
          - &&prev_recur_calls as delta_recur_calls
from v$mystat st, v$statname sn
where st.statistic#=sn.statistic# 
  and sn.name in ('DB time','CPU used by this session'
                 ,'user calls','recursive calls'
                 )
/
prompt 'Test results:';
col output format a40;
print output;
set feed off;

Output:

SQL> @tests/dbtime2

Executing test...
'Delta stats after statement(ms):'

DELTA_DBTIME DELTA_CPU_TIME DELTA_USER_CALLS DELTA_RECUR_CALLS
------------ -------------- ---------------- -----------------
       18530          18460                5                33

Test results:
OUTPUT
----------------------------------------
After pl/sql loop:
sesstat dbtime:     0
sesstat cputime:    4350
sesstat user calls: 0
sesstat recur calls:2
======================================
After "open c":
sesstat dbtime:     0
sesstat cputime:    20
sesstat user calls: 0
sesstat recur calls:4
======================================
After fetch 10 rows:
sesstat dbtime:     0
sesstat cputime:    4680
sesstat user calls: 0
sesstat recur calls:2
======================================
After fetch 20 rows:
sesstat dbtime:     0
sesstat cputime:    4680
sesstat user calls: 0
sesstat recur calls:2
======================================
After fetch 30 rows:
sesstat dbtime:     0
sesstat cputime:    4690
sesstat user calls: 0
sesstat recur calls:2
======================================
After close c:
sesstat dbtime:     0
sesstat cputime:    0
sesstat user calls: 0
sesstat recur calls:3
======================================

We can notice that 'CPU used by this session' is updated at the same time as 'recursive calls', but 'DB time' is updated only together with 'user calls'. This difference is usually not so important (in most cases we can use a combination of other statistics), but if you want to instrument some code, it is a good reason to check when the statistics you rely on are actually updated.

Categories: Development

APEX 5.0 Page Designer Videos

Denes Kubicek - Wed, 2014-03-19 01:34
There is a nice series of videos about the APEX 5.0 Page Designer on YouTube. You can watch it here. David Peake shows how the new Page Designer works. It is worth having a look at, and much easier than finding it all out yourself. Enjoy.

Categories: Development

Installing Agent using Pull mechanism on Windows-without cygwin or CURL

Arun Bavera - Tue, 2014-03-18 15:49

1) Get the latest Agent software using the following URL format:
https://my.emservername.net/em/install/getAgentImage?user=AGENT_DOWNLOAD&password=oracle123&version=12.1.0.3.0&script=download&host=myhostname&type=agtimage&platform="Microsoft Windows (32-bit)"
You can use CURL on Linux if you don't have a browser installed:
curl --data @pswd.txt "https://%omsHost%:%omsPort%/em/install/getAgentImage?user=%username%&version=%version%&script=download&host=%hostname%&type=%type%&platform=%platform%" --insecure -o "%dest%"\agent.zip -S
Platform                          Version
Linux x86                         12.1.0.3.0
Linux x86-64                      12.1.0.3.0
Microsoft Windows (32-bit)        12.1.0.3.0
Microsoft Windows x64 (64-bit)    12.1.0.3.0
2) Save it as “getAgentImage.zip”
3) Log in as administrator and unzip it to a stage directory. For example:
unzip D:/tmp/getAgentImage.zip -d D:/agtImg12c
4) Get the correct hostname from nslookup
5) Deploy the Agent with the correct hostname. For example:
D:/agtImg12c/agentDeploy.bat AGENT_BASE_DIR=D:/Oracle/agent12c AGENT_INSTANCE_HOME=D:/Oracle/agent12c/agent_inst OMS_HOST=my.emservername.net EM_UPLOAD_PORT=4900 AGENT_REGISTRATION_PASSWORD=oracle123 AGENT_PORT=3875 ORACLE_HOSTNAME=myhostname
Note: use forward slashes ("/") in the paths.
Refer to: EM 12c: How to Install EM 12c Agent in Silent Mode using a Response File (Doc ID 1360083.1)
http://docs.oracle.com/cd/E24628_01/install.121/e24089/install_agent_usng_rsp.htm#CEGGACJE
Categories: Development

Troubleshooting the Discovery of Targets on OEM12c

Arun Bavera - Fri, 2014-03-14 13:41

NOTE: Please enable Perl tracing at DEBUG level; after running discovery, send the Perl log file to Oracle Support for further analysis.

  1. Go to /agent_inst/sysman/config
  2. Open the emd.properties file
  3. Change the line EMAGENT_PERL_TRACE_LEVEL=INFO to EMAGENT_PERL_TRACE_LEVEL=DEBUG
  4. Restart the agent
  5. Run discovery again and send us the /agent_inst/sysman/log/emagent_perl.trc file
Categories: Development

APEX Plugin Page

Denes Kubicek - Thu, 2014-03-13 11:50
I just noticed that http://www.apex-plugin.com is built using PHP and Joomla. Shouldn't it use Oracle APEX?

Categories: Development

APEX IR Query Plugin

Denes Kubicek - Thu, 2014-03-13 11:18
I already posted about the function for getting the IR query here:

http://www.deneskubicek.blogspot.de/2013/05/getting-interactive-report-query.html

Now, I decided to create a dynamic action plugin based on that function. The plugin has been submitted to the plugin directory at http://www.apex-plugin.com/ and should appear there soon. If you want to see how it works and download the code, you can do it here as well: https://apex.oracle.com/pls/apex/f?p=43045:1

Categories: Development

How Humans Lost Their Hands (and saved the world)

FeuerThoughts - Wed, 2014-03-12 18:10
My first creation (and destruction) mythstory....

How Humans Lost Their Hands (and saved the world)
Copyright 2014 Steven Feuerstein

A very long time ago, our home, the Earth, was very beautiful and full of life on the inside, but on its surface it was empty.
So the Great God Bacta shook the planet and brought forth multi-cellular life in all its ever-lasting glory and ever-changing beauty.
And among the multitude of multitudes of creatures, one little piece of Earth's life was Sura, and she was the First Woman, mother of all the humans.
She was, however, sad, because she was all alone, and lonely.
Bacta felt sorry for Sura and in a moment of love for this new surface life, bestowed upon Sura four gifts:
1. A fat fish made of silver, with jewels for eyes.
Bacta declared: as long as Sura possessed the fish, humans would never lack for food. Bacta would make sure the harvests are plentiful and the hunting good.
2. A golden coconut, always full of cool, fresh coconut water.
Bacta declared: as long as Sura possessed the coconut, humans would never lack for water. Bacta would make sure that the rivers flowed and the water creatures cleaned the water, and that humans would always have enough to drink.
3. A glowing jade stone in the shape of a pulsing heart.
Bacta declared: as long as Sura possessed the jade heart, Bacta would make sure that humans would love each other and protect each other from harm.
4. A fossilized shark fin, dark and sleek.
Bacta declared: as long as Sura possessed the fin, humans would be born with their amazing hands, with which humans can make all sorts of things.
Sura thanked Bacta for the wonderful gifts, but still looked sad. Then Bacta remembered, and it created Beto, the First Man.
Beto, the First Man, was also the happiest man, because his job was to make little baby humans with Sura.
Beto was good at his job, and so was Sura, and soon humans - and the things they made with their amazing hands - covered the Earth. After all, Sura still possessed the fish and coconut and jade and fin, so humans ate and drank and loved and built all they wanted.
And they were so busy eating and drinking and loving and building that they didn't notice all the other creatures who had stopped eating, no longer drank, felt neither love nor hate... because they were simply no more.
Bacta discovered what was happening and appeared before the multitude of humans, to declare:
"You are eating so much food
and drinking so much water
and loving so few and so little
and building so many roads
that the last butterfly has died."
A great murmur went up amongst the humans: "Butterflies dead? So what?"
Bacta was not done. "I have never before had to take back a gift, but I hereby take back the fish. I will no longer ensure that you have food to eat. Perhaps this will teach you to live in the world, instead of eating it."
And so it came to be.
A swarm of locusts ate the crops that humans planted. Worms ate the fruit in the trees. The rains stopped and everything went bone dry.
Many humans died.
But after a while, the humans who survived figured out how to stop the pests and after a while rain returned. With GMOs, and Roundup, and DDT, humans could get back to eating and drinking and loving to their heart's content. And they did.
Then Bacta appeared for a second time before the humans, very angry, and said:
"You are still eating too much food
and still drinking too much water
and still loving too few and too little
and still building too many roads.
Now, the last elephant has just died."
A great murmur went up amongst the humans: "Elephant? What is an elephant? Dead? So what?"
Bacta was not done. "Only once before have I had to take back a gift, but now I take back the coconut. I will no longer make sure that you will have water to drink. Perhaps that will teach you to live in the world around you, instead of drinking it dry."
And so it came to be. The water in the rivers turned blood red and the water in the seas caught fire.
Many humans died.
But after a while, humans learned how to clean the water so they could drink it, and live. They had to keep this good water apart from the bad water and so from that time on, everyone drank water from plastic bottles.
Bacta was outraged. Water was the source of life, the home for all bacteria. Humans were even ruining that?  Bacta appeared before humans in a fury, and said:
"You are still eating too much food
and still drinking too much water
and still loving too few and too little
and still building too many roads.
Now the last frog has just died."
A great murmur went up amongst the humans: "Frogs are slimy. Frogs are gross. Good riddance, frogs."
Bacta was not done. "Only two times before have I had to take back a gift, but I now take back the jade heart. I will no longer make sure that humans love and take care of each other. Perhaps that will teach you to live in the world around you, instead of covering it with humans."
And so it came to be. Families stuck together, even tighter than before, but friends were no longer trusted, and everyone else was seen as a danger.
Yet if you are not trusted, then after a while you act untrustworthily. Without trust and love, between the many groups of humans around the world, violence broke out and wars swept the continents.
Many humans died.
But after a while, those who stayed inside or had the biggest guns, wrote contracts agreeing to help one another. And then the lawyers ruled the land, along with the police.
Which meant that humans could get back to eating and drinking and not loving the world, which they did, with a vengeance.
Forests disappeared. Coral died and turned into rock. Without trees, rivers dried up. Without coral, the fish and then whales had no food, and they died, too.
Many humans died, but many more kept on eating and killing.
When Bacta appeared for the fourth time, humans trembled before the roaring voice of a billion billion bacteria:
"STILL  you eat too much and
STILL you drink too much and
STILL you love too little and
STILL you build, build, build."
The humans were confused. What else were they supposed to do, with their amazing hands and their amazing minds?
Bacta was not done. "You build so much that there's no room for anything but humans. And then you have more humans.
"Only three times before have I had to take back a gift, but now I have come to take back the fin. I will no longer make sure that humans are born with hands that allow them to build, and in building, destroy. Perhaps that will teach you to fit into the world, instead of fitting the world to your desires."
And so it came to be. From that time forward, human babies were born without hands. In their place were just two stubby fingers, and no wrist.
The humans with ten fingers called these tragic babies Four Fingers.
Ten Fingers helped the Four Fingers. They built special gloves for the two lonely fingers on each hand, and built special machines to do things for Four Fingers that they could not do for themselves.
And then Ten Fingers and Four Fingers got back to eating and drinking up the world.
Aren't humans amazing?
But after a while, all the Ten Fingers died, and then a little while after that, the machines stopped working and the gloves wore out.
Many four-fingered humans died.
The ones that survived worked hard for their food with their four fingers, but didn't eat too much.
They got thirsty from their work, but didn't drink too much.
And the only way they could survive was to work together, so they came to love each other dearly.
But they didn't have hands, and never would, so they didn't build any machines.
Which means they didn't spoil the water.
And they didn't cut down all the trees.
Happily, soon (after just 100 generations of Four Fingers) the water was pure again, and the forests were full of trees again, and new creatures evolved to take the place of all the creatures humans had killed.
And Bacta looked up at its creation, and was, for the first time in a long time, pleased.







Categories: Development

Formspider Authentication and Authorization Repository for PL/SQL

Gerger Consulting - Sat, 2014-03-08 06:19

We are very excited to announce the release of Formspider Desktop 1.7. FS Desktop 1.7 includes one of the most requested features by our customers.

Built-In Security Repository

FS Desktop 1.7 comes with an integrated authentication and authorization repository. Formspider developers are now able to secure both data and UI elements and grant access rights to users directly from the Formspider IDE.

Formspider Security Highlights:
  • Security both at data and UI layer.
    Grant read and write access at the query, column and row level. Control editable, enable, visible attributes of every UI component.
  • Security-aware UI components.
    Formspider automatically detects the read and write privileges defined at the data layer and restricts access to data-bound UI objects.
  • Incredibly easy to manage. 
    Build and maintain your security model quickly with the point and click graphical user interface.
  • Create reports on your security model easily with SQL. 
    No LDAP, no XML, no files. Formspider security model is implemented in relational database tables.
Learn more about Formspider Security at the Learning Center. Download Formspider Desktop 1.7 now.

Categories: Development

Oracle APEX Cookbook : Second Edition

Denes Kubicek - Fri, 2014-03-07 00:54
I was asked to review this book and write a couple of words about it. Of course I didn't go through all the 444 pages of the book because I don't have enough time. From what I could read and review, I can say that this book contains a lot of well explained examples for almost all APEX features. So, if you are new to APEX and want to speed up your learning process then this is a good choice. Experienced APEX developers can also use this book as a reference.

Categories: Development

Useful Queries in OEM12c PART-3

Arun Bavera - Wed, 2014-03-05 23:00
SOA11g:

FROM OEM12c Views

Total BPEL Instances in Last 1 Hour
SELECT  max(value) as  TotalInvocations , sysdate-2/24  Start_Time,sysdate-1/24  End_Time --((??EMIP_BIND_END_DATE??)-2/24)  Start_Time,  ((??EMIP_BIND_END_DATE??)-1/24)  End_Time
FROM mgmt$metric_details
WHERE target_name like '/PROD%'
AND collection_timestamp >= sysdate-2/24
AND collection_timestamp <= sysdate-1/24
AND metric_column = 'compositeInstances.delta'
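The `sysdate-2/24 .. sysdate-1/24` predicates above rely on Oracle's date arithmetic, where 1 equals one day, so 2/24 is two hours. As a language-neutral sketch (mine, not part of the original post), the same one-hour window can be computed like this:

```python
from datetime import datetime, timedelta

def last_full_hour_window(now):
    """Return the (start, end) pair matching sysdate-2/24 .. sysdate-1/24:
    the window from two hours ago up to one hour ago."""
    # In Oracle date arithmetic, 1 = one day, so 2/24 is two hours.
    return now - timedelta(hours=2), now - timedelta(hours=1)

start, end = last_full_hour_window(datetime(2014, 3, 5, 14, 0, 0))
print(start, end)  # 2014-03-05 12:00:00 2014-03-05 13:00:00
```

Using the last *full* hour (rather than the most recent hour) gives the metric collection a chance to catch up before the numbers are read.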

SOA SERVICES PERFORMANCE MONITOR - PRD - HOURLY
select Process_Name,to_number(Invocations,'9999999999999') as NUM_INSTANCES,round(to_number(Invocations,'9999999999999')/60,4) as THRUPUT_PER_MIN from (
SELECT SUBSTR(target_name, INSTR(target_name,'/', -1, 1)+1) AS Process_Name,
       max(value) as Invocations
FROM mgmt$metric_details
WHERE target_name like '/PROD%'
AND collection_timestamp >= sysdate-2/24 --TO_DATE('3/5/2014 1:00:03 PM','MM/DD/YYYY HH:MI:SS AM') --((??EMIP_BIND_END_DATE??)-2/24)
AND collection_timestamp <= sysdate-1/24 --TO_DATE('3/5/2014 2:00:03 PM','MM/DD/YYYY HH:MI:SS AM')--((??EMIP_BIND_END_DATE??)-1/24)
AND metric_column = 'compositeInstances.delta'
GROUP BY SUBSTR(target_name, INSTR(target_name,'/', -1, 1)+1)
ORDER BY to_number(Invocations,'9999999999999') desc
)
where to_number(Invocations,'9999999999999')>0

ORDER BY to_number(Invocations,'9999999999999') desc


FROM Direct Database link or Metric Extension:
Total SOA11g Composite Instances in Last 1 Hour

SELECT   COUNT(*) AS TOTAL_NUM_INSTANCES,((sysdate)-2/24)  Start_Time,  ((sysdate)-1/24)  End_Time
FROM     MY_soainfra.CUBE_INSTANCE@MY_SOA_DBLINK
WHERE     MY_soainfra.CUBE_INSTANCE.CREATION_DATE >= (sysdate)-2/24 
          AND MY_soainfra.CUBE_INSTANCE.CREATION_DATE <= (sysdate)-1/24

      AND MY_soainfra.CUBE_INSTANCE.state in (5,6)

SOA11g - SERVICES PERFORMANCE MONITOR - PRD - HOURLY
SELECT PROCESS_NAME,
       DOMAIN_NAME,
/*         TO_CHAR (MIN (INSTANCE_START), 'MM/DD/YYYY HH24:MI')
            AS PROCESS_LOAD_START,
         TO_CHAR (MAX (ACTIVITY_END), 'MM/DD/YYYY HH24:MI') AS PROCESS_LOAD_END, */
         COUNT (*) AS NUM_INSTANCES,
                 COUNT (*) * 60
            / (EXTRACT (
                  SECOND FROM (MAX (ACTIVITY_END) - MIN (INSTANCE_START)))
               + EXTRACT (
                    MINUTE FROM (MAX (ACTIVITY_END) - MIN (INSTANCE_START)))
                 * 60
               + EXTRACT (
                    HOUR FROM (MAX (ACTIVITY_END) - MIN (INSTANCE_START)))
                 * 3600
               + EXTRACT (
                    DAY FROM (MAX (ACTIVITY_END) - MIN (INSTANCE_START)))
                 * 86400),
            2)
            AS THRUPUT_PER_MIN_ANOTHER, 
         ROUND (
            AVG (
                 EXTRACT (SECOND FROM ACTIVITY_END - INSTANCE_START)
               + EXTRACT (MINUTE FROM ACTIVITY_END - INSTANCE_START) * 60
               + EXTRACT (HOUR FROM ACTIVITY_END - INSTANCE_START) * 3600),
            2)
            AS DURATION_AVG,
         ROUND (
            MEDIAN (
                 EXTRACT (SECOND FROM ACTIVITY_END - INSTANCE_START)
               + EXTRACT (MINUTE FROM ACTIVITY_END - INSTANCE_START) * 60
               + EXTRACT (HOUR FROM ACTIVITY_END - INSTANCE_START) * 3600),
            2)
            AS DURATION_MEDIAN,
         ROUND (
            STDDEV (
                 EXTRACT (SECOND FROM ACTIVITY_END - INSTANCE_START)
               + EXTRACT (MINUTE FROM ACTIVITY_END - INSTANCE_START) * 60
               + EXTRACT (HOUR FROM ACTIVITY_END - INSTANCE_START) * 3600),
            2)
            AS DURATION_STDDEV,
         ROUND (
            MIN (
                 EXTRACT (SECOND FROM ACTIVITY_END - INSTANCE_START)
               + EXTRACT (MINUTE FROM ACTIVITY_END - INSTANCE_START) * 60
               + EXTRACT (HOUR FROM ACTIVITY_END - INSTANCE_START) * 3600),
            2)
            AS DURATION_MIN,
         ROUND (
            MAX (
                 EXTRACT (SECOND FROM ACTIVITY_END - INSTANCE_START)
               + EXTRACT (MINUTE FROM ACTIVITY_END - INSTANCE_START) * 60
               + EXTRACT (HOUR FROM ACTIVITY_END - INSTANCE_START) * 3600),
            2)
            AS DURATION_MAX,
         ROUND (
                 EXTRACT (SECOND FROM MAX (ACTIVITY_END) - MIN (INSTANCE_START))
               + EXTRACT (MINUTE FROM MAX (ACTIVITY_END) - MIN (INSTANCE_START)) * 60
               + EXTRACT (HOUR FROM MAX (ACTIVITY_END) - MIN (INSTANCE_START)) * 3600
               + EXTRACT (DAY FROM MAX (ACTIVITY_END) - MIN (INSTANCE_START)) * 86400,
            2)
            AS LOAD_DURATION,
         SUM(GT_2SEC) AS GT2SEC,
         ROUND(100 - 100*SUM(GT_2SEC)/COUNT(*),2) AS PER_LT2SEC,
         SUM(GT_5SEC) AS GT5SEC,
         ROUND(100 - 100*SUM(GT_5SEC)/COUNT(*),2) AS PER_LT5SEC,
         SUM(GT_15SEC) AS GT15SEC,
         ROUND(100 - 100*SUM(GT_15SEC)/COUNT(*),3) AS PER_LT15SEC,
         SUM(GT_30SEC) AS GT30SEC,
         ROUND(100 - 100*SUM(GT_30SEC)/COUNT(*),3) AS PER_LT30SEC,
         ROUND(100*SUM(SUCCESS_STATE)/COUNT(*),2) AS PER_SUCCESS,
         ROUND(100*SUM(FAULTED_STATE)/COUNT(*),2) AS PER_FAULT
    FROM (  SELECT CUBE_INSTANCE.TITLE TITLE,
                   CUBE_INSTANCE.DOMAIN_NAME,
                   CUBE_INSTANCE.component_name PROCESS_NAME, 
                   CUBE_INSTANCE.CIKEY CIKEY,
                   CUBE_INSTANCE.CREATION_DATE INSTANCE_START,
                   CUBE_INSTANCE.MODIFY_DATE ACTIVITY_END,
                   DECODE(CUBE_INSTANCE.STATE,5,1,0) AS SUCCESS_STATE,
                   CASE WHEN CUBE_INSTANCE.STATE = 6 THEN 1 ELSE 0 END AS FAULTED_STATE,
                   CUBE_INSTANCE.MODIFY_DATE - CUBE_INSTANCE.CREATION_DATE,
                   CASE WHEN CUBE_INSTANCE.MODIFY_DATE - CUBE_INSTANCE.CREATION_DATE > '0 00:00:02.000000' THEN 1 ELSE 0 END AS GT_2SEC,
                 CASE WHEN CUBE_INSTANCE.MODIFY_DATE - CUBE_INSTANCE.CREATION_DATE > '0 0:0:05.000000' THEN 1 ELSE 0 END AS GT_5SEC,
                 CASE WHEN CUBE_INSTANCE.MODIFY_DATE - CUBE_INSTANCE.CREATION_DATE > '0 0:0:15.000000' THEN 1 ELSE 0 END AS GT_15SEC,
                 CASE WHEN CUBE_INSTANCE.MODIFY_DATE - CUBE_INSTANCE.CREATION_DATE > '0 0:0:30.000000' THEN 1 ELSE 0 END AS GT_30SEC
               FROM MY_soainfra.CUBE_INSTANCE@MY_SOA_DBLINK
             WHERE  CUBE_INSTANCE.CREATION_DATE >= ((??EMIP_BIND_END_DATE??)-2/24)
         AND CUBE_INSTANCE.CREATION_DATE <=((??EMIP_BIND_END_DATE??)-1/24)
        -- and component_name='SalesOrderEBS'
          AND CUBE_INSTANCE.state in (5,6)
          ORDER BY CUBE_INSTANCE.CREATION_DATE ASC)
GROUP BY PROCESS_NAME, DOMAIN_NAME

ORDER BY NUM_INSTANCES desc

List all metric names for SOA Composite targets
select distinct metric_name from mgmt$metric_hourly where target_type='oracle_soa_composite';


List the names of all of the SOA Composite targets
select distinct target_name from mgmt$metric_hourly where target_type='oracle_soa_composite';

List the metric columns for the soainfra_composite_label_rollup_by_revision metric
select distinct metric_column from mgmt$metric_hourly where target_type='oracle_soa_composite' and metric_name='soainfra_composite_label_rollup_by_revision';

select distinct metric_column from mgmt$metric_hourly where target_type='soa_composite_component';
select distinct metric_column from mgmt$metric_details where target_type='soa_composite_component';
select distinct metric_column from mgmt$metric_hourly where target_type='soa_composite_service';

select distinct metric_column,target_type  from mgmt$metric_hourly where target_type like '%soa%'; --and metric_name='soainfra_composite_label_rollup_by_revision';

select distinct target_type from mgmt$metric_hourly where target_type like '%soa%';

Categories: Development

Single SQL vs SQL+PL/SQL

XTended Oracle SQL - Fri, 2014-02-28 14:58

Everyone knows Tom Kyte’s mantra:

You should do it in a single SQL statement if at all possible.

But we all know that “Every rule has an exception”.
There are many different cases when PL/SQL combined with SQL can be more efficient than SQL alone, and I don't want to catalog them all. I just want to show a couple of examples of such exceptions:

1. Running totals by several dimensions

Simple example from forum:

select dt,
       dim1,
       dim2,
       val,
       sum(val) over(partition by dim1 order by dt) dim1_cumulative_sum,
       sum(val) over(partition by dim2 order by dt) dim2_cumulative_sum,
       sum(val) over(partition by dim1, dim2 order by dt) dim1_dim2_cumulative_sum
  from mg_t
 order by dt;

This query will be very heavy on big data sets, but we can compute the same running totals efficiently with PL/SQL:

create or replace function xf_to_drop return xt2_to_drop pipelined
is
   type tt  is table of number index by pls_integer;
   type tt2 is table of tt index by pls_integer;
   dim1_c tt;
   dim2_c tt;
   dim12_c tt2;
begin
   for r in (
            select dt,
                   dim1,
                   dim2,
                   val
              from mg_t
             order by dt
   )
   loop
      -- accumulate running totals per dim1, per dim2 and per (dim1, dim2)
      dim1_c(r.dim1):=case when dim1_c.exists(r.dim1) then dim1_c(r.dim1) else 0 end + r.val;
      dim2_c(r.dim2):=case when dim2_c.exists(r.dim2) then dim2_c(r.dim2) else 0 end + r.val;
      dim12_c(r.dim1)(r.dim2):=case
                                  when dim12_c.exists(r.dim1)
                                   and dim12_c(r.dim1).exists(r.dim2)
                                  then dim12_c(r.dim1)(r.dim2)
                                  else 0
                               end + r.val;
      pipe row (xt1_to_drop( r.dt
                            ,r.dim1
                            ,r.dim2
                            ,r.val
                            ,dim1_c(r.dim1)
                            ,dim2_c(r.dim2)
                            ,dim12_c(r.dim1)(r.dim2)
                           ));
   end loop;
   return;
end;
/
Full example:
create table mg_t as
select trunc(sysdate) + level/1440 dt,
       trunc(3 * dbms_random.value()) dim1,
       trunc(3 * dbms_random.value()) dim2,
       trunc(100 * dbms_random.value()) val
  from dual
connect by level <= 3e6;

create type xt1_to_drop is object(
       dt                  date
      ,dim1                number
      ,dim2                number
      ,val                 number
      ,dim1_cumulative_sum number
      ,dim2_cumulative_sum number
      ,dim1_dim2_cumulative_sum number
);
create type xt2_to_drop as table of xt1_to_drop;

create or replace function xf_to_drop return xt2_to_drop pipelined
is
   type tt is table of number index by pls_integer;
   type tt2 is table of tt index by pls_integer;
   dim1_c tt;
   dim2_c tt;
   dim12_c tt2;
begin
   for r in (
            select dt,
                   dim1,
                   dim2,
                   val
              from mg_t
             order by dt
   )
   loop
      dim1_c(r.dim1):=case when dim1_c.exists(r.dim1) then dim1_c(r.dim1) else 0 end + r.val;
      dim2_c(r.dim2):=case when dim2_c.exists(r.dim2) then dim2_c(r.dim2) else 0 end + r.val;
      dim12_c(r.dim1)(r.dim2):=case
                                  when dim12_c.exists(r.dim1)
                                   and dim12_c(r.dim1).exists(r.dim2)
                                  then dim12_c(r.dim1)(r.dim2)
                                  else 0
                               end + r.val;
      pipe row (xt1_to_drop( r.dt,r.dim1,r.dim2,r.val,dim1_c(r.dim1),dim2_c(r.dim2),dim12_c(r.dim1)(r.dim2)));
   end loop;
   return;
end;
/
exec for r in (select * from table(xf_to_drop)) loop null; end loop;
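For readers who don't want to parse the PL/SQL, the same bookkeeping can be sketched language-neutrally. This Python fragment (mine, not from the post) mirrors the associative-array logic: one accumulator per dimension, updated in date order:

```python
from collections import defaultdict

def running_totals(rows):
    """rows: iterable of (dt, dim1, dim2, val) tuples sorted by dt.
    Yields each row extended with the three cumulative sums."""
    d1 = defaultdict(int)    # cumulative sum per dim1
    d2 = defaultdict(int)    # cumulative sum per dim2
    d12 = defaultdict(int)   # cumulative sum per (dim1, dim2)
    for dt, dim1, dim2, val in rows:
        d1[dim1] += val
        d2[dim2] += val
        d12[(dim1, dim2)] += val
        yield dt, dim1, dim2, val, d1[dim1], d2[dim2], d12[(dim1, dim2)]

rows = [(1, 'a', 'x', 10), (2, 'a', 'y', 5), (3, 'b', 'x', 7), (4, 'a', 'x', 1)]
for r in running_totals(rows):
    print(r)
```

One ordered pass and O(number of distinct keys) memory, versus the three window sorts the SQL version has to perform.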

2. Finding connected components

Assume that we have big table with many-to-many relationship:

create table test (clientid NUMBER(10), accountid NUMBER(10));

How we can find all connected groups?

This example was also taken from our Russian forum, where there was a very good and simple SQL-only solution, but it's not efficient on big data sets:

select min(group_member_id) as group_max_id, accountid, clientid
  from  (select clientid as group_member_id
         , connect_by_root accountid as accountid
         , connect_by_root clientid  as clientid
      from test
      connect by nocycle decode(accountid, prior accountid, 1, 0)
                       + decode(clientid,  prior clientid,  1, 0)
                       = 1
  ) a
  group by accountid, clientid
  order by group_max_id, accountid
/

We can recall our algorithms courses and adapt one of the several well-known algorithms for connected components.
Based on the weighted quick-union algorithm:

declare
   type int_array    is table of pls_integer index by pls_integer;
   type arr_elems    is table of sys.ku$_objnumset index by pls_integer;
   root              int_array;
   root_elems        arr_elems;

   n        int;
   clients  int_array;
   accounts int_array;

    l integer:=dbms_utility.get_time();

    procedure print(v in varchar2) is
    begin
      dbms_output.put_line(to_char((dbms_utility.get_time-l)/100,'0999.99')||' '||v);
      l:=dbms_utility.get_time();
    end;

   
   function get_root(n int) return pls_integer is
   begin
      if root.exists(n) then 
         return root(n);
      else 
         return null;
      end if;
   end;
   
   procedure update_root(old_root pls_integer,new_root pls_integer) is
      i       pls_integer;
      elem    pls_integer;
      cnt_old pls_integer;
      cnt_new pls_integer;
   begin
      if old_root!=new_root then 
         --root_elems(new_root):=root_elems(new_root) multiset union all root_elems(old_root);
         cnt_old:=root_elems(old_root).count;
         cnt_new:=root_elems(new_root).count;
         root_elems(new_root).extend(cnt_old);
         for i in 1..cnt_old
         loop
            elem := root_elems(old_root)(i);
            root(elem):=new_root;
            root_elems(new_root)(cnt_new+i):=elem;
         end loop;
         root_elems(old_root).delete;
      end if;
   end;
   
   procedure add_elem(p_root pls_integer, p_elem pls_integer) is
   begin
      if not root_elems.exists(p_root) then
         root_elems(p_root):=sys.ku$_objnumset(p_elem);
      else
         root_elems(p_root).extend();
         root_elems(p_root)(root_elems(p_root).count):=p_elem;
      end if;
   end;
   
   procedure add_link(clientid pls_integer,accountid pls_integer) is
      r1       pls_integer;
      r2       pls_integer;
      new_root pls_integer;
   begin
      r1:=get_root(clientid);
      r2:=get_root(accountid);
      
      if r1 is null or r2 is null then
         new_root := coalesce(r1,r2,clientid);
         if r1 is null then add_elem(new_root,clientid ); root(clientid) :=new_root; end if;
         if r2 is null then add_elem(new_root,accountid); root(accountid):=new_root; end if;
      else
         new_root := least(r1,r2);
         root(clientid) :=new_root;
         root(accountid):=new_root;
         update_root(greatest(r1,r2),new_root);
      end if;
   end;
   
   function str_format(p int) return varchar2 is
   begin
      return utl_lms.format_message('(%d, %d) = group #%d'
                                   ,clients(p)
                                   ,accounts(p)
                                   ,get_root(clients(p))
                                   );
   end;
begin
   print('start');
   select clientid,accountid 
          bulk collect into clients,accounts
   from test 
--   where rownum<=1000
   ;
   print('fetched');
   n:=clients.count;
   dbms_output.put_line('count='||n);
   for i in 1..n loop
      add_link(clients(i),accounts(i));
   end loop;
   print('processed');
---
/*
   for i in 1..n loop
      dbms_output.put_line(str_format(i));
   end loop;
--   */
end;


We can also try even more interesting special algorithms for parallel processing:
CONNECTED COMPONENTS ALGORITHMS
FOR MESH-CONNECTED PARALLEL COMPUTERS
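As a language-neutral illustration (mine, not from the forum thread), the weighted quick-union idea at the heart of the PL/SQL block above can be sketched in a few lines of Python:

```python
def connected_groups(pairs):
    """Group (clientid, accountid) links into connected components
    using weighted quick-union with path compression."""
    parent, size = {}, {}

    def find(x):
        parent.setdefault(x, x)
        size.setdefault(x, 1)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression (halving)
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra == rb:
            return
        if size[ra] < size[rb]:
            ra, rb = rb, ra
        parent[rb] = ra                    # attach smaller tree under larger
        size[ra] += size[rb]

    for client, account in pairs:
        union(('c', client), ('a', account))  # tag ids so the two keyspaces stay disjoint
    return {x: find(x) for x in parent}

groups = connected_groups([(1, 10), (2, 10), (3, 20)])
# clients 1 and 2 share account 10, so they land in one component
print(groups[('c', 1)] == groups[('c', 2)])  # True
print(groups[('c', 1)] == groups[('c', 3)])  # False
```

The PL/SQL version above keeps an explicit element list per root instead of path compression, but the asymptotics are the same: near-linear in the number of links, versus the combinatorial explosion of the NOCYCLE CONNECT BY query.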

Categories: Development

APEX Listener => Oracle REST Data Services

Dimitri Gielis - Wed, 2014-02-26 08:05

Yesterday a new version of the Oracle APEX Listener was released. The new version is 2.0.6... and the biggest change in this release is the change in name. From now on we won't speak about the "APEX Listener" anymore, but we speak about "Oracle REST Data Services".

The name explains better what this piece of software does, and it will be less confusing for non-APEX people. Many people thought of the database listener when they heard "APEX Listener". And those who just want REST no longer have to wonder what APEX has to do with it.

"Oracle REST Data Services" still serves the same purposes for us as APEX developers:
- serve as a replacement for the (older) mod_plsql
- enable the RESTful Services in APEX (found in SQL Workshop and Application wizard)

Categories: Development

Is Evolution Irrefutable and Compelling?

FeuerThoughts - Sat, 2014-02-22 05:25
On my PL/SQL Challenge website, we have a feature called Roundtable, which offers an opportunity to discuss "big picture" questions relevant to Oracle programmers.

The current discussion (well, sharing, really) asks players to share the programming languages with which they work.

In part of my answer, I wrote:

I should learn new stuff...but, heck, I am 55. I have spent a very large percentage of the last 35 years in front of a computer or talking to other people about how to work best in front of a computer.

I'd rather learn other new stuff, so for the past year I have been intensively studying evolution. How truly incredible and amazing! Now there's a "language" that blows my mind: The coding in DNA is mind-boggling. The irrefutable and compelling logic of evolution is astonishing.

If you have not read about evolution lately (and certainly almost anything you learned in school was both superficial and is now out of date), I strongly encourage you to check out:

Your Inner Fish, Neil Shubin
The Beak of the Finch, Jonathan Weiner
The Darwinian Tourist, Christopher Wills
Why Evolution is True, Jerry Coyne

To which one player responded:


 ==> [The irrefutable and compelling logic of evolution is astonishing.] <==
Is religious zeal allowed on this site? If it is, I am very happy to hear. I have tons of it. Irrefutable huh? Huh. Sounds like religious zeal to me. Please let me know!

I responded in part as follows:

The irrefutable and compelling logic of evolution is astonishing to those of us who use and celebrate science to understand the world and live within that world. Evolution is accepted as fact within the scientific community (which is not to say there aren't a few scientists here and there who reject it, I suppose) and is demonstrated in virtually every branch of science active today.

and offered to start a discussion on my personal blog, where it would be more appropriate to delve into our different opinions about evolution.

So here it is! I look forward to at least a response from Mike to get this going, and I would ask those who submit a post to tell me which books or articles by scientists documenting the evolutionary process they have read. I am not asking whether you believed any of it, but simply whether (and which) you have exposed yourself to this information.


 
Categories: Development