Feed aggregator

RACcheck for Single Instance Databases

Asif Momen - Tue, 2013-02-12 01:21
Starting with RACcheck 2.2.0, RACcheck support is extended to Oracle Single Instance Databases, Oracle Restart and RAC One Node configurations. 


I downloaded the latest version of the tool and tested it against my play database. Below is a sample RACcheck output from a single instance database:



[oracle@localhost raccheck]$ ./raccheck -v

RACCHECK  VERSION: 2.2.0_20121109
[oracle@localhost raccheck]$ 
[oracle@localhost raccheck]$ 
[oracle@localhost raccheck]$ ./raccheck -a

List of running databases
1. testdb
2. None of above

Select databases from list for checking best practices. For multiple databases, select 1 for All or comma separated number like 1,2 etc [1-2][1].
. .


Checking Status of Oracle Software Stack - Clusterware, ASM, RDBMS

. . . . . . . . . . . . . . .
-------------------------------------------------------------------------------------------------------
                                                 Oracle Stack Status                           
-------------------------------------------------------------------------------------------------------
Host Name  CRS Installed  ASM HOME       RDBMS Installed  CRS UP    ASM UP    RDBMS UP  DB Instance Name
-------------------------------------------------------------------------------------------------------
localhost   No              No              Yes             No         No       Yes      testdb  
-------------------------------------------------------------------------------------------------------



Refer to the MOS document mentioned below to learn more about the RACcheck tool. You may download the latest version of RACcheck from My Oracle Support (MOS).

References: 

RACcheck - RAC Configuration Audit Tool [ID 1268927.1]

Deploying a Virtualized Oracle Database Appliance ODA 2.5

Fuad Arshad - Mon, 2013-02-11 17:12


So I finally got the opportunity to deploy ODA 2.5 on a development Oracle Database Appliance. The documentation is lacking and needs more refinement for the masses.
Here are the steps to deploy ODA 2.5 by reimaging the ODA. The concepts and procedures for bare-metaling the box remain the same.
    1. Use the ILOM to connect to the box via remote control and mount the VM image (this needs to be done on both hosts individually)
    2. After the reboot, the imaging process will start and should take between 1-2 hrs (it took about 2 hrs 30 minutes for me)
    3. Once you get the Oracle VM Server 3.1.1 screen, your box has been imaged with a dom0 image
    4. If you are using the ILOM you can press ALT-F2 to get a login prompt
    5. Login as root
    6. Ensure that both boxes have been reimaged before starting the next step
    7. Run /opt/oracle/oak/bin/oakcli configure firstnet (on the first node)
    8. You have 2 options (Local or Global)
    9. Global should be selected if both nodes are ready to be IP'ed
    10. Select the network (net1, net2, net3, net4)
    11. Please note it is a little confusing, but here is a breakdown:
        a.  priv1=bond0 (Interconnect)
        b.  Net1=bond1
        c. Net2=bond2
        d. Net3=bond3
        e. Net4=xbond0
    12. Oracle failed to mention this, but on startup the system does provide the MAC addresses as well as the Ethernet names and bond info, so be careful and ensure that you understand your network topology prior to installing
    13. You do want to make sure you have a DNS entry and a new IP address for the Dom0 of each server node (2 x Dom0)
    14. Needless to say, the network should be the same on both nodes, e.g. public should be cabled on net1 on both nodes for consistency
    15. The network config will configure the public network on both nodes for Dom0 only
    16. After the config, scp patch 16186172 into /OVS on the node 0 Dom0
    17. Unzip the patch 1 & 2 files and cat them together: cat a b > templates
    18. Deploy the oda_base
 [root@odadb1-dom0 bin]# ./oakcli deploy oda_base
Enter the template location: /OVS/templateBuild-2013-01-15-08-53.tar.gz
Core Licensing Options:
1. 2 CPU Cores
2. 4 CPU Cores
3. 6 CPU Cores
4. 8 CPU Cores
5. 10 CPU Cores
6. 12 CPU Cores
Selection[1 : 6] : 5
ODA base domain memory in GB(min 8, max 88)[default 80] :
INFO: Using default memory size i.e. 80 GB
INFO: Node 0
INFO: Deployment in non local mode
INFO: Running the command to copy the template /OVS/templateBuild-2013-01-15-08-53.tar.gz to remote node 1
templateBuild-2013-01-15-08-53.tar.gz 100% 4620MB 47.6MB/s 01:37
INFO: Node 0
INFO: Spawned the process 26679 in the deployment node 0
INFO: Trying to setup on deployment node 0
INFO: Spawned the process 26680 in the node 1
INFO: Trying to setup on node 1
templateBuild-2013-01-15-08-53/swap.img
......
templateBuild-2013-01-15-08-53/u01.img
Using config file "/OVS/Repositories/odabaseRepo/VirtualMachines/oakDom1/vm.cfg" .
Started domain oakDom1 (id=1)
INFO: Deployment in local mode
INFO: Node 1
INFO: Extracted the image files on node 1
INFO: Node 1
INFO: The VM Configuration data is written to /OVS/Repositories/odabaseRepo/VirtualMachines/oakDom1/vm.cfg file
INFO: Running /sbin/losetup /dev/loop0 /OVS/Repositories/odabaseRepo/VirtualMachines/oakDom1/System.img command to mount the image file
INFO: Mount is successfully completed on /dev/loop0
INFO: Making change to the /OVS/Repositories/odabaseRepo/VirtualMachines/oakDom1/tmpmnt/boot/grub/grub.conf file
INFO: Node 1
INFO: Node 1
INFO: Assigning IP to the second node...
INFO: Node 1
INFO: Created oda base pool
INFO: Starting ODA Base...
Using config file "/OVS/Repositories/odabaseRepo/VirtualMachines/oakDom1/vm.cfg" .
Started domain oakDom1 (id=1)
INFO: Deployment in local mode
INFO: Node 0
INFO: Extracted the image files on node 0
INFO: Node 0
INFO: The VM Configuration data is written to /OVS/Repositories/odabaseRepo/VirtualMachines/oakDom1/vm.cfg file
INFO: Running /sbin/losetup /dev/loop0 /OVS/Repositories/odabaseRepo/VirtualMachines/oakDom1/System.img command to mount the image file
INFO: Mount is successfully completed on /dev/loop0
INFO: Making change to the /OVS/Repositories/odabaseRepo/VirtualMachines/oakDom1/tmpmnt/boot/grub/grub.conf file
INFO: Node 0
INFO: Node 0
INFO: Assigning IP to the first node...
INFO: Node 0
INFO: Created oda base pool
INFO: Starting ODA Base…
19. Once oda_base is deployed, verify it:
                
 [root@podadb31-dom0 bin]# ./oakcli show oda_base
ODA base domain
ODA base CPU cores :10
ODA base domain memory :80
ODA base template :/OVS/templateBuild-2013-01-15-08-53.tar.gz
20. Once the oda_base is installed you will have to VNC in to Dom0 on port 5900 to get access to the database server (due to a bug you will need to VNC in on both servers first and press the "press any key to continue" prompt).
21. Once logged in you will need to IP the oda_base.
22. You can use /opt/oracle/oak/bin/oakcli configure firstnet (please note it detects the VM environment and gives eth1, eth2, eth3 and eth4 as options),
23. but it is better to use ./oakcli deploy.
24. oakcli now has an option to change the proxy port for ASR as well as configure an external ASR server.
25. The external ASR server needs a server name as well as a port (there is no definition of what the port is supposed to be).
26. Also, due to a bug, if the VM manager is bounced you will have to VNC in and hit "press any key to continue"; you can see that below.
The deployment process has not changed and will follow the same deployment steps.
This is the first release of ODA on a virtualized platform and glitches are to be expected, but it does seem to have been rushed out.
Please feel free to comment or ask questions. I have only deployed a Dom0 and the ODA_base here, but I will deploy an app shortly and post my experience.

Update: Edited steps and changed the network names

Changing your PS Database Platform: The Test Phase

Brent Martin - Mon, 2013-02-11 02:51

So far I've written about how you might approach the plan, design, and build phases of a PeopleSoft replatforming project.  This time around I'd like to spend some time on the Test phase.


Just because you’re only changing some SQL around doesn’t mean that you can take shortcuts with testing.  You’ll want to run an entire stem-to-stern test of your PeopleSoft system to ensure that everything works as expected.


 One thing to keep in mind:  99% of your defects will be with custom and customized code.   The delivered code won’t generate nearly as many problems, so if you’re deciding where to spend your testing resources definitely focus on the custom processes.


As I mentioned in the Build phase, Unit Testing is critical.  It’s arguably the most important testing that you will do, so come up with a mechanism to track unit test results against the objects modified and make sure you have 100% custom code coverage.


Testing the data migration process is important too.  Databases differ in subtle ways and you’ll want to make sure your normal and extended character sets make it across correctly.  Naturally you’ll want to run row count reports, but you’ll need to go beyond that.  You’ll want to make sure the data in the underlying tables is identical and nothing has been lost in translation.  One simple way is to use ODBC to join tables in the source and target databases and create queries that ensure the individual columns are identical.  Unfortunately that approach is extremely slow.  Another approach is to run hash algorithms on key character fields and summarize the totals, comparing the results on both the source and target database.
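
One way to sketch the hash-total idea in SQL (illustrative only: the table, the columns and the SRC_LINK database link back to the source are placeholders for whatever you are comparing):

-- Compare a row count and a hash total per table between source and target.
-- ORA_HASH turns each concatenated key-column value into a number; summing
-- those numbers gives a cheap fingerprint that should match on both sides.
SELECT 'SOURCE' AS side,
       COUNT(*) AS row_cnt,
       SUM(ORA_HASH(emplid || '|' || name)) AS hash_total
FROM   ps_personal_data@src_link
UNION ALL
SELECT 'TARGET',
       COUNT(*),
       SUM(ORA_HASH(emplid || '|' || name))
FROM   ps_personal_data;

If the counts match but the hash totals differ, a column-by-column comparison of that table is warranted; a matching fingerprint is strong (though not absolute) evidence that the character data survived the migration.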


System/Integration Testing is also very important.  For one thing, you’ll want to have confidence that the system behaves the way you expect it to.  And interfaces can generate problems of their own.  One common problem is that your interface programs probably assume the default date format for a database platform, and interfaces that don’t specify a date format can choke when the incoming date format doesn’t match what’s expected, or they can send the wrong date format to an output file.  Switching from a non-Unicode database platform to Unicode can cause other problems.  You’ll want to execute all of your interfaces and make sure the results are valid. 
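
As a small illustration of the date-format trap (the table and column names here are hypothetical): an extract that relies on the session's default NLS_DATE_FORMAT changes shape when the platform or client defaults change, while an explicit format mask does not.

-- Fragile: the output depends on whatever NLS_DATE_FORMAT the session inherits.
SELECT TO_CHAR(trans_dt) FROM my_interface_stage;

-- Safe: the format is stated explicitly, so the interface file never changes shape.
SELECT TO_CHAR(trans_dt, 'YYYY-MM-DD') FROM my_interface_stage;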


User Acceptance Testing is important as well.  Users who know the processes should spend some time in the system making sure all of the critical functionality works and they feel comfortable everything is working as expected. They’ll need to be on the lookout for date format issues, performance issues, data entry discrepancies, etc.  They should also spend quality time with their reporting to make sure no new errors were introduced during the build phase.


And finally, Performance Testing should be conducted to flush out any new problems introduced by the new platform.  The performance testing should include online performance testing and batch performance testing, and if possible data from PeopleSoft Performance Monitor should be captured and compared to baseline performance data from the old system. 


Online performance testing is typically conducted using a tool like LoadRunner or Rational Performance Tester.  You record scripts based on a set of highly used business processes and play them back in volume.  While the test is executing your admin team will monitor system performance on the various servers and look for bottlenecks.  At the end of the day this is necessary for a performance testing effort (especially if you’re making multiple changes like migrating to new hardware and/or a PeopleTools release).  However, it’s not sufficient to identify all performance issues.


One of the big deficiencies of online performance testing is that it doesn’t test batch jobs, or if it does test them it is very limited.  For batch testing, you’ll want to define a testing mechanism that is realistic according to real, observed jobs that run in your production environment.  You’ll want to make sure that the jobs have real data to process.  And you’ll want to make sure the jobs are sequenced in such a way that dependencies are tracked to some extent.  After all of that, you’ll want to come up with a way to track and report results.  I’ll write more about the approach and toolset I’ve used to get realistic batch tests in the future so stay tuned. 


The other deficiency of online performance testing is that PeopleSoft is just too complicated of an application to expect a robot to crawl all of the web pages looking for slowness.  Sure, during yo… Read More...

Fun with Date Math

Chet Justice - Sun, 2013-02-10 20:42
(First off, sorry Mike, I'm hoping this will break my writer's block...)

On Friday I was asked to look at a report that wasn't returning all of the data. Sample:
Year/Month  Total Sales Total Sales (YAGO)
------------------------------------------
01/31/2013 $1,000,000 $900,000
03/31/2013 $950,000
For reference, YAGO is "Year Ago."

Notice anything funny there?

Yeah, February is missing. The (OBIEE) report has a filter on Jan, Feb and Mar of 2013. But it wasn't showing up. I confirmed via manual SQL (hah!) that there was (YAGO) data in there for February. Any ideas?

I immediately suspected one of two things:
- If the Date (month) dimension had a "year ago" column it was wrong.
- The join in OBIEE was doing it wrong.

I checked the date dimension first. It was fine. It didn't even have a YAGO column, so nothing to see there. I looked at the join between the date dimension and the fact table...
(YEAR ("DW".""."DW"."My_Month_Dim"."MONTHEND_DATE" ) - 1 ) * 10000 
+ MONTH ("DW".""."DW"."My_Month_Dim"."MONTHEND_DATE" ) * 100
+ CASE WHEN DayOfMonth("DW".""."DW"."My_Month_Dim"."MONTHEND_DATE") = 29 THEN 28 ELSE
DayOfMonth("DW".""."DW"."My_Month_Dim"."MONTHEND_DATE") END
= "DW".""."DW"."My_Fact_Table"."MONTH_DIM_KEY"
I want to tear my eyes out when I see stuff like that. I don't even want to know what it does. * 10000? * 100? Shoot me.

OK, so the MONTH_DIM_KEY is in the YYYYMMDD format. MONTHEND_DATE is a date data-type that corresponds to the last day of the month. For February 2013, it's 20130228; for February 2012, it should be 20120229. <<< Leap Year!!! I'm going to make a wild guess and say that the formula up there isn't working. How to test it though? That's logical SQL (OBIEE); it doesn't run in the database. I just ran the report and grabbed the SQL submitted to the database. This is what it looked like:
          AND ( TO_NUMBER( TO_CHAR( MONTHEND_DATE, 'yyyy' ), '9999' ) - 1 ) * 10000 +
TO_NUMBER( TO_CHAR( MONTHEND_DATE, 'MM' ), '99' ) * 100 +
CASE
WHEN TO_NUMBER( TO_CHAR( MONTHEND_DATE, 'dd' ), '99' ) = 29
THEN 28
ELSE TO_NUMBER( TO_CHAR( MONTHEND_DATE, 'dd' ), '99' )
END = MONTH_DIM_KEY
AND( MONTHEND_DATE IN( TO_DATE( '2013-01-31', 'YYYY-MM-DD' ), TO_DATE(
'2013-02-28', 'YYYY-MM-DD' ), TO_DATE( '2013-03-31', 'YYYY-MM-DD' ) ) )
Eyes are burning again. This is also the "prettified" SQL after I hit Ctrl + F7 in SQL Developer. The very first thing I do with OBIEE generated SQL.

One part of that wouldn't be so bad, but it's three formulas adding up to some mysterious number (presumably the last day of the month, for the previous year, in YYYYMMDD format). So I moved all those formulas up into the SELECT part of the statement. Let's see what they are doing.
SELECT
( TO_NUMBER( TO_CHAR( MONTHEND_DATE, 'yyyy' ), '9999' ) - 1 ) * 10000 part_1,
TO_NUMBER( TO_CHAR( MONTHEND_DATE, 'MM' ), '99' ) * 100 part_2,
CASE
WHEN TO_NUMBER( TO_CHAR( MONTHEND_DATE, 'dd' ), '99' ) = 29
THEN 28
ELSE TO_NUMBER( TO_CHAR( MONTHEND_DATE, 'dd' ), '99' )
END part_3
FROM my_month_dim
WHERE MONTHEND_DATE IN ( TO_DATE( '2013-01-31', 'YYYY-MM-DD' ),
TO_DATE( '2013-02-28', 'YYYY-MM-DD' ),
TO_DATE( '2013-03-31', 'YYYY-MM-DD' ) )
That resulted in this:
  PART_1     PART_2     PART_3
20120000        100         31
20120000        200         28
20120000        300         31
So PART_3 is definitely incorrect. Am I going to bother to figure out why? I have some serious issues inside of my brain which simply do not allow me to do date math. I avoid it at all costs...instead choosing to use whatever the system provides me.

One of my favorites, especially when dealing with leap years, is ADD_MONTHS.

If date is the last day of the month or if the resulting month has fewer days than the day component of date, then the result is the last day of the resulting month

That's why. Add -12 months to February 28, 2013. You don't get back February 28, 2012, you get back the 29th, as it should be. Do the same thing starting with February 29th, 2012. Add 12 months, subtract 12 months. It's always right. Yay for someone figuring this out so I don't ever have to do so.
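
A quick way to see that behavior for yourself (nothing here touches the warehouse, just date literals):

-- ADD_MONTHS snaps to the last day of the target month when the input
-- is the last day of its month, so leap days are handled in both directions.
SELECT ADD_MONTHS(DATE '2013-02-28', -12) AS a_year_ago,   -- 29-FEB-2012
       ADD_MONTHS(DATE '2012-02-29',  12) AS a_year_later  -- 28-FEB-2013
FROM   dual;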

Sadly, OBIEE doesn't have an equivalent of ADD_MONTHS (or LAST_DAY); you have to build out a string and then concatenate it all together, which is not pleasant. So I cheated and used EVALUATE. Here's my working solution.
TO_NUMBER( TO_CHAR( ADD_MONTHS( MONTHEND_DATE, -12 ), 'YYYYMMDD' ) ) = MONTH_DIM_KEY
Oops, that's the physical SQL. How about the OBIEE SQL:
CAST( EVALUATE( 'TO_NUMBER( TO_CHAR( ADD_MONTHS( %1, %2 ), %3 ) )',
"EDW".""."DW"."Dim_DW_MONTH_DIM_CBM"."MONTHEND_DATE",
-12, 'YYYYMMDD' ) AS INTEGER ) = "DW".""."DW"."My_Fact_Table"."MONTH_DIM_KEY"
Categories: BI & Warehousing

Send tweets from Oracle with OAuth

Kubilay Çilkara - Thu, 2013-02-07 17:03
Twitter is a great real-time social platform. Timelines and hashtags are a great way to communicate relevant information to an audience of subscribers.

There are lots of websites and applications which use Twitter to provide content to their users. Programming languages such as Java, PHP and Python have Twitter API libraries to quickly send and receive content from Twitter, but I haven't come across a fully fledged library in PL/SQL that would enable you to send a tweet from PL/SQL using Twitter's latest OAuth security protocol. 

There are blogs out there which show you how to send a tweet from Oracle, but not many using the new oAuth security protocol.

The only blog post I have seen so far which uses PL/SQL and OAuth to send a tweet from Oracle is Some coding hero's heroic mumblings, by an Oracle consultant. 

I have spent time reviewing his code, together with the comments made on his post (which he wrote in 2010), and managed to trim it down to use only one of his PL/SQL procedures.  The procedure below sends the 140-character tweet for you using OAuth. To be able to do this, as Some Coding Hero says, you will have to create a Twitter application for your Twitter handle. Actually you don't create an application, you just sign up and obtain 4 security codes! Once you register your application with Twitter, you are given 4 OAuth security codes as follows:


With the above 4 OAuth security codes you can use just Somecodinghero's last block of code to send a tweet from Oracle - I used Oracle APEX - like this:

The new Twitter API 1.1 update and the use of Oracle Wallet were added to the script below on 21-JUN-2013.

DECLARE  
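-- NOTE: urlencode is not a built-in; it is the helper function presumably
-- created from Some Coding Hero's original post and must already exist in
-- your schema. The schema also needs EXECUTE on DBMS_CRYPTO and UTL_HTTP
-- (plus a network ACL on 11g) for this block to run.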
  
-- Twitter API 1.1 update
  l_oauth_request_token_url CONSTANT VARCHAR2 (500) := 'https://api.twitter.com/1.1/statuses/update.json';   
--
  l_oauth_consumer_key CONSTANT VARCHAR2 (500) := 'xxxxxxxxx';  
  l_oauth_token  CONSTANT VARCHAR2 (500) := 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxx';  
  l_oauth_secret CONSTANT VARCHAR2 (500) := 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx';  
  l_oauth_nonce VARCHAR2 (500);  
  l_oauth_signature_method CONSTANT VARCHAR2 (10) := urlencode ('HMAC-SHA1');  
  l_oauth_timestamp VARCHAR2 (100);  
  l_oauth_version CONSTANT VARCHAR2 (5) := urlencode ('1.0');  
  l_oauth_consumer_secret CONSTANT VARCHAR2 (500) := 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx';  
  
  l_http_method VARCHAR2 (5) := 'POST';  
  l_oauth_base_string VARCHAR2 (2000);  
  
  l_oauth_key VARCHAR2 (500) := l_oauth_consumer_secret || '&' || l_oauth_secret ;  
    
  l_sig_mac RAW (2000);  
  l_base64_sig_mac VARCHAR2 (100);  
    
  http_req UTL_HTTP.req;  
  http_resp UTL_HTTP.resp;  
    
  l_update_send VARCHAR2(2000);  
  l_oauth_header  VARCHAR2(2000);  
    
  l_line  VARCHAR2(1024);  
    
  resp_name  VARCHAR2(256);  
  resp_value VARCHAR2(1024);  
   
-- put the tweet in the urlencode function below 
  l_content varchar2(140) := urlencode('@somecodinghero thank you');  
  
  l_random varchar2(25);  
  
BEGIN  

 -- Oracle Wallet
    utl_http.set_wallet('file:/home/app/oracle/product/11.2.0/dbhome_1/owm/wallets/oracle', 'putyourwalletpasswordhere');


 -- Get the timestamp  
  SELECT urlencode ((SYSDATE - TO_DATE ('01-01-1970', 'DD-MM-YYYY'))  * (86400)) 
  INTO l_oauth_timestamp  
  FROM DUAL;  
  
  -- RANDOM oauth_nonce  
  SELECT dbms_random.string('A',25)  
  INTO l_random  
  FROM DUAL;  
    
  SELECT urlencode (UTL_ENCODE.base64_encode(UTL_I18N.string_to_raw (l_random, 'AL32UTF8')))  
  INTO l_oauth_nonce  
  FROM DUAL;  
  
  
   
  l_oauth_base_string := l_http_method   
                          || '&'  
                          || urlencode (l_oauth_request_token_url)  
                          || '&'  
                          || urlencode ( 'oauth_consumer_key'  
                              || '='  
                              || l_oauth_consumer_key  
                              || '&'  
                              || 'oauth_nonce'  
                              || '='  
                              || l_oauth_nonce  
                              || '&'  
                              || 'oauth_signature_method'  
                              || '='  
                              || l_oauth_signature_method  
                              || '&'  
                              || 'oauth_timestamp'  
                              || '='  
                              || l_oauth_timestamp  
                              || '&'  
                              || 'oauth_token'  
                              || '='  
                              || l_oauth_token  
                              || '&'  
                              || 'oauth_version'  
                              || '='  
                              || l_oauth_version  
                              || '&'  
                              || 'status'  
                              || '='  
                              || l_content);  
                                
  DBMS_OUTPUT.put_line (l_oauth_base_string);  
    
  l_sig_mac := DBMS_CRYPTO.mac (  UTL_I18N.string_to_raw (l_oauth_base_string, 'AL32UTF8')  
                                , DBMS_CRYPTO.hmac_sh1  
                                , UTL_I18N.string_to_raw (l_oauth_key, 'AL32UTF8'));  
                                  
  DBMS_OUTPUT.put_line ('Combined sig: ' || l_oauth_key);  
                                  
  l_base64_sig_mac := UTL_RAW.cast_to_varchar2 (UTL_ENCODE.base64_encode (l_sig_mac));  
    
  DBMS_OUTPUT.put_line ('MAC Signature (Base64-encoded): ' ||  l_base64_sig_mac);  
    
  l_update_send := l_oauth_request_token_url || '?status=' || l_content;  
                     
    http_req := UTL_HTTP.begin_request (  l_update_send  
                                        , l_http_method  
                                        , UTL_HTTP.http_version_1_1);  
                                          
   DBMS_OUTPUT.put_line ('UPDATE URL ' || l_update_send);  
     
   UTL_HTTP.set_response_error_check (TRUE);  
   UTL_HTTP.set_detailed_excp_support (TRUE);  
     
   
    
    l_oauth_header := 'OAuth oauth_nonce="' || l_oauth_nonce || '", '  
                      || 'oauth_signature_method="'|| l_oauth_signature_method || '", '  
                      || 'oauth_timestamp="'|| l_oauth_timestamp || '", '  
                      || 'oauth_consumer_key="'|| l_oauth_consumer_key || '", '  
                      || 'oauth_token="' || l_oauth_token || '", '  
                      || 'oauth_signature="' || urlencode (l_base64_sig_mac) || '", '  
                      || 'oauth_version="' || l_oauth_version || '"';  
                        
    utl_http.set_header ( r => http_req,   
                          NAME => 'Authorization', VALUE => l_oauth_header);  
                            
    DBMS_OUTPUT.put_line  ('HEADER: ' || l_oauth_header);                          
                            
    utl_http.write_text(  r => http_req, DATA => l_content);   
       
    http_resp := utl_http.get_response(r => http_req);  
       
   DBMS_OUTPUT.put_line('GETTING RESPONSE HEADERS! ');  
     
   FOR i IN 1..utl_http.get_header_count(http_resp) LOOP  
    utl_http.get_header(http_resp, i, resp_name, resp_value);  
    dbms_output.put_line(resp_name || ': ' || resp_value);  
   END LOOP;  
     
  DBMS_OUTPUT.put_line('Getting content:');  
  BEGIN  
      LOOP  
        utl_http.read_line(http_resp, resp_value, TRUE);  
        dbms_output.put_line(resp_value);  
      END LOOP;  
        
      EXCEPTION  
      WHEN utl_http.end_of_body THEN  
        DBMS_OUTPUT.put_line('No more content.');  
  END;  
  
   utl_http.end_response(r => http_resp);  
   
  
  EXCEPTION  
    when others then  
      DBMS_OUTPUT.put_line('HTTP ERROR: ' || utl_http.get_detailed_sqlerrm);  
  
END;  

Thank you Somecodinghero!
Categories: DBA Blogs

Star Analytics Bought by IBM

Look Smarter Than You Are - Wed, 2013-02-06 15:13

On February 1, it was announced that Star Analytics (one of our favorite software companies in the world) is being bought by IBM (not one of our favorite software companies in the world) for an undisclosed amount.  Star, founded in 2004, made two excellent products (Star Integration Server and Star Command Center) and IBM's strategy, at the moment, is to continue the two products under IBM's Business Analytics Software group.

As everyone knows, IBM has been on an acquisitions kick for the last 5 years, particularly around business analytics.  They own Cognos, TM/1, Clarity and a whole lot of other products... or at least they bought the companies that made those products and then stopped some of those products and continued others.  Unlike Oracle, which is quite good at buying companies and then immediately knowing which products it wants to strategically continue, IBM can take some time to make up its mind, and half the time people internal to IBM don't know which products are being discontinued.  There are still people internal to IBM touting Clarity's planning and consolidation products, and those have been virtually dead since IBM first bought Clarity.

It may seem odd to some that Star was bought by IBM considering that IBM owns Cognos and Star is traditionally awesome at Hyperion integration not Cognos, TM/1, and the like.  What many people don't realize is that Star's products have been expanded beyond their traditional Hyperion roots over the last few years and now talk well to other products including relational databases.  Star Integration Server is still found almost exclusively at Hyperion shops, and one has to believe that part of the reason IBM bought Star is to be able to easily extract data from Essbase, Planning, and HFM.

Judging from IBM's announcement and FAQ on the purchase, it seems that being able to extract and control Oracle (particularly Hyperion) is the main reason they bought Star.  That makes it odd that Oracle didn't go ahead and buy them instead.  All I can think of is that either IBM offered a better price or Oracle felt they had competing products with some of the same functionality already (I'll be getting to that in a second).

So what does that mean for you?  If you bought Star's products, congratulations.  They are excellent products and I would continue using them for as long as IBM continues to support them.  If you're considering a purchase, I would wait until IBM decides what they're going to do.  At bare minimum, IBM will probably begin to favor Cognos and TM/1 more than Hyperion and for a lot of us, Hyperion expertise was the reason we bought Star's products.

If you want to consider something else, I would suggest buying Hyperion Financial Data Quality Management or Oracle Data Integrator instead of Star Integration Server and Hyperion Financial Close Management instead of Star Command Center.  They don't exactly overlap functionality-wise, but they are the closest replacements I can readily think of.  Note that Star Integration Server has some very cool extraction technologies that are patented, so any product extracting data or hierarchies from Hyperion is probably going to be a lot slower, for the time being, than Star.

We will miss you, Star Analytics.  It was a good 7+ year run, and the Hyperion world will always fondly remember your company, your products, and your employees (particularly your iconic leader, and my close friend, Quinlan Eddy).  May your staying agreements at IBM be short.
Categories: BI & Warehousing

Add Code w/ Syntax Highlighting to PowerPoint

Tyler Muth - Wed, 2013-02-06 09:12
…or any media that supports rich text or HTML. I often include code-snippets or queries in technical presentations. I find them much more visually appealing and easier to read if they include syntax highlighting. If you’re using Notepad++ on Windows, it’s trivial. Simply go to the “Plugins” menu > “NppExport” > “Copy all formats to […]
Categories: DBA Blogs, Development

“Refresh” a CSV File

Tyler Muth - Tue, 2013-02-05 09:22
In a previous post I discussed a PDF reader that allows you to refresh the PDF after changes are made. This is the same concept except it’s for CSV files. I’ve been using Perl to convert sqlplus spool output to CSV which is much easier to read into R. As I’m updating my code and […]
Categories: DBA Blogs, Development

“Refresh” a PDF

Tyler Muth - Tue, 2013-02-05 08:50
I’ve been working in R (wikipedia entry) a lot lately to graph performance data about Oracle systems. In general I output PDF files. The problem with most PDF readers (on Windows) is that they either get an exclusive lock on the file when it’s open (yes you, Adobe Acrobat), or you have to close and […]
Categories: DBA Blogs, Development

Changing your PS Database Platform: The Build Phase

Brent Martin - Tue, 2013-02-05 03:05

In the last two postings I wrote about how you might plan a project where you migrate your PeopleSoft application from one database platform to another, and how you might approach the Design phase.  I wanted to share my thoughts about the Build phase in this article.  I'll share my thoughts about the Test and Cutover phases in my next posting(s).


The Build Phase


The Build Phase is always my favorite part of any PeopleSoft project, probably because I come from a Development background.  The Build phase of a replatforming project is in some ways very straightforward, and in some ways it is more difficult.  The problem isn’t in the coding changes – it’s not too difficult to make a piece of SQL work on a different database platform -- the challenge is in Unit Testing.  Every SQL that is touched must be unit tested, and that will be the biggest part of the effort.
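
For example, a typical (illustrative) change when moving a piece of custom SQL from SQL Server to Oracle is swapping the dialect-specific functions and operators:

-- SQL Server flavor (original, shown as a comment):
--   SELECT ISNULL(emplid, ' ') + '-' + deptid, GETDATE() FROM ps_job;
-- Oracle flavor (replatformed):
SELECT NVL(emplid, ' ') || '-' || deptid AS emplid_dept,
       SYSDATE                           AS as_of_date
FROM   ps_job;

Each change like this is trivial in isolation; the effort, as noted above, is in unit testing every one of them.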


Most developers are used to unit testing their own work.  But it's a good idea to use a code and testing review where developers document each object change and unit test, and another developer reviews the results.  Since there will be many small changes, the documentation requirement should be light, but it should include a trace file that proves that each App Engine step, PeopleCode SQL, and SQR function was executed and tested.  How structured your process is will depend on the size and location of your team.  Ensuring quality with process and documentation might not be as important in a small shop, but it is critical to your success if you have a large development team located offshore.


Unit testing is the only opportunity you’ll have to actually test each piece of modified code.  Subsequent phases will test the system overall, but you will probably not achieve 100% code coverage.  Fortunately, almost all of your defects can actually be caught in unit testing of a replatforming project so you should use this to your advantage.  Defects that get missed will haunt you in later testing phases where they’ll be more visible and more expensive to fix.


Also as part of this phase, your DBA team should  execute another mock cutover using the tools and steps you decided you will use for the real cutover.  The resulting database (plus the code generated in the Build phase) will be the starting point for your first test database.


And the testing team should start building the test scripts for the subsequent test phases here.  Since we’re not changing application functionality, they should be able to leverage existing scripts from prior upgrades or implementation and enhance them for functionality that was added since the original scripts were created.

Oracle BI Apps – What are customers interested in?

Oracle e-Business Suite - Sun, 2013-02-03 08:05

A very informative Post by Rajesh Dhanapal from Infosys.

In the current era, customers are not interested in waiting months or years for a BI project implementation. Customers are keen on quick wins rather than implementing a BI solution from scratch following the traditional BI approach. The Oracle BI Apps solution provides a head start for the organisation to reach the to-be state quicker, with reduced time and effort, and reduced risk. Read Details


Categories: APPS Blogs

Behavior of Tab Canvases

Gerd Volberg - Thu, 2013-01-31 05:00
The behavior of tab canvases is very interesting when your form gets more and more tab pages. Let's say you have only three pages in your tab canvas (Width Style property = Fixed).
Forms automatically renders all three pages in the same size. That's what we expected.

In the next step we create a fourth tab:

If there is not enough space for all page labels at the same size, then we see a first kind of resizing. All labels are readable, but the size of some pages is reduced.

If we create more tab pages:
At some point the label text shrinks and some letters are replaced by dots.

And in the last stage:

The last type of look and feel is when we get a horizontal scrollbar. Think about this when you use many tab pages in your form.

Have fun
Gerd

Bug: ORA-00979: not a GROUP BY expression

Asif Momen - Wed, 2013-01-30 13:51
Bugs and performance degradation are part of database upgrade stories and we have witnessed yet another post-upgrade bug after upgrading our database from Oracle 10gR2 (10.2.0.5) to Oracle 11gR2 (11.2.0.2). 

The following query (I have simplified the query for demonstration purposes) was running happily on Oracle 10gR2:


SQL>
SQL> select * from ( select TRUNC(dt,'MM')
  2          from test
  3          group by TRUNC(dt,'mm'));

TRUNC(DT,
---------
01-JAN-13

SQL>


However, the same query started to throw an error (ORA-00979) when executed in Oracle 11gR2 (11.2.0.2):


SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
PL/SQL Release 11.2.0.2.0 - Production
CORE    11.2.0.2.0      Production
TNS for Linux: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production

SQL> 


SQL> select * from ( select TRUNC(dt,'MM')
          from test
          group by TRUNC(dt,'mm'));
  2    3  select * from ( select TRUNC(dt,'MM')
                             *
ERROR at line 1:
ORA-00979: not a GROUP BY expression


SQL> 



At this point I normally do a search on My Oracle Support (MOS) to see if I get some hits pertaining to the problem, and this time I found the following bug information:



Bug 11067251 - False ORA-979 with TRUNC(date, format) or ROUND(date, format) - superceded [ID 11067251.8]


Apparently, you hit the bug when you use either the TRUNC or ROUND function in an inline view. Executing the same query with a little modification (removing the inline view) in 11.2.0.2 was, however, successful.


SQL> select TRUNC(dt,'MM')
 from test
group by TRUNC(dt,'mm');
  2    3  
TRUNC(DT,
---------
01-JAN-13

SQL> 

The above bug confirms that 11.2.0.2 is affected and proposes the following two workarounds: 

1) Use the NO_MERGE hint, or
2) Disable simple view merging ("_simple_view_merging" = false) 

As it was not possible to rewrite the application at this point, we disabled view merging at the system level. Disabling view merging at the system level might appear to be a bad choice, but I think it is the right decision at this time. We will soon be upgrading this database to 11.2.0.3, which will kill two birds with one stone: a) the bug fix and b) upgrading to the latest patch level (who knows what new bugs are waiting for us ???).
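
For reference, here is roughly what the two workarounds look like against the simplified query above (a sketch only: exact hint placement and whether the underscore parameter can be changed online may vary by release, so test on your own patch level and involve Oracle Support before touching hidden parameters):

-- Workaround 1: stop the inline view from being merged.
select * from ( select /*+ NO_MERGE */ TRUNC(dt,'MM')
                from test
                group by TRUNC(dt,'mm') );

-- Workaround 2: disable simple view merging (we did this system-wide).
ALTER SYSTEM SET "_simple_view_merging" = FALSE;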

References: 

  • My Oracle Support:  Bug 11067251 - False ORA-979 with TRUNC(date, format) or ROUND(date, format) - superceded [ID 11067251.8]


Dummy output parameters

Rob van Wijk - Tue, 2013-01-29 12:31
Yesterday I encountered a code snippet which taught me something I did not think was possible. But it is, as I'll show in this blog post. It's not spectacular in any way, just convenient at most. When you need a function or procedure to retrieve some value, you'll start examining the existing code base to see if a function already exists that does the job. Probably you won't find an exact match, but…

Webcast: Using Oracle OLAP data in multiple BI clients. Feb. 5 at 9:00 PST

Keith Laker - Tue, 2013-01-29 11:35

Want to learn how to connect multiple BI clients such as SAP BusinessObjects Analysis, Cognos and Microsoft Excel to Oracle OLAP, quickly and easily, using the XMLA version of the MDX Provider for Oracle OLAP?  Check out the Oracle webcast on February 5 at 9:00 AM PST.

You will see how it works and how customers in different industries are using this solution.

Just sign up here and join us on February 5th.
Categories: BI & Warehousing

Changing Your PS Database Platform: The Design Phase

Brent Martin - Sun, 2013-01-27 22:52

In my previous article I described how you might approach planning and requirements gathering for a PeopleSoft database replatforming project.  You would exit that phase with solid RICE object inventories and an idea of any new PeopleTools functionality you will deploy.


I’d like to take you through some thoughts and considerations for the remaining phases (Design, Build, Test, Deploy).  This article will just focus on the Design phase and I'll publish the other articles in the near future.  For this discussion, I’m going to assume we’re doing a PeopleTools upgrade along with the replatforming effort, and we’re going to keep existing functionality/features/customizations the same as the current state.  This is usually a good idea because the technology changes will be challenging enough. 


The Design Phase


You might think a basic replatforming project doesn’t require a lot of design.  Once you have the object inventories from the planning/requirements gathering phase you have enough information to start modifying the SQL to make it work on the new database platform.   The one thing I would suggest though would be to bundle the raw object lists into logical chunks of work.  For example, if you have a component, page and process that work together to execute a process you should bundle them together so a developer can unit test all of them at the same time.  If you want to deploy new PeopleTools features you’ll want to spend some time deciding which ones will be useful and how you will configure, test, and deploy them.


But there’s a bit more work you need to do in this phase.  First, you’ll want to identify any external system that might need to be updated as well.  Any system that uses database links or has dependencies on your current PeopleTools environment (think Component Interface libraries) will need to be investigated to determine the impact and appropriate action to take.


Another decision you’ll need to make is with reports and queries.  You probably have a LOT of public queries, and you may have a lot of reports.  nVisions in particular seem to multiply if you don’t have strong governance processes to limit users to specific standard reports. 


So how do you deal with this situation?  It’s not always cost effective to upgrade and test every one.  Here are a couple of suggestions to manage this problem:


1)   Ask your users to provide their business critical reports and queries.  This will be the list that you “certify” will work correctly and perform well on the new platform.  You’ll spend whatever time is necessary during development and the testing phases to make very sure that these queries and reports are defect free.


2)   Identify all of the reports and queries that have been run in the last couple of years via the process scheduler and query audit tables (a sketch of such a query appears after this list).  All of these will be your priority 2 set.  Scan this set using automated techniques to identify problems, correct any that fall out, and unit test everything that is modified.  Be sure a good percentage of these are tested in future test phases and give users an opportunity to test them during user acceptance testing.


3)   Other reports and queries won’t be touched.  Breaks here will be handled by your post go-live break-fix process.
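
A starting point for building that priority 2 inventory might look like the sketch below; it assumes the delivered process request table PSPRCSRQST, and column names can vary by PeopleTools release, so treat it as illustrative:

-- Processes and reports actually run in the last two years, most-used first.
SELECT   prcstype, prcsname,
         COUNT(*)     AS run_count,
         MAX(rundttm) AS last_run
FROM     psprcsrqst
WHERE    rundttm >= ADD_MONTHS(SYSDATE, -24)
GROUP BY prcstype, prcsname
ORDER BY run_count DESC;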


The Design Phase is also when you should prepare your testing plan and your communication plan.


While this phase is progressing, your DBA team should execute an initial replatform from the database on the old platform to the database on the new platform.  For this exercise, we’ll just use Data Mover to extract every table on the source database platform to a flat file, and to import it to the new database platform.  Once on the new DB platform you’ll need to manually adjust tables like PSDBOWNER, PSOPTIONS, etc.  Execute the PeopleTools upgrade (if necessary) and you’re done. Don’t expect this to go quickly the first time around – allow yourself 2-4 weeks in the schedule.  And capture all of the steps in detail because it will be the start of your cutover plan.  The environment this exercise produces will become your new development environment so that you can start your build phase. 
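
As one small illustration of that manual adjustment step (the values are hypothetical; the exact fix-ups depend on your environment and PeopleTools release):

-- Point PSDBOWNER at the new database name after the import.
UPDATE psdbowner
SET    dbname = 'HRNEW'
WHERE  dbname = 'HROLD';
COMMIT;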


Also during this phase you should make decisions about the technologies and tools you’ll use to make your job easier.  One is Oracle GoldenGate.  GoldenGate is a Change Data Capture tool that supports multiple database platforms.  It gives you some amazing capabilities around extracting table data from one platform in parallel, shipping the extracted files to a target file server, and importing them in parallel while the extract is still… Read More...

Active Directory user management with the IdentityStore framework of FMW

Edwin Biemond - Sun, 2013-01-27 12:09
With the IdentityStore framework of Fusion Middleware you can change or create any user or role in almost every LDAP server. With this framework the authenticated user can change their own password from ADF, or in your own ADF admin page you can create a user with its roles without knowing LDAP or which LDAP server is used. The IdentityStore framework will automatically detect the configured…

Is DELETE necessary?

Kubilay Çilkara - Sat, 2013-01-26 04:35
A delete is a very destructive operation for a database: an operation which requires a lot of thinking and preparation, and which most of the time is irrevocable. You can get in a lot of trouble if you delete the wrong set of rows and lose all that very important data!

Maybe the DELETE command should be banned from SQL. It was probably invented when the disks were tiny and it might not be suitable for our times, especially nowadays when we talk about things like BIG DATA.

Why do we have to delete anyway? We should invalidate and age out data and never destroy it. I am one of those who would say ‘keep it’ and never lose it. Even a typo, an error while entering somebody’s name in a text field, tells us a story. The number of attempts to get it right, the number of characters typed which were wrong, the time the wrong entry took place and much more, is data. Temporal databases come to mind. Why delete it?

But, one can argue that not deleting can be dangerous, too! One can claim that by refusing to delete we might end up with a lot of old, duplicated, bad quality, uncleaned, irrelevant and untrustworthy data.
 

Maybe the worst thing that can happen to data is to refuse to delete it enough. Perhaps because of this lack of enough deletes and fear of deletes we end up with all the 'bad data' found in some database systems today.

Whatever you choose to do, delete or not delete, one thing's for sure: you should know what you are deleting or not deleting. Choosing confidently what to delete comes from understanding and knowing your data model and the implications a delete will have on it, and never simply from the fact that the data is not required for your project!
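
A minimal sketch of the 'invalidate and age out' alternative (hypothetical table and column names; the point is the pattern, not the schema):

-- Keep the row, just mark it as logically removed.
ALTER TABLE customers ADD (deleted_at DATE);

UPDATE customers
SET    deleted_at = SYSDATE
WHERE  customer_id = 42;

-- Day-to-day code reads through a view that hides the "deleted" rows,
-- while the history stays queryable for anyone who needs it.
CREATE OR REPLACE VIEW active_customers AS
SELECT *
FROM   customers
WHERE  deleted_at IS NULL;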




Categories: DBA Blogs

Oracle Database Appliance 2.5 And Virtualization (Updated)

Fuad Arshad - Fri, 2013-01-25 12:21
So the cat's out of the bag: Oracle has finally announced that on Jan 31st ODA 2.5 will be available. ODA 2.5 is a pretty significant update to the ODA platform. It will have:
1. Virtualization as an option with OVM 3.1.1
2. Official external ASR management.
The virtualization option is huge since it allows for using the capacity of the ODA for things other than the database. The concept is a Dom0 and an ODA_BASE, which is a privileged domain that ensures that the Database Appliance performs optimally. Initially the DomU, or user domain, will not have access to ASM disks or ACFS mounts, but that is something that will show up in a future release. The user domain will not be patched as part of the single patching technique, which means that if you go with virtualization the user domain needs to be patched independently. Also, initially a 250GB extra slice is the only space available for the Dom0 and DomU areas. This can bring in a little more complexity, but from a DR perspective it helps in building idle capacity with a smaller footprint.
The important question is whether ODA deployments should start with a virtualized deployment even though there might be no requirement for it initially. Oracle will ship the physical deployment and customers can reimage it to be virtualized. Oracle claims to provide virtualization with ease of use, but only time will tell. I am pretty excited about this since I can build virtualized boxes and do DR at a cheaper cost but with the same horsepower that I need. I do intend to test this and see how easy it is to use, to determine whether that is my going-in stance for all new ODA deployments.
Update: The Database Appliance has 500GB of local disk space, which means Dom0 and DomU will have access to 250GB and ODA_BASE will have 250GB available to it.

How long did Oracle materialized view refresh run?

Ittichai Chammavanijakul - Mon, 2013-01-21 09:38

The LAST_REFRESH_DATE column of DBA_MVIEWS or the LAST_REFRESH column of DBA_MVIEW_REFRESH_TIMES indicates the start time of the refresh. But what if we’d like to find out how long the refresh of the materialized view really takes? Well, we can query DBA_MVIEW_ANALYSIS.

For Complete Refresh, the refresh duration will be in the FULLREFRESHTIM column of the DBA_MVIEW_ANALYSIS. For Fast Refresh duration, it will be in the INCREFRESHTIM column.

Both values are in seconds.

SELECT mview_name, last_refresh_date, fullrefreshtim, increfreshtim
FROM dba_mview_analysis
WHERE owner='JOHN';

MVIEW_NAME               LAST_REFRESH_DATE      FULLREFRESHTIM INCREFRESHTIM
------------------------ ---------------------- -------------- -------------
MV_CHANGE_HISTORY        07-JAN-13 04.36.58 PM               0            36
MV_ITEM_HISTORY          07-JAN-13 04.36.58 PM               0             9

This shows that the recent refresh of the MV_CHANGE_HISTORY and MV_ITEM_HISTORY are the fast refreshes for 36 and 9 seconds respectively.

Put it all in one query to calculate and display the end time:

SELECT 
   mview_name,
   last_refresh_date "START_TIME",
   CASE
      WHEN fullrefreshtim <> 0 THEN
         LAST_REFRESH_DATE + fullrefreshtim/60/60/24
      WHEN increfreshtim <> 0 THEN
         LAST_REFRESH_DATE + increfreshtim/60/60/24
      ELSE
         LAST_REFRESH_DATE
   END "END_TIME",
   fullrefreshtim,
   increfreshtim
FROM all_mview_analysis
WHERE owner='JOHN';

MVIEW_NAME              START_TIME             END_TIME               FULLREFRESHTIM INCREFRESHTIM
----------------------- ---------------------- ---------------------- -------------- -------------
MV_CHANGE_HISTORY       07-JAN-13 04.36.58 PM  07-JAN-13 04.37.34 PM               0            36
MV_ITEM_HISTORY         07-JAN-13 04.36.58 PM  07-JAN-13 04.37.07 PM               0             9

Reference: How To Calculate MVIEW Refresh Duration? What Does DBA_MVIEWS.LAST_REFRESH_DATE and DBA_MVIEW_REFRESH_TIMES.LAST_REFRESH Indicate? [ID 1513554.1]

Categories: DBA Blogs
