Feed aggregator

Changing your PS Database Platform: Cutover

Brent Martin - Sun, 2013-02-17 00:33

So far I've written about how you might approach the plan, design, build and test phases of a PeopleSoft replatforming project.  This time around I'd like to spend some time on the Cutover.

You’ll probably want to do at least four mock cutovers: one to build the initial development environment on the new hardware, one to start System Test, one to start User Acceptance Testing, and a “dress rehearsal” to prove out your cutover plan and strategy.

Start the cutover plan when you do your first migration. Capture tasks and timing. And continue to refine it with each additional mock cutover.

For the 3rd mock cutover, include items in the cutover plan for communication, external systems that will need to be modified and moved in parallel, shutdown sequence for batch, expected timeline, contact lists, etc.  By now your communication plan should be fairly explicit and there should be no surprises from the extended IT team or the business as to what will happen and when.

One to two weeks prior to cutover, execute a “dress rehearsal” where you actually move your production database in as realistic of a fashion as possible.  Validate your final timings and make sure nothing was missed.

Two words about cutover communications: they’re important. You need to keep all of your stakeholders informed of where you are in the cutover, raise any issues quickly, and ensure all of the handoffs are executed cleanly with no loss of time. Identify a single point of contact (or contacts, if you’ll be running cutover around the clock) who can get status from the team members without bugging them too much, and prepare regular communications to the interested stakeholders.

In addition, you’ll probably want to maintain two open conference call bridge lines: one for executive/stakeholder updates, and another to allow your technical teams to quickly collaborate on handoffs or issues that arise.

 A good cutover plan will include a final “Go/No-Go” decision point prior to starting any cutover activities.  If you have no “Severity 1” or “Showstopper” issues the cutover should proceed on schedule.

Now the cutover plan becomes the script for everything over the next hours and days. A common scenario follows: users close out transactions. The batch schedule is stopped in a controlled manner. Final interface files are sent. Reports are run that users will later use to validate the system when it comes back up. Finally, user accounts are disabled, the application is stopped, and the DBA team (which is hopefully caught up on sleep) takes over.

 Now the DBA team executes the data migration using whatever tool you decided on.  Row count reports and other validation will be executed when it’s complete and the PeopleTools upgrade will start on the database.  This can be the longest part of the process.  Then all of your customizations are migrated in, the application is configured and a non-destructive technical checkout is conducted.

It’s typical at this point to allow a limited number of users to log in and enter and process real production transactions. This allows any problems to be identified and resolved before the system is turned over to the larger user population.

 Finally we’re ready to go.  Get your project sponsors and executives on the phone for a final Go/No-Go decision.   Once you get the green light, unlock all of the users and start your batch schedule back up in a controlled manner.  Congratulations!  This is a big accomplishment!!

Nice way to bring some coolness to Oracle statistics

Slavik Markovich - Fri, 2013-02-15 20:12
Turns out that Tanel has an artist hidden deep down inside!


Slavik Markovich - Fri, 2013-02-15 19:51
These are some amazing statistics…

SQL Access to Salesforce data

Kubilay Çilkara - Thu, 2013-02-14 13:13
In this post I will talk about an ODBC/JDBC driver solution I recently discovered which enables you to access your Salesforce data using the standard SQL query language.

The company that provides these drivers is called Progress|DataDirect.

Their JDBC/ODBC Salesforce Connect XE drivers act as translators between SQL and SOQL (the Salesforce proprietary query language), so you can write joins and use expressions like SUBSTRING on your Salesforce data as if the data were in a relational database.

I found the concept quite interesting. If you already know SQL - and there are many people who do - you can query data in standard Salesforce objects like Account and Contact, and in custom objects, with any SQL client tool.

For my post I used the SQuirreL SQL Client and the JDBC driver. You can easily point your own SQL client tool to these drivers too; ODBC and JDBC are data access standards many tools comply with.

To get started

1. Download the Progress|DataDirect Salesforce.com JDBC driver. The file will be called something like PROGRESS_DATADIRECT_CONNECT_JDBC_5.1.0.jar (I downloaded the 15-day trial version)

2. Install the driver as per the instructions found here; more general information, such as the User Guide, is here

3. Configure the SQuirreL SQL Client to use the driver (to install SQuirreL, go here)

Once you have downloaded and installed the driver, start the SQuirreL client tool and register the Salesforce.com JDBC driver with SQuirreL like this:

Go to the Drivers tab on the right and click the plus sign (+).

    Add a driver by using the plus sign (+) and fill in the driver details like below. You can find the Class Name, the Website URL and the Extra Class Path for the Progress|DataDirect Salesforce.com JDBC drivers here. You can find out more about connection properties here: DataDirect Connect Series for JDBC User's Guide

Once you have configured the driver, it's time to add an Alias (a SQuirreL connection) to your Salesforce Org. I used my developer org below. Just follow the wizard to add the alias. It will ask you for the driver to use; choose the Salesforce driver you created above.

Once you have created the alias (connection), it's time to test it and connect to Salesforce with the new JDBC driver. Provide your Salesforce credentials like this:

Once you connect, you can issue any SQL statement against your Salesforce Org. Below I am writing a SQL join between the Account and Contact Salesforce standard objects.

      What the driver really does is to translate your SQL to SOQL.
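As an illustration, the kind of join described above looks like ordinary SQL. This is a sketch only: Account and Contact are standard Salesforce objects, but the column list and filter below are illustrative, and the exact SQL dialect supported is defined by the driver's documentation.

```sql
-- Hedged sketch: an ANSI SQL join across two standard Salesforce objects.
-- The driver rewrites this as SOQL (e.g. a relationship query) before
-- sending it to Salesforce, so no data has to leave the SaaS platform.
SELECT a.Name,
       c.FirstName,
       c.LastName
FROM   Account a
JOIN   Contact c ON c.AccountId = a.Id
WHERE  a.Industry = 'Banking'
```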

      To read more about the JDBC and other drivers go to the company site Datadirect.com

There is real potential here: imagine all the data profiling, data quality operations and data integrations you can do 'in place' in SaaS and Cloud systems with SQL, without having to move the data around a lot.

      More resources

More blogs and resources on SQL access to Salesforce can be found below. There is also a very interesting blog post which shows how you can access Salesforce data from within Oracle directly, here.

      Categories: DBA Blogs

      Introduction to Collated Views with Couchbase 2.0

      Tugdual Grall - Wed, 2013-02-13 05:38
Most applications have to deal with "master/detail" data: breweries and beers, departments and employees, invoices and items... This is necessary, for example, to create an application view like the following. With Couchbase, and many of the document-oriented databases, you have different ways to deal with this; you can: create a single document for each master and embed all the

      RACcheck for Single Instance Databases

      Asif Momen - Tue, 2013-02-12 01:21
      Starting with RACcheck 2.2.0, RACcheck support is extended to Oracle Single Instance Databases, Oracle Restart and RAC One Node configurations. 

      I downloaded the latest version of the tool and tested it against my play database. Below is a sample RACcheck output from a single instance database:

      [oracle@localhost raccheck]$ ./raccheck -v

      RACCHECK  VERSION: 2.2.0_20121109
      [oracle@localhost raccheck]$ 
      [oracle@localhost raccheck]$ 
      [oracle@localhost raccheck]$ ./raccheck -a

      List of running databases
      1. testdb
      2. None of above

      Select databases from list for checking best practices. For multiple databases, select 1 for All or comma separated number like 1,2 etc [1-2][1].
      . .

      Checking Status of Oracle Software Stack - Clusterware, ASM, RDBMS

      . . . . . . . . . . . . . . .
                                                       Oracle Stack Status                           
Host Name  CRS Installed  ASM HOME  RDBMS Installed  CRS UP  ASM UP  RDBMS UP  DB Instance Name
localhost  No             No        Yes              No      No      Yes       testdb

      Refer to the MOS document mentioned below to learn more about RACcheck tool. You may download the latest version of RACcheck from My Oracle Support (MOS).


      RACcheck - RAC Configuration Audit Tool [ID 1268927.1]

      Deploying a Virtualized Oracle Database Appliance ODA 2.5

      Fuad Arshad - Mon, 2013-02-11 17:12

So I finally got the opportunity to deploy ODA 2.5 on a development Oracle Database Appliance. The documentation is lacking and needs more refinement for the masses.
Here are the steps to deploy ODA 2.5 by reimaging the ODA. Concepts and procedures for bare-metaling the box remain the same.
    1. Use the ILOM to connect to the box via remote control and mount the VM image (you need to do this on both hosts individually)
    2. After the reboot, the imaging process will start and should take between 1 and 2 hours (it took about 2 hours 30 minutes for me)
    3. Once you get the Oracle VM Server 3.1.1 screen, your box has been imaged with a dom0 image.
          4. If you are using the ILOM you can ALT-F2 to get a login prompt
          5. Login as root
          6. Ensure that both boxes have been reimaged before starting the next step
    7. Run /opt/oracle/oak/bin/oakcli configure firstnet (on the first node)
    8. You have 2 options (Local or Global)
    9. Global should be selected if both nodes are ready to be IP'ed
    10. Select the network (net1, net2, net3, net4)
    11. Please note it is a little confusing, but here is a breakdown:
        a. priv1=bond0 (Interconnect)
        b. Net1=bond1
        c. Net2=bond2
        d. Net3=bond3
        e. Net4=xbond0
    12. Oracle failed to mention this, but the startup screen does provide the MAC addresses as well as the ethernet names and bond info, so be careful and ensure that you understand your network topology prior to installing.
    13. You do want to make sure you have a DNS entry and a new IP address for the Dom0 on each server node (2 x Dom0)
    14. Needless to say, the network should be the same on both nodes, e.g. public should be cabled on net1 on both nodes for consistency
    15. The network config will configure the public network on both nodes for Dom0 only
    16. After the config, scp patch 16186172 into /OVS on the Dom0 of box 0
    17. Unzip the patch 1 & 2 files and cat them together: cat a b > templates
    18. Deploy the oda_base
       [root@odadb1-dom0 bin]# ./oakcli deploy oda_base
      Enter the template location: /OVS/templateBuild-2013-01-15-08-53.tar.gz
      Core Licensing Options:
      1. 2 CPU Cores
      2. 4 CPU Cores
      3. 6 CPU Cores
      4. 8 CPU Cores
      5. 10 CPU Cores
      6. 12 CPU Cores
      Selection[1 : 6] : 5
      ODA base domain memory in GB(min 8, max 88)[default 80] :
      INFO: Using default memory size i.e. 80 GB
      INFO: Node 0
      INFO: Deployment in non local mode
      INFO: Running the command to copy the template /OVS/templateBuild-2013-01-15-08- 53.tar.gz to remote node 1
      templateBuild-2013-01-15-08-53.tar.gz 100% 4620MB 47.6MB/s 01:37
      INFO: Node 0
      INFO: Spawned the process 26679 in the deployment node 0
      INFO: Trying to setup on deployment node 0
      INFO: Spawned the process 26680 in the node 1
      INFO: Trying to setup on node 1
      Using config file "/OVS/Repositories/odabaseRepo/VirtualMachines/oakDom1/vm.cfg" .
      Started domain oakDom1 (id=1)
      INFO: Deployment in local mode
      INFO: Node 1
      INFO: Extracted the image files on node 1
      INFO: Node 1
      INFO: The VM Configuration data is written to /OVS/Repositories/odabaseRepo/Virt ualMachines/oakDom1/vm.cfg file
      INFO: Running /sbin/losetup /dev/loop0 /OVS/Repositories/odabaseRepo/VirtualMach ines/oakDom1/System.img command to mount the image file
      INFO: Mount is successfully completed on /dev/loop0
      INFO: Making change to the /OVS/Repositories/odabaseRepo/VirtualMachines/oakDom1 /tmpmnt/boot/grub/grub.conf file
      INFO: Node 1
      INFO: Node 1
      INFO: Assigning IP to the second node...
      INFO: Node 1
      INFO: Created oda base pool
      INFO: Starting ODA Base...
      Using config file "/OVS/Repositories/odabaseRepo/VirtualMachines/oakDom1/vm.cfg" .
      Started domain oakDom1 (id=1)
      INFO: Deployment in local mode
      INFO: Node 0
      INFO: Extracted the image files on node 0
      INFO: Node 0
      INFO: The VM Configuration data is written to /OVS/Repositories/odabaseRepo/Virt ualMachines/oakDom1/vm.cfg file
      INFO: Running /sbin/losetup /dev/loop0 /OVS/Repositories/odabaseRepo/VirtualMach ines/oakDom1/System.img command to mount the image file
      INFO: Mount is successfully completed on /dev/loop0
      INFO: Making change to the /OVS/Repositories/odabaseRepo/VirtualMachines/oakDom1 /tmpmnt/boot/grub/grub.conf file
      INFO: Node 0
      INFO: Node 0
      INFO: Assigning IP to the first node...
      INFO: Node 0
INFO: Created oda base pool
      INFO: Starting ODA Base…
      19. Once oda_base is deployed, verify it:
       [root@podadb31-dom0 bin]# ./oakcli show oda_base
      ODA base domain
      ODA base CPU cores :10
      ODA base domain memory :80
ODA base template :/OVS/templateBuild-2013-01-15-08-53.tar.gz
      20. Once the oda_base is installed you will have to VNC in using the Dom0 port 5900 to get access to the database server (due to a bug you will need to VNC in on both servers first and respond to the "press any key to continue" prompt).
      21. Once logged in you will need to IP the oda_base
      22. You can either use /opt/oracle/oak/bin/oakcli configure firstnet (please note it detects the VM environment and gives eth1, eth2, eth3 and eth4 as options)
      23. or, better, use ./oakcli deploy
      24. oakcli now has an option to change the proxy port for ASR as well as to configure an external ASR server
      25. An external ASR server needs a server name as well as a port (there is no definition of what the port is supposed to be)
      26. Also, due to a bug, if VM Manager is bounced you will have to VNC in and hit "press any key to continue"; you can see that below
      The deployment process has not changed and will follow the same deployment steps.
      This is the first release of the ODA on a virtualized platform and glitches are to be expected, but it does seem to have been rushed out.
      Please feel free to comment or ask questions. I have only deployed a Dom0 and the ODA_base here, but I will deploy an app shortly and post my experience.

      Update: Edited steps and changed the network names

      Changing your PS Database Platform: The Test Phase

      Brent Martin - Mon, 2013-02-11 02:51

      So far I've written about how you might approach the plan, design, and build phases of a PeopleSoft replatforming project.  This time around I'd like to spend some time on the Test phase.

Just because you’re only changing some SQL around doesn’t mean that you can take shortcuts with testing. You’ll want to run an entire stem-to-stern test of your PeopleSoft system to ensure that everything works as expected.

       One thing to keep in mind:  99% of your defects will be with custom and customized code.   The delivered code won’t generate nearly as many problems, so if you’re deciding where to spend your testing resources definitely focus on the custom processes.

As I mentioned in the Build phase, Unit Testing is critical. It’s arguably the most important testing that you will do, so come up with a mechanism to track unit test results against the objects modified, and make sure you have 100% custom code coverage.

Testing the data migration process is important too. Databases differ in subtle ways, and you’ll want to make sure your normal and extended character sets make it across correctly. Naturally you’ll want to run row count reports, but you’ll need to go beyond that: you’ll want to make sure the data in the underlying tables is identical and nothing has been lost in translation. One simple way is to use ODBC to join tables in the source and target databases and create queries that ensure the individual columns are identical. Unfortunately that approach is extremely slow. Another approach is to run hash algorithms on key character fields and summarize the totals, comparing the results on the source and target databases.
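A sketch of that hashing approach on the Oracle side follows. The table and columns are illustrative PeopleSoft-style names, and ORA_HASH is Oracle-specific; on a different platform you would substitute that platform's hash or checksum function.

```sql
-- Hedged sketch: build a per-table checksum over key character fields.
-- Run the equivalent statement on source and target, then compare the
-- two result rows; a mismatch means data was altered in the migration.
SELECT COUNT(*)                              AS row_cnt,
       SUM(ORA_HASH(emplid || '~' || name))  AS char_checksum
FROM   ps_personal_data;
```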

System/Integration Testing is also very important. For one thing, you’ll want confidence that the system behaves the way you expect it to. And interfaces will generate problems of their own. One common problem is that your interface programs probably assume the default date format of a database platform; interfaces that don’t specify a date format can choke when the incoming date format doesn’t match what’s expected, or they can send the wrong date format to an output file. Switching from a non-Unicode database platform to Unicode can cause other problems. You’ll want to execute all of your interfaces and make sure the results are valid.
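The date-format trap can be avoided by never relying on the platform's NLS defaults in interface code. A sketch (the table and column names are illustrative):

```sql
-- Fragile: the output depends on the session's NLS_DATE_FORMAT,
-- which changes when you change database platforms.
SELECT TO_CHAR(asofdate) FROM ps_interface_stage;

-- Safer: the format mask is explicit, so the file layout is stable
-- no matter which platform the interface runs against.
SELECT TO_CHAR(asofdate, 'YYYY-MM-DD') FROM ps_interface_stage;
```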

      User Acceptance Testing is important as well.  Users who know the processes should spend some time in the system making sure all of the critical functionality works and they feel comfortable everything is working as expected. They’ll need to be on the lookout for date format issues, performance issues, data entry discrepancies, etc.  They should also spend quality time with their reporting to make sure no new errors were introduced during the build phase.

      And finally Performance Testing should be conducted to flesh out any new problems introduced by the new platform.  The performance testing should include Online Performance testing,  Batch performance testing, and if possible data from PeopleSoft Performance Monitor should be captured and compared to baseline performance data from the old system. 

      Online performance testing is typically conducted using a tool like LoadRunner or Rational Performance Tester.  You record scripts based on a set of highly used business processes and play them back in volume.  While the test is executing your admin team will monitor system performance on the various servers and look for bottlenecks.  At the end of the day this is necessary for a performance testing effort (especially if you’re making multiple changes like migrating to new hardware and/or a PeopleTools release).  However, it’s not sufficient to identify all performance issues.

      One of the big deficiencies of online performance testing is that it doesn’t test batch jobs, or if it does test them it is very limited.  For batch testing, you’ll want to define a testing mechanism that is realistic according to real, observed jobs that run in your production environment.  You’ll want to make sure that the jobs have real data to process.  And you’ll want to make sure the jobs are sequenced in such a way that dependencies are tracked to some extent.  After all of that, you’ll want to come up with a way to track and report results.  I’ll write more about the approach and toolset I’ve used to get realistic batch tests in the future so stay tuned. 

The other deficiency of online performance testing is that PeopleSoft is just too complicated an application to expect a robot to crawl all of the web pages looking for slowness. Read More...

      Fun with Date Math

      Chet Justice - Sun, 2013-02-10 20:42
      (First off, sorry Mike, I'm hoping this will break my writer's block...)

      On Friday I was asked to look at a report that wasn't returning all of the data. Sample:
Year/Month  Total Sales  Total Sales (YAGO)
01/31/2013  $1,000,000   $900,000
03/31/2013  $950,000
      For reference, YAGO is "Year Ago."

      Notice anything funny there?

      Yeah, February is missing. The (OBIEE) report has a filter on Jan, Feb and Mar of 2013. But it wasn't showing up. I confirmed via manual SQL (hah!) that there was (YAGO) data in there for February. Any ideas?

      I immediately suspected one of two things:
      - If the Date (month) dimension had a "year ago" column it was wrong.
      - The join in OBIEE was doing it wrong.

      I checked the date dimension first. It was fine. It didn't even have a YAGO column, so nothing to see there. I looked at the join between the date dimension and the fact table...
      (YEAR ("DW".""."DW"."My_Month_Dim"."MONTHEND_DATE" ) - 1 ) * 10000 
      + MONTH ("DW".""."DW"."My_Month_Dim"."MONTHEND_DATE" ) * 100
      + CASE WHEN DayOfMonth("DW".""."DW"."My_Month_Dim"."MONTHEND_DATE") = 29 THEN 28 ELSE
      DayOfMonth("DW".""."DW"."My_Month_Dim"."MONTHEND_DATE") END
      = "DW".""."DW"."My_Fact_Table"."MONTH_DIM_KEY"
I want to tear my eyes out when I see stuff like that. I don't even want to know what it does. * 10000? * 100? Shoot me.

OK, so the MONTH_DIM_KEY is in the YYYYMMDD format. MONTHEND_DATE is a date data-type that corresponds to the last day of the month. For February 2013, it's 20130228. For February 2012, it should be 20120229. <<< Leap Year!!! I'm going to make a wild guess and say that the formula up there isn't working. How to test it though? That's logical SQL (OBIEE); it doesn't run in the database. I just ran the report and grabbed the SQL submitted to the database. This is what it looked like:
AND ( TO_NUMBER( TO_CHAR( MONTHEND_DATE, 'yyyy' ), '9999' ) - 1 ) * 10000 +
TO_NUMBER( TO_CHAR( MONTHEND_DATE, 'MM' ), '99' ) * 100 +
CASE
  WHEN TO_NUMBER( TO_CHAR( MONTHEND_DATE, 'dd' ), '99' ) = 29
  THEN 28
  ELSE TO_NUMBER( TO_CHAR( MONTHEND_DATE, 'dd' ), '99' )
END = MONTH_DIM_KEY
AND ( MONTHEND_DATE IN ( TO_DATE( '2013-01-31', 'YYYY-MM-DD' ), TO_DATE(
'2013-02-28', 'YYYY-MM-DD' ), TO_DATE( '2013-03-31', 'YYYY-MM-DD' ) ) )
      Eyes are burning again. This is also the "prettified" SQL after I hit Ctrl + F7 in SQL Developer. The very first thing I do with OBIEE generated SQL.

      One part of that wouldn't be so bad, but it's three formulas adding up to some mysterious number (presumably the last day of the month, for the previous year, in YYYYMMDD format). So I moved all those formulas up into the SELECT part of the statement. Let's see what they are doing.
SELECT ( TO_NUMBER( TO_CHAR( MONTHEND_DATE, 'yyyy' ), '9999' ) - 1 ) * 10000 part_1,
       TO_NUMBER( TO_CHAR( MONTHEND_DATE, 'MM' ), '99' ) * 100 part_2,
       CASE
         WHEN TO_NUMBER( TO_CHAR( MONTHEND_DATE, 'dd' ), '99' ) = 29
         THEN 28
         ELSE TO_NUMBER( TO_CHAR( MONTHEND_DATE, 'dd' ), '99' )
       END part_3
FROM my_month_dim
WHERE MONTHEND_DATE IN ( TO_DATE( '2013-01-31', 'YYYY-MM-DD' ),
                         TO_DATE( '2013-02-28', 'YYYY-MM-DD' ),
                         TO_DATE( '2013-03-31', 'YYYY-MM-DD' ) )
      That resulted in this:
  PART_1  PART_2  PART_3
20120000     100      31
20120000     200      28
20120000     300      31
      So PART_3 is definitely incorrect. Am I going to bother to figure out why? I have some serious issues inside of my brain which simply do not allow me to do date math. I avoid it at all costs...instead choosing to use whatever the system provides me.

      One of my favorites, especially when dealing with leap years, is ADD_MONTHS.

      If date is the last day of the month or if the resulting month has fewer days than the day component of date, then the result is the last day of the resulting month

      That's why. Add -12 months to February 28, 2013. You don't get back February 28, 2012, you get back the 29th, as it should be. Do the same thing starting with February 29th, 2012. Add 12 months, subtract 12 months. It's always right. Yay for someone figuring this out so I don't ever have to do so.
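You can see the month-end clamping directly with a quick query against DUAL:

```sql
-- Feb 28, 2013 is the last day of its month, so 12 months back lands
-- on the last day of Feb 2012: the 29th.
SELECT ADD_MONTHS(DATE '2013-02-28', -12) AS yago    FROM dual;  -- 29-FEB-2012

-- Coming forward from Feb 29, 2012 clamps to Feb 28, 2013.
SELECT ADD_MONTHS(DATE '2012-02-29',  12) AS next_yr FROM dual;  -- 28-FEB-2013
```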

Sadly, OBIEE doesn't have the equivalent of ADD_MONTHS (or LAST_DAY); you have to build out a string and then concatenate it all together, which is not pleasant. So I cheated: I used EVALUATE. Here's my working solution.
      Oops, that's the physical SQL. How about the OBIEE SQL:
CAST( EVALUATE( 'TO_NUMBER( TO_CHAR( ADD_MONTHS( %1, %2 ), %3 ) )',
"DW".""."DW"."My_Month_Dim"."MONTHEND_DATE", -12, 'YYYYMMDD' ) AS INTEGER ) = "DW".""."DW"."My_Fact_Table"."MONTH_DIM_KEY"
      Categories: BI & Warehousing

      Send tweets from Oracle with OAuth

      Kubilay Çilkara - Thu, 2013-02-07 17:03
Twitter is a great real-time social platform. Timelines and hashtags are a great way to communicate relevant information to an audience of subscribers.

There are lots of websites and applications which use Twitter to provide content to their users. Several programming languages (Java, PHP, Python) have built Twitter API libraries to quickly send and receive content from Twitter, but I haven't come across a fully fledged library in PL/SQL which would enable you to send a tweet from PL/SQL using Twitter's latest OAuth security protocol.

There are blogs out there which show you how to send a tweet from Oracle, but not many using the new OAuth security protocol.

The only blog post I have seen so far which uses PL/SQL and OAuth to send a tweet from Oracle is Some Coding Hero's heroic mumblings, by an Oracle consultant.

I have spent time reviewing his code, along with the comments made on his post (which he wrote in 2010), and managed to trim it down to use only one of his PL/SQL procedures. The procedure below sends the 140-character tweet for you using OAuth. To be able to do this, as Some Coding Hero says, you will have to create a Twitter application for your Twitter handle. Actually, you don't create an application; you just sign up and obtain 4 security codes! Once you register your application with Twitter, you are given 4 OAuth security codes as follows:

With the above 4 OAuth security codes you can use Some Coding Hero's last block of code to send a tweet from Oracle - I used Oracle Apex - like this:

Update (21-JUN-2013): the Twitter API 1.1 endpoint and the use of Oracle Wallet were added to the script below.

      -- Twitter API 1.1 update
        l_oauth_request_token_url CONSTANT VARCHAR2 (500) := 'https://api.twitter.com/1.1/statuses/update.json';   
        l_oauth_consumer_key CONSTANT VARCHAR2 (500) := 'xxxxxxxxx';  
        l_oauth_token  CONSTANT VARCHAR2 (500) := 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxx';  
        l_oauth_secret CONSTANT VARCHAR2 (500) := 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx';  
        l_oauth_nonce VARCHAR2 (500);  
        l_oauth_signature_method CONSTANT VARCHAR2 (10) := urlencode ('HMAC-SHA1');  
        l_oauth_timestamp VARCHAR2 (100);  
        l_oauth_version CONSTANT VARCHAR2 (5) := urlencode ('1.0');  
        l_oauth_consumer_secret CONSTANT VARCHAR2 (500) := 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx';  
        l_http_method VARCHAR2 (5) := 'POST';  
        l_oauth_base_string VARCHAR2 (2000);  
        l_oauth_key VARCHAR2 (500) := l_oauth_consumer_secret || '&' || l_oauth_secret ;  
        l_sig_mac RAW (2000);  
        l_base64_sig_mac VARCHAR2 (100);  
        http_req UTL_HTTP.req;  
        http_resp UTL_HTTP.resp;  
        l_update_send VARCHAR2(2000);  
        l_oauth_header  VARCHAR2(2000);  
        l_line  VARCHAR2(1024);  
        resp_name  VARCHAR2(256);  
        resp_value VARCHAR2(1024);  
      -- put the tweet in the urlencode function below 
        l_content varchar2(140) := urlencode('@somecodinghero thank you');  
        l_random varchar2(25);  

       -- Oracle Wallet
          utl_http.set_wallet('file:/home/app/oracle/product/11.2.0/dbhome_1/owm/wallets/oracle', 'putyourwalletpasswordhere');

       -- Get the timestamp  
        SELECT urlencode ((SYSDATE - TO_DATE ('01-01-1970', 'DD-MM-YYYY'))  * (86400)) 
        INTO l_oauth_timestamp  
        FROM DUAL;  
        -- RANDOM oauth_nonce  
        SELECT dbms_random.string('A',25)  
        INTO l_random  
        FROM DUAL;  
        SELECT urlencode (UTL_ENCODE.base64_encode(UTL_I18N.string_to_raw (l_random, 'AL32UTF8')))  
        INTO l_oauth_nonce  
        FROM DUAL;  
        l_oauth_base_string := l_http_method   
                                || '&'  
                                || urlencode (l_oauth_request_token_url)  
                                || '&'  
                                || urlencode ( 'oauth_consumer_key'  
                                    || '='  
                                    || l_oauth_consumer_key  
                                    || '&'  
                                    || 'oauth_nonce'  
                                    || '='  
                                    || l_oauth_nonce  
                                    || '&'  
                                    || 'oauth_signature_method'  
                                    || '='  
                                    || l_oauth_signature_method  
                                    || '&'  
                                    || 'oauth_timestamp'  
                                    || '='  
                                    || l_oauth_timestamp  
                                    || '&'  
                                    || 'oauth_token'  
                                    || '='  
                                    || l_oauth_token  
                                    || '&'  
                                    || 'oauth_version'  
                                    || '='  
                                    || l_oauth_version  
                                    || '&'  
                                    || 'status'  
                                    || '='  
                                    || l_content);  
        DBMS_OUTPUT.put_line (l_oauth_base_string);  
        l_sig_mac := DBMS_CRYPTO.mac (  UTL_I18N.string_to_raw (l_oauth_base_string, 'AL32UTF8')  
                                      , DBMS_CRYPTO.hmac_sh1  
                                      , UTL_I18N.string_to_raw (l_oauth_key, 'AL32UTF8'));  
        DBMS_OUTPUT.put_line ('Combined sig: ' || l_oauth_key);  
        l_base64_sig_mac := UTL_RAW.cast_to_varchar2 (UTL_ENCODE.base64_encode (l_sig_mac));  
        DBMS_OUTPUT.put_line ('MAC Signature (Base64-encoded): ' ||  l_base64_sig_mac);  
        l_update_send := l_oauth_request_token_url || '?status=' || l_content;  
          http_req := UTL_HTTP.begin_request (  l_update_send  
                                              , l_http_method  
                                              , UTL_HTTP.http_version_1_1);  
         DBMS_OUTPUT.put_line ('UPDATE URL ' || l_update_send);  
         UTL_HTTP.set_response_error_check (TRUE);  
         UTL_HTTP.set_detailed_excp_support (TRUE);  
          l_oauth_header := 'OAuth oauth_nonce="' || l_oauth_nonce || '", '  
                            || 'oauth_signature_method="'|| l_oauth_signature_method || '", '  
                            || 'oauth_timestamp="'|| l_oauth_timestamp || '", '  
                            || 'oauth_consumer_key="'|| l_oauth_consumer_key || '", '  
                            || 'oauth_token="' || l_oauth_token || '", '  
                            || 'oauth_signature="' || urlencode (l_base64_sig_mac) || '", '  
                            || 'oauth_version="' || l_oauth_version || '"';  
          utl_http.set_header ( r => http_req,   
                                NAME => 'Authorization', VALUE => l_oauth_header);  
          DBMS_OUTPUT.put_line  ('HEADER: ' || l_oauth_header);                          
          utl_http.write_text(  r => http_req, DATA => l_content);   
          http_resp := utl_http.get_response(r => http_req);  
         FOR i IN 1..utl_http.get_header_count(http_resp) LOOP  
          utl_http.get_header(http_resp, i, resp_name, resp_value);  
          dbms_output.put_line(resp_name || ': ' || resp_value);  
         END LOOP;  
        DBMS_OUTPUT.put_line('Getting content:');  
        BEGIN  
          LOOP  
            utl_http.read_line(http_resp, resp_value, TRUE);  
            DBMS_OUTPUT.put_line(resp_value);  
          END LOOP;  
        EXCEPTION  
          WHEN utl_http.end_of_body THEN  
            DBMS_OUTPUT.put_line('No more content.');  
            utl_http.end_response(r => http_resp);  
        END;  
      EXCEPTION  
        WHEN OTHERS THEN  
          DBMS_OUTPUT.put_line('HTTP ERROR: ' || utl_http.get_detailed_sqlerrm);  
      END;  

      Thank you Somecodinghero!
      Categories: DBA Blogs

      StarAnalytics Bought by IBM

      Look Smarter Than You Are - Wed, 2013-02-06 15:13

      On February 1, it was announced that Star Analytics (one of our favorite software companies in the world) is being bought by IBM (not one of our favorite software companies in the world) for an undisclosed amount.  Star, founded in 2004, made two excellent products (Star Integration Server and Star Command Center), and IBM's strategy, at the moment, is to continue both products under IBM's Business Analytics Software group.

      As everyone knows, IBM has been on an acquisitions kick for the last 5 years, particularly around business analytics.  They own Cognos, TM/1, Clarity and a whole lot of other products... or at least they bought the companies that made those products and then stopped some of those products and continued others.  Unlike Oracle, which is quite good at buying companies and immediately knowing which products it wants to strategically continue, IBM can take some time to make up its mind, and half the time people internal to IBM don't know which products are being discontinued.  There are still people internal to IBM touting Clarity's planning and consolidation products, even though those have been virtually dead since IBM first bought Clarity.

      It may seem odd to some that Star was bought by IBM considering that IBM owns Cognos and Star is traditionally awesome at Hyperion integration, not Cognos, TM/1, and the like.  What many people don't realize is that Star's products have expanded beyond their traditional Hyperion roots over the last few years and now talk well to other products, including relational databases.  Still, Star Integration Server is found almost exclusively at Hyperion shops, and one has to believe that part of the reason IBM bought Star is to be able to easily extract data from Essbase, Planning, and HFM.

      Judging from IBM's announcement and FAQ on the purchase, it seems that being able to extract and control Oracle (particularly Hyperion) is the main reason they bought Star.  That makes it odd that Oracle didn't go ahead and buy them instead.  All I can think of is that either IBM offered a better price or Oracle felt they had competing products with some of the same functionality already (I'll be getting to that in a second).

      So what does that mean for you?  If you bought Star's products, congratulations.  They are excellent products and I would continue using them for as long as IBM continues to support them.  If you're considering a purchase, I would wait until IBM decides what they're going to do.  At bare minimum, IBM will probably begin to favor Cognos and TM/1 more than Hyperion and for a lot of us, Hyperion expertise was the reason we bought Star's products.

      If you want to consider something else, I would suggest buying Hyperion Financial Data Quality Management or Oracle Data Integrator instead of Star Integration Server and Hyperion Financial Close Management instead of Star Command Center.  They don't exactly overlap functionality-wise, but they are the closest replacements I can readily think of.  Note that Star Integration Server has some very cool extraction technologies that are patented, so any product extracting data or hierarchies from Hyperion is probably going to be a lot slower, for the time being, than Star.

      We will miss you, Star Analytics.  It was a good 7+ year run, and the Hyperion world will always fondly remember your company, your products, and your employees (particularly your iconic leader, and my close friend, Quinlan Eddy).  May your staying agreements at IBM be short.
      Categories: BI & Warehousing

      Add Code w/ Syntax Highlighting to PowerPoint

      Tyler Muth - Wed, 2013-02-06 09:12
      …or any media that supports rich text or HTML. I often include code-snippets or queries in technical presentations. I find them much more visually appealing and easier to read if they include syntax highlighting. If you’re using Notepad++ on Windows, it’s trivial. Simply go to the “Plugins” menu > “NppExport” > “Copy all formats to […]
      Categories: DBA Blogs, Development

      “Refresh” a CSV File

      Tyler Muth - Tue, 2013-02-05 09:22
      In a previous post I discussed a PDF reader that allows you to refresh the PDF after changes are made. This is the same concept except it’s for CSV files. I’ve been using Perl to convert sqlplus spool output to CSV which is much easier to read into R. As I’m updating my code and […]
      Categories: DBA Blogs, Development

      “Refresh” a PDF

      Tyler Muth - Tue, 2013-02-05 08:50
      I’ve been working in R (wikipedia entry) a lot lately to graph performance data about Oracle systems. In general I output PDF files. The problem with most PDF readers (on Windows) is that they either get an exclusive lock on the file when it’s open (yes you, Adobe Acrobat), or you have to close and […]
      Categories: DBA Blogs, Development

      Changing your PS Database Platform: The Build Phase

      Brent Martin - Tue, 2013-02-05 03:05

      In the last two postings I wrote about how you might plan a project to migrate your PeopleSoft application from one database platform to another, and how you might approach the Design phase.  In this article I'd like to share my thoughts about the Build phase.  I'll cover the Test and Cutover phases in my next posting(s).

      The Build Phase

      The Build Phase is always my favorite part of any PeopleSoft project, probably because I come from a Development background.  The Build phase of a replatforming project is in some ways very straightforward, and in some ways it is more difficult.  The problem isn’t in the coding changes – it’s not too difficult to make a piece of SQL work on a different database platform -- the challenge is in Unit Testing.  Every SQL that is touched must be unit tested, and that will be the biggest part of the effort.

      Most developers are used to unit testing their own work.  But it's a good idea to use a code and testing review, where each developer documents every object change and unit test and another developer reviews the results.  Since there will be many small changes, the documentation requirement should be light, but it should include a trace file proving that each App Engine step, PeopleCode SQL, and SQR function was executed and tested.  How structured your process is will depend on the size and location of your team.  Ensuring quality with process and documentation might not be as important in a small shop, but it is critical to your success if you have a large development team located offshore.
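      The trace files mentioned above can be generated by turning up the trace settings on the Process Scheduler while the unit tests run.  As a rough sketch only -- the bit-flag values below are illustrative assumptions, so check PeopleBooks for your PeopleTools release before using them -- the relevant psprcs.cfg entries look something like this:

      ```ini
      ; psprcs.cfg [Trace] section -- flag values here are illustrative
      [Trace]
      ; SQL trace bit flags (statements, bind variables, fetches, etc.)
      TraceSql=31
      ; Application Engine trace (step execution plus the SQL each step issues)
      TraceAE=131
      ; PeopleCode trace, useful when reviewing PeopleCode SQL changes
      TracePC=3
      ```

      The resulting trace files can then be attached to the unit test documentation as evidence that each modified step actually executed.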

      Unit testing is the only opportunity you’ll have to actually test each piece of modified code.  Subsequent phases will test the system overall, but you will probably not achieve 100% code coverage.  Fortunately, almost all of your defects can actually be caught in unit testing of a replatforming project so you should use this to your advantage.  Defects that get missed will haunt you in later testing phases where they’ll be more visible and more expensive to fix.

      Also as part of this phase, your DBA team should execute another mock cutover using the tools and steps you've decided to use for the real cutover.  The resulting database (plus the code generated in the Build phase) will be the starting point for your first test database.

      And the testing team should start building the test scripts for the subsequent test phases here.  Since you're not changing application functionality, they should be able to leverage existing scripts from prior upgrades or implementations and enhance them for functionality that was added since the original scripts were created.

      Oracle BI Apps – What are customers interested in?

      Oracle e-Business Suite - Sun, 2013-02-03 08:05

      A very informative post by Rajesh Dhanapal from Infosys.

      Customers today are not willing to wait months or years for a BI project implementation.  They are keen on quick wins rather than implementing a BI solution from scratch using the traditional BI approach.  Oracle BI Apps gives an organisation a head start toward its to-be state with reduced time, effort, and risk.  Read Details

      Categories: APPS Blogs

      Behavior of Tab Canvases

      Gerd Volberg - Thu, 2013-01-31 05:00
      The behavior of tab canvases is very interesting when your form gets more and more tab pages.  Let's say you have only three pages in your tab canvas (Width Style property = Fixed).
      Forms automatically renders all three pages at the same size.  That's what we expect.

      In the next step we create a fourth tab:

      If there is not enough space for all page labels at full size, we see the first kind of resizing: all labels remain readable, but the size of some pages is reduced.

      If we create more tab pages:
      At some point the label text shrinks and some letters are replaced by dots.

      And in the last stage:

      In this final look and feel we get a horizontal scrollbar.  Keep this in mind when you use many tab pages in your form.

      Have fun

      Bug: ORA-00979: not a GROUP BY expression

      Asif Momen - Wed, 2013-01-30 13:51
      Bugs and performance degradation are part of database upgrade stories, and we have witnessed yet another post-upgrade bug after upgrading our database from Oracle 10gR2 to Oracle 11gR2.

      The following query (simplified for demonstration purposes) was running happily on Oracle 10gR2:

      SQL> select * from ( select TRUNC(dt,'MM')
        2          from test
        3          group by TRUNC(dt,'mm'));



      However, the same query started throwing an error (ORA-00979) when executed on Oracle 11gR2:

      SQL> select * from v$version;

      Oracle Database 11g Enterprise Edition Release - 64bit Production
      PL/SQL Release - Production
      CORE      Production
      TNS for Linux: Version - Production
      NLSRTL Version - Production


      SQL> select * from ( select TRUNC(dt,'MM')
                from test
                group by TRUNC(dt,'mm'));
        2    3  select * from ( select TRUNC(dt,'MM')
      ERROR at line 1:
      ORA-00979: not a GROUP BY expression


      At this point I normally search My Oracle Support (MOS) for hits pertaining to the problem, and I found the following bug information:

      Bug 11067251 - False ORA-979 with TRUNC(date, format) or ROUND(date, format) - superceded [ID 11067251.8]

      Apparently, you hit the bug when you use either the TRUNC or ROUND function in an inline view.  Executing the same query with a small modification (removing the inline view) was, however, successful.

      SQL> select TRUNC(dt,'MM')
       from test
      group by TRUNC(dt,'mm');
        2    3  


      The above bug note confirms that this release is affected and proposes the following two workarounds: 

      1) Use NO_MERGE hint or
      2) Disable view merging "_simple_view_merging=false" 
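      As a sketch against the simplified test query above (not the actual application SQL), the two workarounds look something like this -- note that underscore parameters such as "_simple_view_merging" should only be changed under Oracle Support's guidance:

      ```sql
      -- Workaround 1: the NO_MERGE hint stops the optimizer from merging
      -- the inline view, which avoids the false ORA-00979
      SELECT /*+ NO_MERGE(v) */ *
        FROM (SELECT TRUNC(dt, 'MM')
                FROM test
               GROUP BY TRUNC(dt, 'mm')) v;

      -- Workaround 2: disable simple view merging for the session
      -- (we ended up applying this system-wide instead)
      ALTER SESSION SET "_simple_view_merging" = FALSE;
      ```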

      As it was not possible to rewrite the application at this point, we disabled view merging at the system level.  Disabling view merging system-wide might appear to be a bad choice, but I think it is the right decision for now.  We will soon be upgrading this database to the latest patch set, which will kill two birds with one stone: a) the bug fix, and b) being on the latest patch level (who knows what new bugs are waiting for us?).


      • My Oracle Support:  Bug 11067251 - False ORA-979 with TRUNC(date, format) or ROUND(date, format) - superceded [ID 11067251.8]

      Dummy output parameters

      Rob van Wijk - Tue, 2013-01-29 12:31
      Yesterday I encountered a code snippet which taught me something I did not think was possible.  But it is, as I'll show in this blog post.  It's not spectacular in any way, just convenient at most.  When you need a function or procedure to retrieve some value, you'll start examining the existing code base to see if a function already exists that does the job.  Probably you won't find an exact match, but […]

      Webcast: Using Oracle OLAP data in multiple BI clients. Feb. 5 at 9:00 PST

      Keith Laker - Tue, 2013-01-29 11:35

      Want to learn how to connect multiple BI clients such as SAP BusinessObjects Analysis, Cognos and Microsoft Excel to Oracle OLAP, quickly and easily, using the XMLA version of the MDX Provider for Oracle OLAP?  Check out the Oracle webcast on February 5 at 9:00 AM PST.

      You will see how it works and how customers in different industries are using this solution.

      Just sign up here and join us on February 5th.
      Categories: BI & Warehousing


      Subscribe to Oracle FAQ aggregator