Feed aggregator

NetFlix – Blockbuster Movies

Marco Gralike - Thu, 2016-09-22 23:08
I mean, in principle it’s not that special, being that the info is already on…

Oracle Open World 2016 from a PeopleSofter point of view: Thursday 22nd and Wrap Up

Javier Delgado - Thu, 2016-09-22 18:33
So Open World 2016 has come to an end. But before the curtains fell, there was still some activity and interesting PeopleSoft sessions to attend.

Reporting and Analytics

My day started with a session named Getting the Most Out of PeopleSoft - Reporting and Analytics [CON7075], with the participation of Matthew Haavisto, Jody Schnell and Ramasimha Rangaraju.

Reporting has evolved a lot in the last few years, and not only in PeopleSoft. Gone are (or should be) the days in which a report meant a PDF or a printout. Today reporting is not only interactive but also actionable. I actually delivered a presentation on this topic back in April 2016 at the PeopleSoft Tour in Madrid. I later recorded it on YouTube, but unfortunately it is only available in Spanish.



PeopleSoft is no exception to this reporting evolution. Tools like Pivot Grids, actionable charts and Simplified Analytics all point in the same direction. Unfortunately, not all users are ready for this transition: I have heard from many customers that upper management does not want to use a digital device to access the reports and still prefers the printed alternatives. And yes, I'm writing this as of September 2016.

Anyway, going back to the session, there were some points that I found particularly interesting:

  • The ability in PeopleTools 8.55 to generate submittable PDFs using BI Publisher. This functionality is particularly useful for government forms, but can also be used to gather and process ad-hoc data from users.
  • Oracle JET has been adopted as the charting engine, giving PeopleSoft a user experience more consistent with other Oracle products. Given the amount of development effort dedicated to Oracle JET charting features, PeopleSoft may benefit quickly and rapidly evolve its charting capabilities.
  • The introduction of Self Service Scheduling simplifies the execution of reports by linking them to pages and hiding the complexity of run controls from users.

Another point I found interesting was the explanation of how macros adversely affect PS/nVision performance: they require PeopleSoft to run the report twice, first using the Microsoft-recommended OpenXML method and then, since that method does not support macros, using the traditional PeopleSoft Excel automation. Interesting to know!



Meet the PeopleTools Experts

The next session was one of my favorites, as it consists of several round tables where you can talk directly to the PeopleTools development team. It is also useful to hear the concerns and doubts of customers and partners.

There were plenty of questions about Fluid User Interface, Cloud Architecture, Lifecycle Management and so on. If you ever attend Oracle Open World in the future, I strongly recommend this session.

PeopleTools Product Team Panel Discussion

Just after lunch came this classic of Oracle Open World. It consists of an open discussion between the PeopleTools Product Team and customers and partners. It is always interesting to attend this type of session and listen to thoughts and ideas from the PeopleSoft community.


Monitoring and Securing PeopleSoft

My last session was Hands-On with PeopleSoft: Develop Practices to Harden and Protect PeopleSoft [CON7074], delivered by Greg Kelly. The presentation was built around the Securing Your PeopleSoft Application Environment document available here. I found it really illustrative and, taking into account my rather shallow knowledge of security, rather scary :). Next time I will make sure I prepare upfront so I can take more advantage of Greg's wide knowledge of the area.

Wrap Up

This was an interesting edition of Oracle Open World as far as PeopleSoft is concerned. Not many new announcements were made since the last edition. Still, I think the PeopleSoft team at Oracle is doing a great job. This is still a great product indeed.

On the other hand, I have the feeling that PeopleSoft customers are lagging behind in terms of adopting new features. Personally, I don't think this is because updating PeopleSoft to the latest features is complex. Actually, with Selective Adoption, DPK and the other Lifecycle Management tools, it has never been easier to update. The barrier, in my opinion, is not in the product, but in marketing. In recent years, all of Oracle's marketing and sales horsepower has been dedicated almost exclusively to its cloud offering. Under these circumstances, it is reasonable to have doubts about how wise further investment in PeopleSoft is compared to moving to the cloud. And we know uncertainty does not accelerate investment...

From a more personal standpoint, this was a great event in terms of networking. Being able to meet PeopleSoft talents such as Jim Marion, Graham Smith, Sasank Venama and many others, including the PeopleSoft development team, is always the best way to nurture and trigger new ideas.

Just my two cents. Thanks for following this blog during this event!

PS: I bought Jim Marion's book from the bookshop at Moscone South and have a good deal of fun and learning guaranteed for my flight back to Madrid.



OOW 2016: Database New Features

Yann Neuhaus - Thu, 2016-09-22 14:20

Here is some information about what was announced or presented at Oracle Open World. The news has been relayed pretty much everywhere, mostly in English, so here is a summary originally aimed at French-speaking readers.

Oracle Database 12c Release 2

Let's be clear: the database is not the main topic of Open World. As expected, this is a 'Cloud First' release, but the version made public on Monday is a limited one.
If you have used 'Schema as a Service', it is somewhat the same idea, except that here it is 'PDB as a Service'. With multitenant, consolidation by schema is replaced by consolidation by PDB, which has the advantage of presenting what is virtually a complete database, with its public objects, its multiple schemas, and so on.
So there is no administration access: it is a "managed" service, administered by Oracle.
Multitenant makes it possible to grant DBA rights on a PDB while preventing interaction with the rest of the system. These are new 12.2 features, among them the "lockdown profiles" that were developed for this purpose.
The service is called "Exadata Express Cloud Service" because it runs on Exadata (hence HCC compression and, soon, SmartScan). Most of the options are also available (In-Memory, Advanced Compression, ...).
"Express" refers to the ease and speed of provisioning: a few minutes. The goal is for a developer to be able to create, within 5 minutes, an easily accessible database service (over encrypted SQL*Net). The idea is that it should be as easy for a developer to create an Oracle database as it is to create Postgres, Cassandra or MongoDB databases.
And of course, if all the options are included, developers will use them and they will become necessary in production.
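
As an aside, here is a minimal, purely illustrative sketch of what a 12.2 lockdown profile looks like (the profile and PDB names are made up; this is not taken from the announcement itself): a profile is created in the CDB root, a restriction is added, and the profile is then assigned to a PDB through its PDB_LOCKDOWN parameter.

-- Illustrative sketch only: names are hypothetical.
-- Create a lockdown profile in the CDB root and restrict one ALTER SYSTEM option.
CREATE LOCKDOWN PROFILE dev_pdb_profile;
ALTER LOCKDOWN PROFILE dev_pdb_profile
  DISABLE STATEMENT = ('ALTER SYSTEM') CLAUSE = ('SET') OPTION = ('CURSOR_SHARING');
-- Activate the profile for a given PDB.
ALTER SESSION SET CONTAINER = mypdb;
ALTER SYSTEM SET PDB_LOCKDOWN = 'DEV_PDB_PROFILE';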

There will soon be a data center in Europe. For the moment, it is only in the USA. The price is attractive (CHF 170 per month), but the database is fairly limited in terms of CPU, storage and memory. It is mainly intended for development and sandbox use.

So, for the moment, 12c Release 2 is only available as PDBaaS on the Exadata Express Cloud Service:

EXCS

Before the end of the year, we should have 12.2 as DBaaS (the non-managed service we currently know on the Oracle PaaS), and the General Availability version will come afterwards, probably in 2017.

 

The article OOW 2016: Database New Features first appeared on the dbi services blog.

Rimini Street Permanently Enjoined from Illegally Accessing Oracle’s Software and Must Pay Oracle More Than $100 Million

Oracle Press Releases - Thu, 2016-09-22 12:07
Press Release
Rimini Street Permanently Enjoined from Illegally Accessing Oracle’s Software and Must Pay Oracle More Than $100 Million Court Finds Rimini Street Conducted Significant Litigation Misconduct

Redwood Shores, Calif.—Sep 22, 2016

The United States District Court for the District of Nevada granted Oracle’s motion for a permanent injunction against continued copyright infringement by Rimini Street and continued computer access violations by Rimini and its President and CEO, Seth Ravin, in Oracle’s long-running litigation against Rimini and Ravin. The Court also awarded Oracle $46 million in attorneys’ fees and costs against Rimini and Ravin personally, on top of the $50 million in damages awarded by the jury last year, including a $14 million award against Ravin personally. Additionally, the Court granted Oracle’s motion for an award of prejudgment interest, which we anticipate will exceed $24 million.

“The Court’s order sends a powerful message to protect innovators,” said Oracle’s General Counsel Dorian Daley.  “This order helps to protect Oracle’s investment in its software to benefit its customers and confirms that Rimini’s infringement was unrepentant. Rimini and Ravin continue to falsely assert that their conduct is permissible. Their customers and prospective customers are on notice of their grave misconduct and the consequences of that misconduct.”

The Court awarded the permanent injunction because “Rimini’s business model was built entirely on its infringement of Oracle’s copyrighted software.” The Court emphasized that “the evidence in this action established Rimini’s callous disregard for Oracle’s copyrights and computer systems when it engaged in the infringing conduct.”  In awarding Oracle its attorneys’ fees, the Court noted that “Oracle successfully prevailed on its claim for copyright infringement as the jury found that Rimini infringed every one of the 93 separate copyright registrations at issue.” Further, the “$50 million verdict against defendants” was “five times the damages number presented at trial by defendants’ damages expert,” demonstrating Oracle’s “substantial success” against Rimini.

Significantly, the Court found that Rimini’s previous denial of copyright infringement was based on “a conscious disregard for the manner that Rimini used … Oracle’s copyrighted software,” and that Rimini’s “significant litigation misconduct,” including the intentional “destruction of evidence,” warranted the award of fees.

Contact Info
Deborah Hellinger
Oracle
+1.212.508.7935
deborah.hellinger@oracle.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Deborah Hellinger

  • +1.212.508.7935

OBIEE12c - Upgrading to Version 12.2.1.1

Rittman Mead Consulting - Thu, 2016-09-22 10:36

INTRODUCTION

The new version of OBIEE 12c, 12.2.1.1 to be exact, is out, so let’s talk about it. It’s my intent that after reading this, you can expect some degree of comfort in regard to possibly doing this thing yourself (should you find yourself in just such a circumstance), but if not, feel free to drop us a line or give us a ring. It should be noted that Oracle documentation explicitly indicates that you’re going to need to already be on OBIEE version 12.2.1.0, which is to say you’re going to have to bring your 11g instance up to 12c before you can proceed with another upgrade. A colleague here at RM and I recently sat down to give the upgrade process (click there for the Oracle doc) a go on one of our hosted Windows servers, and here’s the cut and dry of it. The examples throughout will be referencing both Linux and Windows, so choose how you’d like. Now, if you’ve gone through the 12c install process before, you’ll be plenty familiar with roughly 80% of the steps involved in the upgrade. Just to get this out of the way, no, it’s not a patch (in the sense that you’re actually going through the OBIEE patching process using OPatch). In fact, the process almost exactly mirrors a basic 12c install, with the addition of a few steps that I will make darn sure we cover in their entirety below. Speaking of which, I’m not going to do a play-by-play of the whole thing, but simply highlight those steps that are wholly unfamiliar. To provide some context, let’s go through the bullet points of what we’ll actually be doing during the upgrade.

  1. First, we’ll make sure we have a supported, server-appropriate version of Java installed (8_77 is the lowest version) and that it corresponds to the JAVA_HOME you’ve got set up.

  2. Next, we’ll be running the install for the WebLogic server into a NEW oracle home. That’s right, you heard me. A. new. oracle. home.

  3. After that, we’ll be running a readiness check to make sure our OBIEE bits won’t run into any trouble during the actual upgrade process. This checks all OBIEE components, including those schemas you installed during the initial install process. Make sure to have your application database admin credentials on hand (we’ll talk about what you need below in more detail). The end of this step will actually have us upgrade all those pieces the readiness checker deems worthy of an upgrade.

  4. Next, we’ll reconfigure and upgrade our existing domain by running the RECONFIGURATION WIZARD!!!!! and upgrade assistant, respectively.

  5. Lastly, we’ll start up our services, cross our fingers, hold onto our four leaf clovers, etc.. (just kidding, at least about that last part).

Before we begin, however, let’s check off a few boxes on the ‘must have’ list.

  • Download all the files here, and make sure you get the right versions for whatever kind of server your version of OBIEE hangs out in. The Java version will be 8_101, which will work out just fine even though the minimum needed is 8_77.

  • Get those database credentials! If you don’t know, drop everything and figure out how you’re going to access the application database within which the OBIEE 12c schemas were installed. You’ll need the user name/pass for the SYS user (or user with SYS privileges), and the database connection string as well, including the service name, host, and port.

  • Make sure you have enough disk space wherever you’re installing the upgrade. The downloads for the upgrade aren’t small. You should have at least 150GB, on a DEV box, say. You don’t want to have to manage allocating additional space at a time like this, especially if it involves putting in a ticket with IT (wink wink)! Speaking of which, you’ll also need the server credentials for whichever user 12c was installed under. Note that you probably don’t need root if it was a Linux machine; however, there have been some instances where I’ve needed to have these handy, as there were some file permission issues that required root credentials and were causing errors during an install. You’ll also need the WebLogic/OBIEE admin user (if you changed the name for some reason).

  • Lastly, make sure you’re at least a tad bit familiar with the paths to both the oracle home and the domain home.

SETTING UP JAVA

After downloading the version of Java you need, go ahead and update it via the .rpm or .exe, etc. Make sure to update any environment variables you have set up, and to update both the JAVA_HOME variable AND the PATH to reference the new Java location. As stated above, at the time of this blog, the version we used, and that is currently available, is 8_101. During the upgrade process, we got a warning (see below) about our version not being 8_77. If this happens to you, just click Next. Everything will be alright, promise.

Java Version Warning

A NEW HOME FOR ORACLE

Did you click the link to the upgrade doc yet? If not, do so now, as things are about to get kind of crazy. Follow along as we walk through the next steps if you’d like. First, stop services and disable the SSL like it tells you to. Then, start OBIEE services back up and then run the infrastructure jar (java -jar fmw_12.2.1.1.0_infrastructure.jar) for the WebLogic server install. Again, I’m not going to go pic by pic here as you can assume most everything resembles the initial 12c install process, and this part is no different. The one piece of this puzzle we need to focus on is establishing a new oracle home. After skipping those auto updates, move onto step 3 where we are, in fact, going to designate a new oracle home. You’ll see that, after completing the WebLogic install, we’ll have a bunch of updated feature sets, in addition to some new directories in our 12.2.1.1 oracle home. For example, if your original home is something like:

/u01/app/oracle/fmw

change it to:

New Oracle Home

when it asks you to enter a new one.

Breeze through the rest of the steps here, and remember to save that response file!

UPDATING OBIEE

Unzip both of the fmw_12.2.1.1.0_bi_linux64_Disk#_#of2.zip files, making sure that your OBIEE install files are in the same directory. For windows, this will be the executable from the first zip file, and the zip file from the second part of disk 1. Execute the binary (on linux) or .exe, going through the usual motions and then in step 3, enter the NEW oracle home for 12.2.1.1. In the example above, it would be:

/u01/app/oracle/fmw2

for Linux, and likewise, for Windows:

Enter Existing Oracle Home

Again, there isn’t too much to note or trap you here beyond just making sure that you take special care not to enter your original oracle home, but the one you created in the previous section. Proceed through the next steps as usual and remember, save your response file!

UPDATING THE 12C SCHEMAS - USING THE READINESS CHECKER AND UPDATE ASSISTANT

Run the readiness checker from:

NEW_ORACLE_HOME/oracle_common/upgrade/bin/ua -readiness

This next series of steps will take you through all the schemas currently deployed on your application database and confirm that they won’t explode once you take them through the upgrade process. In step 2 of 6, make sure that you’re entering the port for EM/Console (9500 by default). Remember when I said you’re going to need the DB credentials you used to install 12c in the first place? Well, here’s where we’re going to use them. The readiness checker will guide you through a bunch of screens that essentially confirm the credentials for each schema installed, and then presents a report detailing which of these will actually get upgraded. That is to say, there are some that won’t be. I really like this new utility as an extra vote of confidence for a process that can admittedly be troublesome at times.

Readiness Checker

Readiness Report

Once you’ve validated that those schemas are ready for update, go ahead and stop OBI 12c services using the EXISTING oracle home.

Pro tip: they’ve made it super easy to do this now by just pointing your bash_profile to the binaries directory in OBIEE’s bitools folder (ORACLE_HOME/user_projects/domains/bi/bitools/bin). After logging this entry in your profile, you can simply type start.sh or stop.sh to bring everything up or down, not to mention take advantage of the myriad other scripts that are in there. Don't type those paths out every time.

I digress… After the services come down, run the upgrade assistant from within the NEW oracle home, as below:

Citing the previous example:

NEW_ORACLE_HOME/oracle_common/upgrade/bin/ua

After bringing up the install dialogue box, move on to step 2, and select the All Schemas Used by a Domain option (as in the example above), unless of course you’d like to hand select which ones you’d like to upgrade. I suppose if you were thinking about scrapping one you had previously installed, then this would be a good option for you. Make sure the domain directory you specify is from your existing/old 12c instance, as below:

Upgrade Assistant-Existing Domain

Move through the next series of steps, which are more or less self-explanatory (no tricks here, promise), once again validating connection credentials until you get to step 12. As always, save the response file, select Upgrade, and then watch the magic happen… hopefully. Congratulations, you’ve just updated your schemas!

Schema Update Protocol Complete

WHO INVITED A WIZARD TO THE PARTY? - RECONFIGURING THE BI DOMAIN AND UPDATING THE BI CONFIGURATION

Like I said before, I won’t be covering every single step of this process, e.g. the map viewer portion, which means you’ll still have to consult the…oracle, on some of this stuff. That being said, don’t gloss over backing up the map viewer file… you have to do it. This is simply an attempt to help make the upgrade process a little easier to swallow and hopefully make some of the more confusing steps a bit clearer. Moving on. Guess what? It’s time to run another series of dialogue boxes. Beats the heck out of scripting this stuff though, I guess. Open up the RECONFIGURATION WIZARD!!!!! as instructed in the documentation, from the location within your NEW oracle home. The first step will prompt us for the location of the domain we want to upgrade. We want to upgrade our existing 12c domain (the old one). So type that in/browse for it. Right now.

Enter Existing Domain Home

Validate your java version and location in step 3 and then click your way through the next few screens, ensuring that you’ve at least given your stamp of approval on any pre-filled or manually filled entries in each dialogue box. Leave step 7 alone and click Next to get to the screen where we’re actually going to be starting the reconfiguration process. Click through and exit the RECONFIGURATION WIZARD!!!!!

Validate Java

Configuration Celebration

Don’t forget to restore the map viewer config file at this point, and then launch the configuration assistant again, this time selecting the All Configurations Used By a Domain option in step 2. Make sure you’ve entered the location of the existing 12c domain in this step as well, and NOT the one created under the new oracle home.

Enter Proper Domain

Click through the next steps, again, paying close attention to all prompts and the location for the map viewer xml file. Verify in step 7 that the directory locations referenced for both domain and oracle map viewer are for the existing locations and NOT those created by the install of the update.

Correct Location Verification Affirmation

WRAPPING UP AND NOTES

You can now boot up ssl (as below) and then start OBIEE services.

DOMAIN_HOME/bitools/bin/ssl.sh internalssl true

Note: if you have a tnsnames.ora or ldap.ora, place copies under NEW_ORACLE_HOME/network/admin

You can ignore the new oracle home created at this time as, in my opinion, we’re going to have to do something similar for any subsequent 12c updates. What did you think of the upgrade process and did you run into any issues? Thanks so much for reading, and as always, if you find any inconsistencies or errors please let us hear about them!

Categories: BI & Warehousing

use of application error

Tom Kyte - Thu, 2016-09-22 10:26
Hi , I wanted to know is pragma exception init and raise_application_error does the same thing? I wanted to know what is the difference? Also, I wanted to know what is the use , if I place my exception declaration in the package specificat...
Categories: DBA Blogs

Intermittent ORA-08103: object no longer exists

Tom Kyte - Thu, 2016-09-22 10:26
In a batch job (in Java), I am getting the below error details while reading the data. This error is intermittent, occurring with one table on the first run, and in later runs the table keeps on changing. Before every job run, I will truncate th...
Categories: DBA Blogs

2-level subqueries and problem with select function result

Tom Kyte - Thu, 2016-09-22 10:26
Hello everybody, I've 2 problems with on 2 queries : 1) The following query <code>with pres as ( select t1.IDPROD AS spe1 , t2.IDPROD AS spe2 ,t1.INDICELIGNEPRESCRIPTION as id_ligne_1,t2.INDICELIGNEPRESCRIPTION as id_ligne_2, ...
Categories: DBA Blogs

SQL Query

Tom Kyte - Thu, 2016-09-22 10:26
Hi Tom, I have a table like this create table temp1 (f_name varchar2(100), f_date date, f_amount integer ); Records in this table will be like this: Ajay 15-JUL-02 500 Bhavani 15-JUL-02 700 Chakri ...
Categories: DBA Blogs

DBMS_ADVANCED_REWRITE

Tom Kyte - Thu, 2016-09-22 10:26
Tom, I was trying to tune one of the SQL statements written in ANSI syntax without touching it, using the dbms_advanced_rewrite API, but ended up with the below error. Can you help me with what I am missing here? <code> rajesh@ORA10G> declare 2 l_src long; ...
Categories: DBA Blogs

Becky's BI Apps Corner: OBIA 11.1.1.10.2 New Features Part 1 - Health Check & ETL Diagnostics

Rittman Mead Consulting - Thu, 2016-09-22 10:16

I have been working with BI Applications since OBIA switched to ODI in version 11.1.1.7.0. I have installed and worked with all of the 11.x versions using several different sources. This most recent version of OBIA may only be a Patch-Set release, but it has some new features that make it very compelling: automatic ETL error handling, Health Check, ETL Diagnostics, and built-in email notifications.

If you've been using earlier versions of OBIA 11.x (or 7.x for that matter), now may be the time to upgrade. In an Oracle Advisor's Webcast, Rajesh Lakra, BIApps Global Product Support Lead Engineer, said this version will be the upgrade path to OBIA 12.x, which is slated for release sometime in 2017. Also, it may just be the circumstances I've been exposed to, but this latest version seems more performant as well. Since I've not had an opportunity to do benchmark time testing against the same source data, I cannot verify that yet. However, I am setting up some environments to do just that. Keep an eye out for a future blog post for performance comparisons.

Load Plan

Check if there are any other running load plans

Check Previous Load Plan Runs is a package that only has the following procedure:
Check State of Previous Load Plan Runs
Load Plan
1) Checks the status of Previous Load Plan Runs. Throws Exception if any other Load Plan is still in Running state.
2) Checks the blackout schedule period from the w_etl_control_schedule table. If a blackout schedule is in effect, the load plan fails and exits.

I’ve not found any documentation about this package or procedure. However, the procedure itself has three steps.
Load Plan

  1. Check if Prev Load Plans are Running

    a. SELECT COUNT(1) FROM SNP_LPI_RUN WHERE STATUS IN ('R','W') AND I_LP_INST!=(?)

    b. >>>>>" + ctLPRunning + " Load plan(s) found to be running/waiting apart from the current one. ABORTING this Load Plan as running multiple Load Plans in Parallel can cause DATA CORRUPTION <<<<<<

  2. Check Blackout schedule

    a. select -1 from w_etl_control_schedule cs where sysdate between cs.control_start_dt and cs.control_end_dt and cs.control_code = 'NO_ETL'

    b. >>>>> Blackout schedule coming up as seen in w_etl_control_schedule table, hence no Load plan(s) will be executed. ABORTING this Load Plan as there will be a down time to the repository due to the Blackout schedule <<<<<<

  3. Check Source Upgrade

    a. SELECT 1 FROM W_ETL_SRC_VERSION_HISTORY WHERE SRC_PLV_NEW IS NOT NULL AND SRC_UPGRADE_STATUS IN ('PENDING','ERROR','STARTED')

    b. >>>>>Found Incomplete Source Upgrade status from the Source Upgrade tracking table (W_ETL_SRC_VERSION_HISTORY). For more information, please refer to the Warehouse table W_ETL_SRC_VERSION_HISTORY for incomplete Source Upgrade status rows and take necessary action to run Source Upgrade Load plans <<<<<<

I believe this is a good feature to add, because Oracle’s OBIA documentation has always recommended not running more than one load plan at a time. Now, if another load plan is already running, the new load plan will stop automatically and a message will warn you that running multiple load plans can cause data corruption.

I think it is interesting to see this Blackout schedule and Source Upgrade. I’ve done some looking in the documentation and on support.oracle.com but didn’t come up with any clues to what exactly these are for. It seems to me like the Blackout schedule is a calendar of sorts for when the repository will be down for maintenance. As for the Source Upgrade, that is more of a mystery to me.
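
To make the blackout idea concrete, here is a purely hypothetical row (the column list is inferred from the check query shown in step 2 above; the real table may well contain additional columns) that would cause any load plan started inside the window to abort:

-- Hypothetical illustration only: columns taken from the blackout check query above.
INSERT INTO w_etl_control_schedule (control_code, control_start_dt, control_end_dt)
VALUES ('NO_ETL',
        TO_DATE('2016-10-01 22:00', 'YYYY-MM-DD HH24:MI'),
        TO_DATE('2016-10-02 02:00', 'YYYY-MM-DD HH24:MI'));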

Next step in the Load Plan is a Health Check.
Load Plan

Health Check and Diagnostics

Oracle's OBIA ETL Guide has a short paragraph on the Health Check.

Health Check is a preliminary ETL phase in which a diagnostic report is generated to identify problematic source data that might cause ETL failure or data loss or corruption in the data warehouse. The report is downloaded from Configuration Manager and includes any problematic data, a description of detected issues, and actions to resolve health check failures.

Referring again to Oracle's OBIA ETL Guide, the Manage Load Plans page is where you can download the Health Check. Highlight the relevant load plan that has run and/or failed and click on Show Data Problems.

Load Plan

Select the desired DIAGNOSTIC HEALTHCHECK and download the zip file to your computer.

This file gets generated through a number of different steps starting with the Execute Healthcheck package and followed by the Create Report and Summarize Healthcheck package.

Execute Healthcheck is a package that only has the following procedure:
Run Diagnostics
Load Plan
This procedure has 12 steps. The two that are the most interesting are:
Define Execute Diagnostic Function and Run Diagnostics
Load Plan

The bulk of the code for the Diagnostics are in the step Define Execute Diagnostic Function. The code is Jython and it is really too long to paste here, but I highly recommend digging into how it works.
Load Plan

Then the step Run Diagnostics actually kicks off the process as shown here.
Load Plan

Once the diagnostics are run, there is a step in the load plan called Create Report and Summarize Healthcheck. This package has two procedures.
Load Plan

The first procedure, Create Diagnostic Report, does exactly what you might guess. It does a lot of formatting and puts the diagnostic report into an html page.
Load Plan
The second procedure, Summarize Healthcheck Result, takes the information, inserts it into a table, then organizes it for an email to be sent out.
Load Plan
Load Plan

During the September 21st Oracle Advisor Webcast on BI Applications 11.1.1.10.2 New Release Overview, Rajesh Lakra, BIApps Global Product Support Lead Engineer, said the Health Check is most useful during a domain load. It checks to ensure source domain data is valid, collectively letting you know if these domains will cause issues further down in the ETL process.

In Part 2 of OBIA 11.1.1.10.2 New Features, I will show you what the html files look like, how to setup the email process, and how the automatic error handling is now working within many of the knowledge modules. Tune in next time for more exciting parts of the new features in this version of BI Applications!

Categories: BI & Warehousing

Thursday at OpenWorld and JavaOne Recap

OTN TechBlog - Thu, 2016-09-22 10:00

Yesterday was the last day of the Java Hub. Highlights include the Java Community BBQ, JavaOne4Kids, Community Demos, and Live Streaming discussions each day. If you missed some of the live streaming, visit the Nighthacking video channel.

 

As we enter the last day of OpenWorld, the OTN Lounge is still going strong until 2 pm today. We continue to build shapes with the HearingNest Community Art Experience, serving the OTN Coffee Cloud Service, interacting with the IoT Workshop, and making your own shirt.

Oracle ACE sessions are still being held with Best Practices for Oracle Data Integrator and Choosing a Cloud-Native AppDev Platform. For the full list of sessions visit the OTN Archbeat Blog.

The OTN Systems Hub starts their CommunityCast interviews today with Avi Miller discussing container registry at 11 am. Stop by the OTN Lounge to see the live taping.

 

 

 


Oracle Open World 2016 from a PeopleSofter point of view: Wednesday 21st

Javier Delgado - Thu, 2016-09-22 08:26
We are reaching the end of Oracle Open World. Wednesday ended with the Appreciation Event, which included star appearances by Gwen Stefani and Sting. Both were really appreciated by the audience, but I personally found Sting's performance just fantastic.


Anyway, let's leave music behind before it becomes clear that I don't have a clue about it. Wednesday was also a very intense day around PeopleSoft. I was able to attend very valuable sessions on different aspects of the Fluid User Interface and ElasticSearch.

Fluid User Interface

I attended four Fluid-related sessions yesterday, which were:

  • Getting the Most Out of PeopleSoft: PeopleSoft Fluid User Interface [CON7067] by Matthew Haavisto, Pramod Agrawal and Sasank Venama.
  • Discover What Is New in PeopleSoft Approvals: PeopleSoft Fluid User Interface, Page Composer [CON7065] by David Bain, from PeopleTools Product Management.
  • Hands-On with PeopleSoft: A Successful PeopleSoft Fluid User Interface Rollout [CON7063] delivered by David Bain, Kevin De Kock from McMaster University, Graham Smith and Sasank Venama.
  • Extending PeopleSoft Fluid User Interface Applications the PeopleSoft Way [CON7062] by David Bain.

My take from these sessions is that the key to being successful at deploying Fluid is not only taking advantage of the nice UI features that Fluid provides, but mainly benefiting from the new navigation approach proposed by PeopleTools 8.55.



Now, as simple as navigation may seem at first, it really requires some thought when it comes to Fluid. In the classic UI, we were more or less guided by the system to put our components into the portal menu structure, and that was pretty much it. When using the Fluid UI, we have several alternatives, and the optimal choice is not always evident.


These sessions helped me better understand when each option is best applicable. You can also check the PeopleSoft Fluid UX Standards page for more information.

In terms of new announcements, not much was presented. Probably the key announcement on the Fluid UI is the new Page Composer for Mobile Approvals, which is now available with PeopleSoft FSCM Update Image #20 and will soon be available for the other pillars. I briefly covered it in my Tuesday review post, but in a nutshell, Page Composer allows you to configure the display of Mobile Approval pages by transaction and form factor, all done in a nice drag-and-drop web-based interface.


Could this be extended to other functionalities apart from Mobile Approvals? Not yet, Oracle says, but they are not ruling it out for the future. From my point of view, I have certain concerns about performance, because the HTML is not produced by the Application Server binary code but by PeopleCode itself. However, this was already the case for MAP-based Mobile Approvals, so I can only see the upside. Whether or not it can be extended to other functionalities, we will see.

ElasticSearch
ElasticSearch has been long awaited in the PeopleSoft community, mainly because it will replace Oracle SES, which was, personally, one of my most-hated components of PeopleSoft, if anything like love & hate makes sense when talking about technology (nerd alert here!).

There was a very interesting session, Getting the Most Out of PeopleSoft: Transitioning to Elasticsearch [CON7066], delivered by Matthew Haavisto and Ramasimha Rangaraju from Oracle and Kevin Antoff from Wells Fargo.

In this session, Oracle and Wells Fargo presented the Proof of Concept they did together around ElasticSearch as a replacement for Oracle SES. The feedback provided by Wells Fargo was extremely positive, from almost every perspective.

The transition to ElasticSearch also seems simple. Both solutions can actually coexist, meaning that you can install and deploy ElasticSearch while Oracle SES is still operational, providing the opportunity for zero downtime.


Still, we need to make sure we adopt ElasticSearch quickly. Support for Oracle SES will end 18 months after the ElasticSearch solution is made generally available. When will that happen? The almighty Safe Harbor Statement does not allow us to know the precise date, but I got the feeling that it is not far off (it will be delivered as part of PeopleTools 8.56 but will also be enabled in a future patch of PeopleTools 8.55).

I cannot wait to play around with it. I would be particularly interested in using ElasticSearch cloud services together with PeopleSoft, so we don't need to use our own infrastructure. We will see if that is feasible when it is finally delivered.

Heading for the Last Day

Today will be the last day at Oracle Open World. I will try to post my last review of Oracle Open World at the end of the day, but if I can't, it will be during the weekend, when I'm back in Spain.

If you are here at Oracle Open World, please say hello. It doesn't get better than this when it comes to networking opportunities.




MySQL 8.0.0 DMR – First impression

Yann Neuhaus - Thu, 2016-09-22 07:23

Last week (09/12/2016) the new MySQL 8.0.0 DMR (Development Milestone Release) was announced.
It is now available for download at dev.mysql.com.
I downloaded and installed it.
Following are my first impressions of some new features.
First of all, when you remember the size of earlier versions, it is getting more and more gluttonous, as you can see below:
mysql-5.0.96-linux-x86_64-glibc23.tar.gz    121.2M
mysql-5.1.72-linux-x86_64-glibc23.tar.gz    127.8M
mysql-5.5.52-linux2.6-x86_64.tar.gz            177.2M
mysql-5.6.33-linux-glibc2.5-x86_64.tar.gz    299.7M
mysql-5.7.15-linux-glibc2.5-x86_64.tar.gz   611.8M
mysql-8.0.0-dmr-linux-glibc2.12-x86_64.tar.gz is around 1GB, and uncompressed it is 3.6GB (oops…)

At first sight, it seems that nothing has changed: same schemas and almost the same tables.
– 2 new tables related to roles in the mysql schema: default_roles, role_edges
– 6 new tables in the information_schema and 6 more in the performance_schema

When you scan your data directory (datadir), you will find new ".SDI" files (Serialized Dictionary Information) for every schema except the mysql schema:
mysql@MYSQL:/u01/mysqldata/mysqld9/ [mysqld9] ll *.SDI
-rw-r-----. 1 mysql mysql 215 Sep 13 21:34 employees_4.SDI
-rw-r-----. 1 mysql mysql 225 Sep 13 21:37 performance_sche_7.SDI
-rw-r-----. 1 mysql mysql 209 Sep 13 21:34 sys_6.SDI
-rw-r-----. 1 mysql mysql 209 Sep 13 21:34 TBS_3.SDI

If you have a look in one of these files, you will see that they all provide data dictionary information in a serialized form (JSON-formatted text):
mysql@MYSQL:/u01/mysqldata/mysqld9/ [mysqld9] cat employees_4.SDI
{
"sdi_version": 1,
"dd_version": 1,
"dd_object_type": "Schema",
"dd_object": {
"name": "employees",
"default_collation_id": 8,
"created": 0,
"last_altered": 0
}

Old ".frm" files containing metadata have been removed, and the same goes for ".par" files (partition definitions), ".trg" files (trigger parameters) and ".isl" files (InnoDB symbolic links).
When you create a new InnoDB table, the metadata is now stored in the InnoDB file (.ibd).
For all the other storage engines (MEMORY, CSV, Blackhole, etc.), we now get a new ".SDI" file as well, but a more complex one:
mysql@MYSQL:/u01/mysqldata/mysqld9/Newfeatures/ [mysqld9] cat t3_617.SDI
{
"sdi_version": 1,
"dd_version": 1,
"dd_object_type": "Table",
"dd_object": {
"name": "t3",
"mysql_version_id": 80000,
"created": 20160914075838,
"last_altered": 20160914075838,
"options": "avg_row_length=0;key_block_size=0;keys_disabled=0;pack_record=0;stats_ auto_recalc=0;stats_sample_pages=0;",
"columns": [
{
"name": "c1",
"type": 29,
"is_nullable": true,
"is_zerofill": false,
"is_unsigned": false,
"is_auto_increment": false,
"is_virtual": false,
"hidden": false,
"ordinal_position": 1,
"char_length": 20,
"numeric_precision": 0,
"numeric_scale": 0,
"datetime_precision": 0,
"has_no_default": false,
"default_value_null": true,
"default_value": "",
"default_option": "",
"update_option": "",
"comment": "",
"generation_expression": "",
"generation_expression_utf8": "",
"options": "interval_count=0;",
"se_private_data": "",
"column_key": 1,
"column_type_utf8": "char(20)",
"elements": [],
"collation_id": 8
}
],
"schema_ref": "Newfeatures",
"hidden": false,
"se_private_id": 18446744073709551615,
"engine": "MEMORY",
"comment": "",
"se_private_data": "",
"row_format": 1,
"partition_type": 0,
"partition_expression": "",
"default_partitioning": 0,
"subpartition_type": 0,
"subpartition_expression": "",
"default_subpartitioning": 0,
"indexes": [],
"foreign_keys": [],
"partitions": [],
"collation_id": 8
}

It is now possible to create roles, something we have been waiting on for years. That means you can define a set of privileges as a role and assign it to a user:
mysqld9-(root@localhost) [Newfeatures]>create role tstrole;
mysqld9-(root@localhost) [Newfeatures]>grant all on employees.* to tstrole;
mysqld9-(root@localhost) [Newfeatures]>select host, user from mysql.user;
+---------------+-------------+
| host | user |
+---------------+-------------+
| % | tstrole |

As you can see, roles are stored in the mysql.user table, meaning that you can also assign a user to another user:
mysqld9-(root@localhost) [Newfeatures]>show grants for tstrole;
+----------------------------------------------------------+
| Grants for tstrole@% |
+----------------------------------------------------------+
| GRANT USAGE ON *.* TO `tstrole`@`%` |
| GRANT ALL PRIVILEGES ON `employees`.* TO `tstrole`@`%` |
+----------------------------------------------------------+
mysqld9-(root@localhost) [Newfeatures]>grant tstrole to 'sme'@'localhost';
mysqld9-(root@localhost) [Newfeatures]>show grants for 'sme'@'localhost' using tstrole;
+----------------------------------------------------------------------------------+
| Grants for sme@localhost |
+----------------------------------------------------------------------------------+
| GRANT USAGE ON *.* TO `sme`@`localhost` |
| GRANT ALL PRIVILEGES ON `employees`.* TO `sme`@`localhost` |
| GRANT `tstrole`@`%` TO `sme`@`localhost` |
+----------------------------------------------------------------------------------+

Now we connect as user “sme”
mysql -usme -p
mysql> use mysql
ERROR 1044 (42000): Access denied for user 'sme'@'localhost' to database 'mysql'

It seems to work, as user "sme" only has access to the employees schema.

The data dictionary has been improved: almost all of the system tables have been moved from MyISAM to the transactional InnoDB storage engine to increase reliability.
Data dictionary tables are invisible; they do not appear in the output of SHOW TABLES and cannot be accessed directly,
but in most cases there are corresponding INFORMATION_SCHEMA tables that can be queried instead.
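
A quick illustration, reusing the employees schema from the earlier examples: the hidden dictionary tables themselves cannot be selected, but the corresponding INFORMATION_SCHEMA views still can.

-- The hidden dictionary tables are not directly accessible,
-- but INFORMATION_SCHEMA exposes the same information.
SELECT table_name, engine, table_type
  FROM information_schema.tables
 WHERE table_schema = 'employees';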

The new "Invisible Indexes" feature is really great.
You can now toggle the visibility of an index between VISIBLE and INVISIBLE.
We know that unused indexes reduce the performance of modifications (inserts, updates).
They are also a burden for the optimizer, because they are taken into account during plan selection.
So first, check for unused indexes:
mysqld9-(root@localhost) [information_schema]>SELECT * FROM sys.schema_unused_indexes;
+------------------+-------------------+---------------+
| object_schema | object_name | index_name |
+------------------+-------------------+---------------+
| employees | departments | dept_name |
| employees | dept_manager | dept_no |
+------------------+-------------------+---------------+

then
mysqld9-(root@localhost) [employees]>alter table departments alter index dept_name INVISIBLE;

Check in the data dictionary
mysqld9-(root@localhost) [employees]>SELECT * FROM information_schema.statistics WHERE is_visible='NO'\G
*************************** 1. row ***************************
TABLE_CATALOG: def
TABLE_SCHEMA: employees
TABLE_NAME: departments
NON_UNIQUE: 0
INDEX_SCHEMA: employees
INDEX_NAME: dept_name
SEQ_IN_INDEX: 1
COLUMN_NAME: dept_name
COLLATION: A
CARDINALITY: NULL
SUB_PART: NULL
PACKED: NULL
NULLABLE:
INDEX_TYPE: BTREE
COMMENT:
INDEX_COMMENT:
IS_VISIBLE: NO

You can now make global variables persistent.
select @@max_connections;
+--------------------------+
| @@max_connections |
+--------------------------+
| 151 |
+--------------------------+
SET GLOBAL max_connections=200;

If you restart your instance, such a setting is lost. So now, with
SET PERSIST max_connections=200;
this setting will remain after an instance restart:
select @@max_connections;
+--------------------------+
| @@max_connections |
+--------------------------+
| 200 |
+--------------------------+

Last but not least, be careful when using MySQL 8.0.0 in a multi-instance environment,
in my case 9 instances with different versions.
I found a bug in mysqld_multi when you want to restart your 8.0.0 instance.
First I corrected the still-unfixed Bug #77946 (https://bugs.mysql.com/bug.php?id=77946)
in order to be able to stop my instance properly with mysqld_multi.
I stopped it and then tried several times to restart it.
No way:
2016-09-20T23:42:41.466540Z 1 [ERROR] InnoDB: Operating system error number 2 in a file operation.
2016-09-20T23:42:41.466562Z 1 [ERROR] InnoDB: The error means the system cannot find the path specified.
2016-09-20T23:42:41.466568Z 1 [ERROR] InnoDB: If you are installing InnoDB, remember that you must create directories yourself, InnoDB does not create them.
2016-09-20T23:42:41.466574Z 1 [ERROR] InnoDB: File ./ibdata1: 'open' returned OS error 71. Cannot continue operation
2016-09-20T23:42:41.466582Z 1 [ERROR] InnoDB: Cannot continue operation.

As a temporary workaround, you can either start it manually or use the mysqld_multi from a previous version.
I reported this bug to MySQL (https://bugs.mysql.com/83081).

 

The article MySQL 8.0.0 DMR – First impression first appeared on the dbi services blog.

Oracle NEW_TIME Function with Examples

Complete IT Professional - Thu, 2016-09-22 06:00
In this article, I’ll explain what the NEW_TIME function does and show you some examples. Purpose of the Oracle NEW_TIME Function The NEW_TIME function converts a date from one timezone to another timezone.   Syntax The syntax of the NEW_TIME function is: NEW_TIME ( input_date, timezone1, timezone2 )   Parameters The parameters of the NEW_TIME […]
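
As a quick, hedged illustration of the syntax excerpted above (the date and timezone abbreviations below are my own, not taken from the article):

-- Convert 9:00 AM Eastern Standard Time to Pacific Standard Time (returns 06:00).
SELECT NEW_TIME(TO_DATE('2016-09-22 09:00:00', 'YYYY-MM-DD HH24:MI:SS'),
                'EST', 'PST') AS pacific_time
  FROM dual;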
Categories: Development

When would you use a normal table function?

Bar Solutions - Thu, 2016-09-22 04:22

Dear Patrick,

Last year I did a presentation on table functions at KScope. One of the questions I got was: ‘If pipelined table functions provide their results faster, why would you want to use a normal table function?’ I couldn’t come up with the answer then, maybe you can help?

Erik van Roon

Dear Erik,

Let’s start with explaining a bit what table functions are. Table Functions are functions that return a collection of data and which can be called in a normal SQL statement by using the TABLE() operator. Let’s create a simple function. Please note this is a demonstration only, not something you would normally solve using table functions (or PL/SQL as a whole).
First step is to create a record type in the database:

[PATRICK]SQL>CREATE TYPE emp_rt AS OBJECT
             ( empno NUMBER(4)
             , ename VARCHAR2(10)
             , mgr   NUMBER(4)
             )
             /

Then you need to create a table type in the database:

[PATRICK]SQL>CREATE TYPE emp_tt AS TABLE OF emp_rt
             /

Then it is time for the simple function. The DBMS_LOCK.SLEEP call is in there to show the difference between Table Functions and Pipelined Table Functions.

[PATRICK]SQL>CREATE OR REPLACE FUNCTION tf_emp RETURN emp_tt
             AS
               l_returnvalue emp_tt;
             BEGIN
               SELECT emp_rt(e.empno, e.ename, e.mgr)
                 BULK COLLECT INTO l_returnvalue
                 FROM emp e
                ORDER BY e.deptno;
               FOR indx IN l_returnvalue.first ..
                           l_returnvalue.last LOOP
                 l_returnvalue(indx).ename :=
                   INITCAP(l_returnvalue(indx).ename);
                 dbms_lock.sleep(.25); -- for demo purposes only
               END LOOP;
               RETURN l_returnvalue;
             END;
             /

Now you can call the function in the FROM clause of your SQL statement as if it were a relational table:

[PATRICK]SQL>SELECT *
               FROM TABLE(tf_emp)
             /

Notice that the result is displayed after all records have been processed, i.e. after 3.5 seconds (due to the DBMS_LOCK.SLEEP statement).
Now let’s create a PIPELINED table function, which produces the same result but in a different manner:

[PATRICK]SQL>CREATE OR REPLACE FUNCTION ptf_emp RETURN emp_tt
                               PIPELINED
             AS
               l_emps emp_tt;
               l_returnvalue emp_rt;
             BEGIN
               SELECT emp_rt(e.empno, e.ename, e.mgr)
                 BULK COLLECT INTO l_emps
                 FROM emp e
                ORDER BY e.deptno;
               FOR indx IN l_emps.first ..
                           l_emps.last LOOP
                 l_returnvalue :=
                   emp_rt(empno => l_emps(indx).empno
                         ,ename => INITCAP(l_emps(indx).ename)
                         ,mgr => l_emps(indx).mgr);
                 PIPE ROW (l_returnvalue);
                 dbms_lock.sleep(.25); -- for demo purposes only
               END LOOP;
               RETURN;
             END;
             /

If you set the arraysize of your SQL*Plus session (or your command window in PL/SQL Developer) you can see how the results are being returned as they are produced, i.e. 0.25 seconds apart.

[PATRICK]SQL>SET ARRAYSIZE 1
[PATRICK]SQL>SELECT *
               FROM TABLE(ptf_emp)
             /

Now you can see the difference between a Table Function and a Pipelined Table Function. Pipelined Table Functions are best used when you are executing different stages of transformation of your data, for example reading from an OLTP system and writing to a DataWareHouse system. If you PARALLEL_ENABLE your functions AND your source table can be read parallel then you could really see some performance benefits.
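As a side note, here is a minimal sketch of what a PARALLEL_ENABLEd pipelined table function could look like, reusing the emp_rt and emp_tt types from above (ptf_emp_parallel is a name made up purely for illustration); PARTITION ... BY ANY simply lets the database distribute the input rows across the parallel execution servers.

-- Illustrative sketch only; reuses the emp_rt/emp_tt types defined earlier.
CREATE OR REPLACE FUNCTION ptf_emp_parallel (cursor_in IN SYS_REFCURSOR)
  RETURN emp_tt PIPELINED
  PARALLEL_ENABLE (PARTITION cursor_in BY ANY)
AS
  l_emp emp_rt;
BEGIN
  LOOP
    FETCH cursor_in INTO l_emp;
    EXIT WHEN cursor_in%NOTFOUND;
    l_emp.ename := INITCAP(l_emp.ename);
    PIPE ROW (l_emp);
  END LOOP;
  RETURN;
END;
/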
But all of this doesn’t explain why you should NOT use pipelining in a table function. The ONLY reason I can think of is when you want to be able to call the function from plain PL/SQL. PL/SQL does one call to a function and expects one result from it, not a result spread out over many ‘callbacks’.
If you create a procedure like this:

[PATRICK]SQL>CREATE OR REPLACE PROCEDURE call_tf AS
               l_emps emp_tt;
             BEGIN
               l_emps := tf_emp;
               FOR indx IN l_emps.first .. l_emps.last LOOP
                 dbms_output.put_line(l_emps(indx).empno ||
                                     ' ' || l_emps(indx).ename ||
                                     ' ' || l_emps(indx).mgr);
               END LOOP;
             END;
             /

If you call this procedure, everything works OK. It takes about 3.5 seconds to complete, due to the DBMS_LOCK.SLEEP call in the tf_emp function.
If you try the same with the pipelined table function, you are stopped at compile time: a direct call to a pipelined table function is not allowed in PL/SQL scope.

[PATRICK]SQL>CREATE OR REPLACE PROCEDURE call_ptf AS
               l_emps emp_tt;
             BEGIN
               l_emps := ptf_emp;
               FOR indx IN l_emps.first .. l_emps.last LOOP
                 dbms_output.put_line(l_emps(indx).empno ||
                                     ' ' || l_emps(indx).ename ||
                                     ' ' || l_emps(indx).mgr);
               END LOOP;
             END;
             /
Warning: Procedure created with compilation errors.
[PATRICK]SQL>sho err
Errors for PROCEDURE CALL_PTF:
LINE/COL ERROR
-------- ---------------------------------------------------------------------
1/10     PLS-00653: aggregate/table functions are not allowed in PL/SQL scope

You can of course wrap the call to the pipelined table function in a cursor like this:

[PATRICK]SQL>CREATE OR REPLACE PROCEDURE call_ptf2 AS
               CURSOR c_emps IS
                 SELECT emp_rt(t.empno, t.ename, t.mgr)
                   FROM TABLE(ptf_emp) t;
               l_emps emp_tt;
             BEGIN
               OPEN c_emps;
               FETCH c_emps BULK COLLECT
                 INTO l_emps;
               FOR indx IN l_emps.first .. l_emps.last LOOP
                 dbms_output.put_line(l_emps(indx).empno ||
                                     ' ' || l_emps(indx).ename ||
                                     ' ' || l_emps(indx).mgr);
               END LOOP;
             END;
             /

Procedure created.

But when you call this procedure, you will see that it takes about 3.5 seconds to fetch all the records, effectively using the pipelined table function as a normal table function. This might be your escape route for using pipelined table functions in a SQL-only environment and still using the same function in a PL/SQL environment.
‘But you said pipelined table functions are best used when executing different stages of transformation. That includes multiple PL/SQL functions,’ you might wonder. That is correct. A pipelined table function may call another pipelined table function. It can, for instance, use a collection as its input, like this:

[PATRICK]SQL>CREATE OR REPLACE FUNCTION ptf_emp3
                                        (emps_in IN emp_tt)
                                 RETURN emp_tt PIPELINED
             AS
               l_emp emp_rt;
             BEGIN
               FOR indx IN emps_in.first .. emps_in.last LOOP
                 l_emp := emps_in(indx);
                 l_emp.ename := UPPER(l_emp.ename);
                 PIPE ROW (l_emp);
                 dbms_lock.sleep(.25); -- for demo purposes only
               END LOOP;
               RETURN;
             END;
             /

When you call this function using this:

[PATRICK]SQL>SET ARRAYSIZE 1
[PATRICK]SQL>SELECT *
               FROM TABLE(ptf_emp3(CAST(MULTISET (SELECT *
                                             FROM TABLE(ptf_emp))
                                                     AS emp_tt)))
             /

You will notice the output will not start displaying until after about 3.5 seconds, which tells me the call to ptf_emp must be completed before ptf_emp3 can start doing its work.
If you change the parameter to accept a cursor (and of course change the processing as well) like this:

[PATRICK]SQL>CREATE OR REPLACE FUNCTION ptf_emp4
                               (cursor_in IN SYS_REFCURSOR)
                                 RETURN emp_tt PIPELINED
             AS
               l_emp emp_rt;
             BEGIN
               LOOP
                 FETCH cursor_in
                   INTO l_emp;
                 EXIT WHEN cursor_in%NOTFOUND;
                 l_emp.ename := upper(l_emp.ename);
                 PIPE ROW(l_emp);
                 dbms_lock.sleep(.25); -- for demo purposes only
               END LOOP;
               RETURN;
             END;
             /

And call this function using this statement:

[PATRICK]SQL>SET arraysize 1
[PATRICK]SQL>SELECT *
               FROM TABLE(ptf_emp4(CURSOR (SELECT emp_rt(empno
                                                        ,ename
                                                        ,mgr)
                                       FROM TABLE(ptf_emp))))
             /

You will notice the output starts to display after about half a second. The total time needed for this function to complete is roughly the same as for the previous one, but this function starts producing rows sooner, which might be exactly what you need.

I hope this sheds a bit of light on the subject. I think the bottom line is to always use pipelined table functions instead of normal table functions, except when the function is only meant for PL/SQL or client consumption. You can always work around the implementation restrictions if you need to use the same codebase in both SQL and PL/SQL.
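
To make that workaround concrete, here is a minimal sketch (it reuses the emp_rt, emp_tt and ptf_emp definitions from the examples above, so treat it as illustration only): the same pipelined function is queried directly from SQL, while a PL/SQL caller fetches it through a cursor with BULK COLLECT.

-- Straight SQL: rows are piped to the client as they are produced.
SELECT * FROM TABLE(ptf_emp);

-- PL/SQL: wrap the call in an (implicit) cursor and bulk collect the rows.
DECLARE
  l_emps emp_tt;
BEGIN
  SELECT emp_rt(t.empno, t.ename, t.mgr)
    BULK COLLECT INTO l_emps
    FROM TABLE(ptf_emp) t;
  dbms_output.put_line(l_emps.COUNT || ' rows fetched');
END;
/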

Happy Oracle’ing,
Patrick Barel

If you have any comments on this subject or you have a question you want answered, please send an email to patrick[at]bar-solutions[dot]com. If I know the answer, or can find it for you, maybe I can help.

This question has been published in the Spring 2015 issue of OTech Magazine.

Hadoop on IaaS

Pat Shuff - Thu, 2016-09-22 02:07
We are going to try a weekly series of posts that talks about public domain code running on the Oracle Cloud IaaS platform. A good topic seems to be Big Data and Hadoop. Earlier we talked about running Tomcat on IaaS as well as WordPress on IaaS using bitnami.com. To start this process we are going to review what Big Data and Hadoop are. We will begin where most people begin: looking at what books are available on the subject and walking through one or two of them. Today we start with Hadoop for Dummies by Dirk deRoos. This is not the definitive source on Hadoop, but it is a good place to have terms and concepts defined for us.

Years ago one of the big business trends was to create a data warehouse. The idea was to take all of the corporate operational data, put it into one database, and grind on it to generate reports. History has shown that aggregating the data was a difficult task, as was finding the processing power required to grind through the reports. It took significant resources to architect the data, to host the data, and to write the select statements that generate reports for users. As retail moved more and more onto the web, sources outside the company became highly relevant and influential for products and services. Big Data and Hadoop come with tools to pull from unstructured sources like Twitter, Yelp, and other public web services and to correlate comments and reviews with products and services.

The three characterizations of Big Data according to Hadoop for Dummies are

  • Volume - high volumes of data ranging from dozens of terabytes to petabytes.
  • Variety - data that is organized in multiple structures, ranging from raw text to log files.
  • Velocity - data that enters an organization has some kind of value for a limited amount of time. The higher the volume of data entering an organization per second, the bigger the velocity of change.

Hadoop is architected to handle high volumes of data and data with a variety of structures, but it is not necessarily suited to analyzing data in motion as it enters the organization; it works on data once it is stored and at rest.

Since we touched on the subject, let's define the different data structures. Structured data is characterized by a high degree of organization and is typically stored in a database or spreadsheet. There is a relational mapping to the data, and programs can be written to analyze and process the relationships. Semi-structured data is a bit more difficult to work with. It is typically stored in the form of text data or log files. The data is somewhat structured, usually comma, tab, or character delimited. Unfortunately, different log files use different formats, so the layout differs from file to file and parsing and analysis are a little more challenging. Unstructured data has none of the advantages of the other two types; what structure exists might only be a directory layout, a server location, or a file type. The actual layout of the data may or may not be predictable and usually needs a special translator to parse it. Analyzing this type of data typically requires a data architect or data scientist to look at the data and reformat it to make it usable. A semi-structured log line, for example, can be cracked open with plain SQL, as sketched below.
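
To make the semi-structured case concrete, here is a minimal SQL sketch (the log line and column names are made up for illustration) that splits a comma-delimited entry into columns:

-- Crack a comma-delimited log line into fields with REGEXP_SUBSTR.
SELECT REGEXP_SUBSTR(line, '[^,]+', 1, 1) AS logged_at,
       REGEXP_SUBSTR(line, '[^,]+', 1, 2) AS severity,
       REGEXP_SUBSTR(line, '[^,]+', 1, 3) AS message
  FROM (SELECT '2016-09-22 02:07:13,WARN,disk usage above 80 percent' AS line
          FROM dual);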

From the Dummies guide again, Hadoop is a framework for storing data on large clusters of commodity hardware. This lends itself well to running on a cloud infrastructure that is predictable and scalable, with Level 3 networking as the foundation for the cluster. An application running on Hadoop gets its work divided among the nodes in the cluster. Some nodes process data through MapReduce or YARN, and the data is stored and managed by other nodes using a distributed file system known as the Hadoop Distributed File System (HDFS). Hadoop started back in 2002 with the Apache Nutch project, whose purpose was to create the foundation for an open source search engine. The project needed to be able to scale to billions of web pages, and in 2004 Google published a paper that introduced MapReduce as a way of processing these web pages.

MapReduce performs a sequence of operations on distributed data sets. The data consists of key-value pairs, and processing has two phases: mapping and reduction. During the map phase, the input data is split into a large number of fragments, each of which is assigned to a map task. Each map task processes the key-value pairs it is assigned and produces a set of intermediate key-value pairs. This intermediate data is sorted by key and stored in a number of fragments that matches the number of reduce tasks. If, for example, we are trying to parse data for the National Football League in the US, we would want to spawn 32 task nodes so that we could parse data for each team in the league. Fewer nodes would cause one node to do double duty, and more than 32 nodes would cause a duplication of effort. During the reduction phase each task processes the data fragment assigned to it and produces output key-value pairs. For example, if we were looking for passing yardage by team we would spawn 32 reduce tasks. Each task would look at the yardage data for its team and categorize it as either passing or rushing yardage. We might have two quarterbacks play for a team or have a wide receiver throw a pass; the key for this team would be the passer and the value would be the yards gained. These reduce tasks are distributed across the cluster and their output is stored on HDFS when finished. We should end up with 32 data files from 32 different task nodes reporting passing yardage by team. For readers who think in SQL, the result of this reduce step is conceptually what the aggregation sketched below produces on structured data.
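
Here is that analogy as a minimal SQL sketch; the PLAYS table and its columns are hypothetical and only serve to illustrate the grouping-and-summing that the reduce phase performs:

-- Group by the intermediate key (team, passer) and sum the values (yards),
-- which is what the 32 reduce tasks do in parallel in the MapReduce version.
SELECT team, passer, SUM(yards) AS passing_yards
  FROM plays
 WHERE play_type = 'PASS'
 GROUP BY team, passer;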

Hadoop is more than just distributed storage and MapReduce. It also contains components for server administration and coordination (HUE, Ambari, and ZooKeeper), data movement (Flume and Sqoop), resource management (YARN), processing frameworks (MapReduce, Tez, Hoya), workflow engines (Oozie), data serialization (Avro), data collection (MapReduce, Pig, Hive, and HBase), and data analysis (Mahout). We will look into these components individually later.

There are commercial and public domain offerings for Hadoop.

A good place to start a small Hadoop project is log analysis. If you have a web server, it generates logs every time a web page is requested. When a change is made to the web site, logs are generated when people log in to manage the pages or change the page content. If your web site is a transactional system, orders are being placed for goods and services and credit card transactions are being processed. All of these generate log files. If we wanted to look at a product catalog and correlate what people look at with what is ordered, we could do what Amazon has done for years: come up with recommendations on what other people are looking at as well as what other people ordered along with this item. If, for example, we are buying a pair of athletic shoes, a common companion purchase is socks. We could recommend socks that go with the shoes, or a shoe deodorant product that yields a higher profit margin. These items could be displayed with the product in the catalog or shopping cart to help sell more goods on the web. We can also identify the products that no one is looking at and reduce their inventories, since they are not even getting casual views. On the structured side of the data, the "also bought" idea boils down to a simple co-occurrence count, as sketched below.
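
Here is a minimal sketch of that co-occurrence count; the ORDER_ITEMS table and its columns are hypothetical and stand in for whatever the log analysis produces once it has been structured:

-- For every order that contains a given product, count which other
-- products appear most often in the same orders.
SELECT oi2.product_id, COUNT(*) AS times_bought_together
  FROM order_items oi1
  JOIN order_items oi2
    ON  oi2.order_id   = oi1.order_id
    AND oi2.product_id <> oi1.product_id
 WHERE oi1.product_id = :shoe_product_id
 GROUP BY oi2.product_id
 ORDER BY times_bought_together DESC;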

We can also use Hadoop as a fraud detection or risk modeling engine. Both provide significant value to companies and allow executives to look at revenue losses as well as potential transactions that could cause a loss. For example, we might want to look at the packing material that we use for a fragile item that we sell. If we have a high rate of return on a specific item, we might want to change the packing, change the shipper, or stop shipping to a part of the country that tends to have a high return rate. Any and all of these solutions can be implemented, but a typical data warehouse will not be able to coordinate the data and answer these questions. Some of the data might be stored in plain text files or log files on our returns web site. Parsing and processing this data is a good job for Hadoop.

In the upcoming weeks we will dive into the installation of a Hadoop framework on the Oracle Cloud. We will look at the resources required, pick a project, and deploy sample code into an IaaS solution. We will also look at other books and resources to help us understand and deploy sandboxes to build a prototype that might help us solve a business problem.


How even an empty trigger increases redo generation

XTended Oracle SQL - Wed, 2016-09-21 17:30

Very simple example:

Test case
set feed on;
-- simple table:
create table xt_curr1 as select level a,level b from dual connect by level<=1e4;
-- same table but with empty trigger:
create table xt_curr2 as select level a,level b from dual connect by level<=1e4;
create or replace trigger tr_xt_curr2 before update on xt_curr2 for each row
begin
  null;
end;
/

set autot trace stat;
update xt_curr1 set b=a;
set autot off;

set autot trace stat;
update xt_curr2 set b=a;
set autot off;
set feed off

drop table xt_curr1 purge;
drop table xt_curr2 purge;

SQL> -- simple table:
SQL> create table xt_curr1 as select level a,level b from dual connect by level<=1e4;

Table created.

SQL> -- same table but with empty trigger:
SQL> create table xt_curr2 as select level a,level b from dual connect by level<=1e4;

Table created.

SQL> create or replace trigger tr_xt_curr2 before update on xt_curr2 for each row
  2  begin
  3    null;
  4  end;
  5  /

Trigger created.

SQL> update xt_curr1 set b=a;

10000 rows updated.


Statistics
----------------------------------------------------------
         25  recursive calls
      10553  db block gets
         91  consistent gets
         18  physical reads
    3101992  redo size
        560  bytes sent via SQL*Net to client
        491  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
      10000  rows processed

SQL> update xt_curr2 set b=a;

10000 rows updated.


Statistics
----------------------------------------------------------
         11  recursive calls
      20384  db block gets
         59  consistent gets
         18  physical reads
    4411724  redo size
        560  bytes sent via SQL*Net to client
        491  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
      10000  rows processed
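
If you want to double-check the redo figures without autotrace, one option (a sketch, not part of the original test case) is to sample the 'redo size' statistic for your own session before and after each update and subtract the two values:

-- Amount of redo generated so far by the current session.
SELECT sn.name, ms.value
  FROM v$mystat   ms
  JOIN v$statname sn ON sn.statistic# = ms.statistic#
 WHERE sn.name = 'redo size';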

Either way, the figures show that with the empty trigger in place the same 10,000-row update generates roughly 40% more redo (4,411,724 versus 3,101,992 bytes) and almost twice the db block gets, even though the trigger body does nothing.

Categories: Development

Exclusive! Oracle Mobile & JET Rapid Development Kit Enablement for APAC Partners Webinar

Usable Apps - Wed, 2016-09-21 17:13

OAUX Banner

Oracle PartnerNetwork and Oracle Applications User Experience Exclusive October 2016 Webcast Announcement:

Oracle Applications User Experience PaaS4SaaS Oracle MAF and Oracle JET Rapid Development Kits Enablement

I am pleased to say our first exclusive webinar explaining the new Oracle Applications User Experience (OAUX) Oracle Mobile Application Framework (MAF) and Oracle JavaScript Extension Toolkit (JET) Rapid Development Kits (RDKs) is for our very special PaaS and SaaS friends in the APAC region on October 19, 2016 (local time).

RDKs for MAF and JET

What's an RDK?

The RDK accelerator kits are based on enterprise best practices, reusable user experience (UX) design patterns, and the Oracle Alta UI look and feel. They contain all the components, code samples, and page templates partners need to quickly design and build compelling user experiences in the cloud. At this exclusive event, you can find out about:

  • RDK sample UIs and mobile flows for Oracle Sales Cloud (CRM), Oracle Human Capital Management (HCM) and Oracle Enterprise Resource Planning (ERP) SaaS.
  • Identifying a good PaaS or SaaS use case.
  • Integrating web services with PaaS flows and smartphone “capture” features, such as cameras and microphones, with SaaS flows.
  • And more RDK good stuff revealed at the Oracle Applications User Experience Exchange at Oracle OpenWorld 2016!

These two new standalone RDK accelerators for building mobile SaaS apps and JavaScript simplified UIs for SaaS and PaaS augment the OAUX RDK capability so that partners now have even more firepower to win more cloud business with that must-have user experience. Killer mobile apps and SaaS and PaaS UIs can be in customer hands in no time!

The event registration page is now available, so sign up!

Can't Wait?

A taster of what you can expect can be had now for free: Get your free copy of the Oracle Mobile Applications Cloud User Experience Design Patterns eBook now!

Mobile UX Design Patterns eBook

Online webinars for other regions will be announced shortly. This is your chance to get the latest RDK information from the experts!

And There's More. . .  

In addition, we are planning to visit each region and provide hands-on enablement and deep-dive sessions about our resources at events for partners and customers worldwide.

Stay tuned. 
