Feed aggregator

OBIEE12c - Upgrading to Version 12.2.1.1

Rittman Mead Consulting - Thu, 2016-09-22 10:36

INTRODUCTION

The new version of OBIEE 12c, 12.2.1.1 to be exact, is out, so let’s talk about it. It’s my intent that after reading this, you can expect some degree of comfort in regard to possibly doing this thing yourself (should you find yourself in just such a circumstance), but if not, feel free to drop us a line or give us a ring. It should be noted that Oracle documentation explicitly indicates that you need to be on OBIEE version 12.2.1.0 first, which is to say you’re going to have to bring your 11g instance up to 12c before you can proceed with this upgrade. A colleague here at RM and I recently sat down to give the upgrade process (click there for the Oracle doc) a go on one of our hosted Windows servers, and here’s the cut-and-dried version of it. The examples throughout will reference both Linux and Windows, so follow whichever you’d like. Now, if you’ve gone through the 12c install process before, you’ll be plenty familiar with roughly 80% of the steps involved in the upgrade. Just to get this out of the way: no, it’s not a patch (in the sense that you’re actually going through the OBIEE patching process using OPatch). In fact, the process almost exactly mirrors a basic 12c install, with the addition of a few steps that I will make darn sure we cover in their entirety below. Speaking of which, I’m not going to do a play-by-play of the whole thing, but simply highlight those steps that are wholly unfamiliar. To provide some context, let’s go through the bullet points of what we’ll actually be doing during the upgrade.

  1. First, we’ll make sure we have a supported, server-appropriate version of Java installed (8_77 is the minimum) and that it corresponds to the JAVA_HOME you’ve got set up.

  2. Next, we’ll be running the install for the WebLogic server into a NEW oracle home. That’s right, you heard me. A. new. oracle. home.

  3. After that, we’ll be running a readiness check to make sure our OBIEE bits won’t run into any trouble during the actual upgrade process. This checks all OBIEE components, including those schemas you installed during the initial install process. Make sure to have your application database admin credentials on hand (we’ll talk about what you need below in more detail). The end of this step will actually have us upgrade all those pieces the readiness checker deems worthy of an upgrade.

  4. Next, we’ll reconfigure and upgrade our existing domain by running the RECONFIGURATION WIZARD!!!!! and upgrade assistant, respectively.

  5. Lastly, we’ll start up our services, cross our fingers, hold onto our four leaf clovers, etc. (just kidding, at least about that last part).

Before we begin, however, let’s check off a few boxes on the ‘must have’ list.

  • Download all the files here, and make sure you get the right versions for whatever kind of server your version of OBIEE hangs out in. The Java version available for download is 8_101, which will work just fine even though the minimum needed is 8_77.

  • Get those database credentials! If you don’t know, drop everything and figure out how you’re going to access the application database within which the OBIEE 12c schemas were installed. You’ll need the user name/pass for the SYS user (or user with SYS privileges), and the database connection string as well, including the service name, host, and port.

  • Make sure you have enough disk space wherever you’re installing the upgrade. The downloads for the upgrade aren’t small; you should have at least 150GB on a DEV box, say. You don’t want to have to manage allocating additional space at a time like this, especially if it involves putting in a ticket with IT (wink wink)! Speaking of which, you’ll also need the server credentials for whichever user 12c was installed under. Note that you probably don’t need root if it was a Linux machine; however, there have been some instances where I’ve needed to have these handy, as file permission issues requiring root credentials were causing errors during an install. You’ll also need the WebLogic/OBIEE admin user credentials (in case you changed the name for some reason).

  • Lastly, make sure you’re at least a tad bit familiar with the paths to both the oracle home and the domain home.

SETTING UP JAVA

After downloading the version of Java you need, go ahead and update it via the .rpm or .exe, etc. Make sure to update any environment variables you have set up, updating both the JAVA_HOME variable AND the PATH to reference the new Java location. As stated above, at the time of this blog, the version we used, and that is currently available, is 8_101. During the upgrade process, we got a warning (see below) about our version not being 8_77. If this happens to you, just click Next. Everything will be alright, promise.
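As a quick sketch (the install path below is illustrative, not from the doc), the environment update plus a sanity check on the update level might look like this:

```shell
# Point JAVA_HOME and PATH at the new JDK -- the path below is an example
# only; use wherever your .rpm/.exe actually put it.
JAVA_HOME=/usr/java/jdk1.8.0_101
PATH="$JAVA_HOME/bin:$PATH"
export JAVA_HOME PATH

# Pull the update number out of a "1.8.0_NNN" version string so we can
# sanity-check it against the 8_77 minimum before launching the installer.
java_update() {
  echo "$1" | sed 's/.*_//'
}

if [ "$(java_update 1.8.0_101)" -ge 77 ]; then
  echo "Java update level is at or above the 8_77 minimum"
fi
```

As noted above, the installer may still warn if it doesn't see exactly 8_77; that warning is safe to click through.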

Java Version Warning

A NEW HOME FOR ORACLE

Did you click the link to the upgrade doc yet? If not, do so now, as things are about to get kind of crazy. Follow along as we walk through the next steps if you’d like. First, stop services and disable SSL like it tells you to. Then, start OBIEE services back up and run the infrastructure jar (java -jar fmw_12.2.1.1.0_infrastructure.jar) for the WebLogic server install. Again, I’m not going to go pic by pic here, as you can assume most everything resembles the initial 12c install process, and this part is no different. The one piece of this puzzle we need to focus on is establishing a new oracle home. After skipping those auto updates, move on to step 3 where we are, in fact, going to designate a new oracle home. You’ll see that, after completing the WebLogic install, we’ll have a bunch of updated feature sets, in addition to some new directories in our 12.2.1.1 oracle home. For example, if your original home is something like:

/u01/app/oracle/fmw

change it to:

New Oracle Home

when it asks you to enter a new one.

Breeze through the rest of the steps here, and remember to save that response file!
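Pulling together the order of operations at the top of this section, here's a command-line sketch; DOMAIN_HOME is an illustrative path, and the `ssl.sh internalssl false` call is my assumption of the disable step (it mirrors the `internalssl true` re-enable shown at the end of the post):

```shell
# Checklist of the commands this section walks through, emitted in order.
# DOMAIN_HOME is an example path; substitute your existing 12c domain.
DOMAIN_HOME=${DOMAIN_HOME:-/u01/app/oracle/fmw/user_projects/domains/bi}

upgrade_prep_steps() {
  cat <<EOF
$DOMAIN_HOME/bitools/bin/stop.sh
$DOMAIN_HOME/bitools/bin/ssl.sh internalssl false
$DOMAIN_HOME/bitools/bin/start.sh
java -jar fmw_12.2.1.1.0_infrastructure.jar
EOF
}
upgrade_prep_steps
```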

UPDATING OBIEE

Unzip both of the fmw_12.2.1.1.0_bi_linux64_Disk#_#of2.zip files, making sure that your OBIEE install files are in the same directory. For Windows, this will be the executable from the first zip file, and the zip file from the second part of disk 1. Execute the binary (on Linux) or the .exe (on Windows), going through the usual motions, and then in step 3, enter the NEW oracle home for 12.2.1.1. In the example above, it would be:

/u01/app/oracle/fmw2

for Linux, and likewise, for Windows:

Enter Existing Oracle Home

Again, there isn’t too much to note or trap you here; just take special care to enter not your original oracle home, but the one you created in the previous section. Proceed through the next steps as usual and remember, save your response file!

UPDATING THE 12C SCHEMAS - USING THE READINESS CHECKER AND UPDATE ASSISTANT

Run the readiness checker from:

NEW_ORACLE_HOME/oracle_common/upgrade/bin/ua -readiness

This next series of steps will take you through all the schemas currently deployed on your application database and confirm that they won’t explode once you take them through the upgrade process. In step 2 of 6, make sure that you’re entering the port for EM/Console (9500 by default). Remember when I said you’re going to need the DB credentials you used to install 12c in the first place? Well, here’s where we’re going to use them. The readiness checker will guide you through a bunch of screens that essentially confirm the credentials for each schema installed, and then presents a report detailing which of these will actually get upgraded. That is to say, there are some that won’t be. I really like this new utility as an extra vote of confidence for a process that can admittedly be troublesome.

Readiness Checker

Readiness Report

Once you’ve validated that those schemas are ready for update, go ahead and stop OBI12c services using the EXISTING oracle home.

Pro tip: they’ve made it super easy to do this now; just point your bash_profile to the binaries directory in OBIEE’s bitools folder (ORACLE_HOME/user_projects/domains/bi/bitools/bin). After adding this entry to your profile, you can simply type start.sh or stop.sh to bring everything up or down, not to mention take advantage of the myriad other scripts in there. Don't type those paths out every time.
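A minimal bash_profile fragment for that pro tip, assuming the default domain layout (ORACLE_HOME below is an example; use your own):

```shell
# ~/.bash_profile fragment: put the bitools scripts on the PATH so
# start.sh / stop.sh work from anywhere. ORACLE_HOME is an example path.
ORACLE_HOME=/u01/app/oracle/fmw
BITOOLS="$ORACLE_HOME/user_projects/domains/bi/bitools/bin"
PATH="$PATH:$BITOOLS"
export ORACLE_HOME PATH
```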

I digress… After the services come down, run the upgrade assistant from within the NEW oracle home, as below:

Citing the previous example:

NEW_ORACLE_HOME/oracle_common/upgrade/bin/ua

After bringing up the install dialogue box, move on to step 2, and select the All Schemas Used by a Domain option (as in the example above), unless of course you’d like to hand select which ones you’d like to upgrade. I suppose if you were thinking about scrapping one you had previously installed, then this would be a good option for you. Make sure the domain directory you specify is from your existing/old 12c instance, as below:

Upgrade Assistant-Existing Domain

Move through the next series of steps, which are more or less self-explanatory (no tricks here, promise), once again validating connection credentials, until you get to step 12. As always, save the response file, select Upgrade, and then watch the magic happen… hopefully. Congratulations, you’ve just updated your schemas!

Schema Update Protocol Complete

WHO INVITED A WIZARD TO THE PARTY? - RECONFIGURING THE BI DOMAIN AND UPDATING THE BI CONFIGURATION

Like I said before, I won’t be covering every single step of this process (e.g., the map viewer portion), which means you’ll still have to consult the… oracle, on some of this stuff. That being said, don’t gloss over backing up the map viewer file; you have to do it. This is simply an attempt to help make the upgrade process a little easier to swallow and hopefully make some of the more confusing steps a bit clearer. Moving on. Guess what? It’s time to run another series of dialogue boxes. Beats the heck out of scripting this stuff, though, I guess. Open up the RECONFIGURATION WIZARD!!!!! as instructed in the documentation, from the location within your NEW oracle home. The first step will prompt us for the location of the domain we want to upgrade. We want to upgrade our existing 12c domain (the old one), so type that in/browse for it. Right now.

Enter Existing Domain Home

Validate your java version and location in step 3 and then click your way through the next few screens, ensuring that you’ve at least given your stamp of approval on any pre-filled or manually filled entries in each dialogue box. Leave step 7 alone and click Next to get to the screen where we’re actually going to be starting the reconfiguration process. Click through and exit the RECONFIGURATION WIZARD!!!!!

Validate Java

Configuration Celebration

Don’t forget to restore the map viewer config file at this point, and then launch the configuration assistant again, this time selecting the All Configurations Used By a Domain option in step 2. Make sure you’ve entered the location of the existing 12c domain in this step as well, and NOT the one created under the new oracle home.

Enter Proper Domain

Click through the next steps, again, paying close attention to all prompts and the location for the map viewer xml file. Verify in step 7 that the directory locations referenced for both domain and oracle map viewer are for the existing locations and NOT those created by the install of the update.

Correct Location Verification Affirmation

WRAPPING UP AND NOTES

You can now re-enable SSL (as below) and then start OBIEE services.

DOMAIN_HOME/bitools/bin/ssl.sh internalssl true

Note: if you have a tnsnames.ora or ldap.ora, place copies under NEW_ORACLE_HOME/network/admin
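A sketch of that copy step as a small helper (the function name and layout are mine, not Oracle's):

```shell
# Copy network config files from the old Oracle home into the new one,
# skipping any that don't exist. Helper name/paths are illustrative.
copy_net_config() {
  old_home="$1"; new_home="$2"
  mkdir -p "$new_home/network/admin"
  for f in tnsnames.ora ldap.ora; do
    if [ -f "$old_home/network/admin/$f" ]; then
      cp "$old_home/network/admin/$f" "$new_home/network/admin/"
      echo "copied $f"
    fi
  done
}

# e.g. copy_net_config /u01/app/oracle/fmw /u01/app/oracle/fmw2
```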

You can ignore the new oracle home created at this time; in my opinion, we’re going to have to do something similar for any subsequent 12c updates. What did you think of the upgrade process, and did you run into any issues? Thanks so much for reading, and as always, if you find any inconsistencies or errors, please let us hear about them!

Categories: BI & Warehousing

use of application error

Tom Kyte - Thu, 2016-09-22 10:26
Hi , I wanted to know is pragma exception init and raise_application_error does the same thing? I wanted to know what is the difference? Also, I wanted to know what is the use , if I place my exception declaration in the package specificat...
Categories: DBA Blogs

Intermittent ORA-08103: object no longer exists

Tom Kyte - Thu, 2016-09-22 10:26
In a batch job (in java), I am getting the below error details while reading the data (in a batch job). And this error is intermittant with one table on first run and in later runs the table keeps on changing. Before every job run, i will truncate th...
Categories: DBA Blogs

2 level subqueries and problem with select function result

Tom Kyte - Thu, 2016-09-22 10:26
Hello everybody, I've 2 problems with on 2 queries : 1) The following query <code>with pres as ( select t1.IDPROD AS spe1 , t2.IDPROD AS spe2 ,t1.INDICELIGNEPRESCRIPTION as id_ligne_1,t2.INDICELIGNEPRESCRIPTION as id_ligne_2, ...
Categories: DBA Blogs

SQL Query

Tom Kyte - Thu, 2016-09-22 10:26
Hi Tom, I have a table like this create table temp1 (f_name varchar2(100), f_date date, f_amount integer ); Records in this table will be like this: Ajay 15-JUL-02 500 Bhavani 15-JUL-02 700 Chakri ...
Categories: DBA Blogs

DBMS_ADVANCED_REWRITE

Tom Kyte - Thu, 2016-09-22 10:26
Tom, I was trying to tune one of the sql written in Ansi syntax without toughing using dbms_advanced_rewrite API. but ended up with the below error, Can you help me what i am missing here? <code> rajesh@ORA10G> declare 2 l_src long; ...
Categories: DBA Blogs

Becky's BI Apps Corner: OBIA 11.1.1.10.2 New Features Part 1 - Health Check & ETL Diagnostics

Rittman Mead Consulting - Thu, 2016-09-22 10:16

I have been working with BI Applications since OBIA switched to ODI in version 11.1.1.7.0. I have installed and worked with all of the 11.x versions using several different sources. This most recent version of OBIA may only be a patch-set release, but it has some new features that make it very compelling: automatic ETL error handling, Health Check, ETL Diagnostics, and built-in email notifications.

If you've been using an earlier version of OBIA 11.x (or 7.x for that matter), now may be the time to upgrade. In an Oracle Advisor's Webcast, Rajesh Lakra, BIApps Global Product Support Lead Engineer, said this version will be the upgrade path to OBIA 12.x, which is slated for release sometime in 2017. Also, it may just be the circumstances I've been exposed to, but this latest version seems more performant as well. Since I've not had an opportunity to do benchmark time testing against the same source data, I cannot verify that yet. However, I am setting up some environments to do just that. Keep an eye out for a future blog post with performance comparisons.

Load Plan

Check if there are any other running load plans

Check Previous Load Plan Runs is a package that only has the following procedure:
Check State of Previous Load Plan Runs
1) Checks the status of previous load plan runs. Throws an exception if any other load plan is still in a Running state.
2) Checks the blackout schedule period from the w_etl_control_schedule table. If there is a blackout schedule, the load plan fails and exits.

I’ve not found any documentation about this package or procedure. However, the procedure itself has three steps.
Load Plan

  1. Check if Prev Load Plans are Running

    a. SELECT COUNT(1) FROM SNP_LPI_RUN WHERE STATUS IN ('R','W') AND I_LP_INST!=(?)

    b. >>>>>" + ctLPRunning + " Load plan(s) found to be running/waiting apart from the current one. ABORTING this Load Plan as running multiple Load Plans in Parallel can cause DATA CORRUPTION <<<<<<

  2. Check Blackout schedule

    a. select -1 from w_etl_control_schedule cs where sysdate between cs.control_start_dt and cs.control_end_dt and cs.control_code = 'NO_ETL'

    b. >>>>> Blackout schedule coming up as seen in w_etl_control_schedule table, hence no Load plan(s) will be executed. ABORTING this Load Plan as there will be a down time to the repository due to the Blackout schedule <<<<<<

  3. Check Source Upgrade

    a. SELECT 1 FROM W_ETL_SRC_VERSION_HISTORY WHERE SRC_PLV_NEW IS NOT NULL AND SRC_UPGRADE_STATUS IN ('PENDING','ERROR','STARTED')

    b. >>>>>Found Incomplete Source Upgrade status from the Source Upgrade tracking table (W_ETL_SRC_VERSION_HISTORY).For more information, please refer to the Warehouse table W_ETL_SRC_VERSION_HISTORY for incomplete Source Upgrade status rows and take necessary action to run Source Upgrade Load plans <<<<<<

I believe this is a good feature to add, because Oracle’s OBIA documentation has always recommended not running more than one load plan at a time. Now, if another load plan is already running, the new load plan will stop automatically with a message warning you about the data corruption that running multiple load plans can cause.
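If you want to run the same three checks by hand before kicking off a load plan, here is a sketch; the helper is mine, and note that SNP_LPI_RUN lives in the ODI work repository while the W_ETL_* tables live in the warehouse, so point your SQL client at the right schema for each:

```shell
# Emit the three pre-flight queries the load plan runs; pipe them into
# your SQL client, e.g.:  preflight_sql | sqlplus -s user/pass@db
# (helper name is illustrative; the queries are the ones quoted above).
preflight_sql() {
  cat <<'EOF'
SELECT COUNT(1) FROM SNP_LPI_RUN WHERE STATUS IN ('R','W');
SELECT -1 FROM w_etl_control_schedule cs
 WHERE SYSDATE BETWEEN cs.control_start_dt AND cs.control_end_dt
   AND cs.control_code = 'NO_ETL';
SELECT 1 FROM W_ETL_SRC_VERSION_HISTORY
 WHERE SRC_PLV_NEW IS NOT NULL
   AND SRC_UPGRADE_STATUS IN ('PENDING','ERROR','STARTED');
EOF
}
preflight_sql
```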

I think it is interesting to see this Blackout schedule and Source Upgrade. I’ve done some looking in the documentation and on support.oracle.com but didn’t come up with any clues to what exactly these are for. It seems to me like the Blackout schedule is a calendar of sorts for when the repository will be down for maintenance. As for the Source Upgrade, that is more of a mystery to me.

Next step in the Load Plan is a Health Check.
Load Plan

Health Check and Diagnostics

Oracle's OBIA ETL Guide has a short paragraph on the Health Check.

Health Check is a preliminary ETL phase in which a diagnostic report is generated to identify problematic source data that might cause ETL failure or data loss or corruption in the data warehouse. The report is downloaded from Configuration Manager and includes any problematic data, a description of detected issues, and actions to resolve health check failures.

Referring again to Oracle's OBIA ETL Guide, the Manage Load Plans page is where you can download the Health Check report. Highlight the relevant load plan that has run and/or failed and click on Show Data Problems.

Load Plan

Select the desired DIAGNOSTIC HEALTHCHECK and download the zip file to your computer.

This file gets generated through a number of different steps starting with the Execute Healthcheck package and followed by the Create Report and Summarize Healthcheck package.

Execute Healthcheck is a package that only has the following procedure:
Run Diagnostics
Load Plan
This procedure has 12 steps. The two that are the most interesting are:
Define Execute Diagnostic Function and Run Diagnostics

The bulk of the code for the Diagnostics are in the step Define Execute Diagnostic Function. The code is Jython and it is really too long to paste here, but I highly recommend digging into how it works.
Load Plan

Then the step Run Diagnostics actually kicks off the process as shown here.
Load Plan

Once the diagnostics are run, there is a step in the load plan called Create Report and Summarize Healthcheck. This package has two procedures.
Load Plan

The first procedure, Create Diagnostic Report, does exactly what you might guess: it does a lot of formatting and puts the diagnostic report into an HTML page.
Load Plan
The second procedure, Summarize Healthcheck Result, takes the information, inserts it into a table, then organizes it for an email to be sent out.
Load Plan
Load Plan

During the September 21st Oracle Advisor Webcast on BI Applications 11.1.1.10.2 New Release Overview, Rajesh Lakra, BIApps Global Product Support Lead Engineer, said the Health Check is most useful during a domain load. It checks to ensure source domain data is valid, collectively letting you know if these domains will cause issues further down in the ETL process.

In Part 2 of OBIA 11.1.1.10.2 New Features, I will show you what the HTML files look like, how to set up the email process, and how the automatic error handling now works within many of the knowledge modules. Tune in next time for more exciting parts of the new features in this version of BI Applications!

Categories: BI & Warehousing

Thursday at OpenWorld and JavaOne Recap

OTN TechBlog - Thu, 2016-09-22 10:00

Yesterday was the last day of the Java Hub. Highlights include the Java Community BBQ, JavaOne4Kids, Community Demos, and Live Streaming discussions each day. If you missed some of the live streaming, visit the Nighthacking video channel.

 

As we enter the last day of OpenWorld, the OTN Lounge is still going strong until 2 pm today. We continue to build shapes with the HearingNest Community Art Experience, serving the OTN Coffee Cloud Service, interacting with the IoT Workshop, and making your own shirt.

Oracle ACE sessions are still being held with Best Practices for Oracle Data Integrator and Choosing a Cloud-Native AppDev Platform. For the full list of sessions visit the OTN Archbeat Blog.

The OTN Systems Hub starts their CommunityCast interviews today with Avi Miller discussing container registry at 11 am. Stop by the OTN Lounge to see the live taping.

 

 

 


Oracle Open World 2016 from a PeopleSofter point of view: Wednesday 21st

Javier Delgado - Thu, 2016-09-22 08:26
We are reaching the end of Oracle Open World. Wednesday ended with the Appreciation Event which included the star appearances of Gwen Stefani and Sting. Both were really appreciated by the public, but I personally found Sting's performance just fantastic.


Anyway, let's leave live music behind before it clearly shows I don't have a clue about it. Wednesday was also a very intense day around PeopleSoft. I was able to attend very valuable sessions around different aspects of the Fluid User Interface and ElasticSearch.

Fluid User Interface
I attended four Fluid-related sessions yesterday, which were:

  • Getting the Most Out of PeopleSoft: PeopleSoft Fluid User Interface [CON7067] by Matthew Haavisto, Pramod Agrawal and Sasank Venama.
  • Discover What Is New in PeopleSoft Approvals: PeopleSoft Fluid User Interface, Page Composer [CON7065] by David Bain, from PeopleTools Product Management.
  • Hands-On with PeopleSoft: A Successful PeopleSoft Fluid User Interface Rollout [CON7063] delivered by David Bain, Kevin De Kock from McMaster University, Graham Smith and Sasank Venama.
  • Extending PeopleSoft Fluid User Interface Applications the PeopleSoft Way [CON7062] by David Bain.

My take from these sessions is that the key for being successful at deploying Fluid is not only taking advantage of nice UI features that Fluid provides, but mainly profiting from the new navigation approach proposed by PeopleTools 8.55.



Now, as simple as navigation may seem at first, it really requires some thought when it comes to Fluid. In the classic UI, we were kind of guided by the system to put our components into the portal menu structure, and that was pretty much it. When using the Fluid UI, we have several alternatives, and the optimal use is not always evident.


These sessions helped me better understand when each option is best applicable. You can also check the PeopleSoft Fluid UX Standards page for more information.

In terms of new announcements, not much was presented. Probably the key announcement on the Fluid UI is the new Page Composer for Mobile Approvals, which is now available with PeopleSoft FSCM Update Image #20 and will soon be available for the other pillars. I briefly covered it in my Tuesday review post, but in a nutshell, Page Composer will allow you to configure the display of Mobile Approval pages by transaction and form factor, all in a nice drag-and-drop, web-based interface.


Could this be extended to other functionalities apart from Mobile Approvals? Not yet, Oracle says, but they are not discarding that for the future either. From my point of view, I have certain concerns about performance, because the HTML is not produced by the Application Server binary code, but by PeopleCode itself. However, this was already the case for MAP-based Mobile Approvals, so I can only see the upside. Whether or not it can be extended to other functionalities, we will see.

ElasticSearch
ElasticSearch has long been awaited in the PeopleSoft community, mainly because it will replace Oracle SES, which was personally one of my most-hated components of PeopleSoft, if anything like love & hate makes sense when talking about technology (nerd alert here!).

There was a very interesting session, Getting the Most Out of PeopleSoft: Transitioning to Elasticsearch [CON7066], delivered by Mathew Haavisto, Ramasimha Rangaraju from Oracle and Kevin Antoff from Wells Fargo.

In this session, Oracle and Wells Fargo presented the Proof of Concept they did together around ElasticSearch as replacement to Oracle SES. The feedback provided by Wells Fargo was extremely positive, almost from any perspective.

The transition to ElasticSearch also seems simple. Both solutions can actually coexist, meaning that you can install and deploy ElasticSearch while Oracle SES is still operational, providing the opportunity for a zero-downtime transition.


Still, we need to make sure we adopt ElasticSearch quickly: support for Oracle SES will end 18 months after the ElasticSearch solution is made generally available. When will that happen? The almighty Safe Harbor Statement does not allow us to know the precise date, but I got the feeling that it is not far off (it will be delivered as part of PeopleTools 8.56 but will also be enabled for a future patch of PeopleTools 8.55).

I cannot wait to play around with it. I would be particularly interested in using cloud services of ElasticSearch together with PeopleSoft, so we don't need to use our own infrastructure. We will see if that is feasible when it is finally delivered.

Heading for the Last Day
Today will be the last day at Oracle Open World. I will try to post my last review of Oracle Open World at the end of the day, but if I can't, it will surely be during the weekend when I'm back in Spain.

If you are here at Oracle Open World, please say hello. It doesn't get better than this when it comes to networking opportunities.




MySQL 8.0.0 DMR – First impression

Yann Neuhaus - Thu, 2016-09-22 07:23

Last week (09/12/2016), the new MySQL 8.0.0 DMR (Development Milestone Release) was announced.
It is now available for download at dev.mysql.com.
I downloaded and installed it.
Following are my first impressions of some new features.
First of all, when you remember the size of earlier versions, it is more & more gluttonous; see below:
mysql-5.0.96-linux-x86_64-glibc23.tar.gz    121.2M
mysql-5.1.72-linux-x86_64-glibc23.tar.gz    127.8M
mysql-5.5.52-linux2.6-x86_64.tar.gz            177.2M
mysql-5.6.33-linux-glibc2.5-x86_64.tar.gz    299.7M
mysql-5.7.15-linux-glibc2.5-x86_64.tar.gz   611.8M
mysql-8.0.0-dmr-linux-glibc2.12-x86_64.tar.gz is around 1GB, and uncompressed 3.6GB (Oops…)

At first sight, it seems that nothing has changed: same schemas and almost the same tables
– 2 new tables related to roles in the mysql schema : default_roles, role_edges
– 6 new tables in the information_schema & 6 also in the performance_schema

When you scan your data directory (datadir), you will find new “.SDI” files (Serialized Dictionary Information) for every schema except the mysql schema
mysql@MYSQL:/u01/mysqldata/mysqld9/ [mysqld9] ll *.SDI
-rw-r-----. 1 mysql mysql 215 Sep 13 21:34 employees_4.SDI
-rw-r-----. 1 mysql mysql 225 Sep 13 21:37 performance_sche_7.SDI
-rw-r-----. 1 mysql mysql 209 Sep 13 21:34 sys_6.SDI
-rw-r-----. 1 mysql mysql 209 Sep 13 21:34 TBS_3.SDI

If you have a look in one of these files, you will see that they all provide data dictionary information in a serialized form (JSON formatted text)
mysql@MYSQL:/u01/mysqldata/mysqld9/ [mysqld9] cat employees_4.SDI
{
"sdi_version": 1,
"dd_version": 1,
"dd_object_type": "Schema",
"dd_object": {
"name": "employees",
"default_collation_id": 8,
"created": 0,
"last_altered": 0
}
}

Old “.frm” files containing metadata have been removed, and the same goes for “.par” files (partition definition), “.trg” files (trigger parameters) and “.isl” files (InnoDB symbolic links).
When you create a new InnoDB table, metadata is now stored in the InnoDB tablespace file (.ibd).
For all the other storage engines (Memory, CSV, Blackhole, etc.), we now get a new “.SDI” file, but a more complex one
mysql@MYSQL:/u01/mysqldata/mysqld9/Newfeatures/ [mysqld9] cat t3_617.SDI
{
"sdi_version": 1,
"dd_version": 1,
"dd_object_type": "Table",
"dd_object": {
"name": "t3",
"mysql_version_id": 80000,
"created": 20160914075838,
"last_altered": 20160914075838,
"options": "avg_row_length=0;key_block_size=0;keys_disabled=0;pack_record=0;stats_ auto_recalc=0;stats_sample_pages=0;",
"columns": [
{
"name": "c1",
"type": 29,
"is_nullable": true,
"is_zerofill": false,
"is_unsigned": false,
"is_auto_increment": false,
"is_virtual": false,
"hidden": false,
"ordinal_position": 1,
"char_length": 20,
"numeric_precision": 0,
"numeric_scale": 0,
"datetime_precision": 0,
"has_no_default": false,
"default_value_null": true,
"default_value": "",
"default_option": "",
"update_option": "",
"comment": "",
"generation_expression": "",
"generation_expression_utf8": "",
"options": "interval_count=0;",
"se_private_data": "",
"column_key": 1,
"column_type_utf8": "char(20)",
"elements": [],
"collation_id": 8
}
],
"schema_ref": "Newfeatures",
"hidden": false,
"se_private_id": 18446744073709551615,
"engine": "MEMORY",
"comment": "",
"se_private_data": "",
"row_format": 1,
"partition_type": 0,
"partition_expression": "",
"default_partitioning": 0,
"subpartition_type": 0,
"subpartition_expression": "",
"default_subpartitioning": 0,
"indexes": [],
"foreign_keys": [],
"partitions": [],
"collation_id": 8
}
}

It is now possible to create roles, something we have been waiting on for years. That means you can define a set of privileges as a role and assign it to a user:
mysqld9-(root@localhost) [Newfeatures]>create role tstrole;
mysqld9-(root@localhost) [Newfeatures]>grant all on employees.* to tstrole;
mysqld9-(root@localhost) [Newfeatures]>select host, user from mysql.user;
+---------------+-------------+
| host | user |
+---------------+-------------+
| % | tstrole |

As you can see, roles are stored in the mysql.user table, which means you can even grant a user to another user.
mysqld9-(root@localhost) [Newfeatures]>show grants for tstrole;
+----------------------------------------------------------+
| Grants for tstrole@% |
+----------------------------------------------------------+
| GRANT USAGE ON *.* TO `tstrole`@`%` |
| GRANT ALL PRIVILEGES ON `employees`.* TO `tstrole`@`%` |
+----------------------------------------------------------+
mysqld9-(root@localhost) [Newfeatures]>grant tstrole to 'sme'@'localhost';
mysqld9-(root@localhost) [Newfeatures]>show grants for 'sme'@'localhost' using tstrole;
+----------------------------------------------------------------------------------+
| Grants for sme@localhost |
+----------------------------------------------------------------------------------+
| GRANT USAGE ON *.* TO `sme`@`localhost` |
| GRANT ALL PRIVILEGES ON `employees`.* TO `sme`@`localhost` |
| GRANT `tstrole`@`%` TO `sme`@`localhost` |
+----------------------------------------------------------------------------------+

Now we connect as user “sme”
mysql -usme -p
mysql> use mysql
ERROR 1044 (42000): Access denied for user 'sme'@'localhost' to database 'mysql'

It works as expected: user “sme” only has access to the employees schema.

The data dictionary has also been improved: almost all the system tables have been moved from MyISAM to the transactional InnoDB storage engine to increase reliability.
Data dictionary tables are invisible; they do not appear in the output of SHOW TABLES and cannot be accessed directly,
but in most cases there are corresponding INFORMATION_SCHEMA tables that can be queried instead.

The new “invisible indexes” feature is really great: you can now toggle the visibility of an index between VISIBLE and INVISIBLE.
We know that unused indexes reduce the performance of modifications (inserts, updates) because they still have to be maintained,
and they also burden the optimizer, which takes them into account during plan selection.
So first, check for unused indexes:
mysqld9-(root@localhost) [information_schema]>SELECT * FROM sys.schema_unused_indexes;
+---------------+--------------+------------+
| object_schema | object_name  | index_name |
+---------------+--------------+------------+
| employees     | departments  | dept_name  |
| employees     | dept_manager | dept_no    |
+---------------+--------------+------------+

Then make the index invisible:
mysqld9-(root@localhost) [employees]>alter table departments alter index dept_name INVISIBLE;

Check in the data dictionary
mysqld9-(root@localhost) [employees]>SELECT * FROM information_schema.statistics WHERE is_visible='NO'\G
*************************** 1. row ***************************
TABLE_CATALOG: def
TABLE_SCHEMA: employees
TABLE_NAME: departments
NON_UNIQUE: 0
INDEX_SCHEMA: employees
INDEX_NAME: dept_name
SEQ_IN_INDEX: 1
COLUMN_NAME: dept_name
COLLATION: A
CARDINALITY: NULL
SUB_PART: NULL
PACKED: NULL
NULLABLE:
INDEX_TYPE: BTREE
COMMENT:
INDEX_COMMENT:
IS_VISIBLE: NO

You can now make global variables persistent:
select @@max_connections;
+-------------------+
| @@max_connections |
+-------------------+
|               151 |
+-------------------+
SET GLOBAL max_connections=200;

If you restart your instance, such a setting is lost. Now, with:
SET PERSIST max_connections=200;
the setting will survive an instance restart:
select @@max_connections;
+-------------------+
| @@max_connections |
+-------------------+
|               200 |
+-------------------+

Last but not least, be careful when using MySQL 8.0.0 in a multi-instance environment (in my case, 9 instances with different versions):
I found a bug in mysqld_multi when restarting the 8.0.0 instance.
First I corrected the still-unfixed Bug #77946 (https://bugs.mysql.com/bug.php?id=77946)
in order to be able to stop my instance properly with mysqld_multi.
I then stopped it and tried several times to restart it, without success:
2016-09-20T23:42:41.466540Z 1 [ERROR] InnoDB: Operating system error number 2 in a file operation.
2016-09-20T23:42:41.466562Z 1 [ERROR] InnoDB: The error means the system cannot find the path specified.
2016-09-20T23:42:41.466568Z 1 [ERROR] InnoDB: If you are installing InnoDB, remember that you must create directories yourself, InnoDB does not create them.
2016-09-20T23:42:41.466574Z 1 [ERROR] InnoDB: File ./ibdata1: 'open' returned OS error 71. Cannot continue operation
2016-09-20T23:42:41.466582Z 1 [ERROR] InnoDB: Cannot continue operation.

As a temporary workaround, you can either start the instance manually or use the mysqld_multi script from a previous version.
I reported this bug to MySQL (https://bugs.mysql.com/83081).

 

The post MySQL 8.0.0 DMR – First Impression appeared first on the dbi services blog.

Oracle NEW_TIME Function with Examples

Complete IT Professional - Thu, 2016-09-22 06:00
In this article, I’ll explain what the NEW_TIME function does and show you some examples. Purpose of the Oracle NEW_TIME Function The NEW_TIME function converts a date from one timezone to another timezone.   Syntax The syntax of the NEW_TIME function is: NEW_TIME ( input_date, timezone1, timezone2 )   Parameters The parameters of the NEW_TIME […]
Categories: Development

When would you use a normal table function?

Bar Solutions - Thu, 2016-09-22 04:22

Dear Patrick,

Last year I did a presentation on table functions at KScope. One of the questions I got was: ‘If pipelined table functions provide their results faster, why would you want to use a normal table function?’ I couldn’t come up with the answer then, maybe you can help?

Erik van Roon

Dear Erik,

Let’s start by explaining a bit what table functions are. Table functions are functions that return a collection of data and can be called in a normal SQL statement by using the TABLE() operator. Let’s create a simple function. Please note this is a demonstration only, not something you would normally solve using table functions (or PL/SQL as a whole).
The first step is to create an object type in the database:

[PATRICK]SQL>CREATE TYPE emp_rt AS OBJECT
             ( empno NUMBER(4)
             , ename VARCHAR2(10)
             , mgr   NUMBER(4)
             )
             /

Then you need to create a table type in the database:

[PATRICK]SQL>CREATE TYPE emp_tt AS TABLE OF emp_rt
             /

Then it is time for the simple function. The DBMS_LOCK.SLEEP call is in there to show the difference between Table Functions and Pipelined Table Functions.

[PATRICK]SQL>CREATE OR REPLACE FUNCTION tf_emp RETURN emp_tt
             AS
               l_returnvalue emp_tt;
             BEGIN
               SELECT emp_rt(e.empno, e.ename, e.mgr)
                 BULK COLLECT INTO l_returnvalue
                 FROM emp e
                ORDER BY e.deptno;
               FOR indx IN l_returnvalue.first ..
                           l_returnvalue.last LOOP
                 l_returnvalue(indx).ename :=
                   INITCAP(l_returnvalue(indx).ename);
                 dbms_lock.sleep(.25); -- for demo purposes only
               END LOOP;
               RETURN l_returnvalue;
             END;
             /

Now you can call the function in the FROM clause of your SQL statement as if it were a relational table:

[PATRICK]SQL>SELECT *
               FROM TABLE(tf_emp)
             /

Notice that the result is displayed after all records have been processed, i.e. after 3.5 seconds (due to the DBMS_LOCK.SLEEP statement).
Now let’s create a PIPELINED table function, which produces the same result but in a different manner:

[PATRICK]SQL>CREATE OR REPLACE FUNCTION ptf_emp RETURN emp_tt
                               PIPELINED
             AS
               l_emps emp_tt;
               l_returnvalue emp_rt;
             BEGIN
               SELECT emp_rt(e.empno, e.ename, e.mgr)
                 BULK COLLECT INTO l_emps
                 FROM emp e
                ORDER BY e.deptno;
               FOR indx IN l_emps.first ..
                           l_emps.last LOOP
                 l_returnvalue :=
                   emp_rt(empno => l_emps(indx).empno
                         ,ename => INITCAP(l_emps(indx).ename)
                         ,mgr => l_emps(indx).mgr);
                 PIPE ROW (l_returnvalue);
                 dbms_lock.sleep(.25); -- for demo purposes only
               END LOOP;
               RETURN;
             END;
             /

If you set the arraysize of your SQL*Plus session (or your command window in PL/SQL Developer) you can see how the results are being returned as they are produced, i.e. 0.25 seconds apart.

[PATRICK]SQL>SET ARRAYSIZE 1
[PATRICK]SQL>SELECT *
               FROM TABLE(ptf_emp)
             /

Now you can see the difference between a Table Function and a Pipelined Table Function. Pipelined Table Functions are best used when you are executing different stages of transformation of your data, for example reading from an OLTP system and writing to a DataWareHouse system. If you PARALLEL_ENABLE your functions AND your source table can be read parallel then you could really see some performance benefits.
But all of this doesn’t explain why you should NOT use pipelining in a table function. The only reason I can think of is when you want to be able to call the function from plain PL/SQL. PL/SQL makes one call to a function and expects one result from it, not a result spread out over many ‘callbacks’.
If you create a procedure like this:

[PATRICK]SQL>CREATE OR REPLACE PROCEDURE call_tf AS
               l_emps emp_tt;
             BEGIN
               l_emps := tf_emp;
               FOR indx IN l_emps.first .. l_emps.last LOOP
                 dbms_output.put_line(l_emps(indx).empno ||
                                     ' ' || l_emps(indx).ename ||
                                     ' ' || l_emps(indx).mgr);
               END LOOP;
             END;
             /

And call this procedure, everything works fine. It takes about 3.5 seconds to complete, due to the DBMS_LOCK.SLEEP call in the tf_emp function.
If you try the same with the pipelined table function, you are stopped at compile time: a direct PL/SQL call to a pipelined table function is not allowed.

[PATRICK]SQL>CREATE OR REPLACE PROCEDURE call_ptf AS
               l_emps emp_tt;
             BEGIN
               l_emps := ptf_emp;
               FOR indx IN l_emps.first .. l_emps.last LOOP
                 dbms_output.put_line(l_emps(indx).empno ||
                                     ' ' || l_emps(indx).ename ||
                                     ' ' || l_emps(indx).mgr);
               END LOOP;
             END;
             /
Warning: Procedure created with compilation errors.
[PATRICK]SQL>sho err
Errors for PROCEDURE CALL_PTF:
LINE/COL ERROR
-------- ---------------------------------------------------------------------
1/10     PLS-00653: aggregate/table functions are not allowed in PL/SQL scope

You can of course wrap the call to the pipelined table function in a cursor like this:

[PATRICK]SQL>CREATE OR REPLACE PROCEDURE call_ptf2 AS
               CURSOR c_emps IS
                 SELECT emp_rt(t.empno, t.ename, t.mgr)
                   FROM TABLE(ptf_emp) t;
               l_emps emp_tt;
             BEGIN
               OPEN c_emps;
               FETCH c_emps BULK COLLECT
                 INTO l_emps;
               FOR indx IN l_emps.first .. l_emps.last LOOP
                 dbms_output.put_line(l_emps(indx).empno ||
                                     ' ' || l_emps(indx).ename ||
                                     ' ' || l_emps(indx).mgr);
               END LOOP;
             END;
             /

Procedure created.

But when you call this procedure you will see that it takes about 3.5 seconds to fetch all the records, effectively using the pipelined table function as a normal table function. This might be your escape to use pipelined table functions in a SQL-only environment and still use the same function in a PL/SQL environment.
‘But you said pipelined table functions are best used when executing different stages of transformation. That includes multiple PL/SQL functions,’ you might wonder. That is correct. A pipelined table function may call another pipelined table function. It can, for instance, take a collection as its input, like this:

[PATRICK]SQL>CREATE OR REPLACE FUNCTION ptf_emp3
                                        (emps_in IN emp_tt)
                                 RETURN emp_tt PIPELINED
             AS
               l_emp emp_rt;
             BEGIN
               FOR indx IN emps_in.first .. emps_in.last LOOP
                 l_emp := emps_in(indx);
                 l_emp.ename := UPPER(l_emp.ename);
                 PIPE ROW (l_emp);
                 dbms_lock.sleep(.25); -- for demo purposes only
               END LOOP;
               RETURN;
             END;
             /

When you call this function using this:

[PATRICK]SQL>SET ARRAYSIZE 1
[PATRICK]SQL>SELECT *
               FROM TABLE(ptf_emp3(CAST(MULTISET (SELECT *
                                             FROM TABLE(ptf_emp))
                                                     AS emp_tt)))
             /

You will notice the output does not start displaying until after about 3.5 seconds, which tells me the call to ptf_emp must complete before ptf_emp3 can start doing its work.
If you change the parameter to accept a cursor (and of course change the processing as well) like this:

[PATRICK]SQL>CREATE OR REPLACE FUNCTION ptf_emp4
                               (cursor_in IN SYS_REFCURSOR)
                                 RETURN emp_tt PIPELINED
             AS
               l_emp emp_rt;
             BEGIN
               LOOP
                 FETCH cursor_in
                   INTO l_emp;
                 EXIT WHEN cursor_in%NOTFOUND;
                 l_emp.ename := upper(l_emp.ename);
                 PIPE ROW(l_emp);
                 dbms_lock.sleep(.25); -- for demo purposes only
               END LOOP;
               RETURN;
             END;
             /

And call this function using this statement:

[PATRICK]SQL>SET arraysize 1
[PATRICK]SQL>SELECT *
               FROM TABLE(ptf_emp4(CURSOR (SELECT emp_rt(empno
                                                        ,ename
                                                        ,mgr)
                                       FROM TABLE(ptf_emp))))
             /

You will notice the output starting to display after about half a second. The total time needed for this function to complete is roughly the same as the previous, but this function starts processing sooner, which might be exactly what you need.

I hope this sheds a bit of light on the subject. I think the bottom line is to always use pipelined table functions instead of normal table functions, except when the function is only meant for PL/SQL or client consumption. You can always work around implementation restrictions if you need to use the same codebase in both SQL and PL/SQL.

Happy Oracle’ing,
Patrick Barel

If you have any comments on this subject or you have a question you want answered, please send an email to patrick[at]bar-solutions[dot]com. If I know the answer, or can find it for you, maybe I can help.

This question has been published in OTech Magazine of Spring 2015.

Hadoop on IaaS

Pat Shuff - Thu, 2016-09-22 02:07
We are going to try a weekly series of posts about public domain code running on the Oracle Cloud IaaS platform, and a good topic to start with is Big Data and Hadoop. Earlier we talked about running Tomcat on IaaS as well as WordPress on IaaS using bitnami.com. To start, we will review what Big Data and Hadoop are, beginning where most people begin: looking at what books are available on the subject and walking through one or two of them. Today we start with Hadoop For Dummies by Dirk deRoos. This is not the definitive source on Hadoop, but it is a good place to have terms and concepts defined for us.

Years ago one of the big business trends was to create a data warehouse. The idea was to take all of the corporate operational data, put it into one database, and grind on it to generate reports. History has shown that aggregating the data was a difficult task, as was provisioning the processing power required to grind through the reports. The task took significant resources to architect the data, to host the data, and to write select statements to generate reports for users. As retail became more and more ingrained on the web, sources outside the company became highly relevant and influential on products and services. Big Data and Hadoop come with tools to pull from non-structured sources like Twitter, Yelp, and other public web services and correlate comments and reviews to products and services.

The three characterizations of Big Data, according to Hadoop For Dummies, are:

  • Volume - high volumes of data ranging from dozens of terabytes to petabytes.
  • Variety - data that is organized in multiple structures, ranging from raw text to log files.
  • Velocity - data that enters an organization has some kind of value for a limited amount of time. The higher the volume of data entering an organization per second, the bigger the velocity of change.

Hadoop is architected to handle high volumes of data with a variety of structures, but it is not necessarily suited to analyzing data in motion as it enters the organization; it operates on data once it is stored and at rest.

Since we touched on the subject, let's define the different data structures. Structured data is characterized by a high degree of organization and is typically stored in a database or spreadsheet. There is a relational mapping to the data, and programs can be written to analyze and process the relationships. Semi-structured data is a bit more difficult to handle. It is typically stored in the form of text data or log files. The data is somewhat structured, usually comma, tab, or character delimited. Unfortunately, different log files have different formats, so the formatting differs from file to file and parsing and analysis are a little more challenging. Unstructured data has none of the advantages of the other two data types. What structure exists might be in the form of directory structure, server location, or file type. The actual layout of the data might or might not be predictable, and it needs a special translator to parse it. Analyzing this type of data typically requires a data architect or data scientist to look at the data and reformat it to make it usable.
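As a small illustration of what dealing with semi-structured data looks like in practice, here is a minimal Python sketch (the log contents, delimiters, and field names are all hypothetical) that normalizes two differently delimited log files into uniform records:

```python
import csv
import io

# Hypothetical semi-structured log data: each source is delimited,
# but the delimiter and the field order differ from file to file.
comma_log = "2016-09-21,GET,/index.html,200\n2016-09-21,GET,/missing,404\n"
tab_log = "200\t/index.html\t2016-09-21\n"

def parse_delimited(text, delimiter, field_names):
    """Parse one delimited log format into a list of uniform dicts."""
    reader = csv.reader(io.StringIO(text), delimiter=delimiter)
    return [dict(zip(field_names, row)) for row in reader]

records = (
    parse_delimited(comma_log, ",", ["date", "method", "path", "status"])
    + parse_delimited(tab_log, "\t", ["status", "path", "date"])
)
# All records now share the same keys regardless of the source format.
print(len(records))           # prints 3
print(records[0]["status"])   # prints 200
```

The point is that each semi-structured format needs its own small translator before the data can be analyzed uniformly; unstructured data is harder still, because even this much regularity is missing.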

From the Dummies guide again, Hadoop is a framework for storing data on large clusters of commodity hardware, which lends itself well to running on a cloud infrastructure that is predictable and scalable. Layer 3 networking is the foundation for the cluster. An application running on Hadoop gets its work divided among the nodes in the cluster. Some nodes aggregate data through MapReduce or YARN, and the data is stored and managed by other nodes using a distributed file system known as the Hadoop Distributed File System (HDFS). Hadoop started back in 2002 with the Apache Nutch project, whose purpose was to create the foundation for an open source search engine. The project needed to be able to scale to billions of web pages, and in 2004 Google published a paper that introduced MapReduce as a way of parsing these web pages.

MapReduce performs a sequence of operations on distributed data sets. The data consists of key-value pairs, and processing has two phases: mapping and reduction. During the map phase, input data is split into a large number of fragments, each of which is assigned to a map task. Each map task processes the key-value pairs it is assigned and produces a set of intermediate key-value pairs. This data is sorted by key and stored into a number of fragments that matches the number of reduce tasks. If, for example, we are trying to parse data for the National Football League in the US, we would want to spawn 32 task nodes so that we could parse data for each team in the league. Fewer nodes would cause one node to do double duty, and more than 32 nodes would cause duplication of effort. During the reduce phase, each task processes the data fragment assigned to it and produces output key-value pairs. For example, if we were looking for passing yardage by team, we would spawn 32 task nodes, and each node would look for yardage data for one team and categorize it as either passing or rushing yardage. We might have two quarterbacks play for a team or have a wide receiver throw a pass. The key for this team would be the passer and the value would be the yards gained. These reduce tasks are distributed across the cluster, and their output is stored on HDFS when finished. We should end up with 32 data files from 32 different task nodes reporting passing yardage by team.
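The map and reduce phases described above can be sketched in plain Python (no Hadoop cluster required); the play-by-play records below are made up purely to illustrate the key-value flow:

```python
from collections import defaultdict

# Hypothetical play-by-play records: (team, play_type, yards_gained).
plays = [
    ("DAL", "pass", 15), ("DAL", "rush", 4), ("DAL", "pass", 8),
    ("GB",  "pass", 22), ("GB",  "rush", 7), ("GB",  "pass", 11),
]

def map_phase(records):
    """Map: emit an intermediate (team, yards) pair for each passing play."""
    for team, play_type, yards in records:
        if play_type == "pass":
            yield team, yards

def reduce_phase(pairs):
    """Reduce: sum the values for each key, giving passing yards per team."""
    totals = defaultdict(int)
    for team, yards in pairs:
        totals[team] += yards
    return dict(totals)

passing_yards = reduce_phase(map_phase(plays))
print(passing_yards)  # prints {'DAL': 23, 'GB': 33}
```

In real Hadoop the map tasks run in parallel on different nodes, the framework sorts the intermediate pairs by key, and each reduce task receives all pairs for its keys; the logic of each phase, however, is exactly this simple.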

Hadoop is more than just distributed storage and MapReduce. It also contains components to help administer and coordinate servers (Hue, Ambari, and ZooKeeper), data movement management (Flume and Sqoop), resource management (YARN), processing frameworks (MapReduce, Tez, Hoya), workflow engines (Oozie), data serialization (Avro), data collection (MapReduce, Pig, Hive, and HBase), and data analysis (Mahout). We will look into these systems individually later.

There are commercial and public domain offerings for Hadoop.

A good project to start a small Hadoop effort is log analysis. If you have a web server, it generates logs every time a web page is requested. When a change is made to the web site, logs are generated as people log in to manage the pages or change the page content. If your web site is a transactional system, orders are being placed for goods and services and credit card transactions are being processed. All of these generate log files. If we wanted to look at a product catalog and correlate what people look at with what is ordered, we could do what Amazon has done for years: come up with recommendations based on what other people are looking at as well as what other people ordered along with this item. If, for example, someone is buying a pair of athletic shoes, a common companion purchase is socks. We could recommend socks that go with the shoes, or a shoe deodorant product that yields a higher profit margin. These items could be displayed with the product in the catalog or shopping cart to facilitate more goods sold on the web. We can also look at the products that no one is looking at and reduce our inventories, since they are not even getting looked at casually.
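The co-occurrence idea behind such "bought together" recommendations can be sketched in a few lines of plain Python with made-up order data (a real Hadoop job would distribute these counts across many nodes, but the counting logic is the same):

```python
from collections import Counter

# Hypothetical order history: each order is the set of items bought together.
orders = [
    {"athletic shoes", "socks"},
    {"athletic shoes", "socks", "shoe deodorant"},
    {"athletic shoes", "shoe deodorant"},
    {"socks", "t-shirt"},
]

def recommendations(orders, item):
    """Rank other items by how often they appear in the same order as `item`."""
    co_counts = Counter()
    for order in orders:
        if item in order:
            for other in order - {item}:
                co_counts[other] += 1
    return [name for name, _ in co_counts.most_common()]

print(recommendations(orders, "athletic shoes"))
```

Scaled up to millions of orders parsed out of raw log files, this is exactly the kind of embarrassingly parallel counting problem MapReduce was designed for.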

We can also use Hadoop as a fraud detection or risk modeling engine. Both provide significant value to companies and allow executives to look at revenue losses as well as potential transactions that could cause a loss. For example, we might want to look at the packing material that we use for a fragile item that we sell. If we have a high rate of return on a specific item we might want to change the packing, change the shipper, or stop shipping to a part of the country that tends to have a high return rate. Any and all of these solutions can be implemented but a typical data warehouse will not be able to coordinate the data and answer these questions. Some of the data might be stored in plain text files or log files on our return web site. Parsing and processing this data is a good job for Hadoop.

In the upcoming weeks we will dive into installation of a Hadoop framework on the Oracle Cloud. We will look at resources required, pick a project, and deploy sample code into a IaaS solution. We will also look at other books and resources to help us understand and deploy sandboxes to build a prototype that might help us solve a business problem.


How even an empty trigger increases redo generation

XTended Oracle SQL - Wed, 2016-09-21 17:30

Very simple example:

Test case
set feed on;
-- simple table:
create table xt_curr1 as select level a,level b from dual connect by level<=1e4;
-- same table but with empty trigger:
create table xt_curr2 as select level a,level b from dual connect by level<=1e4;
create or replace trigger tr_xt_curr2 before update on xt_curr2 for each row
begin
  null;
end;
/

set autot trace stat;
update xt_curr1 set b=a;
set autot off;

set autot trace stat;
update xt_curr2 set b=a;
set autot off;
set feed off

drop table xt_curr1 purge;
drop table xt_curr2 purge;

SQL> -- simple table:
SQL> create table xt_curr1 as select level a,level b from dual connect by level<=1e4;

Table created.

SQL> -- same table but with empty trigger:
SQL> create table xt_curr2 as select level a,level b from dual connect by level<=1e4;

Table created.

SQL> create or replace trigger tr_xt_curr2 before update on xt_curr2 for each row
  2  begin
  3    null;
  4  end;
  5  /

Trigger created.

SQL> update xt_curr1 set b=a;

10000 rows updated.


Statistics
----------------------------------------------------------
         25  recursive calls
      10553  db block gets
         91  consistent gets
         18  physical reads
    3101992  redo size
        560  bytes sent via SQL*Net to client
        491  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
      10000  rows processed

SQL> update xt_curr2 set b=a;

10000 rows updated.


Statistics
----------------------------------------------------------
         11  recursive calls
      20384  db block gets
         59  consistent gets
         18  physical reads
    4411724  redo size
        560  bytes sent via SQL*Net to client
        491  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
      10000  rows processed

As you can see, with the empty trigger in place the db block gets nearly double (10,553 versus 20,384) and the redo size grows from about 3.1 MB to 4.4 MB: even a trigger that does nothing forces the update to switch to row-by-row processing.

Categories: Development

Exclusive! Oracle Mobile & JET Rapid Development Kit Enablement for APAC Partners Webinar

Usable Apps - Wed, 2016-09-21 17:13

OAUX Banner

Oracle PartnerNetwork and Oracle Applications User Experience Exclusive October 2016 Webcast Announcement:

Oracle Applications User Experience PaaS4SaaS Oracle MAF and Oracle JET Rapid Development Kits Enablement

I am pleased to say our first exclusive webinar explaining the new Oracle Applications User Experience (OAUX) Oracle Mobile Application Framework (MAF) and Oracle JavaScript Extension Toolkit (JET) Rapid Development Kits (RDKs) is for our very special PaaS and SaaS friends in the APAC region on October 19, 2016 (local time).

RDKs for MAF and JET

What's an RDK?

The RDK accelerator kits are based on enterprise best practices, reusable user experience (UX) design patterns, and the Oracle Alta UI look and feel. They contain all the components, code samples, and page templates partners need to quickly design and build compelling user experiences in the cloud. At this exclusive event, you can find out about:

  • RDK sample UIs and mobile flows for Oracle Sales Cloud (CRM), Oracle Human Capital Management (HCM) and Oracle Enterprise Resource Planning (ERP) SaaS.
  • Identifying a good PaaS or SaaS use case.
  • Integrating web services with PaaS flows and smartphone “capture” features, such as cameras and microphones, with SaaS flows.
  • And more RDK good stuff revealed at the Oracle Applications User Experience Exchange at Oracle OpenWorld 2016!

These two new standalone RDK accelerators for building mobile SaaS apps and JavaScript simplified UIs for SaaS and PaaS augment the OAUX RDK capability so that partners now have even more firepower to win more cloud business with that must-have user experience. Killer mobile apps and SaaS and PaaS UIs can be in customer hands in no time!

The event registration page is now available, so sign up now!

Can't Wait?

A taster of what you can expect can be had now for free: Get your free copy of the Oracle Mobile Applications Cloud User Experience Design Patterns eBook now!

Mobile UX Design Patterns eBook

Online webinars for other regions will be announced shortly. This is your chance to get the latest RDK information from the experts!

And There's More...

In addition, we are planning to visit each region and provide hands-on enablement and deep-dive sessions about our resources at events for partners and customers worldwide.

Stay tuned. 

How to Increase Partner Opportunities using Oracle Applications User Experience Enablers

Usable Apps - Wed, 2016-09-21 16:44

By Plinio Arbizu of Oracle Gold Partner Services & Processes Solutions, Mexico

There are lots of advantages for partners by employing user experience (UX) practices to develop their business solutions. Major benefits such as productivity gains, user satisfaction, and training savings are realized during project implementation right through to the final delivery.

However, what happens when a partner is in the pre-sale project stage or when the customer is not yet convinced to invest resources to implement UX improvements?

In such situations, at S&P solutions we have found that applying design and prototyping techniques to legacy systems allows us to quickly show customers the benefits that can be achieved with an enhanced applications user experience.

This article provides details of a prototyping experience using the tools provided by the Oracle Applications User Experience (OAUX) Cloud User Experience Rapid Development Kit (RDK).

Use Case

Oracle iProcurement is part of Oracle E-Business Suite and enables organizations to perform procurement activities electronically over the internet. The user interfaces follow a classic desktop design, as shown in the following image:

Oracle iProcurement Desktop UI

In this case, we are going to enhance the Oracle iProcurement user experience, taking into account real customer requirements and suggestions about the design.

Customer Requirements

The requirements for the new design are:

1. To expedite the Receiving Goods process. Usually, users perform the following steps to complete the receiving process:

a. Visually inspect received goods.

b. Registration of the received goods on paper.

c. Registration of this paper information in Oracle iProcurement.

The proposed digital UX solution should simplify this process in order to reduce operations time.

2. The new user experience must facilitate the user to be more productive in completing tasks.

Proposed Solution

The proposed solution was composed of these important features.

  • Use of mobile devices. This would allow the receiving process to be performed flexibly from the factory receiving zone and avoid the need for paper.
  • Use of mobile device features. To reduce processing time for received goods, we can use the mobile device's camera to read the barcode on the goods and avoid typing in the codes manually.

Suggested User Interfaces

In order to implement the requirements, we suggested the following interface design:

List of Requisitions

Proposed Requisitions Mobile UI Design

Proposed Requisitions List Mobile UI Design

On this mobile interface we can see the same requisition information shown in the earlier desktop interface, but in the redesign we've applied the following recommendations to improve the user experience:

  • A flatter visual design.
  • Page content that is constrained horizontally and centered.
  • The most important content can be taken in at a glance.
  • More content is available with a wider display, in landscape mode.
  • A salient color is used to highlight – not decorate, but we ensure that we meet accessibility requirements for use of color too.
  • Gradual disclosure of key information as needed to complete the task.     

Receive Requisitions

Proposed Receive Requisitions Mobile UI Design

This interface is designed to register the receipt of goods:

  • We use colors to highlight the information. For example, we use red to identify modified information, but again we are aware of accessibility requirements.
  • Another important aspect of this design is that only information that is really useful for the business process (for example, requisition quantities) is displayed.
  • This design follows the gradual disclosure recommendation. For example, the Change Receipt Quantity and Delete Item options are not displayed by default, but only when the user needs to update information by selecting the yellow button on the right.

Resources

We used the prototyping tools provided by the OAUX Cloud UX RDK. The RDK includes an easy-to-use wireframing template and tools that enable the creation of high-fidelity wireframes that were used for creating the mobile user interfaces included in this article.

In addition to the RDK, the Oracle Applications User Experience team offers partners and developers resources and methods to improve applications user experience productively and to win business. More details can be found on the Usable Apps website.

Conclusions

Using the OAUX resources to show customers user interface designs that apply key UX principles, and comparing the new designs with existing solutions, is an excellent way to demonstrate the advantages of an improved user experience and to win customer business.

Enablement for the design and building of partner user experience solutions is available for free. Use the resources and guidance offered by the OAUX team on the Usable Apps website and the information on the Usable Apps blog.

These resources and this knowledge enable Oracle partners and developers to improve their skills and to offer customers more than technical expertise: a complete user experience that is easy and fast to design, build, and implement.

About the Author

Plinio Arbizu is an Oracle Applications User Experience Designated Speaker and Oracle Technology Network ACE Director. He has more than 15 years of professional experience in projects using Oracle products such as Oracle Forms, Oracle ADF, Oracle WebCenter, and Oracle Mobile Application Framework (MAF), and is one of the authors of Web 2.0 Solutions with Oracle WebCenter 11g.

Plinio is a frequent speaker at Latin America events and is one of the coordinators of ORAMEX (Grupo de Usuarios Oracle de México/Oracle User Group Mexico), as featured on the Usable Apps blog. Plinio also holds UX certification from the Nielsen Norman Group.

Lessons in Marketing from Macy's

WebCenter Team - Wed, 2016-09-21 16:04

I am from New York, and Macy’s is a staple brand for me. We can all recognize that red star anywhere we see it; we can picture their showrooms, the store’s brand displays, the floor upon floor of fashion clothing, accessories, home and beauty products. All of us, especially New Yorkers, enjoy and are well versed in Macy’s Fireworks and their world-famous Thanksgiving Parade.

For a brand to evoke such strong associations and positive recognition, it must clearly be doing something right in marketing (and customer service, too)!

Well, here’s our big chance to learn a thing or two from Macy’s. If you are at Oracle OpenWorld 2016 this week, then don’t miss this Thursday’s session:

Digital Asset Management for Modern Enterprises [CON7267]
Thursday, September 22 | 10:45 a.m. – 11:30 a.m. | Moscone W 2014
Marc Andre-Houle, Senior Product Manager, Oracle
Christopher Kung, Vice President, Macy's China Limited
Rebecca Li, Senior Manager, Macy's China Ltd.

As a marketer, you recognize the big challenge of managing digital assets across multiple channels and customer-facing platforms: from content creation and collaboration, to review and approvals, to multi-channel content publication and delivery. Learn how Oracle Digital Asset Management is looking to solve these marketers’ challenges.

And hear from Macy’s China leadership on how Macy’s leverages technology to stay ahead of the competition and drive brand leadership and marketing results. Macy’s focuses its efforts on customer satisfaction and it starts even before you become their customer. Learn how Macy’s marketing strategy starts with content creation using digital technologies and goes all the way to publication and delivery.



Tweet along with #OracleDX and #OOW16 and share your thoughts, lessons and experiences too.


#OOW16: Highlights from Days 1-3

WebCenter Team - Wed, 2016-09-21 10:49

Well here we are at day 3 already of Oracle OpenWorld, and we’ve heard some pretty exciting announcements this week!

Larry Ellison kicked off Sunday's keynote by showcasing industry-first cloud innovations and announcing the release of Oracle's second generation Infrastructure as a Service (IaaS) data centers. Ellison introduced Oracle Database 12c Release 2, as well as more than 20 new Oracle Cloud Platform and Application services that extend choice and access for customers and partners globally. Industry-first innovations highlighted include a new Database as a Service (DBaaS) that is orders of magnitude faster and more scalable than other Cloud database services, and new Adaptive Intelligence Applications that use machine learning to power the next generation of Cloud applications.

"You will see as we develop features for the cloud, we'll also start delivering our software in the cloud first. Clearly it's going to go on-premises, but the first deployment of our database and a lot of our software now is going to go to the cloud first." - Larry Ellison, Executive Chairman & CTO, Oracle


Larry also showed off a new platform for building and running ChatBots! Our Oracle ChatBot Platform is a simple interface for enterprise applications, and the ChatBots made with it can work in modern messaging apps like Facebook Messenger, Slack, and Kik. The tool doesn’t require you to write in Java, PHP, or even JavaScript; instead, it involves dragging and dropping in a graphical user interface.

Monday brought a full day of sessions covering our strategy & vision for our Content and Experience Management solutions, as well as a general session where Inderjeet Singh, EVP, Fusion Middleware Development, Oracle, showed the capabilities of Oracle’s middleware platform-as-a-service solutions for cloud-native application development, API management, data and enterprise integration, the Internet of Things, big data, machine learning, data visualization, and mobile ChatBots. He showcased how to easily move existing Oracle and non-Oracle workloads to the cloud; how to extend and integrate on-premises and software-as-a-service applications; and how to build API-first, mobile-fronted applications with portability across public and on-premises clouds.

The Oracle OpenWorld Content and Experience Management Website has been such a seamless and easy way for attendees to have all the important information right at their fingertips this week! Live Twitter feeds and pictures allow attendees to experience the conference in real time. Have you tweeted using hashtags #OracleDX or #OOW16? Your tweets will appear on the site! How cool is that? We hope you'll give it a try if you haven't already. https://sites.oracle.com/oowdx

The Oracle Mobile Challenge has been a huge hit this week! While attendees have been going to sessions, visiting vendors in the exhibit hall (Moscone South) and networking with experts, they've been able to add in some fun and some have won some great prizes! Throughout the week, Samsung has given away Galaxy smartphones, 360 Gear VR Cameras, and Gear VR Goggles! There is still time for you to play, as today will be the last day to participate (although we still hope you attend sessions and visit with us tomorrow). Here you can see who Monday's winners were!

As we approach the last 2 days of OpenWorld, we hope you have had a great start to your week and look forward to meeting with you and talking further! Be sure to join in on the conversations!


Wednesday at OpenWorld and JavaOne

OTN TechBlog - Wed, 2016-09-21 10:24

On Wednesday, activities at the OTN Lounge and Java Hub are still in full swing.

Have you visited the IoT Workshop in the OTN Lounge? Interact with your fellow attendees by getting your own wi-fi microcontroller and connecting it to Oracle IoT Cloud Service. OTN is partnering with the AppsLab, the OAUX emerging technologies team, to make this happen. Don’t forget to download the app to participate in the workshop.

The Java Hub Gaming Station is in its last day! Play your favorite Nintendo games on a retro handheld console, powered by Java, running on a Raspberry Pi, and printed on a 3D printer. It is optimized using Flight Recorder and is connected to the Oracle Cloud.

This past Sunday the OTN Lounge hosted the Women in Technology (WIT) Panel. They will be at the Java Hub today at 1:30. Visit the Oracle WIT community space to learn more and join the conversation. It's a place to share your knowledge, technical resources and engage with other women throughout the Oracle technical community. Make sure to include #OTNWIT with posts you'd like to share!

Stop by the OTN Lounge to watch a CommunityCast taping. Today's tapings feature the winners of the OFM PaaS Oracle Innovation Awards, the SQL authors and book signing, and building SPARC Cloud Infrastructure with OpenStack.

The OTN Systems Hub has the schedule for the Systems sessions at OpenWorld and interviews at the OTN Lounge, with Jamie Iles discussing security and downtime at 10 a.m.

Visit the OTN Archbeat Blog to learn more about specific ACE sessions and stop by the OTN Lounge to watch the taping of the infamous Two Minute Tech Tips.


Tonight the OTN and Pythian are sponsoring the Oracle Bloggers Meetup. It will be held at Jillian’s @ Metreon 5:30 – 7:00 pm. Following the meetup is the Oracle Appreciation Event featuring Multi-GRAMMY Award-Winning Artists Sting and Gwen Stefani.
