Feed aggregator

Parallel Execution -- 1b The PARALLEL Hint and AutoDoP (contd)

Hemant K Chitale - Mon, 2015-03-02 09:38
Continuing the previous thread, having restarted the database again, with the same CPU_COUNT and missing I/O Calibration statistics  ....

The question this time is : What if the table level DoP is specifically 1 ?

[oracle@localhost ~]$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Mon Mar 2 23:22:28 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SYS>show parameter cpu

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cpu_count integer 4
parallel_threads_per_cpu integer 2
resource_manager_cpu_allocation integer 4
SYS>show parameter parallel

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
fast_start_parallel_rollback string LOW
parallel_adaptive_multi_user boolean FALSE
parallel_automatic_tuning boolean FALSE
parallel_degree_limit string CPU
parallel_degree_policy string MANUAL
parallel_execution_message_size integer 16384
parallel_force_local boolean FALSE
parallel_instance_group string
parallel_io_cap_enabled boolean FALSE
parallel_max_servers integer 135
parallel_min_percent integer 0
parallel_min_servers integer 0
parallel_min_time_threshold string AUTO
parallel_server boolean FALSE
parallel_server_instances integer 1
parallel_servers_target integer 64
parallel_threads_per_cpu integer 2
recovery_parallelism integer 0
SYS>
SYS>select * from dba_rsrc_io_calibrate;

no rows selected

SYS>
SYS>connect hemant/hemant
Connected.
HEMANT>set serveroutput off
HEMANT>select degree from user_tables where table_name = 'LARGE_TABLE';

DEGREE
----------------------------------------
4

HEMANT>alter table large_table parallel 1;

Table altered.

HEMANT>select degree from user_tables where table_name = 'LARGE_TABLE';

DEGREE
----------------------------------------
1

HEMANT>select /*+ PARALLEL */ count(*) from LARGE_TABLE;

COUNT(*)
----------
4802944

HEMANT>select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 47m7qyrj6uzqn, child number 0
-------------------------------------
select /*+ PARALLEL */ count(*) from LARGE_TABLE

Plan hash value: 2085386270

-----------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | 2622 (100)| | | | |
| 1 | SORT AGGREGATE | | 1 | | | | | |
| 2 | PX COORDINATOR | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10000 | 1 | | | Q1,00 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | 1 | | | Q1,00 | PCWP | |
| 5 | PX BLOCK ITERATOR | | 4802K| 2622 (1)| 00:00:32 | Q1,00 | PCWC | |
|* 6 | TABLE ACCESS FULL| LARGE_TABLE | 4802K| 2622 (1)| 00:00:32 | Q1,00 | PCWP | |
-----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

6 - access(:Z>=:Z AND :Z<=:Z)

Note
-----
- automatic DOP: skipped because of IO calibrate statistics are missing


27 rows selected.

HEMANT>select px_servers_executions from v$sqlstats where sql_id='47m7qyrj6uzqn';

PX_SERVERS_EXECUTIONS
---------------------
8

HEMANT>

Aaha ! Again ! The same SQL statement, the same SQL_ID, the same Execution Plan (Plan Hash Value), and Oracle chose to use 8 PX Servers for the query !  Again, ignoring the table-level DoP (of 1).

So, once again, we see that Oracle actually computes a DoP that looks like it is CPU_COUNT x PARALLEL_THREADS_PER_CPU. Let's verify this.
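
Before that, a quick aside (my own cross-check, not part of the original demo): the same session can also confirm the degree actually used by querying V$PQ_SESSTAT immediately after the parallel query:

select statistic, last_query, session_total from v$pq_sesstat;

The "Server Threads" row should line up with the 8 PX server executions reported by V$SQLSTATS above.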

HEMANT>connect / as sysdba
Connected.
SYS>alter system set parallel_threads_per_cpu=4;

System altered.

SYS>alter system flush shared_pool;

System altered.

SYS>connect hemant/hemant
Connected.
HEMANT>select degree from user_tables where table_name = 'LARGE_TABLE';

DEGREE
----------------------------------------
1

HEMANT>set serveroutput off
HEMANT>select /*+ PARALLEL */ count(*) from Large_Table;

COUNT(*)
----------
4802944

HEMANT>select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 8b0ybuspqu0mm, child number 0
-------------------------------------
select /*+ PARALLEL */ count(*) from Large_Table

Plan hash value: 2085386270

-----------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | 1311 (100)| | | | |
| 1 | SORT AGGREGATE | | 1 | | | | | |
| 2 | PX COORDINATOR | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10000 | 1 | | | Q1,00 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | 1 | | | Q1,00 | PCWP | |
| 5 | PX BLOCK ITERATOR | | 4802K| 1311 (1)| 00:00:16 | Q1,00 | PCWC | |
|* 6 | TABLE ACCESS FULL| LARGE_TABLE | 4802K| 1311 (1)| 00:00:16 | Q1,00 | PCWP | |
-----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

6 - access(:Z>=:Z AND :Z<=:Z)

Note
-----
- automatic DOP: skipped because of IO calibrate statistics are missing


27 rows selected.

HEMANT>select px_servers_executions from v$sqlstats where sql_id='8b0ybuspqu0mm';

PX_SERVERS_EXECUTIONS
---------------------
16

HEMANT>

YES SIR ! Oracle chose to use 16 PX Servers this time. So that does look like CPU_COUNT x PARALLEL_THREADS_PER_CPU.  Have you also noticed the COST ?  The COST has also dropped to half (from 2622 to 1311).  So, the COST is also computed based on the number of PX Servers that it expects to be able to grab and use.
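
As an aside (my sketch, not part of the original demo): if you wanted AutoDoP to actually engage instead of being skipped, the missing I/O calibration statistics can be gathered with DBMS_RESOURCE_MANAGER.CALIBRATE_IO. The disk count and latency values below are placeholders for your own storage:

SET SERVEROUTPUT ON
DECLARE
  l_max_iops  PLS_INTEGER;
  l_max_mbps  PLS_INTEGER;
  l_latency   PLS_INTEGER;
BEGIN
  DBMS_RESOURCE_MANAGER.CALIBRATE_IO(
    num_physical_disks => 4,    -- placeholder: number of physical disks/LUNs
    max_latency        => 20,   -- placeholder: acceptable latency in milliseconds
    max_iops           => l_max_iops,
    max_mbps           => l_max_mbps,
    actual_latency     => l_latency);
  DBMS_OUTPUT.PUT_LINE('max_iops='||l_max_iops||' max_mbps='||l_max_mbps||' latency='||l_latency);
END;
/

-- DBA_RSRC_IO_CALIBRATE, which was empty earlier in this post, should then return a row,
-- and the "skipped because of IO calibrate statistics are missing" note should disappear.
SELECT * FROM dba_rsrc_io_calibrate;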

.
.
.


Categories: DBA Blogs

Oracle Data Provider for .NET now on NuGet

Christian Shay - Mon, 2015-03-02 08:30

ODP.NET, Managed Driver is now on NuGet, meaning that you can add ODP.NET to your Visual Studio project with just a few clicks in the NuGet Package Manager. We've also published an Oracle By Example walkthrough to take you step by step through the process of using NuGet and ODP.NET.

Here we are in the NuGet Package Manager:




When searching for us in the package manager, make sure to get the official package - look for the word "Official" in the title.



There are actually two NuGet packages available:

ODP.NET, Managed Driver - Official
NuGet id: Oracle.ManagedDataAccess

This adds Oracle.ManagedDataAccess.dll to your project and also makes needed configuration entries in your app.config or web.config.


ODP.NET, Managed Entity Framework Driver - Official
NuGet id: Oracle.ManagedDataAccess.EntityFramework

This adds Oracle.ManagedDataAccess.EntityFramework.dll and makes the needed config file entries. It also has a dependency on the ODP.NET package above and will pull it into your project, as well as EF 6 if needed.

If you want to host this package on your local intranet, it is also available for download on the OTN .NET download page.

Please note that if you want to use Visual Studio integration features, such as browsing your Oracle Schema in Server Explorer, or using Entity Designer or Table Adapter Configuration wizard, you should still install Oracle Developer Tools for Visual Studio, as a NuGet package  does not provide any of the Visual Studio integration components needed to do design time work.

IR Scrolling - With a Little Help From My Friends

Denes Kubicek - Mon, 2015-03-02 04:02
If you are working with interactive reports, you will certainly be faced with the problem of wide reports. If you are taking care of the page layout and perhaps have more than just an interactive report on the page, you will want to limit its size to something sensible. The first problem appears if you limit the width by setting the region attribute to something like this:

style="width:830px"

and you will not see some of the columns:



If you add scrolling by wrapping the region in a div, adding the following to the region header:

<div style="width:810px;overflow-x:scroll">

and closing it in the footer by adding:

</div>



you will be able to scroll, but with two ugly side effects:

  • The action bar will be included in the scrolling as well and disappear as you scroll to the right.
  • The sort widgets for the columns will appear in the wrong position, and increasingly so the more you scroll.




  You can solve this problem in the following way:

  • Remove the scrolling DIV from the region header / footer.
  • Use this JavaScript in the page Function and Global Variable Declaration:

    function onload_ir(p_width, p_report_id){
      // wrap the IR data panel in a new div that carries the horizontal scroll bar
      $('<div id="scroll_me" style="width:' + p_width + 'px;overflow-x:auto;display:inline-block"></div>').insertBefore('#apexir_DATA_PANEL');
      $("#apexir_DATA_PANEL").appendTo("#scroll_me");
      $("#apexir_DATA_PANEL").show();

      // overload the IR "finished loading" callback so that the sort widget
      // is repositioned according to the current horizontal scroll offset
      var or_Finished_Loading = gReport._Finished_Loading;
      gReport._Finished_Loading = function(){
        or_Finished_Loading();
        if (gReport.current_control == 'SORT_WIDGET') {
          var offset_pos = $("#" + p_report_id).position().left;
          var pos_left = $('#apexir_rollover').css('left');
          pos_left = pos_left.replace('px', '');
          if (pos_left > p_width - 100) {
            var new_pos = parseFloat(pos_left) + parseFloat(offset_pos) - 25;
            $('#apexir_rollover').css('left', new_pos + 'px');
          }
        }
      };
    }


  • Create a Dynamic Action which runs on page load and execute this script there:

    onload_ir(810, 7990109002761687)


  • 810 is the width of the scrolling region, which is a bit less than the total width of the region.

  • 7990109002761687 is the id of the data grid of the interactive report. You can find this id if you use Firebug and scroll to the point where the data grid is placed.




  What this script does is:

  • It wraps the data grid into an additional div and adds a scroll bar to it.
  • It overwrites the IR onload function and adds a sort-widget positioning function to it, in order to reposition the widget according to the scrolling.
  • The important part of the overloading function was done by Tom Petrus, who is a big help when it comes to tricky stuff like this.

    Now, once you have done that, your report will show up properly once you scroll it.



    Enjoy.
    Categories: Development

    The EBS Technology Codelevel Checker (available as Patch 17537119) needs to be run on the following nodes

    Vikram Das - Sun, 2015-03-01 14:53
    I got this error while upgrading an R12.1.3 instance to R12.2.4, when I completed AD.C.Delta 5 patches with November 2014 bundle patches for AD.C and was in the process of applying TXK.C.Delta5 with November 2014 bundle patches for TXK.C :

    Validation successful. All expected nodes are listed in ADOP_VALID_NODES table.
    [START 2015/03/01 04:53:16] Check if services are down
            [INFO] Run admin server is not down
         [WARNING]  Hotpatch mode should only be used when directed by the patch readme.
      [EVENT]     [START 2015/03/01 04:53:17] Performing database sanity checks
        [ERROR]     The EBS Technology Codelevel Checker (available as Patch 17537119) needs to be run on the following nodes: .
        Log file: /erppgzb1/erpapp/fs_ne/EBSapps/log/adop/adop_20150301_045249.log


    [STATEMENT] Please run adopscanlog utility, using the command

    "adopscanlog -latest=yes"

    to get the list of the log files along with snippet of the error message corresponding to each log file.


    adop exiting with status = 1 (Fail)

    I was really surprised, as I had already run the EBS Technology Codelevel Checker (patch 17537119) script checkDBpatch.sh on racnode1.
    To investigate, I checked inside checkDBpatch.sh and found that it creates a table called TXK_TCC_RESULTS.
    SQL> desc txk_tcc_results
     Name                                      Null?    Type
     ----------------------------------------- -------- ----------------------------
     TCC_VERSION                               NOT NULL VARCHAR2(20)
     BUGFIX_XML_VERSION                        NOT NULL VARCHAR2(20)
     NODE_NAME                                 NOT NULL VARCHAR2(100)
     DATABASE_NAME                             NOT NULL VARCHAR2(64)
     COMPONENT_NAME                            NOT NULL VARCHAR2(10)
     COMPONENT_VERSION                         NOT NULL VARCHAR2(20)
     COMPONENT_HOME                                      VARCHAR2(600)
     CHECK_DATE                                          DATE
     CHECK_RESULT                              NOT NULL VARCHAR2(10)
     CHECK_MESSAGE                                       VARCHAR2(4000)
    SQL> select node_name from txk_tcc_results;

    NODE_NAME
    --------------------------------------------------------------------------------
    RACNODE1
    I ran checkDBpatch.sh again, but the patch failed again with previous error:
       [ERROR]     The EBS Technology Codelevel Checker (available as Patch 17537119) needs to be run on the following nodes: .
    It was already 5 AM on Saturday, and I had been working through the night.  So I thought it better to sleep and tackle this on Sunday.  On Sunday morning, after a late breakfast, I looked at the problem again.  This time I realized that the error was about racnode1 (in lower case) while the txk_tcc_results table had RACNODE1 (in upper case).  To test my hunch, I immediately updated the value:
    update txk_tcc_results set node_name='racnode1' where node_name='RACNODE1';
    commit;
    I restarted the patch, and it went through.  The patch was indeed failing because it was looking for a lower case value.  I will probably log an SR with Oracle, so that they change their code to make the node_name check case-insensitive.
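
    As a side note (a sketch of mine, not something from the patch logs): before rerunning adop it may be worth checking whether any node names recorded by the checker differ only in case from what FND_NODES holds. Run as APPS, assuming both tables are populated, something like:

    select n.node_name as fnd_node_name,
           t.node_name as tcc_node_name
    from   fnd_nodes n, txk_tcc_results t
    where  upper(n.node_name) = upper(t.node_name)
    and    n.node_name <> t.node_name;

    Any rows returned point at the same kind of case mismatch that tripped up this patch run.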

    Further, I was curious why node_name was stored in all caps in fnd_nodes and txk_tcc_results.  The file /etc/hosts had it in lowercase.  I tried the hostname command at the Linux prompt:

    $ hostname
    RACNODE1

    That was something unusual, as in our environment hostname always returns the value in lowercase.  So I investigated further.
    [root@RACNODE1 ~]# sysctl kernel.hostname
    kernel.hostname = RACNODE1

    So I changed it

    [root@RACNODE1 ~]# sysctl kernel.hostname=racnode1
    kernel.hostname = racnode1
    [root@RACNODE1 ~]# sysctl kernel.hostname
    kernel.hostname = racnode1
    [root@RACNODE1 ~]#
    [root@RACNODE1 ~]# hostname
    racnode1
    Logged in again to see if root prompt changed:
    [root@racnode1 ~]#

    I also checked
    [root@tsgld5811 ~]# cat /etc/sysconfig/network
    NETWORKING=yes
    NETWORKING_IPV6=no
    NOZEROCONF=yes
    HOSTNAME=RACNODE1

    Changed it here also:
    [root@tsgld5811 ~]# cat /etc/sysconfig/network
    NETWORKING=yes
    NETWORKING_IPV6=no
    NOZEROCONF=yes
    HOSTNAME=racnode1

    I also changed it on racnode2.
    Categories: APPS Blogs

    Alternate Ledes for CUNY Study on Raising Graduation Rates

    Michael Feldstein - Sun, 2015-03-01 14:23

    By Phil HillMore Posts (291)

    Last week MDRC released a study on the City University of New York’s (CUNY) Accelerated Study in Associate Programs (ASAP) in near-breathless terms.

    Title page

    • ASAP was well implemented. The program provided students with a wide array of services over a three-year period, and effectively communicated requirements and other messages.
    • ASAP substantially improved students’ academic outcomes over three years, almost doubling graduation rates. ASAP increased enrollment in college and had especially large effects during the winter and summer intersessions. On average, program group students earned 48 credits in three years, 9 credits more than did control group students. By the end of the study period, 40 percent of the program group had received a degree, compared with 22 percent of the control group. At that point, 25 percent of the program group was enrolled in a four-year school, compared with 17 percent of the control group.
    • At the three-year point, the cost per degree was lower in ASAP than in the control condition. Because the program generated so many more graduates than the usual college services, the cost per degree was lower despite the substantial investment required to operate the program.

    Accordingly the media followed suit with breathless coverage[1]. Consider this from Inside Higher Ed and their article titled “Living Up to the Hype”:

    Now that firm results are in, across several different institutions, CUNY is confident it has cracked the formula for getting students to the finish line.

    “It doesn’t matter that you have a particularly talented director or a president who pays attention. The model works,” said John Mogulescu, the senior university dean for academic affairs and the dean of the CUNY School of Professional Studies. “For us it’s a breakthrough program.”

    MDRC and CUNY also claim that “cracking the code” means that other schools can benefit, as described earlier in the article:

    “We’re hoping to extend that work with CUNY to other colleges around the country,” said Michael J. Weiss, a senior associate with MDRC who coauthored the study.

    Unfortunately . . .

    If you read the report itself, the data doesn’t back up the bold claims in the executive summary and in the media. A more accurate summary might be:

    For the declining number of young, living-with-parents community college students planning to attend full-time, CUNY has explored how to increase student success while avoiding any changes in the classroom. The study found that a package of interventions requiring full-time enrollment, increasing per-student expenditures by 63%, and providing aggressive advising as well as priority access to courses can increase enrollment by 22%, inclusive of term-to-term retention. At the 3-year mark these combined changes translate into an 82% increase in graduation rates, but it is unknown if any changes to the interventions would affect the results, and it is unknown what results would occur at the 4-year mark. Furthermore, it is unclear whether this program can scale due to priority course access and effects on the growing non-traditional student population. If a state sets performance-funding based on 3-year graduation rates and nothing else, this program could even reduce costs.

    Luckily, the report is very well documented, so nothing is hidden. What are the problems that would lead to this alternate description?

    • This study is only for one segment of the population: students willing to go full-time, who are first-time students, low income, and have one or two developmental course requirements (not zero, not three or more). This targeted less than one-fourth of the CUNY 2-year student population, where 73% live at home with parents and 77% are younger than 22. For the rest, including the growing working-adult population:

    (p. 92): It is unclear, however, what the effects might be with a different target group, such as low-income parents. It is also unclear what outcomes an ASAP-type program that did not require full-time enrollment would yield.

    • The study required full-time enrollment (12 credits attempted per term) and only evaluated 3-year graduation rates, which almost explains the results by itself. Do the math (24 credits per year over 3 years, minus 3–6 credits for developmental courses that don’t count toward the degree) and you see that going “full-time” and getting 66 credits is likely the only way to graduate with a 60-credit associate’s degree in 3 years. As the report itself states:

    (p. 85): It is likely that ASAP’s full-time enrollment requirement, coupled with multiple supports to facilitate that enrollment, were central to the program’s success.

    • The study created a special class of students with priority enrollment. One of the biggest challenges of public colleges is for students to even have access to the courses they need. The ASAP students were given priority enrollment as the report itself states:

    (p. 34): In addition, students were able to register for classes early in every semester they participated in the program. This feature allowed ASAP students to create convenient schedules and have a better chance of enrolling in all the classes they need. Early registration may be especially beneficial for students who need to enroll in classes that are often oversubscribed, such as popular general education requirements or developmental courses, and for students in their final semesters as they complete the last courses they need to graduate.

    • The study made no attempt to understand the many variables at play. There were a plethora of interventions – full-time enrollment requirement, priority enrollment, special seminars, reduced load on advisers, etc. Yet we have no idea which components lead to which effects. From the report

    (p. 85): What drove the large effects found in the study and which of ASAP’s components were most important in improving students’ academic outcomes? MDRC’s evaluation was not designed to definitively answer that question. Ultimately, each component in ASAP had the potential to affect students’ experiences in college, and MDRC’s evaluation estimates the effect of ASAP’s full package of services on students’ academic outcomes.

    • The study made no changes at all to actual teaching and learning practices. It almost seems this was the point: to find out how we can change everything except teaching and learning to get students to enroll full-time. From the report:

    (p. 34): ASAP did not make changes to pedagogy, curricula, or anything else that happened inside of the classroom.

    What Do We Have Left?

    In the end this was a study on pulling out all of the non-teaching stops to see if we can get students to enroll full-time. Target only students willing to go full-time, then constantly advise them to enroll full-time and stick with it, and remove as many financial barriers (fund gap between cost and financial aid, free textbooks, gas cards, etc) as is feasible. With all of this effort, the real result of the study is that they increased the number of credits attempted and credits earned by 22%.

    We already know that full-time enrollment is the biggest variable for graduation rates in community colleges, especially if measured over 4 years or less. Look at the recent National Student Clearinghouse report at a national level (tables 11-13):

    • Community college 4-year completion rate for exclusively part-time students: 2.32%
    • Community college 4-year completion rate for mixed enrollment students (some terms FT, some PT): 14.25%
    • Community college 4-year completion rate for exclusively full-time students: 27.55%

    And that data is for 4 years – 3 years would have been more dramatic simply due to the fact that it’s almost impossible to get 60 credits if you don’t take at least 12 credits per term over 3 years.

    What About Cost Analysis?

    The study showed that CUNY spent approximately 63% more per student for the program compared to the control group. The bigger claim, however, is that cost per graduate is actually lower (163% of the cost with 182% of the graduates). But what about the students who don’t graduate or transfer? What about the students who graduate in 4 years instead of 3? Colleges spend money on all their students, and most community college students (60%) can only go part-time and will never be able to graduate in 3 years.

    Even if you factor in performance-based funding, using a 3-year graduation basis is misleading. No state is considering funding only for 3-year successful graduation. If that were so, I have a much easier solution – refuse to admit any students seeking less than 12 credits per term. That will produce dramatic cost savings and dramatic increases in graduation rates . . . as long as you’re willing to completely ignore the traditional community college mission that includes:

    serv[ing] all segments of society through an open-access admissions policy that offers equal and fair treatment to all students

    Can It Scale?

    Despite the claims that “the model works” and that CUNY has cracked the formula, does the report actually support this claim? Specifically, can this program scale?

    First of all, the report only makes its claims for a small percentage of students that are predominantly young and live at home with their parents – we don’t know if it applies beyond the target group as the report itself calls out.

    But within this target group, I think there are big problems with scaling. One of which is the priority enrollment in all courses, including oversubscribed courses and those available at convenient times. The control group was at a disadvantage as were all non-target students (including the growing working adult population and students going back to school). This priority enrollment approach is based on scarcity, and the very nature of scaling the program will reduce the benefits of the intervention.

    I have Premier Silver status at United Airlines thanks to a few international trips. If this status gave me realistic priority access to first-class upgrades, then I would be more likely to fly United on a routine basis. As it is, however, I often show up at the gate and see myself #30 or higher in line for first-class upgrades when the cabin only has 5-10 first-class upgrades available. The priority status has lost most of its benefits as United has scaled such that more than a quarter of all passengers on many routes also have priority status.

    CUNY plans to scale from 456 students in the ASAP study all the way up to 13,000 students in the next two years. Assuming even distribution over two years, this changes the group size from 1% of the entering freshman population to 19%. Won’t that make a dramatic difference in how easy it will be for ASAP students to get into the classes and convenient class times they seek? And doesn’t this program conflict with the goals of offering “equal and fair treatment to all students”?

    Alternate Ledes for Media Coverage of Study

    I realize my description above is too lengthy for media ledes, so here are some others that might be useful:

    • CUNY and MDRC prove that enrollment correlates with graduation time.
    • Requiring full-time enrollment and giving special access to courses leads to more full-time enrollment.
    • What would it cost to double an artificial metric without asking faculty to change any classroom activities? 63% more per student.
    Don’t Get Me Wrong

    I’m all for spending money and trying new approaches to help students succeed, including raising graduation rates. I’m also for increasing the focus on out-of-classroom support services to help students. I’m also glad that CUNY is investing in a program to benefit its own students.

    However, the executive summary of this report and the resultant media coverage are misleading. We have not cracked the formula, CUNY is not ready to scale this program or export to other colleges, and taking the executive summary claims at face value is risky at best. The community would be better served if CUNY:

    • Made some effort to separate variables and effect on enrollment and graduation rates;
    • Extended the study to also look at more realistic 4-year graduation rates in addition to 3-year rates;
    • Included an analysis of diminishing benefits from priority course access; and
    • Performed a cost analysis based on the actual or planned funding models for community colleges.
    1. And this article comes from a reporter for whom I have tremendous respect.

    The post Alternate Ledes for CUNY Study on Raising Graduation Rates appeared first on e-Literate.

    Installing Oracle XE on CentOS

    The Anti-Kyte - Sun, 2015-03-01 11:33

    Another Cricket World Cup is underway. England are fulfilling their traditional role of making all of the other teams look like world beaters.
    To take my mind off this excruciating spectacle, I’ll concentrate this week on installing Oracle XE 11g on CentOS 7.

    Before I get into the nuts and bolts of the installation…

    Flavours of Linux

    Whilst there are many Linux Distros out there, they all share the same common Linux Kernel. Within this there are a few Distros upon which most others are based.
    Debian provides the basis for Ubuntu and Mint among others.
    It uses the .deb package format.

    Red Hat Linux, in contrast, uses the RPM file format for its packages. Red Hat is the basis for Distros such as Fedora, CentOS…and Oracle Linux.

    For this reason, the Oracle Express Edition Linux version is packaged using rpm.
    Whilst it is possible to deploy it to a Debian-based Distro – instructions for which are available here – deploying on CentOS is rather more straightforward.
    More straightforward, but not entirely so, as we will discover shortly…

    Getting Oracle Express Edition 11G

    Open your web browser and head over to the Oracle Express Edition download page.

    You’ll need to register for an account if you don’t already have one but it is free.

    The file you need to download is listed under :

    Oracle Express Edition 11g Release 2 for Linux x64.

    NOTE XE 11G only comes in the 64-bit variety for Linux. If you’re running a 32-bit version of your Distro, then you’re out of luck as far as 11G is concerned.

    If you’re not sure whether you’re on 32-bit or 64-bit, the following command will help you :

    uname -i
    

    If this returns x86_64 then your OS is 64-bit.

    Installing XE

    You should now have downloaded the zipped rpm file which will look something like this :

    cd $HOME/Downloads
    ls -l
    -rwxrwx---. 1 mike mike 315891481 Dec 16 20:21 oracle-xe-11.2.0-1.0.x86_64.rpm.zip
    

    The next step is to uncompress…

     unzip oracle-xe-11.2.0-1.0.x86_64.rpm.zip
    

    When you run this, the output will look like this :

       creating: Disk1/
       creating: Disk1/upgrade/
      inflating: Disk1/upgrade/gen_inst.sql  
       creating: Disk1/response/
      inflating: Disk1/response/xe.rsp   
      inflating: Disk1/oracle-xe-11.2.0-1.0.x86_64.rpm 
    

    You now need to switch to the newly created Disk1 directory and become root

    cd Disk1
    su
    

    …and then install the package…

    rpm -ivh oracle-xe-11.2.0-1.0.x86_64.rpm
    

    If all goes well you should see…

    Preparing...                          ################################# [100%]
    Updating / installing...
       1:oracle-xe-11.2.0-1.0             ################################# [100%]
    Executing post-install steps...
    You must run '/etc/init.d/oracle-xe configure' as the root user to configure the database.
    
    Configuring XE

    The configuration will prompt you for:

    1. the APEX http port (8080 by default)
    2. the database (TNS) listener port (1521 by default)
    3. A single password to be assigned to the database SYS and SYSTEM users
    4. whether you want the database to start automatically when the system starts (Yes by default)

    Unless you have other software, or Oracle Instances, running elsewhere, the defaults should be fine.

    Here we go then, still as root, run :

    /etc/init.d/oracle-xe configure
    

    The output, complete with the prompts will be something like :

    Oracle Database 11g Express Edition Configuration
    -------------------------------------------------
    This will configure on-boot properties of Oracle Database 11g Express 
    Edition.  The following questions will determine whether the database should 
    be starting upon system boot, the ports it will use, and the passwords that 
    will be used for database accounts.  Press <Enter> to accept the defaults. 
    Ctrl-C will abort.
    
    Specify the HTTP port that will be used for Oracle Application Express [8080]:8081
    
    Specify a port that will be used for the database listener [1521]:1525
    
    Specify a password to be used for database accounts.  Note that the same
    password will be used for SYS and SYSTEM.  Oracle recommends the use of 
    different passwords for each database account.  This can be done after 
    initial configuration:
    Confirm the password:
    
    Do you want Oracle Database 11g Express Edition to be started on boot (y/n) [y]:y
    
    Starting Oracle Net Listener...Done
    Configuring database...Done
    Starting Oracle Database 11g Express Edition instance...Done
    Installation completed successfully.
    

    Congratulations, you now have a running database. The first thing to do with it, however, is to shut it down.
    In fact, we need to do a re-start so that the menu items that have been added as part of the installation are visible.
    So, re-boot.

    NOTE – from this point on you can stop being root (although you may need to sudo occasionally).

    Once the system comes back, you will see the new Menu icons in the Applications menu under others :

    oracle_menu

    Just to confirm that your database is up and running, you can select the Run SQL Command Line option from this menu
    and run the following :

    
    conn system/pwd
    select sysdate from dual
    /
    

    This should return the current date.

    Sorting out the Environment Variables

    In the normal run of things, this is the one fiddly bit. There is a bug in one of the scripts Oracle uses to set the environment variables which may cause issues.

    To start with, let’s have a look at the main environment script…

    cat /u01/app/oracle/product/11.2.0/xe/bin/oracle_env.sh
    

    This script is as follows :

    export ORACLE_HOME=/u01/app/oracle/product/11.2.0/xe
    export ORACLE_SID=XE
    export NLS_LANG=`$ORACLE_HOME/bin/nls_lang.sh`
    export ORACLE_BASE=/u01/app/oracle
    export PATH=$ORACLE_HOME/bin:$PATH
    

    There is a bug in the nls_lang.sh script that is called from here. If your NLS_LANG value contains a space, then it will not be configured correctly. A full list of the affected NLS_LANG values is available in the Oracle XE Installation Guide for Debian-based systems I mentioned earlier.

    The easiest way to fix this is to just edit the script :

    sudo gedit /u01/app/oracle/product/11.2.0/xe/bin/nls_lang.sh
    

    Right at the bottom of the script where it says :

    # construct the NLS_LANG
    #
    NLS_LANG=${nlslang}.${charset}
    
    echo $NLS_LANG
    

    …amend it so that the $NLS_LANG value is quoted :

    # construct the NLS_LANG
    #
    NLS_LANG="${nlslang}.${charset}"
    
    echo $NLS_LANG
    

    To test the change and make sure everything is now working properly…

    cd /u01/app/oracle/product/11.2.0/xe/bin
    
    . ./oracle_env.sh
    echo $ORACLE_HOME
    echo $ORACLE_SID
    echo $NLS_LANG
    echo $PATH
    

    You should now see the following environment variable settings :

    echo $ORACLE_HOME
    /u01/app/oracle/product/11.2.0/xe
    echo $ORACLE_SID
    XE
    echo $NLS_LANG
    ENGLISH_UNITED KINGDOM.AL32UTF8
    $PATH
    /u01/app/oracle/product/11.2.0/xe/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/bin:/sbin:/home/mike/.local/bin:/home/mike/bin
    

    NOTE – the $NLS_LANG should have a setting appropriate for your system (in my case ENGLISH_UNITED KINGDOM.AL32UTF8).

    The Oracle bin directory is now at the start of $PATH.
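
    As an optional extra check (my addition, not part of the original walkthrough), you can also confirm from inside the database that a new session picks up the language and territory portions of NLS_LANG. A minimal sketch, run from the Run SQL Command Line after sourcing the environment script :

    conn system/pwd
    select parameter, value
    from nls_session_parameters
    where parameter in ('NLS_LANGUAGE', 'NLS_TERRITORY');

    The values returned should match the language and territory parts of the $NLS_LANG you saw above.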

    Next, we need to ensure that these environment variables are set for all sessions. This can be done by running …

    sudo cp /u01/app/oracle/product/11.2.0/xe/bin/oracle_env.sh /etc/profile.d/.
    

    To check this, you can start a new terminal session and echo the environment variables to make sure they have been set.

    Getting the Menu Items to Work

    To do this, you simply need to make sure that the oracle user, as well as your own user, is a member of the dba group :

    sudo usermod -a -G dba oracle
    sudo usermod -a -G dba mike
    

    To check :

    sudo grep dba /etc/group
    dba:x:1001:oracle,mike
    $
    

    The menu items for starting up and shutting down the database etc. should now work.

    Enabling the Getting Started Desktop Icon

    The final touch. The installation creates a Getting Started icon on the desktop which is designed to open the Database Home Page of the APEX application that comes with XE.

    In order to make it work as desired, you simply need to right-click the icon and select Properties.
    In the Permissions Tab check the box to “Allow executing file as program”.
    Close the window.

    You will notice that the icon has transformed into the familiar Oracle beehive and is now called
    Get Started With Oracle Database 11g Express Edition.

    Clicking on it now will reward you with …

    db_home

    All-in-all then, this installation is reasonably painless when compared with doing the same thing on a Debian system.
    I wish the same could be said of following the England Cricket Team.


    Filed under: Linux, Oracle Tagged: CentOS, nls_lang.sh, Oracle 11g Express Edition, oracle_env.sh

    Cedar’s Oracle Cloud and PeopleSoft Day

    Duncan Davies - Sat, 2015-02-28 18:36

    Cedar held its annual Oracle Cloud and PeopleSoft Day in London on Friday, with almost a hundred people in attendance (about 80 customers, plus staff from Oracle and Cedar).

    It was a great success, with a really positive vibe – customers are looking to do great things with both PeopleSoft and Oracle’s Cloud suite – and a privilege to be part of.

    Here are some photos from the day:

    Graham welcomes everyone
    2015-02-27 10.05.59

     

    Marc Weintraub gave a great keynote (from his office at 2:30am!)
    Marc Weintraub - Keynote

     

    Liz and I discuss the practical applications of the PeopleSoft Roadmap
    Liz and Duncan - PeopleSoft Roadmap and Cloud

     

    Mike takes us through the upcoming Oracle Cloud Release 10 features
    2015-02-27 12.26.53

     

    Jo talks about ‘Taleo for PeopleSoft People’
    Jo and Duncan - Taleo for PeopleSoft People

     

     Simon handles the prize draw
     2015-02-27 15.48.56

    So, a fun event with lots of knowledge sharing. My absolute favourite part is being able to connect customers who can help each other though. I lost count of the number of times we were able to say “oh, you’re doing <some project> are you? In that case, let me introduce you to <another client> as they’ve just finished doing that very thing” and then being able to leave them to share their experiences.


    Unsubscribe

    Michael Feldstein - Sat, 2015-02-28 16:00

    By Michael FeldsteinMore Posts (1015)

    A little while back, e-Literate suddenly got hit by a spammer who was registering for email subscriptions to the site at a rate of dozens of new email addresses every hour. After trying a number of less extreme measures, I ended up removing the subscription widget from the site. Unfortunately, as a few of you have since pointed out to me, by removing the option to subscribe by email, I also inadvertently removed the option to unsubscribe. Once I realized there was a problem (and cleared some time to figure out what to do about it), I investigated a number of other email subscription plugins, hoping that I could find one that is more secure. After some significant research, I came to the conclusion that there is no alternative solution that I can trust more than the one we already have.

    The good news is that I discovered the plugin we have been using has an option to disable the subscribe feature while leaving on the unsubscribe feature. I have done so. You can now find the unsubscribe capability back near the top of the right-hand sidebar. Please go ahead and unsubscribe yourself if that’s what you’re looking to do. If any of you need help unsubscribing, please don’t hesitate to reach out to me.

    Sorry for the trouble. On a related note, I hope to reactivate the email subscription feature for new subscribers once I can find the right combination of spam plugins to block the spam registrations without getting in the way of actual humans trying to use the site.

    The post Unsubscribe appeared first on e-Literate.

    Even More Oracle Database Health Checks with ORAchk 12.1.0.2.1 and 12.1.0.2.3 (Beta)

    As we have discussed before, it can be a challenge to quantify how well your database is meeting operational expectations and identify areas to improve performance. Database health checks are...

    We share our skills to maximize your revenue!
    Categories: DBA Blogs

    Databricks and Spark update

    DBMS2 - Sat, 2015-02-28 05:06

    I chatted last night with Ion Stoica, CEO of my client Databricks, for an update both on his company and Spark. Databricks’ actual business is Databricks Cloud, about which I can say:

    • Databricks Cloud is:
      • Spark-as-a-Service.
      • Currently running on Amazon only.
      • Not dependent on Hadoop.
    • Databricks Cloud, despite having a 1.0 version number, is not actually in general availability.
    • Even so, there are a non-trivial number of paying customers for Databricks Cloud. (Ion gave me an approximate number, but is keeping it NDA until Spark Summit East.)
    • Databricks Cloud gets at data from S3 (most commonly), Redshift, Elastic MapReduce, and perhaps other sources I’m forgetting.
    • Databricks Cloud was initially focused on ad-hoc use. A few days ago the capability was added to schedule jobs and so on.
    • Unsurprisingly, therefore, Databricks Cloud has been used to date mainly for data exploration/visualization and ETL (Extract/Transform/Load). Visualizations tend to be scripted/programmatic, but there’s also an ODBC driver used for Tableau access and so on.
    • Databricks Cloud customers are concentrated (but not unanimously so) in the usual-suspect internet-centric business sectors.
    • The low end of the amount of data Databricks Cloud customers are working with is 100s of gigabytes. This isn’t surprising.
    • The high end of the amount of data Databricks Cloud customers are working with is petabytes. That did surprise me, and in retrospect I should have pressed for details.

    I do not expect all of the above to remain true as Databricks Cloud matures.

    Ion also said that Databricks is over 50 people, and has moved its office from Berkeley to San Francisco. He also offered some Spark numbers, such as:

    • 15 certified distributions.
    • ~40 certified applications.
    • 2000 people trained last year by Databricks alone.

    Please note that certification of a Spark distribution is a free service from Databricks, and amounts to checking that the API works against a test harness. Speaking of certification, Ion basically agrees with my views on ODP, although like many — most? — people he expresses himself more politely than I do.

    We talked briefly about several aspects of Spark or related projects. One was DataFrames. Per Databricks:

    In Spark, a DataFrame is a distributed collection of data organized into named columns. It is conceptually equivalent to a table in a relational database or a data frame in R/Python, but with richer optimizations under the hood. DataFrames can be constructed from a wide array of sources such as: structured data files, tables in Hive, external databases, or existing RDDs.

    I gather this is modeled on Python pandas, and extends an earlier Spark capability for RDDs (Resilient Distributed Datasets) to carry around metadata that was tantamount to a schema.

    SparkR is also on the rise, although it has the usual parallel R story to the effect:

    • You can partition data, run arbitrary R on every partition, and aggregate the results.
    • A handful of algorithms are truly parallel.

    So of course is Spark Streaming. And then there are Spark Packages, which are — and I’m speaking loosely here — a kind of user-defined function.

    • Thankfully, Ion did not give me the usual hype about how a public repository of user-created algorithms is a Great Big Deal.
    • Ion did point out that providing an easy way for people to publish their own algorithms is a lot easier than evaluating every candidate contribution to the Spark project itself. :)

    I’ll stop here. However, I have a couple of other Spark-related posts in the research pipeline.

    Categories: Other

    Australian March Training Offer

    Rittman Mead Consulting - Fri, 2015-02-27 21:31

    Autumn is almost upon us here in Australia, so why not hold off going into hibernation and head into the classroom instead.

    For March and April only, Rittmanmead courses in Australia* are being offered at significantly discounted prices.

    Heading up this promotion is the popular TRN202 OBIEE 11g Bootcamp course which will be held in Melbourne, Australia* on March 16th-20th 2015.

    This is not a cut-down version of the regular course but the entire 5-day content. Details

    To enrol for this specially priced course, visit the Rittmanmead website training page. Registration is only open between March 1st – March 9th 2015 so register quickly to secure a spot.

    Further specially priced courses will be advertised in the coming weeks.

    *This offer is only available for courses run in Australia.
    Registration Period: 01/03/2015 12:00am – 09/03/2015 11:59:59pm
    Further Terms and Conditions can be found during registration

    Categories: BI & Warehousing

    Greg Mankiw Thinks Greg Mankiw’s Textbook Is Fairly Priced

    Michael Feldstein - Fri, 2015-02-27 16:37

    By Michael FeldsteinMore Posts (1015)

    This is kind of hilarious.

    Greg Mankiw has written a blog post expressing his perplexity[1] with The New York Times’ position that textbooks are overpriced:

    To me, this reaction seems strange. After all, the Times is a for-profit company in the business of providing information. If it really thought that some type of information (that is, textbooks) was vastly overpriced, wouldn’t the Times view this as a great business opportunity? Instead of merely editorializing, why not enter the market and offer a better product at a lower price? The Times knows how to hire writers, editors, printers, etc. There are no barriers to entry in the textbook market, and the Times starts with a pretty good brand name.

    My guess is that the Times business managers would not view starting a new textbook publisher as an exceptionally profitable business opportunity, which if true only goes to undermine the premise of its editorial writers.

    It’s worth noting that Mankiw received a $1.4 million advance for his economics textbook from his original publisher Harcourt Southwestern, which was later acquired by the company now known as Cengage Learning. That was in 1997. The book is now in its seventh edition, and Cengage publishes five different versions of it (not counting the five versions of the previous edition, which is still on the market). That said, he is probably right that NYT would not view the textbook industry as a profitable business opportunity. But think about that. A newspaper finds the textbook industry unattractive economically. The textbook industry is imploding. Mankiw’s publisher just emerged from bankruptcy, and textbook sales are down and still dropping across the board.

    One reason that textbook prices have not been responsive to market forces is that most faculty do not have strong incentives to search for less expensive textbooks and, to the contrary, have high switching costs. They have to both find an alternative that fits their curriculum and teaching approach—a non-trivial investment in itself—and then rejigger their course design to fit with the new book.

    A second part of the problem is that the publishers really can’t afford to lower the textbook prices at this point without speeding up their slow-motion train crash because their unit sales keep dropping as students find more creative ways to avoid buying the book. Their way of dealing with falling sales is to raise the price on each book that they sell. It’s a vicious cycle—one that could potentially be broken by the market forces that Mankiw seems so sure are providing fair pricing if only the people making the adoption decisions had motivations that were aligned with the people making the purchasing decisions. The high cost of switching for faculty, coupled with their relative personal immunity to pricing increases, translates into a barrier to entry for potential competitors looking to underbid the established players.

    Which brings me to the third reason. There are plenty of faculty who would like to believe that they could make money writing a textbook someday and that doing so would generate enough income to make a difference in their lives. Not all, not most, and probably not even the majority, but enough to matter. As long as faculty can potentially get compensated for sales, there will be motivation for them to see high textbook prices that they don’t have to pay themselves as “fair” or, at least, tolerable. It’s a conflict of interest. And Greg Mankiw, as a guy who’s made the big score, has the biggest conflict of interest of all and the least motivation of anyone to admit that textbook prices are out of hand, and that the textbook “market” he wants to believe in probably doesn’t even properly qualify as a market, never mind an efficient one.

    1. Hat tip to Stephen Downes for the link.

    The post Greg Mankiw Thinks Greg Mankiw’s Textbook Is Fairly Priced appeared first on e-Literate.

    What happened to “when the application is fast enough to meet users’ requirements?”

    Cary Millsap - Fri, 2015-02-27 15:00
    On January 5, I received an email called “Video” from my friend and former employee Guðmundur Jósepsson from Iceland. His friends call him Gummi (rhymes with “who-me”). Gummi is the guy whose name is set in the ridiculous monospace font on page xxiv of Optimizing Oracle Performance, apparently because O’Reilly’s Linotype Birka font didn’t have the letter eth (ð) in it. Gummi once modestly teased me that this is what he is best known for. But I digress...

    His email looked like this:


    It’s a screen shot of frame 3:12 from my November 2014 video called “Why you need a profiler for Oracle.” At frame 3:12, I am answering the question of how you can know when you’re finished optimizing a given application function. Gummi’s question is, «Oi! What happened to “when the application is fast enough to meet users’ requirements?”»

    Gummi noticed (the good ones will do that) that the video says something different than the thing he had heard me say for years. It’s a fair question. Why, in the video, have I said this new thing? It was not an accident.
    When are you finished optimizing?

    The question in focus is, “When are you finished optimizing?” Since 2003, I have actually used three different answers:

    When are you finished optimizing?
    A. When the cost of call reduction and latency reduction exceeds the cost of the performance you’re getting today.
       Source: Optimizing Oracle Performance (2003) pages 302–304.
    B. When the application is fast enough to meet your users’ requirements.
       Source: I have taught this in various courses, conferences, and consulting calls since 1999 or so.
    C. When there are no unnecessary calls, and the calls that remain run at hardware speed.
       Source: “Why you need a profiler for Oracle” (2014) frames 2:51–3:20.
    My motive behind answers A and B was the idea that optimizing beyond what your business needs can be wasteful. I created these answers to deter people from misdirecting time and money toward perfecting something when those resources might be better invested improving something else. This idea was important, and it still is.

    So, then, where did C come from? I’ll begin with a picture. The following figure allows you to plot the response time for a single application function, whatever “given function” you’re looking at. You could draw a similar figure for every application function on your system (although I wouldn’t suggest it).


    Somewhere on this response time axis for your given function is the function’s actual response time. I haven’t marked that response time’s location specifically, but I know it’s in the blue zone, because at the bottom of the blue zone is the special response time RT. This value RT is the function’s top speed on the hardware you own today. Your function can’t go faster than this without upgrading something.

    It so happens that this top speed is the speed at which your function will run if and only if (i) it contains no unnecessary calls and (ii) the calls that remain run at hardware speed. ...Which, of course, is the idea behind this new answer C.
    Where, exactly, is your “requirement”?

    Answer B (“When the application is fast enough to meet your users’ requirements”) requires that you know the users’ response time requirement for your function, so, next, let’s locate that value on our response time axis.

    This is where the trouble begins. Most DBAs don’t know what their users’ response time requirements really are. Don’t despair, though; most users don’t either.

    At banks, airlines, hospitals, telcos, and nuclear plants, you need strict service level agreements, so those businesses invest in quantifying them. But realize: quantifying all your functions’ response time requirements isn’t about a bunch of users sitting in a room arguing over which subjective speed limits sound the best. It’s about knowing your technological speed limits and understanding how close to those values your business needs to pay to be. It’s an expensive process. At some companies, it’s worth the effort; at most companies, it’s just not.

    How about using, “well, nobody complains about it,” as all the evidence you need that a given function is meeting your users’ requirement? It’s how a lot of people do it. You might get away with doing it this way if your systems weren’t growing. But systems do grow. More data, more users, more application functions: these are all forms of growth, and you can probably measure every one of them happening where you’re sitting right now. All these forms of growth put you on a collision course with failing to meet your users’ response time requirements, whether you and your users know exactly what they are, or not.

    In any event, if you don’t know exactly what your users’ response time requirements are, then you won’t be able to use “meets your users’ requirement” as your finish line that tells you when to stop optimizing. This very practical problem is the demise of answer B for most people.
    Knowing your top speed

    Even if you do know exactly what your users’ requirements are, it’s not enough. You need to know something more.

    Imagine for a minute that you do know your users’ response time requirement for a given function, and let’s say that it’s this: “95% of executions of this function must complete within 5 seconds.” Now imagine that this morning when you started looking at the function, it would typically run for 10 seconds in your Oracle SQL Developer worksheet, but now after spending an hour or so with it, you have it down to where it runs pretty much every time in just 4 seconds. So, you’ve eliminated 60% of the function’s response time. That’s a pretty good day’s work, right? The question is, are you done? Or do you keep going?

    Here is the reason that answer C is so important. You cannot responsibly answer whether you’re done without knowing that function’s top speed. Even if you know how fast people want it to run, you can’t know whether you’re finished without knowing how fast it can run.

    Why? Imagine that 85% of those 4 seconds are consumed by Oracle enqueue, or latch, or log file sync calls, or by hundreds of parse calls, or 3,214 network round-trips to return 3,214 rows. If any of these things is the case, then no, you’re absolutely not done yet. If you were to allow some ridiculous code path like that to survive on a production system, you’d be diminishing the whole system’s effectiveness for everybody (even people who are running functions other than the one you’re fixing).

    Now, sure, if there’s something else on the system that has a higher priority than finishing the fix on this function, then you should jump to it. But you should at least leave this function on your to-do list. Your analysis of the higher priority function might even reveal that this function’s inefficiencies are causing the higher-priority functions problems. Such can be the nature of inefficient code under conditions of high load.

    On the other hand, if your function is running in 4 seconds and (i) its profile shows no unnecessary calls, and (ii) the calls that remain are running at hardware speeds, then you’ve reached a milestone:
    1. if your code meets your users’ requirement, then you’re done;
    2. otherwise, either you’ll have to reimagine how to implement the function, or you’ll have to upgrade your hardware (or both).
    There’s that “users’ requirement” thing again. You see why it has to be there, right?

    Well, here's what most people do. They get their functions' response times reasonably close to their top speeds (which, with good people, isn't usually as expensive as it sounds), and then they worry about requirements only if those requirements are so important that it's worth a project to quantify them. A requirement is usually considered really important if it's close to your top speed or if it's really expensive when you violate it.

    This strategy works reasonably well.

    It is interesting to note here that knowing a function’s top speed is actually more important than knowing your users’ requirements for that function. A lot of companies can work just fine not knowing their users’ requirements, but without knowing your top speeds, you really are in the dark. A second observation that I find particularly amusing is this: not only is your top speed more important to know, your top speed is actually easier to compute than your users’ requirement (…if you have a profiler, which was my point in the video).

    Better and easier is a good combination.
    Tomorrow is important, too

    When are you finished optimizing?
    1. When the cost of call reduction and latency reduction exceeds the cost of the performance you’re getting today.
    2. When the application is fast enough to meet your users’ requirements.
    3. When there are no unnecessary calls, and the calls that remain run at hardware speed.
    Answer A is still a pretty strong answer. Notice that it actually maps closely to answer C. Answer C’s prescription for “no unnecessary calls” yields answer A’s goal of call reduction, and answer C’s prescription for “calls that remain run at hardware speed” yields answer A’s goal of latency reduction. So, in a way, C is a more action-oriented version of A, but A goes further to combat the perfectionism trap with its emphasis on the cost of action versus the cost of inaction.

    One thing I’ve grown to dislike about answer A, though, is its emphasis on today in “…exceeds the cost of the performance you’re getting today.” After years of experience with the question of when optimization is complete, I think that answer A under-emphasizes the importance of tomorrow. Unplanned tomorrows can quickly become ugly todays, and as important as tomorrow is to businesses and the people who run them, it’s even more important to another community: database application developers.
    Subjective goals are treacherous for developers

    Many developers have no way to test, today, the true production response time behavior of their code, which they won't learn until tomorrow. ...And perhaps not until some remote, distant tomorrow.

    Imagine you’re a developer using 100-row tables on your desktop to test code that will access 100,000,000,000-row tables on your production server. Or maybe you’re testing your code’s performance only in isolation from other workload. Both of these are problems; they’re procedural mistakes, but they are everyday real-life for many developers. When this is how you develop, telling you that “your users’ response time requirement is n seconds” accidentally implies that you are finished optimizing when your query finishes in less than n seconds on your no-load system of 100-row test tables.

    If you are a developer writing high-risk code—and any code that will touch huge database segments in production is high-risk code—then of course you must aim for the “no unnecessary calls” part of the top speed target. And you must aim for the “and the calls that remain run at hardware speed” part, too, but you won’t be able to measure your progress against that goal until you have access to full data volumes and full user workloads.

    Notice that to do both of these things, you must have access to full data volumes and full user workloads in your development environment. To build high-performance applications, you must do full data volume testing and full user workload testing in each of your functional development iterations.

    This is where agile development methods yield a huge advantage: agile methods provide a project structure that encourages full performance testing for each new product function as it is developed. Contrast this with the terrible project planning approach of putting all your performance testing at the end of your project, when it's too late to actually fix anything (if there's even enough budget left over by then to do any testing at all). If you want a high-performance application with great performance diagnostics, then performance instrumentation should be an important part of your feedback for each development iteration of each new function you create.
    My answer

    So, when are you finished optimizing?
    1. When the cost of call reduction and latency reduction exceeds the cost of the performance you’re getting today.
    2. When the application is fast enough to meet your users’ requirements.
    3. When there are no unnecessary calls and the calls that remain run at hardware speed.
    There is some merit in all three answers, but as Dave Ensor taught me inside Oracle many years ago, the correct answer is C. Answer A specifically restricts your scope of concern to today, which is especially dangerous for developers. Answer B permits you to promote horrifically bad code, unhindered, into production, where it can hurt the performance of every function on the system. Answers A and B both presume that you know information that you probably don't know and that you may not need to know. Answer C is my favorite answer because it tells you exactly when you're done, using units you can measure and that you should be measuring.

    Answer C is usually a tougher standard than answer A or B, and when it’s not, it is the best possible standard you can meet without upgrading or redesigning something. In light of this “tougher standard” kind of talk, it is still important to understand that what is optimal from a software engineering perspective is not always optimal from a business perspective. The term optimized must ultimately be judged within the constraints of what the business chooses to pay for. In the spirit of answer A, you can still make the decision not to optimize all your code to the last picosecond of its potential. How perfect you make your code should be a business decision. That decision should be informed by facts, and these facts should include knowledge of your code’s top speed.

    Thank you, Guðmundur Jósepsson, of Iceland, for your question. Thank you for waiting patiently for several weeks while I struggled putting these thoughts into words.

    Editorial Policy: Notes on recent reviews of CBE learning platforms

    Michael Feldstein - Fri, 2015-02-27 12:30

    By Phil Hill

    Oh let the sun beat down upon my face, stars to fill my dream
    I am a traveler of both time and space, to be where I have been
    To sit with elders of the gentle race, this world has seldom seen
    They talk of days for which they sit and wait and all will be revealed

    - R Plant, Kashmir

    Over the past half year or so I've provided more in-depth product reviews of several learning platforms than is typical – Helix, FlatWorld, LoudCloud, Bridge. Since at e-Literate we are not a review site, nor do we tend to analyze technology for technology's sake, it's worth asking why the change. There has been a lot of worthwhile discussion in several blogs recently about whether the LMS is obsolete or critical to the future of higher ed, and this discussion even raised the subject of how we got to the current situation in the first place.

    An interesting development I’ve observed is that the learning environment of the future might already be emerging on its own, but not necessarily coming from the institution-wide LMS market. Canvas, for all its market-changing power, is almost a half decade old. The area of competency-based education (CBE), with its hundreds of pilot programs, appears to be generating a new generation of learning platforms that are designed around the learner (rather than the course) and around learning (or at least the proxy of competency frameworks). It seems useful to get a more direct look at these platforms to understand the future of the market and to understand that the next generation environment is not necessarily a concept yet to be designed.

    At the same time, CBE is a very important development in higher ed, yet there are plenty of signs of people assuming that CBE is just students working in isolation, learning regurgitated facts assessed by multiple-choice questions. Yes, that does happen in some cases and is a risk for the field, but CBE is far richer. Criticize CBE if you will, but do so based on what's actually happening[1].

    Both Michael and I have observed and even participated in efforts that seek to explore CBE and the learning environment of the future.

    Perhaps because I'm prone to visual communication, the best way for me to work out my own thoughts on these subjects, and to share them more broadly through e-Literate, has been to do more in-depth product reviews with screenshots.

    Bridge, from Instructure, is a different case. I frequently get into discussions about how Instructure might evolve as a company, especially given their potential IPO. The public markets will demand continued growth, so what will this change in terms of their support of Canvas as a higher education LMS? Will they get into adjacent markets? With the latest news of the company raising $40 million in what is likely the last pre-IPO VC funding round, as well as their introduction of Bridge to move into the corporate learning space, we now have a pretty solid basis for answering these questions. The keys are understanding that Bridge is a separate product and seeing how the company has approached both its design and its decision not to change Canvas.

    With this in mind, it’s worth noting some editorial policy stuff at e-Literate:

    • We do not endorse products; in fact, we generally focus on the academic or administrative need first as well as how a product is selected and implemented.
    • We do not take solicitations to review products, even if a vendor’s competitors have been reviewed. The reviews mentioned above were more about understanding market changes and understanding CBE as a concept than about the products per se.
    • We might accept a vendor's offer of a demo at our own discretion, either online or at a conference, but even then we do not promise to cover it in a blog post.

    OK, the lead-in quote is a stretch, but it does tie in to one of the best videos I have seen in a while.


    1. And you would do well to read Michael’s excellent post on CBE meant for faculty trying to understand the subject.

    The post Editorial Policy: Notes on recent reviews of CBE learning platforms appeared first on e-Literate.

    Log Buffer #412, A Carnival of the Vanities for DBAs

    Pythian Group - Fri, 2015-02-27 10:58

    This Log Buffer Edition makes its way through the realms of Oracle, SQL Server and MySQL and brings you a selection of blog posts.

    Oracle:

    Introducing Oracle Big Data Discovery Part 3: Data Exploration and Visualization

    FULL and NO_INDEX Hints

    Base64 Encode / Decode with Python (or WebLogic Scripting Tool) by Frank Munz

    Why I’m Excited About Oracle Integration Cloud Service – New Video

    Reminder: Upgrade Database 12.1.0.1 to 12.1.0.2 by July 2015

    SQL Server:

    An article about how we underestimate the power of joins and degrade our query performance by not using proper joins

    Most large organizations have implemented one or more big data applications. As more data accumulates, internal users and analysts execute more reports and forecasts, which leads to additional queries and analysis, and more reporting.

    How do you develop and deploy your database?

    A database must be able to maintain and enforce the business rules and relationships in data in order to maintain the data model.

    Error handling with try-catch-finally in PowerShell for SQL Server

    MySQL:

    MySQL Enterprise Monitor 3.0.20 has been released

    MySQL Cluster 7.4 is GA!

    Connector/Python 2.1.1 Alpha released with C Extension

    Worrying about the ‘InnoDB: detected cycle in LRU for buffer pool (…)’ message?

    MySQL Cluster 7.4 GA: 200 Million QPS, Active-Active Geographic Replication and more

    Categories: DBA Blogs

    Webcast: Public Sector FMW: Mobility Solutions – Re-Think Mobile

    WebCenter Team - Fri, 2015-02-27 07:15
    3Di is an Oracle Gold Partner, a Pillar Partner for Middleware Solutions, and a Top Partner for North America. 3Di has also successfully delivered over 200 projects to public sector, private sector and military clients. Please visit www.3disystems.com for further information. Join us on Wednesday, March 4, 2015 at 9:00 am Central Standard Time (Chicago, GMT-06:00) to learn how customers are re-thinking their enterprise mobile strategy, unifying client, content, context, security and cloud. Through case studies and live demonstrations, 3Di will present how customers like you have successfully addressed these questions using Oracle technologies and 3Di's solutions, innovations and services. Register Now!

    SQL Developer: Viewing Trace Files

    Dominic Brooks - Fri, 2015-02-27 06:18

    Just a quick plug for looking at raw SQL trace files via SQL Developer.

    There is a nice Tree View:
    sqldev_trace

    Which can be expanded:
    sqldev_trace_expand

    Also summary view of statistics, filterable:
    sqldev_trace_stats

    And a list view, filterable and orderable:

    sqldev_trace_list

    Some sort of right click summary for binds/waits might be a nice addition.
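
    If you'd like a trace file of your own to open, here is a minimal sketch of producing one for the current session. It assumes you have the privilege to set session events, and your_table is just a placeholder for whatever you want to trace:

    alter session set tracefile_identifier = 'sqldev_demo';
    alter session set events '10046 trace name context forever, level 12';  -- level 12 captures binds and waits
    select count(*) from your_table;                                        -- the work you want traced
    alter session set events '10046 trace name context off';
    select value from v$diag_info where name = 'Default Trace File';        -- where the file landed on the server

    Copy that file down from the server and open it in SQL Developer to get the views shown above.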


    Exadata Documentation Available

    Dan Norris - Fri, 2015-02-27 06:12

    Please join me in welcoming the Exadata product documentation to the internet. It's been a long time coming, but I'm glad it's finally made an appearance!

    Introducing Oracle Big Data Discovery Part 3: Data Exploration and Visualization

    Rittman Mead Consulting - Thu, 2015-02-26 17:08

    In the first two posts in this series, we looked at what Oracle Big Data Discovery is and how you can use it to sample, cleanse and then catalog data in your Hadoop-based data reservoir. At the end of that second post we'd loaded some webserver log data into BDD, and then uploaded some additional reference data that we joined to the log file dataset to provide descriptive attributes to add to the base log activity. Once you've loaded the datasets into BDD you can do some basic searching and graphing of your data directly from the "Explore" part of the interface, selecting and locating attribute values from the search bar and displaying individual attributes in the "Scratchpad" area.


    With Big Data Discovery, though, you can go one step further and build complete applications to search and analyse your data, using the "Discover" part of the application. Using this feature you can add one or more charts to a dashboard page that go much further than the simple data visualisations you get in the Explore part of the application, based on the chart types and UI interactions that you first saw in Oracle Endeca Information Discovery Studio.


    Components you can add include thematic maps, summary bars (like OBIEE's performance tiles, but for multiple measures), and various bar, line and bubble charts, all of which can then be filtered using an OEID-like faceted search component.


    Each visualisation component is tied to a particular “view” that points to one or more underlying BDD datasets – samples of the full dataset held in the Hadoop cluster stored in the Endeca Server-based DGraph engine. For example, the thematic map above was created against the post comments dataset, with the theme colours defined using the number of comments metric and each country defined by a country name attribute derived from the calling host IP address.


    Views are auto-generated by BDD when you import a dataset, or when you join two or more datasets together. You can also use the Endeca EQL language to define your own views using a SQL-type language, and then define which columns represent attributes, which ones are metrics (measures) and how those metrics are aggregated.
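
    As a rough illustration of what a hand-written view definition can look like, here is an EQL-style sketch that counts comments per country. The dataset and attribute names (post_id, response_bytes, country_name) are purely hypothetical, and EQL syntax details vary by release, so treat this as a flavour of the language rather than a definitive example:

    RETURN CommentsByCountry AS
    SELECT COUNT(post_id)      AS comment_count,
           AVG(response_bytes) AS avg_response_bytes
    GROUP BY country_name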


    Like OEID before it, Big Data Discovery isn't a substitute for a regular BI tool like OBIEE – beyond simple charts and visualizations it's tricky to create more complex data selections, drill-paths in hierarchies, subtotals and so forth, and users will need to understand the concept of multiple views and datatypes, when to drop into EQL and so on – but for non-technical users working in an organization's big data team it's a great way to put a visual front-end onto the data in the data reservoir without having to understand tools like R Studio.

    So that’s it for this three-part overview of Oracle Big Data Discovery and how it works with the Hadoop-based data reservoir. Keep an eye on the blog over the next few weeks as we get to grips with this new tool, and we’ll be covering it as part of the optional masterclass at the Brighton and Atlanta Rittman Mead BI Forum 2015 events this May.

    Categories: BI & Warehousing

    Oracle Priority Support Infogram for 26-FEB-2015

    Oracle Infogram - Thu, 2015-02-26 15:57

    Oracle Support
    How To Be Notified When MOS Notes Are Updated, from the Oracle E-Business Suite Technology blog.
    RDBMS
    Oracle In-Memory Advisor with Oracle Multitenant? Issues?, from Update your Database – NOW!
    Exadata
    Examining the new Columnar Cache with v$cell_state, from SmartScan Deep Dive.
    And from Exadata Database Machine: 10 reasons to run Database In-Memory on Exadata
    OEM
    From Oracle Enterprise Manager: Editing EM12c Jobs in Bulk
    Data Warehouse
    From The Data Warehouse Insider: New Way to Enable Parallel DML
    WebLogic
    Weblogic LDAPAuthenticator configuration; the GUID Attribute, from WebLogic Partner Community EMEA.
    Java and Friends
    Unsynchronized Persistence Contexts in JPA 2.1/Java EE 7, from The Aquarium.
    YouTube: Format Multiple Files in NetBeans IDE, from Geertjan's Blog.
    4 New OTN Tech articles on JDeveloper, BPM, ADF, from ArchBeat.
    Linux
    Two from Oracle’s Linux Blog:
    Introduction to Using Oracle's Unbreakable Linux Network
    Technology Preview available for the Oracle Linux software collection library
    MySQL
    From Paulie’s world in a blog: Deploying MySQL over Fibre Channel / iSCSI using the Oracle ZFS Storage Appliance
    Hardware Support
    From My Oracle Support: Power Cord Replacement Notice (updated February 2015).
    SOA and BPM
    Dynamic ADF Form Solution for Oracle BPM Process, from the SOA & BPM Partner Community Blog.
    WebCenter
    From the Oracle WebCenter Blog: Webcast: Next Generation AP Invoice Automation
    Ops Center
    From the Oracle Ops Center blog: Database License.
    Hyperion
    Patch Set Update: Hyperion Strategic Finance 11.1.2.1.106, from the Business Analytics - Proactive Support blog.
    Demantra
    From the Oracle Demantra blog: Setting Worksheet Related Parameters and Hardware Requirement Example
    EBS
    From the Oracle E-Business Suite Support Blog:
    Announcing Oracle Global Trade Management (GTM) Community
    Webcast: Oracle Time and Labor (OTL) Timecard Layout Configuration
    Internal Requisition, Internal Sales Order Change Management
    From the Oracle E-Business Suite Technology blog:
    Reminder: Upgrade Database 12.1.0.1 to 12.1.0.2 by July 2015

    EBS 12.x certified with Apple Mac OS X 10.10 (Yosemite)