
Feed aggregator

Oracle Priority Service Infogram for 31-JUL-2014

Oracle Infogram - Thu, 2014-07-31 10:34

Oracle and NFS
From Martin’s Blog: Setting up Direct NFS on Oracle 12c.
Testing
From flashdba: New section: Oracle SLOB Testing. And no, it’s not about finding people with soup stains on their shirts and giving them multiple-choice exams with an essay section.
GoldenGate
GoldenGate Director Security, from Oracle DBA - Tips and Techniques.
VM
From AMIS Technology Blog: Fastest way to a Virtual Machine with JDeveloper 12.1.3 and Oracle Database XE 11gR2 – on Ubuntu Linux 64 bit.
Fusion
Discovering Fusion Applications in Oracle Enterprise Manager 12c, from the Oracle A-Team Chronicles.
SOA
Purging Data From the BPEL Store, from DZone.
Visualization
From the Oracle Data Visualizations Blog: A Guide to Diagram – Part 8 – Diagram Container Groups.
Java
org.openide.util.ContextGlobalProvider, from Geertjan’s Blog.
A closer look at Oracle IDM Auditing, from Java Code Geeks.
Big Data
From CIO: Oracle hopes to make SQL a lingua franca for big data.
Good Housekeeping
OraManageability brings us this article: Keeping a Tidy Software Library – Saved Patches
SPARC
From ZDNet: Oracle prepares to unveil next-gen SPARC 7 processor.
EBS
From the Oracle E-Business Suite Support Blog:
Oracle Service Contracts – How to Drive Contract Coverage by Severity
Let's Talk About Reclassifications in Fixed Assets
Just Released! July 2014 Procurement Rollup Patch 18911810
How to Customize the Field Service Debrief Report
From Oracle E-Business Suite Technology:
Latest Updates to AD and TXK Tools for EBS 12.2
E-Business Suite Plug-in 12.1.0.3 for Enterprise Manager 12c Now Available
Shameless Boasting

Is your company’s HQ so cool that people come to film movies there? Ours is! ‘Terminator: Genesis’ Filming at Oracle Headquarters, from CBS.

MySQL 5.6.20-4 and Oracle Linux DTrace

Wim Coekaerts - Thu, 2014-07-31 09:57
The MySQL team just released MySQL 5.6.20. One of the cool new things for Oracle Linux users is the addition of MySQL DTrace probes. When you use Oracle Linux 6 or 7 with UEKr3 (3.8.x) and the latest DTrace utils/tools, you can make use of this. MySQL 5.6 is available for install through ULN or from public-yum, and you can install it using yum.

# yum install mysql-community-server

Then install dtrace utils from ULN.

# yum install dtrace-utils

As root, enable DTrace and allow normal users to record trace information:

# modprobe fasttrap
# chmod 666 /dev/dtrace/helper

Start MySQL server.

# /etc/init.d/mysqld start

Now you can try out various dtrace scripts. You can find the reference manual for MySQL DTrace support here.

Example 1

Save the script below as query.d.

#!/usr/sbin/dtrace -qws
#pragma D option strsize=1024


mysql*:::query-start /* using the mysql provider */
{

  self->query = copyinstr(arg0); /* Get the query */
  self->connid = arg1; /*  Get the connection ID */
  self->db = copyinstr(arg2); /* Get the DB name */
  self->who   = strjoin(copyinstr(arg3),strjoin("@",
     copyinstr(arg4))); /* Get the username */

  printf("%Y\t %20s\t  Connection ID: %d \t Database: %s \t Query: %s\n", 
     walltimestamp, self->who ,self->connid, self->db, self->query);

}

Run it; then, in another terminal, connect to the MySQL server and run a few queries.

# dtrace -s query.d 
dtrace: script 'query.d' matched 22 probes
CPU     ID                    FUNCTION:NAME
  0   4133 _Z16dispatch_command19enum_server_commandP3THDPcj:query-start 2014 
    Jul 29 12:32:21 root@localhost	  Connection ID: 5 	 Database:  	 
    Query: select @@version_comment limit 1

  0   4133 _Z16dispatch_command19enum_server_commandP3THDPcj:query-start 2014 
    Jul 29 12:32:28 root@localhost	  Connection ID: 5 	 Database:  	 
    Query: SELECT DATABASE()

  0   4133 _Z16dispatch_command19enum_server_commandP3THDPcj:query-start 2014 
    Jul 29 12:32:28 root@localhost	  Connection ID: 5 	 Database: database 	 
    Query: show databases

  0   4133 _Z16dispatch_command19enum_server_commandP3THDPcj:query-start 2014 
    Jul 29 12:32:28 root@localhost	  Connection ID: 5 	 Database: database 	 
    Query: show tables

  0   4133 _Z16dispatch_command19enum_server_commandP3THDPcj:query-start 2014 
    Jul 29 12:32:31 root@localhost	  Connection ID: 5 	 Database: database 	 
    Query: select * from foo

Example 2

Save the script below as statement.d.

#!/usr/sbin/dtrace -s

#pragma D option quiet

dtrace:::BEGIN
{
   printf("%-60s %-8s %-8s %-8s\n", "Query", "RowsU", "RowsM", "Dur (ms)");
}

mysql*:::update-start, mysql*:::insert-start,
mysql*:::delete-start, mysql*:::multi-delete-start,
mysql*:::select-start, mysql*:::insert-select-start,
mysql*:::multi-update-start
/* Note: multi-delete-done is matched only by the done clause below;
   including it here would clobber self->querystart as the delete ends. */
{
    self->query = copyinstr(arg0);
    self->querystart = timestamp;
}

mysql*:::insert-done, mysql*:::select-done,
mysql*:::delete-done, mysql*:::multi-delete-done, mysql*:::insert-select-done
/ self->querystart /
{
    this->elapsed = ((timestamp - self->querystart)/1000000);
    printf("%-60s %-8d %-8d %d\n",
           self->query,
           0,
           arg1,
           this->elapsed);
    self->querystart = 0;
}

mysql*:::update-done, mysql*:::multi-update-done
/ self->querystart /
{
    this->elapsed = ((timestamp - self->querystart)/1000000);
    printf("%-60s %-8d %-8d %d\n",
           self->query,
           arg1,
           arg2,
           this->elapsed);
    self->querystart = 0;
}

Run it and do a few queries.

# dtrace -s statement.d 
Query                                                        RowsU    RowsM    Dur (ms)
select @@version_comment limit 1                             0        1        0
SELECT DATABASE()                                            0        1        0
show databases                                               0        6        0
show tables                                                  0        2        0
select * from foo                                            0        1        0

A look at how RDX’s Additional Services can meet your needs: Series Kick-off [VIDEO]

Chris Foot - Thu, 2014-07-31 09:08

Transcript

Today we’re kicking off a series about our additional offerings. We think it’s important for your organization to leverage RDX’s full suite of data infrastructure services to improve its ability to turn raw information into actionable business knowledge.

From our Business Intelligence services – designed to get you the right information about your company to make savvy strategic decisions – to our application hosting, database security and non-database server monitoring, GoldenGate replication services, and support for Windows, MySQL and Oracle EBS, we’ve got every administration need you can think of covered.

We’ll take an in-depth look at each of these services in videos to come, so you can learn how they can benefit your business and choose the services that may be the most important to you.

For more information on our additional services, follow the link below for our Additional Services Whitepaper.

Tune in next time as we discuss the importance of Business Intelligence for your business!

The post A look at how RDX’s Additional Services can meet your needs: Series Kick-off [VIDEO] appeared first on Remote DBA Experts.

SQL Server and OS Error 1117, Error 9001, Error 823

Pythian Group - Thu, 2014-07-31 08:32

Like other administrators, we DBAs lead lives full of adventure. At times we encounter an issue that is entirely new to us, one we have not faced in the past. Today, I will be writing about such a case. Not long ago, at the beginning of June, I was having my morning tea when I got a page from a customer we normally do not receive pages from. While analyzing the error logs, I noticed several lines of errors like the ones below:

2014-06-07 21:03:40.57 spid6s Error: 17053, Severity: 16, State: 1.
LogWriter: Operating system error 21(The device is not ready.) encountered.
2014-06-07 21:03:40.57 spid6s Write error during log flush.
2014-06-07 21:03:40.57 spid67 Error: 9001, Severity: 21, State: 4.
The log for database 'SSCDB' is not available. Check the event log for related error messages. Resolve any errors and restart the database.
2014-06-07 21:03:40.58 spid67 Database SSCDB was shutdown due to error 9001 in routine 'XdesRMFull::Commit'. Restart for non-snapshot databases will be attempted after all connections to the database are aborted.
2014-06-07 21:03:40.65 spid25s Error: 17053, Severity: 16, State: 1.
fcb::close-flush: Operating system error (null) encountered.
2014-06-07 21:03:40.65 spid25s Error: 17053, Severity: 16, State: 1.
fcb::close-flush: Operating system error (null) encountered.

I had never seen this kind of error before, so my next step was to check Google, which returned too many results. Two sites were worthwhile: the first, a Microsoft KB article, covers OS Error 1117, whereas the second, by Erin Stellato (B | T), talks about other errors like Error 823 and Error 9001. I then checked the server details and found that this was exactly the issue here: the server is using a PVSCSI (Para-Virtualized SCSI) controller rather than LSI on the VMware host.

Resolving the issue

I had a call with the client and got his consent to restart the service. This was quick, and after the instance came back, I ran checkdb. “We are good!” I thought.

But wait: this was only the temporary fix. Yes, you read that correctly. The issue actually lies with VMware; it’s a known issue, and according to the VMware KB article, the permanent fix is to upgrade to vSphere 5.1.

Please note that the first thing I did here was apply the temporary fix; the root cause analysis came last, after the server was up and running fine.

photo credit: Andreas.  via photopin CC

Categories: DBA Blogs

Help Please! The UKOUG Partner of the Year awards

Duncan Davies - Thu, 2014-07-31 07:48

We’d really appreciate your help. But first, a bit of background:

The Partner of the Year awards is an annual awards ceremony held by the UK Oracle User Group. It allows customers to show appreciation for partners that have provided a service to them over the previous 12 months. As you would imagine, being voted a winner (for the categories that you operate in) is a wonderful accolade as it’s the end-users that have spoken.

Cedar Consulting has a long history of success in the competition, reflecting our long-standing relationships with our clients. I wasn’t going to ask for votes this year; however, I notice that many of our competitors are filling Twitter and LinkedIn with pleas, so I feel that I should also ask for your vote.

If you’re an existing Cedar client, we’d love your vote. Also, if you are a recipient of any other Cedar service – and here I’m talking about the free PeopleSoft and Fusion weekly newsletters that we send out – we’d be very grateful if you gave 3 minutes of your time to vote for us.

What we’d like you to do:

1) Go to: http://pya.ukoug.org/index.php

2) Fill in your company name, first name and surname. Then click Next.

3) Enter your email address in both fields, then click Next.

4) Select any checkboxes if you want ‘follow-up communications’ from the UKOUG, or leave all blank, and click Next.

5) Select Cedar Consulting from the drop-down, and click Next.

6) On the PeopleSoft page, select the Gold radio button on the Cedar Consulting row (note, it’s the 3rd column!), then click Next.

7) Repeat by selecting the Gold radio button on the Cedar Consulting row of the Fusion page, then click Next.

8) Click Submit.

And you’re done. Thank you very much. If you want some gratitude for your 3 minutes of effort drop me an email and I’ll thank you personally!


jQuery - loop through a Tabular Form

Denes Kubicek - Thu, 2014-07-31 07:48
This is one of the most frequently asked questions: "How do I loop through a tabular form using a dynamic action?" This example shows how to loop through a tabular form and set the values in each row to whatever you want. Using the apex_application.g_fxx array is not an option for on-load processes or dynamic actions; it can only be used in an on-submit process. Using jQuery in a simple loop, it is possible to read or set any of the values in any column; a rough sketch follows. Try it out.
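
Here is a minimal sketch of the idea (assuming the default tabular-form markup, where all cells of a given column render as inputs sharing a name such as f01, f02, and so on; the column chosen and the value set below are made up for illustration):

// Loop through every row's cell in the "f02" column of the tabular form:
// read the current value, then set it to whatever you want.
$('input[name="f02"]').each(function (i) {
   var oldValue = $(this).val();        // read the value in row i + 1
   $(this).val('NEW VALUE ' + (i + 1)); // set a new value for this row
});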

Categories: Development

Redstone’s John Klein Named Iowa Entrepreneur of the Year

WebCenter Team - Thu, 2014-07-31 07:42

Entrepreneurs’ Organization (EO) Iowa named member John Klein as “Entrepreneur of the Year” during their annual meeting on Tuesday, July 15, 2014 in Des Moines. Klein and partner, Jason Stortz, started their computer consulting business five years ago in Klein’s basement. Since its humble beginnings, Redstone Content Solutions has grown to become a nationally recognized leader among information technology service businesses. (source)

“John is recognized by his fellow EOers as a leader who lives the EO Vision of business growth, personal development and community engagement,” stated Rowena Crosbie, President, Tero International, Inc. “He exemplifies the EO core values each day.” 

Redstone also recently celebrated its five-year anniversary!

“Five years ago we set a standard to place our clients at the center of all that we do.  The company we have built and the successes we’ve enjoyed are the direct result of customer confidence in our mission and loyalty to our partnership”, comments John Klein, co-founder of Redstone.  “Without this support, our accomplishments would be far fewer and much less meaningful.”

Redstone delivers a full complement of strategic Oracle WebCenter consulting services – software development, implementation, training and support for customers across a wide range of industries. Redstone has achieved industry recognition as an innovative IT services organization that delivers global Oracle WebCenter solutions. The firm's solid track record for delivering results is a by-product of its investment in people, processes and technology. Read more about John Klein's EO Entrepreneur of the Year award and Redstone's recent accomplishments.

Congratulations John from all of us on the WebCenter team! 

An Introduction to PeopleTools 8.54 (part 2)

Duncan Davies - Thu, 2014-07-31 07:08
1. Introduction

The recently launched PeopleTools 8.54 contains a broad range of enhancements. Although we’ve had the GA (General Availability) release, we can’t upgrade existing environments until the 8.54.02 patch, so now is a good time to perform a fresh sandbox install to investigate the details and highlight the areas of most interest.

As in the first part of this series, there is a lot of content to cover, so I’ve spread it over several entries. The first part looked at the back-end, infrastructure and system admin changes, whereas this post moves up the software stack with integration and reporting; the final entry will finish with the Fluid UI.

2. Developer UI Enhancements

There have been a number of improvements to the user interface (we’re not talking about Fluid here; this is in the Classic UI too, although these changes may also benefit Fluid pages).

a. Charting Enhancements

The development team have introduced some new chart types to the toolset. We were already able to select from quite a few options; however, we’ve now also got Gauges, LED lights, Status Meters and Funnel charts.

[Screenshot 1: new chart types]

b. Long Edit Box Character Counter

Previously, people (myself included) accomplished similar functionality – with varying levels of success – by inserting JavaScript onto the page. Now it’s natively built into PeopleTools. A character counter can be activated in the properties of the Long Edit Box control:

[Screenshot 2: Long Edit Box character counter setup]

The result on-screen is like this:

[Screenshot 3: Long Edit Box character counter on-screen]

Note: this functionality doesn’t actually limit text entry; if the user exceeds the limit, the counter shows a negative number.

c. Pivot Grids

Pivot Grids have been enhanced in a lot of ways – there are almost 30 improvements listed in the Release Notes. To pick just a few: you can now restrict the number of rows shown in a Pivot Grid, PS Query drilling URLs are supported, and Bubble and Scatter charts are now available as Pivot Chart types.

[Screenshot 4: Pivot Grid enhancements]

3. Reporting

a. BI Publisher

As of PeopleTools 8.54, BI Publisher includes support for PCL (Printer Control Language) code in RTF templates. This allows PDFs to be printed with secure fonts, essential for the secure printing of cheques.

Also newly supported are digital signatures, which can be used to verify the sender and to ensure that the output hasn’t been amended in transit, and updatable PDFs.

b. PS Query

PS Query now supports defaults for prompts:

[Screenshot 5: PS Query prompt defaults]

There have been workarounds to achieve a similar result before, but it’s now built into the configuration pages so we don’t need to search online for the workaround each time we want to use it.

Also new with PS Query is the option to include image fields in your output. There are a number of display alternatives for image fields: with Image Data, PS Query displays the image inline with the rest of an HTML result set, or returns a Base64-encoded data string representing the image for any output other than HTML; with Image Hyperlink, a URL to the image is returned instead of the image itself, and when the URL is clicked, the image is displayed in a new browser window.

[Screenshot 6: PS Query image fields]

Additionally, all PS Queries can now be exposed as REST services, and Microsoft Excel 2007 and above is supported, thereby increasing the number of rows you can download from a Query result set into Excel beyond the previous limit of 64 thousand.

Finally, PeopleTools 8.54 introduces a new Query type, the Composite Query. Composite Queries are a superset of Connected Queries (which have been in PeopleTools for a couple of releases). Composite Queries allow users to connect queries together and have the output presented as a flat result set (instead of the hierarchical data sets which were output from Connected Queries).

4. Batch Processing

One very visible improvement to the Process Scheduler is a new status window that slides in from the lower right corner to give updates on processing progress. This is a nice touch that I’m sure end-users will appreciate:

[Screenshot 7: Process Scheduler status window]

Secondly, Activity Guides can now include batch processes as steps, which is important if something a bit more processing-intensive is needed as part of a sequence of steps.

Finally, App Engine program trace now allows you to specify which sections appear in the trace, rather than having to wade through the trace of an entire program. This needs to be enabled in the Process Scheduler config, in Configuration Manager, and then in the App Engine itself:

[Screenshot 8: App Engine trace configuration]

5. Other Enhancements

Other enhancements included in this version of PeopleTools are:

- There’s a WorkCenter to make the setting up of new Activity Guides easier, plus a cloning function which will be useful when similar – but slightly different – guides are needed.

- SES facets now include numerical and date ranges. Results can include images and report repository content.

- Change Assistant has now been decoupled from the PeopleSoft Image, enabling packages to be moved to subsequent environments without also needing to connect to the PeopleSoft Image (this was quite restricting in Tools 8.53). It also has a fresh new UI and can be scripted/configured via the command line.

- Data Migration Workbench has received improvements to Application Data Sets (including the ability to define relationships between groups), plus merging, support for managed objects and an improved UI.

- PeopleSoft Test Framework now allows you to perform mass updates (updating a set of tests in one change), interaction with App Designer projects and some usability enhancements.

6. Conclusion

The next version of PeopleTools is bringing many improvements. Much is being made of the new user interface, and rightly so; however, there are other improvements that will make our workflows both simpler and more efficient.


Test your Application with the WebLogic Maven plugin

Edwin Biemond - Thu, 2014-07-31 05:47
In this blogpost I will show you how easy it is to add some unit tests to your application when you use Maven together with the 12.1.3 Oracle software ( like WebLogic , JDeveloper or Eclipse OEPE). To demonstrate this, I will create a RESTful Person Service in JDeveloper 12.1.3 which will use the Maven project layout. We will do the following: Create a Project and Application based on a Maven

OTN APEX Forum again

Denes Kubicek - Thu, 2014-07-31 04:02
The OTN Forum is not available (again). This useful but constantly changing forum now gives me the following message:



I am not sure why they are using Jive for that. Maybe APEX would be a better solution.
Categories: Development

Don’t let database security woes outweigh EHR benefits

Chris Foot - Thu, 2014-07-31 01:40

Although the transition from paper to electronic health records hasn't been easy, it's certainly paid off.

Those in the medical industry can now access patient information more easily, allowing them to eliminate the mistakes that characterized the use of paper forms. However, organizations should be wary of the dangers EHR implementations pose to database security.

Eliminating grievous mistakes
That's not to say professionals should abandon EHR technology. The National Institute For Health Care Reform acknowledged how using EHR can eliminate what physicians, hospital administrators and others in the health care sector call "unintended discrepancies." These instances are essentially minor mishaps that can have major repercussions.

Unfortunately, fragmented delivery systems will provide inaccurate information regarding medications, especially when patients are being admitted to and released from hospitals. This can cause doctors to accidentally omit, duplicate or add unnecessary prescriptions. In a worst-case scenario, this could cause a person to overdose.

The benefits
The NIHCR outlined what facilities need to prevent these mistakes from occurring, and it starts with the implementation of an EHR system. Such technology can allow hospitals and personnel to:

  • Aggregate accurate, applicable pre-admission medication data
  • Compare hospital prescription orders to previous medications so physicians can make educated treatment decisions
  • Share relevant lists pertaining to medicines administered for the discharge phase with primary care doctors, nursing facilities and other places

The situation
Obviously, a lot of digital information is being stored and transferred. Some connections may be more secure than others, but the environment is a hacker's dream come true. HealthITSecurity contributor Greg Michaels acknowledged that while exchanging patient intelligence may enable physicians to deliver better care, health care organizations find they can't dedicate enough resources to sanctioning safe delivery.

Michaels advised medical industry participants heavily entrenched in EHR uses to work with a trusted, third-party IT security expert. In addition to communication surveillance, the outsourced entity should be capable of providing remote database management and monitoring as well. Michaels also recommended professionals abide by the following best practices:

  • Audit all partners to see which ones provide their customers with protected health information and identify which IT protection measures they're taking
  • Open communication with third-parties so data breaches affecting multiple organizations can be addressed in a united manner
  • Ensure all partners are compliant with standards outlined by the Health Insurance Portability and Accountability Act
  • Educate in-house personnel on how to take basic security measures

By seeking help from a database administration service and implementing basic protective measures, hospitals will be able to use EHR with limited risk of sustaining an IT disaster.

The post Don’t let database security woes outweigh EHR benefits appeared first on Remote DBA Experts.

Developing with JAX-RS 2.0 for WebLogic Server 12.1.3

Steve Button - Thu, 2014-07-31 00:47
In an earlier post on the topic of Using JAX-RS 2.0 with WebLogic Server 12.1.3, I described that we've utilized the shared-library model to distribute and enable it.

This approach exposes the JAX-RS 2.0 API and enlists the Jersey 2.x implementation on the target server, allowing applications to make use of it when they are deployed with a library reference in a weblogic deployment descriptor.

The one resulting consideration here from a development perspective is that since this API is not part of the javaee-api-6.jar nor is it a default API of the server, it's not available in the usual development API libraries that WebLogic provides.

For instance, the $ORACLE_HOME/wlserver/server/lib/api.jar doesn't contain a reference to the JAX-RS 2.0 API, nor does the set of maven artifacts we produce and push to a repository via the oracle-maven-sync plugin contain the javax.ws.rs-api-2.0.jar library.

To develop an application using JAX-RS 2.0 to deploy to WebLogic Server 12.1.3, the javax.ws.rs-api-2.0.jar needs to be sourced and added to the development classpath.

Using maven, this is very simple to do by adding an additional dependency for the javax.ws.rs:javax.ws.rs-api:2.0 artifact that is hosted in public maven repositories:

    <dependency>
        <groupId>javax.ws.rs</groupId>
        <artifactId>javax.ws.rs-api</artifactId>
        <version>2.0</version>
        <scope>provided</scope>
    </dependency>

Note here that the scope is set to provided, since the library will be realized at runtime through the jax-rs-2.0.war shared library that is deployed to the target server and referenced by the application. It doesn't need to be packaged with the application to deploy to WebLogic Server 12.1.3.
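
For reference, a minimal sketch of what that library reference in weblogic.xml might look like (the library-name and version values here are assumptions; they must match the name under which the jax-rs-2.0.war shared library was registered on the target server):

<weblogic-web-app xmlns="http://xmlns.oracle.com/weblogic/weblogic-web-app">
    <!-- Reference the JAX-RS 2.0 shared library deployed on the server;
         the name below is an assumption and must match the deployment. -->
    <library-ref>
        <library-name>jax-rs</library-name>
        <specification-version>2.0</specification-version>
        <exact-match>false</exact-match>
    </library-ref>
</weblogic-web-app>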

For other build systems using automated dependency management such as Gradle or Ant/Ivy, the same sort of approach can be used.
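
For instance, a minimal Gradle sketch (assuming the war plugin, whose providedCompile configuration keeps the jar out of the packaged WAR) might look like this:

apply plugin: 'war'

dependencies {
    // Compile against the JAX-RS 2.0 API, but don't package it in the WAR;
    // the server's shared library provides it at runtime.
    providedCompile 'javax.ws.rs:javax.ws.rs-api:2.0'
}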

For Ant based build systems, the usual approach of obtaining the necessary API libraries and adding them to the development CLASSPATH will work. Be mindful that there is no need to bundle the javax.ws.rs-api-2.0.jar in the application itself, as it will be available from the server when correctly deployed and referenced in the weblogic deployment descriptor.

"Private" App Class Members

Jim Marion - Thu, 2014-07-31 00:23

I was reading Lee Greffin's post More Fun with Application Packages -- Instances and stumbled across this quote from PeopleBooks:

A private instance variable is private to the class, not just to the object instance. For example, consider a linked-list class where one instance needs to update the pointer in another instance.

What exactly does that mean? I did some testing to try and figure it out. Here is what I came up with:

  1. It is still an instance variable, which means each in-memory object created from the App Class blueprint has its own memory placeholder for each instance member.
  2. Instances of other classes can't interact with private instance members.
  3. Instances of the exact same class CAN interact with private members of a different instance.
  4. Private instance members differ from static members in other languages because they don't all share the same pointer (pointer, reference, whatever).

I thought it was worth proving so here is my sample. It is based on the example suggested in PeopleBooks:

For example, consider a linked-list class where one instance needs to update the pointer in another instance.

The linked list is just an item with a pointer to the next item (forward only). A program using it keeps a pointer to the "head" and then calls next() to iterate over the list. It is a very common pattern so I will forgo further explanation. Here is a quick implementation (in the App Package JJM_COLLECTIONS):

class ListItem
method ListItem(&data As any);
method linkTo(&item As JJM_COLLECTIONS:ListItem);
method next() Returns JJM_COLLECTIONS:ListItem;
method getData() Returns any;
private
instance JJM_COLLECTIONS:ListItem &nextItem_;
instance any &data_;
end-class;

method ListItem
/+ &data as Any +/
%This.data_ = &data;
end-method;

method linkTo
/+ &item as JJM_COLLECTIONS:ListItem +/
&item.nextItem_ = %This;
end-method;

method next
/+ Returns JJM_COLLECTIONS:ListItem +/
Return %This.nextItem_;
end-method;

method getData
/+ Returns Any +/
Return %This.data_;
end-method;

Notice the linkTo method sets the value of the private instance member of a remote instance (its parameter), NOT the local instance. This is what is meant by private to the class, not private to the instance. Each instance has its own &nextItem_ instance member and other instances of the exact same class can manipulate it. Here is the test case I used to test the remote manipulation implementation:

import TTS_UNITTEST:TestBase;
import JJM_COLLECTIONS:ListItem;

class TestListItem extends TTS_UNITTEST:TestBase
method TestListItem();
method Run();
end-class;

method TestListItem
%Super = create TTS_UNITTEST:TestBase("TestListItem");
end-method;

method Run
/+ Extends/implements TTS_UNITTEST:TestBase.Run +/
Local JJM_COLLECTIONS:ListItem &item1 =
create JJM_COLLECTIONS:ListItem("Item 1");
Local JJM_COLLECTIONS:ListItem &item2 =
create JJM_COLLECTIONS:ListItem("Item 2");

&item2.linkTo(&item1);

%This.AssertStringsEqual(&item1.next().getData(), "Item 2",
"The next item is not Item 2");
%This.Msg(&item1.next().getData());
end-method;

The way it is written requires you to create the second item and then call the second item's linkTo method to associate it with the head (or previous) element.

Now, just because you CAN manipulate a private instance member from a remote instance doesn't mean you SHOULD. Doing so seems to violate encapsulation. You could accomplish the same thing by reversing the linkTo method. What if we flipped this around so you created the second item, but called the first item's linkTo? It is really the first item we want to manipulate in a forward-only list (now, if it were a multi-directional list, perhaps we would want to manipulate the &prevItem_ member?). Here is what the linkTo method would look like:

method linkTo
/+ &item as JJM_COLLECTIONS:ListItem +/
%This.nextItem_ = &item;
end-method;

Now what if we wanted a forward AND reverse linked list? Here is where maybe the ability to manipulate siblings starts to seem a little more reasonable (I still think there is a better way, but humor me):

class ListItem
method ListItem(&data As any);
method linkTo(&item As JJM_COLLECTIONS:ListItem);
method next() Returns JJM_COLLECTIONS:ListItem;
method prev() Returns JJM_COLLECTIONS:ListItem;
method remove() Returns JJM_COLLECTIONS:ListItem;
method getData() Returns any;
private
instance JJM_COLLECTIONS:ListItem &nextItem_;
instance JJM_COLLECTIONS:ListItem &prevItem_;
instance any &data_;
end-class;

method ListItem
/+ &data as Any +/
%This.data_ = &data;
end-method;

method linkTo
/+ &item as JJM_COLLECTIONS:ListItem +/
REM ** manipulate previous sibling;
&item.nextItem_ = %This;
%This.prevItem_ = &item;
end-method;

method next
/+ Returns JJM_COLLECTIONS:ListItem +/
Return %This.nextItem_;
end-method;

method prev
/+ Returns JJM_COLLECTIONS:ListItem +/
Return %This.prevItem_;
end-method;

method remove
/+ Returns JJM_COLLECTIONS:ListItem +/
%This.nextItem_.linkTo(%This.prevItem_);
REM ** Or manipulate both siblings;
REM %This.prevItem_.nextItem_ = %This.nextItem_;
REM %This.nextItem_.prevItem_ = %This.prevItem_;
Return %This.prevItem_;
end-method;

method getData
/+ Returns Any +/
Return %This.data_;
end-method;

And here is the final test case

import TTS_UNITTEST:TestBase;
import JJM_COLLECTIONS:ListItem;

class TestListItem extends TTS_UNITTEST:TestBase
method TestListItem();
method Run();
end-class;

method TestListItem
%Super = create TTS_UNITTEST:TestBase("TestListItem");
end-method;

method Run
/+ Extends/implements TTS_UNITTEST:TestBase.Run +/
Local JJM_COLLECTIONS:ListItem &item1 =
create JJM_COLLECTIONS:ListItem("Item 1");
Local JJM_COLLECTIONS:ListItem &item2 =
create JJM_COLLECTIONS:ListItem("Item 2");
Local JJM_COLLECTIONS:ListItem &item3 =
create JJM_COLLECTIONS:ListItem("Item 3");

&item2.linkTo(&item1);

%This.AssertStringsEqual(&item1.next().getData(), "Item 2",
"Test 1 failed. The next item is not Item 2");
%This.AssertStringsEqual(&item2.prev().getData(), "Item 1",
"Test 2 failed. The prev item is not Item 1");

&item3.linkTo(&item2);
%This.AssertStringsEqual(&item1.next().next().getData(), "Item 3",
"Test 3 failed. The next.next item is not Item 3");
%This.AssertStringsEqual(&item1.next().next().prev().getData(), "Item 2",
"Test 4 failed. The prev item is not Item 2");

Local JJM_COLLECTIONS:ListItem &temp = &item2.remove();
%This.AssertStringsEqual(&item1.next().getData(), "Item 3",
"Test 5 failed. The next item is not Item 3");
%This.AssertStringsEqual(&item1.next().prev().getData(), "Item 1",
"Test 6 failed. The prev item is not Item 1");

end-method;

I hope that helps clear up some of the confusion around the term "private" as it relates to Application Classes.

Mobile-first learning platform EmpoweredU acquired by Qualcomm

Michael Feldstein - Wed, 2014-07-30 17:49

Qualcomm, the giant $26 billion wireless technology conglomerate, acquired EmpoweredU – a mobile-first learning platform available for the education market. What does this acquisition mean?

Who is EmpoweredU?

The company was created by CEO Steve Poizner in 2011 in partnership with Creative Artists Agency, the world’s largest sports and talent agency, under the name “Encore Career Institute”. The initial work was to offer continuing ed classes targeted at Baby Boomers through the UCLA extension. (These are certificate programs for $5,000 – $10,000 total tuition.) In essence, this was an Online Service Provider (OSP) model similar to Embanet, Deltak, Academic Partnerships and 2U. As described by the San Francisco Chronicle in 2011:

Poizner, in an interview at the firm’s headquarters this week, said the company combines “three of California’s greatest assets” – its famed public university system, the creative know-how of its technology center, Silicon Valley, and the cutting-edge marketing savvy of Hollywood. [snip]

In addition to its employment potential for Baby Boomers, Poizner said, the collaboration could bring new revenue for cash-strapped UCLA and thousands of new students from around the nation to its online courses.

The company changed names to Empowered Careers and then eventually settled on EmpoweredU.

In the meantime they figured out that the OSP model is high risk and expensive, often requiring investments of $1 – $10 million per program by the OSP, with revenue-sharing profits occurring several years later. EmpoweredU has pivoted over the past year to become a mobile-first learning platform with content services.

The platform is built on top of the Canvas open source version offered by Instructure and started pilots at 15+ schools this spring (including specific programs at USC, UC Berkeley, U of San Francisco, etc). This may be the most significant use of open source Canvas, and it might end up competing with Canvas, at least indirectly.

As we’ll see later, EmpoweredU is also attempting to create a learning ecosystem that can combine multiple technologies.

Why is Qualcomm making an ed tech acquisition?

I interviewed Vicki Mealer (Senior Director, Business Development, Qualcomm Labs, which is the unit acquiring EmpoweredU) and Steve Poizner today. Vicki’s description of Qualcomm’s interest in ed tech is that they are all about mobile technology, and they have had a philanthropic interest in education for years (donating over $240 million cash to various institutions). Qualcomm wants to be a behind-the-scenes cheerleader, but they also need an ecosystem to form in each market. Qualcomm Labs started looking at education a year ago, trying to identify and overcome barriers to adoption of mobile technology. Some of the perceived barriers:

  • The digital divide leading to students having gaps in their connectivity (wi-fi vs. cellular);
  • Vendor lock-in and lack of modularity, causing school leaders to have painful technology replacement decisions to move into a mobile strategy; and
  • A lack of software and tools for instructors to take advantage of mobile features and be able to develop curriculum that leverages the technology – partially to have instructors catch up to where the students are.

For Qualcomm Labs, EmpoweredU can provide the modular ecosystem for education and shares their device-agnostic views. This will help them accelerate adoption of mobile in education.

Steve is becoming the SVP of a new business unit within the Labs, called Qualcomm Education. The EmpoweredU unit will combine with a separate Mobile Learning Framework initiative and broaden its focus to K-20.

Should we care?

I visited the company in May of this year and saw a very different design approach than the current generation of browser-based learning platforms that have added mobile features as after-the-fact enhancements. At this point EmpoweredU is a niche player targeting only specific academic programs that can afford an iPad one-to-one approach or similar methods to ensure that all students have tablets. Longer term they see this need broadening out to entire institutions. The technology has a full browser interface, so the company could target institution-wide opportunities should they choose.

What is meant by mobile-first in this case is that the platform was conceived and designed around the iPad, directly integrating device features such as location as well as camera and microphone input. In addition, the platform uses push notifications to alert students to assignments or due dates.

[Screenshot: main UI]

One feature that I find quite important for the mobile world is automatic caching to allow offline access. The default setup syncs the current, past, and next week’s material to the device while connected, allowing offline work that will be re-synched when back on the network.

While the platform was written originally for the iPad, they now support multiple devices and have one pilot that is web only.

In a nod to their OSP origins and content-generating experience, EmpoweredU offers “content sherpas” and a content authoring system. The idea is to support faculty and designers who are attempting to design courses and content that take advantage of the mobile platform.

[Screenshot: platform overview]

They released initial analytics support in the spring.

During the interview, it became apparent that Qualcomm is interested not just in the learning platform, but in EmpoweredU’s broader plans to create an ecosystem.

[Screenshot: ecosystem components]

I pushed them to describe who would be their competitors, either in higher ed or K-12, but they would not directly answer. They kept coming back to the ecosystem and the ability to provide a modular approach and not force rip and replace strategies. I can see this in theory but question what this means in reality.

From an initial look at the company, it will be interesting to watch to see if Qualcomm’s financial backing will allow EmpoweredU to move beyond a niche provider for select programs and attempt to directly compete in the LMS market for institutions or at least compete more broadly. It will also be interesting to see if they are successful in their entrance to the K-12 market. If so, the learning platform market will get even more interesting.

As for the full ecosystem, there are not enough details to understand how seriously to take this approach. Are schools even ready for this approach? How does this ecosystem relate to the LTI specifications that are fundamentally changing the ed tech market? I have many questions in this area that we’ll have to watch over time.

Update: Corrected University of San Francisco reference (and not UC San Francisco) per comments below.

The post Mobile-first learning platform EmpoweredU acquired by Qualcomm appeared first on e-Literate.

“Why I Left The Wall Street Journal To Join Oracle” by Michael Hickins

Linda Fishman Hoyle - Wed, 2014-07-30 16:48

Michael Hickins (pictured left) worked for The Wall Street Journal as editor of its CIO Journal.

Now he works for Oracle as the director of strategic communications.

He made the move to “get closer to the people who make and use technology.” He believes that the technology made by companies like Oracle is changing the world. He wants to help people understand those changes and be part of their stories.

As a reporter, Hickins’ vision of the world “was limited by what people choose to share.” He felt removed from what he was writing about. His readers were at an even greater distance from the source.

So instead of continuing to write “from the safety of his cave,” he has joined one of the greatest technology companies in history so that he can experience technology first hand and tell the world about the far-reaching effects of Oracle Cloud innovations.

Read the article.

Supplement: Reducing OMS patching time

Jean-Philippe Pinte - Wed, 2014-07-30 13:34

The new version of Oracle Enterprise Manager (12.1.0.4), along with the bundle patches for OMS, has been available for a few weeks now.
If you plan to apply these patches and service downtime is a concern for you, know that it is now possible to reduce it!

The document entitled "Reducing Downtime While Patching Multi-OMS Environments" contains the instructions and the steps for doing so.

High Res only - one version of the resource to serve

Eric Rajkovic - Wed, 2014-07-30 12:39
With new hardware and fast internet connections, is it now the norm to keep images in only one high-res version, regardless of how the content is used in the article?

Today's example is this article on Forbes; Why I Left The Wall Street Journal To Join Oracle

From the article, here are two images: [silhouette placeholder images]
This looks like a style of publishing where a subset of a full publication is injected into the left pane of another article to cross-reference articles. If you continue reading on the site, you start to benefit from this, as you already have the resource in your local cache.
Is this a new trend or a one-off?
The other new trend I am seeing is the use of blurred images on iOS, as background images in LinkedIn Connect for iOS.

Interesting Behavior of MaxCmdsInTran Parameter

Pythian Group - Wed, 2014-07-30 10:06

I recently worked on a transactional replication issue and discovered interesting behavior of the Log Reader Agent switch called MaxCmdsInTran, and I wanted to share it with you guys.

Let's take a look at the use of this switch in the MSDN documentation below:

MaxCmdsInTran number_of_commands

Specifies the maximum number of statements grouped into a transaction as the Log Reader writes commands to the distribution database. Using this parameter allows the Log Reader Agent and Distribution Agent to divide large transactions (consisting of many commands) at the Publisher into several smaller transactions when applied at the Subscriber. Specifying this parameter can reduce contention at the Distributor and reduce latency between the Publisher and Subscriber. Because the original transaction is applied in smaller units, the Subscriber can access rows of a large logical Publisher transaction prior to the end of the original transaction, breaking strict transactional atomicity. The default is 0, which preserves the transaction boundaries of the Publisher.
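
For context, the switch is typically appended to the Log Reader Agent's command line (the agent job step or agent profile); a sketch, with placeholder server and database names:

-Publisher [MYPUBLISHER] -PublisherDB [MyPublishedDB] -Distributor [MYDISTRIBUTOR]
-DistributorSecurityMode 1 -MaxCmdsInTran 10000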

However, I observed that if you do an update on a primary key column, the transaction won't be split into multiple smaller transactions as described in the documentation.

Looking into this further reveals that it is probably the effect of a bounded update. A bounded update has to be processed as a whole, since it sends all deletes followed by all inserts; it can't be broken into smaller transactions, as the agent wouldn't know what a safe boundary would be.

The key difference lies in how updates are replicated when you update a PK column versus a non-PK column. Let's take an example to look at this (in this example, C1 is the non-PK column and C2 is the PK column).

If you update the non-PK column, it is replicated as an update.

-- Updating non-PK column

begin tran My_Deferred_Update_1_Row

update T1 set c1 = 1 where C1=2

commit tran My_Deferred_Update_1_Row

-- Below is what gets added in msrepl_commands

exec sp_replshowcmds 1000

xact_seqno                                      command

0x0000016E000005330004 {CALL [dbo].[sp_MSupd_dbot1] (1,,2,0x01)}

What is bounded update?

However, when you do an update on a PK/clustered index column, it is replicated as a delete/insert pair.

-- Updating unique column

begin tran My_Bounded_Update_2_Rows

update T1 set c2 = c2 + 1000

commit tran My_Bounded_Update_2_Rows

-- Below is what gets added in msrepl_commands

exec sp_replshowcmds 1000

xact_seqno                                        command

0x00000017000000B5000E  {CALL [dbo].[sp_MSdel_dboT1] (1)}

0x00000017000000B5000E  {CALL [dbo].[sp_MSdel_dboT1] (2)}

0x00000017000000B5000E  {CALL [dbo].[sp_MSins_dboT1] (1,3000,1)}

0x00000017000000B5000E  {CALL [dbo].[sp_MSins_dboT1] (2,1002,2)}

As you can see in the above case, when we do an update on a PK/clustered index column, the updates are sent as deletes followed by inserts (this is called a bounded update). This is one single transaction converted into a delete/insert pair, with all deletes sent first, followed by the inserts.

We cannot break this transaction (a PK update), as doing so would cause the deletes (some or all) to happen first and the inserts to follow in a separate transaction, breaking the transaction boundary. Splitting this operation into multiple transactions would cause inconsistency, and that is most probably the reason this switch doesn't work in this situation.

Why does replication send all deletes first and then all inserts, rather than delete/insert pairs in order?

Let's assume table A contains two rows, with unique column C1 values of 1 and 2.

Now the user runs the following: update A set c1 = c1 + 1.

The log records will be like

LOP_BEGIN_UPDATE

Del 1

Ins 2

Del 2

Ins 3

LOP_END_UPDATE

And the commands posted in the distribution database will be like

{CALL [sp_MSdel_dboA] (1)}

{CALL [sp_MSdel_dboA] (2)}

{CALL [sp_MSins_dboA] (1,2)}

{CALL [sp_MSins_dboA] (2,3)}

But if it sent the updates directly, you'd see

Update A set c1 = 2

Update A set c1 = 3

In that case, the first update would fail, since c1 = 2 already exists. That's why it deletes the rows first before inserting them back with the new values.

I would recommend looking at the option of publishing the stored procedure execution to avoid these kinds of huge updates, which cause performance issues in replication.
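
A minimal sketch of that approach (the publication and procedure names below are hypothetical): publish the procedure with @type = N'proc exec', so that only the single EXEC statement is replicated rather than the thousands of row changes it generates.

-- Publish the stored procedure's execution instead of the row changes it makes
EXEC sp_addarticle
    @publication   = N'MyPublication',   -- hypothetical publication name
    @article       = N'usp_MassUpdate',  -- hypothetical procedure name
    @source_owner  = N'dbo',
    @source_object = N'usp_MassUpdate',
    @type          = N'proc exec';       -- replicate the EXEC, not the rows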

Happy Reading!

Categories: DBA Blogs

Fusion Developer Relations Resources - A must have for Sales Cloud integrators

Angelo Santagata - Wed, 2014-07-30 09:31

All,

Full disclosure: I received this in an email from Apps Developer Relations, but it's soooo good I wanted to share it with all. Bottom line: if you're integrating your product/package/system with Sales Cloud, then after you've perused the standard documentation (which we are evolving rapidly), this is a treasure trove of articles/blogs/viewlets you can use.

Obviously, use this in conjunction with the standard Oracle Cloud Documentation (http://docs.oracle.com/cloud/latest/salescs_gs/index.html).

Enjoy!

Title:

Introducing the Oracle Fusion Applications Developer Relations Team

Abstract:

You’ll find a wealth of resources and hands-on expertise available from the Oracle Fusion Applications Developer Relations Team.


If you are evaluating or designing customizations and extensions for your Fusion Applications environment (SaaS or On-Premises) then you’ll find a wealth of resources and hands-on expertise available from this Oracle group. The team was formed to help customers and partners be successful with their development projects using the Fusion Applications platform, and provide the following publically available services:

  • An extensive Blog Site with over 150 articles covering many types of customization, extension, and integration.
  • An open Forum Site for technical questions from the development community.
  • A popular YouTube Channel with over 100 bite-size videos demonstrating a broad range of customization and extension features.
  • Whitepapers on topics including custom application development, ESS development, and Groovy and Expression Language.

So check out their resources or get in touch, and make your own development project tasks a little easier.

In-Memory Column Store: 10046 May Be Lying to You!

Pythian Group - Wed, 2014-07-30 07:46

The Oracle In-Memory Column Store (IMC) is a new database option available to Oracle Database Enterprise Edition (EE) customers. It introduces a new memory area housed in your SGA, which makes use of the new compression functionality brought by the Oracle Exadata platform, as well as new column-oriented data storage vs. the traditional row-oriented storage. Note: you don’t need to be running on Exadata to be able to use the IMC!

 

Part I – How does it work?

In this part we’ll take a peek under the hood of the IMC and check out some of its internal mechanics.

Let’s create a sample table which we will use for our demonstration:


create table test inmemory priority high
as
select a.object_name as name, rownum as rn,
sysdate + rownum / 10000 as dt
from all_objects a, (select rownum from dual connect by level <= 500)
/

Almost immediately upon creating this table, the w00? processes will wake up from sleeping on the event ‘Space Manager: slave idle wait’ and start their analysis to check out the new table. By the way, the sleep times for this event are between 3 and 5 seconds, so it’s normal if you experience a little bit of a delay.

The process that picked it up will then create a new entry in the new dictionary table compression$, such as this one:

SQL> exec pt('select ts#,file#,block#,obj#,dataobj#,ulevel,sublevel,ilevel,flags,bestsortcol, tinsize,ctinsize,toutsize,cmpsize,uncmpsize,mtime,spare1,spare2,spare3,spare4 from compression$');
TS# : 4
FILE# : 4
BLOCK# : 130
OBJ# : 20445
DATAOBJ# : 20445
ULEVEL : 5
SUBLEVEL : 9
ILEVEL : 1582497813
FLAGS :
BESTSORTCOL : -1
TINSIZE : 16339840
CTINSIZE :
TOUTSIZE : 9972219
CMPSIZE :
UNCMPSIZE :
MTIME : 13-may-2014 23:14:46
SPARE1 : 31
SPARE2 : 5256
SPARE3 : 571822
SPARE4 :



Plus, there is also a BLOB column in compression$, which holds the analyzer’s findings:


SQL> select analyzer from compression$;

ANALYZER
--------------------------------------------------------------------------------
004B445A306AD5025A0000005A6B8E0200000300000000000001020000002A0000003A0000004A (output truncated for readability)


A quick check reveals that this is indeed our object:


SQL> exec pt('select object_name, object_type, owner from dba_objects where data_object_id = 20445');
OBJECT_NAME : TEST
OBJECT_TYPE : TABLE
OWNER : FOO
-----------------

PL/SQL procedure successfully completed.


And we can see the object is now stored in the IMC by looking at v$im_segments:

SQL> exec pt('select * from v$im_segments');
OWNER : FOO
SEGMENT_NAME : TEST
PARTITION_NAME :
SEGMENT_TYPE : TABLE
TABLESPACE_NAME : USERS
INMEMORY_SIZE : 102301696
BYTES : 184549376
BYTES_NOT_POPULATED : 0
POPULATE_STATUS : COMPLETED
INMEMORY_PRIORITY : HIGH
INMEMORY_DISTRIBUTE : AUTO
INMEMORY_DUPLICATE : NO DUPLICATE
INMEMORY_COMPRESSION : FOR QUERY LOW
CON_ID : 0
-----------------

PL/SQL procedure successfully completed.



Thus, we are getting the expected performance benefit of it being in the IMC:

SQL> alter session set inmemory_query=disable;

Session altered.

Elapsed: 00:00:00.01
SQL> select count(*) from test;

COUNT(*)
----------
4187500

Elapsed: 00:00:03.96
SQL> alter session set inmemory_query=enable;

Session altered.

Elapsed: 00:00:00.01
SQL> select count(*) from test;

COUNT(*)
----------
4187500

Elapsed: 00:00:00.13


So far, so good.


Part II – Execution Plans

There are some things we need to be aware of, though, when we are using the IMC in 12.1.0.2. One of them is that we can't always trust the execution plans anymore.

Let’s go back to our original sample table and recreate it using the default setting of INMEMORY PRIORITY NONE.


drop table test purge
/

create table test inmemory priority none
as
select a.object_name as name, rownum as rn,
sysdate + rownum / 10000 as dt
from all_objects a, (select rownum from dual connect by level <= 500)
/



Now let’s see what plan we’d get if we were to query it right now:


SQL> explain plan for select name from test where name = 'ALL_USERS';

Explained.

SQL> @?/rdbms/admin/utlxpls

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------
Plan hash value: 1357081020

-----------------------------------------------------------------------------------
| Id  | Operation                  | Name | Rows | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |      |  614 | 12280 |   811  (73)| 00:00:01 |
|*  1 |  TABLE ACCESS INMEMORY FULL| TEST |  614 | 12280 |   811  (73)| 00:00:01 |
-----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - inmemory("NAME"='ALL_USERS')
       filter("NAME"='ALL_USERS')

14 rows selected.


Okay, you might say now that EXPLAIN PLAN is only a guess. It’s not the real plan, and the real plan has to be different. And you would be right. Usually.

Watching the slave processes, there is no activity related to this table. Since its PRIORITY is NONE, it won't be loaded into the IMC until it's actually queried for the first or second time.

So let's take a closer look then, shall we:

SQL> alter session set tracefile_identifier='REAL_PLAN';

Session altered.

SQL> alter session set events '10046 trace name context forever, level 12';

Session altered.

SQL> select name from test where name = 'ALL_USERS';



Now let’s take a look at the STAT line on that tracefile. Note: I closed the above session to make sure that we’ll get the full trace data.


PARSING IN CURSOR #140505885438688 len=46 dep=0 uid=64 oct=3 lid=64 tim=32852930021 hv=3233947880 ad='b4d04b00' sqlid='5sybd9b0c4878'
select name from test where name = 'ALL_USERS'
END OF STMT
PARSE #140505885438688:c=6000,e=10014,p=0,cr=2,cu=0,mis=1,r=0,dep=0,og=1,plh=1357081020,tim=32852930020
EXEC #140505885438688:c=0,e=58,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=1357081020,tim=32852930241
WAIT #140505885438688: nam='SQL*Net message to client' ela= 25 driver id=1650815232 #bytes=1 p3=0 obj#=20466 tim=32852930899
WAIT #140505885438688: nam='direct path read' ela= 13646 file number=4 first dba=21507 block cnt=13 obj#=20466 tim=32852950242
WAIT #140505885438688: nam='direct path read' ela= 2246 file number=4 first dba=21537 block cnt=15 obj#=20466 tim=32852953528
WAIT #140505885438688: nam='direct path read' ela= 1301 file number=4 first dba=21569 block cnt=15 obj#=20466 tim=32852955406

FETCH #140505885438688:c=182000,e=3365871,p=17603,cr=17645,cu=0,mis=0,r=9,dep=0,og=1,plh=1357081020,tim=32857244740
STAT #140505885438688 id=1 cnt=1000 pid=0 pos=1 obj=20466 op='TABLE ACCESS INMEMORY FULL TEST (cr=22075 pr=22005 pw=0 time=865950 us cost=811 size=12280 card=614)'



So that's still the wrong plan right there, and the STAT line even clearly shows that we actually did 22005 physical reads, and therefore likely no in-memory scan, but a full scan from disk. There's clearly a bug here: the execution plan reported is plain wrong.

Thus, be careful about using INMEMORY PRIORITY NONE, as you may not get what you expect. Since PRIORITY NONE settings may be overridden by any other PRIORITY settings, your data may get flushed out of the IMC, even though your execution plans say otherwise. And I'm sure many of you know it's often not slow query response times which set the phone ringing hot; it's inconsistent response times. This feature, if used inappropriately, will pretty much guarantee inconsistent response times.

Apparently, what we should be doing is sizing the In-Memory Column Store appropriately to hold the objects we actually need in there, and making sure they're always in there by setting a PRIORITY of LOW or higher. Use CRITICAL and HIGH to ensure the most vital objects of the application are populated first.
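
For example, a quick sketch (the segment names below are made up for illustration):

-- Pin the application's most vital segments with explicit priorities,
-- so they are populated into the IMC without waiting to be queried:
alter table orders inmemory memcompress for query low priority critical;
alter table order_items inmemory priority high;

-- Then verify population:
select segment_name, populate_status, inmemory_priority from v$im_segments;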

There was one other oddity that I noticed while tracing the W00? processes.

Part III – What are you scanning, Oracle?

The m000 process’ trace file reveals many back-to-back executions of this select:


PARSING IN CURSOR #140670951860040 len=104 dep=1 uid=0 oct=3 lid=0 tim=23665542991 hv=2910336760 ad='fbd06928' sqlid='24uqc4aqrhdrs'
select /*+ result_cache */ analyzer from compression$ where obj#=:1 and ulevel=:2



They all supply the same obj# bind value, which is our table’s object number. The ulevel values used vary between executions.


However, looking at the related WAIT lines for this cursor, we see:


WAIT #140670951860040: nam='direct path read' ela= 53427 file number=4 first dba=18432 block cnt=128 obj#=20445 tim=23666569746
WAIT #140670951860040: nam='direct path read' ela= 38073 file number=4 first dba=18564 block cnt=124 obj#=20445 tim=23666612210
WAIT #140670951860040: nam='direct path read' ela= 38961 file number=4 first dba=18816 block cnt=128 obj#=20445 tim=23666665534
WAIT #140670951860040: nam='direct path read' ela= 39708 file number=4 first dba=19072 block cnt=128 obj#=20445 tim=23666706469
WAIT #140670951860040: nam='direct path read' ela= 40242 file number=4 first dba=19328 block cnt=128 obj#=20445 tim=23666749431
WAIT #140670951860040: nam='direct path read' ela= 39147 file number=4 first dba=19588 block cnt=124 obj#=20445 tim=23666804243
WAIT #140670951860040: nam='direct path read' ela= 33654 file number=4 first dba=19840 block cnt=128 obj#=20445 tim=23666839836
WAIT #140670951860040: nam='direct path read' ela= 38908 file number=4 first dba=20096 block cnt=128 obj#=20445 tim=23666881932
WAIT #140670951860040: nam='direct path read' ela= 40605 file number=4 first dba=20352 block cnt=128 obj#=20445 tim=23666924029
WAIT #140670951860040: nam='direct path read' ela= 32089 file number=4 first dba=20612 block cnt=124 obj#=20445 tim=23666962858
WAIT #140670951860040: nam='direct path read' ela= 36223 file number=4 first dba=20864 block cnt=128 obj#=20445 tim=23667001900
WAIT #140670951860040: nam='direct path read' ela= 39733 file number=4 first dba=21120 block cnt=128 obj#=20445 tim=23667043146
WAIT #140670951860040: nam='direct path read' ela= 17607 file number=4 first dba=21376 block cnt=128 obj#=20445 tim=23667062232

… and several more.


Now, compression$ contains only a single row. Its total extent size is negligible as well:


SQL> select sum(bytes)/1024/1024 from dba_extents where segment_name = 'COMPRESSION$';

SUM(BYTES)/1024/1024
--------------------
.0625


So how come Oracle is reading so many blocks? Note that each of the above waits is a multi-block read of up to 128 blocks.

Let’s take a look at what Oracle is actually reading there:

begin
pt('select segment_name, segment_type, owner
from dba_extents where file_id = 4
and 18432 between block_id and block_id + blocks - 1');
end;
/

SEGMENT_NAME : TEST
SEGMENT_TYPE : TABLE
OWNER : FOO
-----------------

PL/SQL procedure successfully completed.

There's our table again. Wait. What?

There must be some magic going on underneath the covers here. In my understanding, a plain select against table A should not be scanning table B.

If I manually run the same select statement against compression$, I get totally normal trace output.

This reminds me of the good old:

SQL> select piece from IDL_SB4$;
ERROR:
ORA-00932: inconsistent datatypes: expected CHAR got B4



But I digress.

It could simply be a bug that results in these direct path reads being allocated to the wrong cursor. Or it could be intended: it is indeed this process's job to analyze and load this table, and this way the resource usage it causes is instrumented and can be tracked.

Either way, to sum things up we can say that:

- Performance benefits can potentially be huge
- Oracle automatically scans and caches segments marked as INMEMORY PRIORITY LOW|MEDIUM|HIGH|CRITICAL (they don’t need to be queried first!)
- Oracle scans segments marked as INMEMORY PRIORITY NONE (the default) only after they’re accessed the second time – and they may get overridden by higher priorities
- Oracle analyzes the table and stores the results in compression$
- Based on that analysis, Oracle may decide to load one or the other column only into IMC, or the entire table, depending on available space, and depending on the INMEMORY clause used
- It’s the W00? processes using some magic to do this analysis and read the segment into IMC.
- This analysis is also likely to be triggered again, whenever space management of the IMC triggers again, but I haven’t investigated that yet.

Categories: DBA Blogs