
Feed aggregator

Automatic ADF Popup Opening on Fragment Load

Andrejus Baranovski - Sun, 2015-03-29 08:56
I had a post about opening an ADF Popup on page load - Opening ADF PopUp on Page Load. The approach is quite straightforward: the developer needs to use the showPopupBehavior operation with the appropriate trigger type. When it comes to opening an ADF Popup on fragment load, the implementation is a bit more complex. There is a known method of implementing a hidden text field and calling your custom logic from its getter method - the getter will be executed when the fragment loads. However, this is not very efficient: you will need to add a condition to distinguish between the first and subsequent calls to the getter (it will be executed multiple times). In this post I will describe a different approach - using the ADF poll component and forcing it to execute only once after fragment load.

Here you can download the sample application - FragmentPopUpLoadApp.zip. This sample implements two UI tabs. Each of the tabs renders an ADF region. The first region displays information about all employees - a tree map with salary information:


Automatic popup opening is implemented in the second region - the Employees By Department tab. As soon as the user opens this tab, a popup is loaded to select a department. Data in the region is filtered based on the department selected in the popup:


Filtered data after a selection was made in the automatically opened popup:


The popup in the fragment is loaded on the first load by an ADF poll component. The poll component is set with a short interval of 10 milliseconds. During its first execution it will call a Java listener method, and in addition a JavaScript client listener will be invoked. Inside the JavaScript client listener, we disable the ADF poll component by setting its interval to a negative value. This is how the ADF poll executes only once and then stops:
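The original post shows the actual implementation in screenshots; as a rough, hypothetical sketch of the pieces described above (component ids, bean and method names are invented here, and the client-side setInterval() accessor on the poll component is an assumption to verify against the ADF Faces client-side API), the fragment snippet could look like this:

<af:resource type="javascript">
  function stopPollClientListener(pollEvent) {
      // Assumption: the poll client component exposes a setInterval() accessor;
      // a negative interval stops any further polling on the client.
      var poll = pollEvent.getSource();
      poll.setInterval(-1);
  }
</af:resource>
<af:poll id="autoLoadPoll" interval="10" clientComponent="true"
         pollListener="#{viewScope.DepartmentFilterBean.onPollLoad}">
  <af:clientListener method="stopPollClientListener" type="poll"/>
</af:poll>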


Here is the Java listener method, invoked by the ADF poll component - it loads the popup:
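The listener itself appears as a screenshot in the original post; a minimal sketch of such a method, assuming the popup is bound to the bean as a RichPopup (the bean and popup names here are hypothetical), might be:

import oracle.adf.view.rich.component.rich.RichPopup;
import oracle.adf.view.rich.event.PollEvent;

public class DepartmentFilterBean {

    // assumed to be wired via the binding attribute of the af:popup
    private RichPopup departmentPopup;

    public void onPollLoad(PollEvent pollEvent) {
        // open the popup programmatically; default popup hints are enough here
        RichPopup.PopupHints hints = new RichPopup.PopupHints();
        departmentPopup.show(hints);
    }

    public void setDepartmentPopup(RichPopup departmentPopup) {
        this.departmentPopup = departmentPopup;
    }

    public RichPopup getDepartmentPopup() {
        return departmentPopup;
    }
}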


The ADF poll is stopped after its first execution. However, we need to ensure it will be started again if the user re-opens the same tab. For this purpose I have implemented conditional ADF region activation - the region is de-activated when the user navigates away from the tab. A tab disclosure listener updates a helper variable to track which tab becomes active:


The disclosure listener updates a page flow scope variable - forceActivate:
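The screenshot is not reproduced here; a hedged sketch of such a disclosure listener (bean and method names are made up, the variable name follows the post) could be:

import oracle.adf.view.rich.context.AdfFacesContext;
import org.apache.myfaces.trinidad.event.DisclosureEvent;

public class TabsBean {

    public void employeesByDeptDisclosureListener(DisclosureEvent disclosureEvent) {
        // true when the tab is being opened, false when the user navigates away
        boolean expanded = disclosureEvent.isExpanded();
        AdfFacesContext.getCurrentInstance().getPageFlowScope()
                       .put("forceActivate", expanded);
    }
}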


This variable is used in the region definition - the region is active when the tab is selected, and inactive otherwise.

node-oracledb 0.4.2 is on NPM (Node.js driver for Oracle Database)

Christopher Jones - Sat, 2015-03-28 18:41

The 0.4.2 version of the Node.js driver for Oracle Database is out.

  • Node-oracledb is now officially on the npmjs.com repository. This simplifies the Install instructions by removing the need to manually clone or download from GitHub. Thanks to Tim Branyen for setting this up and handing over stewardship to us.

  • Metadata support was added. Column names are now provided in the execute() callback result object. See the doc example, and the short sketch after this list.

  • We saw a few people try to use strangely old versions of Node 0.10. I've bumped up the lower limit requirement a bit. It won't force you to use the latest Node.js 0.10 patch set but you really should keep up to date with security fixes.

    If you want to build with Node 0.12, there is a community contributed patch from Richard Natal that can be found here. This patch also allows node-oracledb to work with io.js.

  • The default Instant Client directory on AIX was changed from /opt/oracle/instantclient_12_1 to /opt/oracle/instantclient. This now matches the default of other platforms.

  • One other small change was some improvements to the Windows install documentation.
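As a quick, hypothetical sketch of the metadata support mentioned above (the connection details, table and column names are placeholders; the callback shape follows the driver's execute() documentation):

// Sketch only: credentials, connect string and query are made-up placeholders.
var oracledb = require('oracledb');

oracledb.getConnection(
  { user: "hr", password: "welcome", connectString: "localhost/orcl" },
  function (err, connection) {
    if (err) { console.error(err.message); return; }
    connection.execute(
      "SELECT department_id, department_name FROM departments",
      function (err, result) {
        if (err) { console.error(err.message); return; }
        // New in 0.4.2: column names are exposed in result.metaData,
        // e.g. [ { name: 'DEPARTMENT_ID' }, { name: 'DEPARTMENT_NAME' } ]
        console.log(result.metaData);
        console.log(result.rows);
        connection.release(function (err) {
          if (err) { console.error(err.message); }
        });
      });
  });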

Yes, work is continuing behind the scenes on other features.

A Glance at Smartwatches in the Enterprise: A Moment in Time Experience

Usable Apps - Sat, 2015-03-28 02:30

Ultan O’Broin (@usableapps) talks to Oracle Applications User Experience (OAUX) Vice President Jeremy Ashley (@jrwashley) about designing apps for that smartwatch, and every other smartwatch, too.

Nobody wants their device to disrupt them from what they are doing or to have to move to another one to continue working. Keeping users in the moment of their tasks—independent of the devices they’re using—is central to any great user experience.

The ability to apply our Oracle Applications Cloud design philosophy to the smartwatch demonstrates an ideal realization of the “glance” method, keeping users in that moment: Making the complex simple, flexible, intuitive, and most of all, convenient. OAUX recognizes the need for smartwatch wearers to experience that “right here, right now” feeling, the one in which you have just what you need, just when you need it.

The wearable technology space is currently focused on smartwatches. We’re excited by Apple’s announcement about their smartwatch, and we’re even more thrilled to now show you our proof of concept glance designs for the Oracle Applications Cloud on the Apple Watch. We want to hear your reaction! 

Glance for Oracle Applications Cloud for Apple Watch proof of concept designs

For the smartwatch specifically, VP Jeremy Ashley explained how our glance approach applies to smartwatch wearers, regardless of their choice of device:

“The most common wearable user interaction is to glance at something. The watch works as the wearer’s mini dialog box to the cloud, making microtransactions convenient on the wrist, and presenting the right information to the wearer at the right time. How quickly and easily someone can do something actually useful is the key activity."

Glance brings cloud interaction to wearers in a personal way, requesting and not demanding attention, while eliminating a need to switch to other devices to “dig in,” or to even have to pull a smartphone out of the pocket to respond.

“To continue the journey to completing a task using glance is as simple and natural as telling the time on your wrist”, says Jeremy.

Being able to glance down at your wrist at a stylish smartwatch experience—one that provides super-handy ways to engage with gems of information— enhances working in the cloud in powerful and productive ways, whether you’re a sales rep walking from your car to an opportunity engagement confidently glancing at the latest competitive news, or a field technician swiping across a watchface to securely record time on a remote job.

Glancing at a UI is the optimal wearable experience for the OAUX mobility strategy, where the cloud, not the device, is our platform. This means you can see our device-agnostic glance design at work not only on an Apple Watch, but on Android Wear, Pebble, and other devices, too.

Glance for Oracle Applications Cloud proof of concept apps on Android Wear Samsung Gear Live and Pebble

Designing a Glanceable Platform

The path to our glance designs began with OAUX research into every kind of smartwatch we could get on our wrists so that we could study their possibilities, experience how they felt, how they looked, and how they complemented everyday work and life activities. Then we combined ideas and experiences with Oracle Cloud technology to deliver a simplified design strategy that we can apply across devices. As a result, our UI designs are consistent and familiar to users as they work flexibly in the cloud, regardless of their device, type of operating system, or form factor.

This is not about designing for any one specific smartwatch. It’s a platform-agnostic approach to wearable technology that enables Oracle customers to get that awesome glanceable, cloud-enabled experience on their wearable of choice.

Why Smartwatches?

Smartwatches such as the Apple Watch, Pebble, and Android Wear devices have resonated strongly with innovators and consumers of wearable technology. The smartwatch succeeds because we’re already familiar and comfortable with using wristwatches, and they’re practical and easy to use.

From first relying on the sun to tell the time, to looking up at town hall clocks, to taking out pocket watches, and then being able to glance at our wrists to tell the time, we’ve seen an evolution in glanceable technology analogous to the miniaturization of computing from large mainframes to personal, mobile devices for consumers.

Just like enterprise apps, watches have already been designed for many specializations and roles, be they military, sport, medical, fashion, and so on. So the evolution of the smartwatch into an accepted workplace application is built on a firm foundation.

More Information

Again, OAUX is there, on trend, ready and offering a solution grounded in innovation and design expertise, one that responds to how we work today in the cloud.

In future articles, we’ll explore more examples that showcase how we’re applying the glance approach to wearable technology, and we’ll look at design considerations in more detail. You can read more about our Oracle Applications Cloud design philosophy and other trends and innovations that influence our thinking in our free eBook.

Check the Usable Apps website for events where you can experience our smartwatch and other innovations for real, read our Storify feature on wearable technology, and see our YouTube videos about our UX design philosophy and strategy.

More Apple Watch glance designs are on Instagram

Seriously proud of this and it doesn't make me grumpy!

Grumpy old DBA - Fri, 2015-03-27 18:27
So the GLOC 2015 conference registration is open (GLOC 2015) (it has been for a while), and recently we completed posting all the speakers/topics. That's been darn good.

Just out today is our SAG (schedule at a glance), which demonstrates just how good our conference will be. Low cost, high quality, and just an event that you really should think about being in Cleveland for in May.

The schedule at a glance does not include our 4 top-notch half-day workshops going on Monday, but you can see them from the regular registration.

I am so grateful for the speakers we have on board. It's a lot of work behind the scenes getting something like this rolling, but when you see a lineup like this - just wow!
Categories: DBA Blogs

Be Careful when using FRA with Streams

Michael Dinh - Fri, 2015-03-27 16:12

Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 – 64bit Production

select state from gv$streams_capture;

STATE
----------------------------------------------------------------------------------------------------------------------------------------------------------------
WAITING FOR REDO: LAST SCN MINED 442455793041

select thread#, sequence#, status
from v$archived_log
where 442455793041 between first_change#
and next_change# order by 1,2;

   THREAD#  SEQUENCE# S
---------- ---------- -
	 1    1070609 D
	 1    1070609 D
	 1    1070609 D
	 1    1070610 D
	 1    1070610 D
	 2    1153149 D
	 2    1153149 D
	 2    1153149 D

8 rows selected.

Who’s deleting the archived logs? Thanks to Praveen G., who figured this out. From the alert log:

WARNING: The following archived logs needed by Streams capture process
are being deleted to free space in the flash recovery area. If you need
to process these logs again, restore them from a backup to a destination
other than the flash recovery area using the following RMAN commands:
   RUN{
      # <directory/ASM diskgroup> is a location other than the
      # flash recovery area
      SET ARCHIVELOG DESTINATION TO '<directory/ASM diskgroup>';
      RESTORE ARCHIVELOG ...;
   }
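For example, plugging in the thread and sequence numbers from the v$archived_log output above, a restore to a location outside the FRA might look something like this (the destination directory is only an illustration):

RMAN> RUN {
        # restore to a directory outside the flash recovery area
        SET ARCHIVELOG DESTINATION TO '/u01/app/oracle/arch_restore';
        RESTORE ARCHIVELOG FROM SEQUENCE 1070609 UNTIL SEQUENCE 1070610 THREAD 1;
        RESTORE ARCHIVELOG SEQUENCE 1153149 THREAD 2;
      }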

Pythian at Collaborate 15

Pythian Group - Fri, 2015-03-27 15:05

Make sure you check out Pythian’s speakers at Collaborate 15. Stop by booth #1118 for a chance to meet some of Pythian’s top Oracle experts, talk shop, and ask questions. This many Oracle experts in one place only happens once a year. Have a look at our list of presenters; we think you’ll agree.

Click here to view a PDF of our presenters

 

Pythian’s Collaborate 15 Presenters | April 12 – 16 | Mandalay Bay Resort and Casino, Las Vegas

Christo Kutrovsky | ATCG Senior Consultant | Oracle ACE

  • Maximize Exadata Performance with Parallel Queries
    Wed, April 15 | 10:45 AM – 11:45 AM | Room Banyan D
  • Big Data with Exadata
    Thu, April 16 | 12:15 PM – 1:15 PM | Room Banyan D

Deiby Gomez Robles | Database Consultant | Oracle ACE

  • Oracle Indexes: From the Concept to Internals
    Tue, April 14 | 4:30 PM – 5:30 PM | Room Palm C

Marc Fielding | ATCG Principal Consultant | Oracle Certified Expert

  • Ensuring 24/7 Availability with Oracle Database Application Continuity
    Mon, April 13 | 2:00 PM – 3:00 PM | Room Palm D
  • Using Oracle Multi-tenant to Efficiently Manage Development and Test Databases
    Tue, April 14 | 11:00 AM – 12:00 PM | Room Palm C

Maris Elsins | Oracle Application DBA | Oracle ACE

  • Mining the AWR: Alternative Methods for Identification of the Top SQLs in Your Database
    Tue, April 14 | 3:15 PM – 4:15 PM | Room Palm B
  • Ins and Outs of Concurrent Processing Configuration in Oracle e-Business Suite
    Wed, April 15 | 8:00 AM – 9:00 AM | Room Breakers B
  • DB12c: All You Need to Know About the Resource Manager
    Thu, April 16 | 9:45 AM – 10:45 AM | Room Palm A

Alex Gorbachev | CTO | Oracle ACE Director

  • Using Hadoop for Real-time BI Queries
    Tue, April 14 | 9:45 AM – 10:45 AM | Room Jasmine E
  • Using Oracle Multi-tenant to Efficiently Manage Development and Test Databases
    Tue, April 14 | 11:00 AM – 12:00 PM | Room Palm C
  • Anomaly Detection for Database Monitoring
    Thu, April 16 | 11:00 AM – 12:00 PM | Room Palm B

Subhajit Das Chaudhuri | Team Manager

  • Deep Dive: Integration of Oracle Applications R12 with OAM 11g, OID 11g, Microsoft AD and WNA
    Tue, April 14 | 3:15 PM – 4:15 PM | Room Breakers D

Simon Pane | ATCG Senior Consultant | Oracle Certified Expert

  • Oracle Service Name Resolution – Getting Rid of the TNSNAMES.ORA File!
    Wed, April 15 | 9:15 AM – 10:15 AM | Room Palm C

René Antunez | Team Manager | Oracle ACE

  • Architecting Your Own DBaaS in a Private Cloud with EM12c
    Mon, April 13 | 9:15 AM – 10:15 AM | Room Reef F
  • Wait, Before We Get the Project Underway, What Do You Think Database as a Service Is…
    Mon, April 13 | 3:15 PM – 4:15 PM | Room Reef F
  • My First 100 Days with a MySQL DBMS
    Tue, April 14 | 9:45 AM – 10:45 AM | Room Palm A

Gleb Otochkin | ATCG Senior Consultant | Oracle Certified Expert

  • Your Own Private Cloud
    Wed, April 15 | 8:00 AM – 9:00 AM | Room Reef F
  • Patching Exadata: Pitfalls and Surprises
    Wed, April 15 | 12:00 PM – 12:30 PM | Room Banyan D
  • Second Wind for Your Exadata
    Tue, April 14 | 12:15 PM – 12:45 PM | Room Banyan C

Michael Abbey | Team Manager, Principal Consultants | Oracle ACE

  • Working with Colleagues in Distant Time Zones
    Mon, April 13 | 12:00 PM – 12:30 PM | Room North Convention, South Pacific J
  • Manage Referential Integrity Before It Manages You
    Tue, April 14 | 2:00 PM – 3:00 PM | Room Palm C
  • Nothing to BLOG About – Think Again
    Wed, April 15 | 7:30 PM – 8:30 PM | Room North Convention, South Pacific J
  • Do It Right; Do It Once. A Roadmap to Maintenance Windows
    Thu, April 16 | 11:00 AM – 12:00 PM | Room North Convention, South Pacific J

Categories: DBA Blogs

Oracle FMW Partner Community Forum 2015: The Oracle Applications Cloud UX Rapid Development Kit Goes to Hungary!

Usable Apps - Fri, 2015-03-27 13:22

Vlad Babu (@vladbabu), Oracle Applications Cloud Pre-Sales UX Champ, files a report about his Oracle Applications User Experience (OAUX) experience while attending the recent Oracle Fusion Middleware Partner Community Forum 2015 in Budapest, Hungary.

Over 200 Oracle Partners from the Oracle Fusion Middleware (FMW) area stepped away from their projects in early March 2015 to take part in a groundbreaking event in Budapest, Hungary: the Oracle Fusion Middleware Partner Community Forum 2015. For some time, this two-day event had been just a glimmer in the eye of Jürgen Kress (@soacommunity),  Senior Manager SOA/FMW Partner Programs EMEA. However, with the unprecedented success of the partner programs and community growth in recent years, he really felt compelled to make this event  happen. And he did!

Andrew Sutherland, Senior Vice President Business Development - Technology License & Systems EMEA, and Amit Zavery (@azavery), Senior Vice President, Integration Products, were the keynote speakers. They inspired the audience when they spoke about Digital Disruption and how Oracle is soaring to success with Integration Cloud Services offerings, such as Oracle Cloud Platform (Platform as a Service [PaaS]).

Tweet from Debra Lilley: Pervasiveness of UX to Cloud success

The user experience (UX) presence at the event struck a chord with Debra Lilley (@debralilley), Vice President of Certus Cloud Services, who remarked on how important the all-encompassing Oracle Applications User Experience Simplified User Experience Rapid Development Kit (RDK) is for enabling great partner development for the cloud experience. Yes, integration and PaaS4SaaS are key partner differentiators going forward!

Tweet from Vlad Babu: PTS Code Accelerator Kit and Oracle Applications UX design patterns eBook

So, how can partners truly leverage their investment in Oracle Fusion Middleware? Use the RDK. Oracle Partners were really excited and empowered when they used the RDK to design and code a simplified UI for the Oracle Applications Cloud. The RDK contains all the information you’ll need before you even start coding, such as easy-to-use RDK wireframing stencils. The YouTube guidance highlights great productivity features for creating new extensions in PaaS or for developing a brand new, custom application from scratch using Oracle ADF technology.

Tweet from Debra Lilley: Integration is key to SaaS.

For example, Certus Solutions leveraged the RDK Simplified User Experience Design Patterns eBook that covers simplified UI design patterns and the ADF-based code templates in the RDK to develop a new extension for the Oracle HCM Cloud. The result? Certus Solutions received the FMW Community Cloud Award for outstanding work in validating PaaS4SaaS with the Usable Apps team!

Tweet from Debra Lilley: Announcing that Certus Solutions received the FMW Community Cloud Award

This event proved to be a unique and rewarding chance to experience the motivation and innovation of successful partners and to interact with key Oracle Partners - truly a fantastic two-day event to remember. Here’s to the next opportunity to wear the OAUX colors with pride!

Tweet from Debra Lilley: Simplicity, Extensibility, Mobile worn with pride.

For more information, I encourage you to visit the Usable Apps website where you’ll find lots of essential information about designing and building new simplified UIs for the Oracle Applications Cloud.

Your reward is waiting.

Postscript on Student Textbook Expenditures: More details on data sources

Michael Feldstein - Fri, 2015-03-27 12:20

By Phil Hill

There has been a fair amount of discussion around my post two days ago about what US postsecondary students actually pay for textbooks.

The shortest answer is that US college students spend an average of $600 per year on textbooks despite rising retail prices.

I would not use College Board as a source on this subject, as they do not collect their own data on textbook pricing or expenditures, and they only use budget estimates.

<wonk> I argued that the two best sources for rising average textbook price are the Bureau of Labor Statistics and the National Association of College Stores (NACS), and when you look at what students actually pay (including rental, non-consumption, etc) the best sources are NACS and Student Monitor. In this post I’ll share more information on the data sources and their methodologies. The purpose is to help people understand what these sources tell us and what they don’t tell us.

College Board and NPSAS

My going-in argument was that the College Board is not a credible source on what students actually pay:

The College Board is working to help people estimate the total cost of attendance; they are not providing actual source data on textbook costs, nor do they even claim to do so. Reporters and advocates just fail to read the footnotes.

Both the College Board and National Postsecondary Student Aid Study (NPSAS, official data for the National Center for Education Statistics, or NCES) currently use cost of attendance data created by financial aid offices of each institution, using the category “Books and Supplies”. There is no precise guidance from DOE on the definition of this category, and financial aid offices use very idiosyncratic methods for this budget estimate. Some schools like to maximize the amount of financial aid available to students, so there is motivation to keep this category artificially high.

The difference is three-fold:

  • NPSAS uses official census reporting from schools, while the College Board gathers data from a subset of institutions – their member institutions;
  • NPSAS reports the combined data “Average net price” and not the sub-category “Books and Supplies”; and
  • College Board data is targeted at freshman, full-time students.

From an NCES report just released today based on 2012 data (footnote to figure 1):

The budget includes room and board, books and supplies, transportation, and personal expenses. This value is used as students’ budgets for the purposes of awarding federal financial aid. In calculating the net price, all grant aid is subtracted from the total price of attendance.

And the databook definition used, page 130:

The estimated cost of books and supplies for classes at NPSAS institution during the 2011–12 academic year. This variable is not comparable to the student-reported cost of books and supplies (CSTBKS) in NPSAS:08.

What’s that? It turns out that in 2008 NCES actually used a student survey – asking them what they spent rather than asking financial aid offices for net price budget calculation. NCES fully acknowledges that the current financial aid method “is not comparable” to student survey data.

As an example of how this data is calculated, see this guidance letter from the state of California [emphasis added].

The California Student Aid Commission (CSAC) has adopted student expense budgets, Attachment A, for use by the Commission for 2015-16 Cal Grant programs. The budget allowances are based on statewide averages from the 2006-07 Student Expenses and Resources Survey (SEARS) data and adjusted to 2015-16 with the forecasted changes in the California Consumer Price Index (CPI) produced by the Department of Finance.

The College Board asks essentially the same question from the same sources. I’ll repeat again – The College Board is not claiming to be an actual data source for what students actually spend on textbooks.

NACS

NACS has two sources of data – bookstore financial reporting from member institutions and a Student Watch survey report put out in the fall and spring of each academic year. NACS started collecting student expenditure data in 2007, initially every two years, then every year, then twice a year.

NACS sends their survey through approximately 20 – 25 member institutions to distribute to the full student population for that institution or a representative sample. For the Fall 2013 report:

Student Watch™ is conducted online twice a year, in the fall and spring terms. It is designed to proportionately match the most recent figures of U.S. higher education published in The Chronicle of Higher Education: 2013/2014 Almanac. Twenty campuses were selected to participate based on the following factors: public vs. private schools, two-year vs. four-year degree programs, and small, medium, and large enrollment levels.

Participating campuses included:

  • Fourteen four-year institutions and six two-year schools; and
  • Eighteen U.S. states were represented.

Campus bookstores distributed the survey to their students via email. Each campus survey fielded for a two week period in October 2013. A total of 12,195 valid responses were collected. To further strengthen the accuracy and representativeness of the responses collected, the data was weighted based on gender using student enrollment figures published in The Chronicle of Higher Education: 2013/2014 Almanac. The margin of error for this study is +/- 0.89% at the 95% confidence interval.

I interviewed Rich Hershman and Liz Riddle, who shared the specific definitions they use.

Required Course Materials: Professor requires this material for the class and has made this known through the syllabus, the bookstore, learning management system, and/or verbal instructions. These are materials you purchase/rent/borrow and may include textbooks (including print and/or digital versions), access codes, course packs, or other customized materials. Does not include optional or recommended materials.

The survey goes to students who report what they actually spent. This includes the categories of sharing materials, choosing not to acquire, rental, purchase new and purchase used.

The data is aggregated across full-time and part-time students, undergraduates and graduates. So the best way to read the data I shared previously ($638 per year) is as per-capita spending. The report breaks down further by institution type (2-yr public, etc.) and acquisition type (purchase new, rental, etc.). The Fall 2014 data is being released next week, and I’ll share more breakdowns with this data.

In future years NACS plans to expand the survey to go through approximately 100 institutions.

Student Monitor

Student Monitor describes their survey as follows:

  • Conducted each Spring and Fall semester
  • On campus, one-on-one intercepts conducted by professional interviewers during the three week period March 24th to April 14th, 2014 [Spring 2014 data] and October 13th-27th [Fall 2014 data]
  • 1,200 Four Year full-time undergrads (Representative sample, 100 campuses stratified by Enrollment, Type, Location, Census Region/Division)
  • Margin of error +/- 2.4%

In other words, this is an intercept survey conducted with live interviews on campus, targeting full-time undergraduates. This includes the categories of sharing materials, choosing not to acquire, rental, purchase new and purchase used.

In comparison to NACS, Student Monitor tracks more schools (100 vs. 20) but fewer students (1,200 vs. 12,000).

Despite the differences in methodology, Student Monitor and NACS report spending that is fairly consistent (both on the order of $600 per year per student).

New Data in Canada

Alex Usher from Higher Education Strategy Associates shared a blog post in response to my post that is quite interesting.

This data is a little old (2012), but it’s interesting, so my colleague Jacqueline Lambert and I thought we’d share it with you. Back then, when HESA was running a student panel, we asked about 1350 university students across Canada about how much they spent on textbooks, coursepacks, and supplies for their fall semester. [snip]

Nearly 85% of students reported spending on textbooks. What Figure 1 shows is a situation where the median amount spent is just below $300, and the mean is near $330. In addition to spending on textbooks, another 40% or so bought a coursepack (median expenditure $50), and another 25% reported buying other supplies of some description (median expenditure: also $50). Throw that altogether and you’re looking at average spending of around $385 for a single semester.

Subtracting out the “other supplies” that do not fit in NACS / Student Monitor definitions, and acknowledging that fall spending is typically higher than spring due to full-year courses, this data is also in the same ballpark of $600 per year (slightly higher in this case).

Upcoming NPSAS Data

The Higher Education Act of 2008 required NCES to add student expenditures on course materials to the NPSAS database, but this has not been added yet. According to Rich Hershman from NACS, NCES is using a survey question that is quite similar to NACS’s and is field testing it this spring. The biggest difference will be that NPSAS is annual data, whereas NACS and Student Monitor send out their surveys in fall and spring (then combine the data).

Sometime in 2016 we should have better federal data on actual student expenditures.

</wonk>

Update: Mistakenly published without reference to California financial aid guidance. Now fixed.

Update 3/30: I mistakenly referred to the IPEDS database for NCES when this data is part of National Postsecondary Student Aid Study (NPSAS). All references to IPEDS have been corrected to NPSAS. I apologize for confusion.

The post Postscript on Student Textbook Expenditures: More details on data sources appeared first on e-Literate.

1 million page views in less than 5 years

Hemant K Chitale - Fri, 2015-03-27 10:26
My Oracle Blog has recorded 1 million page views in less than 5 years.

Although the blog began on 28-Dec-2006, the first month with recorded page view counts was July-2010 -- 8,176 page views.



Categories: DBA Blogs

Conference Recaps and Such

Oracle AppsLab - Fri, 2015-03-27 09:28

I’m currently in Washington D.C. at Oracle HCM World. It’s been a busy conference; on Wednesday, Thao and Ben ran a brainstorming session on wearables as part of the HCM product strategy council’s day of activities.


Then yesterday, the dynamic duo ran a focus group around emerging technologies and their impact on HCM, specifically wearables and Internet of Things (IoT). I haven’t got a full download of the session yet, but I hear the discussion was lively. They didn’t even get to IoT, sorry Noel (@noelportual).

I’m still new to the user research side of our still-kinda-new house, so it was great to watch these two in action as a proverbial fly on the wall. They’ll be doing similar user research activities at Collaborate 15 and OHUG 15.

If you’re attending Collaborate and want to hang out with the OAUX team and participate in a user research or usability testing activity, hit this link. The OHUG 15 page isn’t up yet, but if you’re too excited to wait, contact Gozel Aamoth, gozel dot aamoth at oracle dot com.

Back to HCM World, in a short while, I’ll be presenting a session with Aylin Uysal called Oracle HCM Cloud User Experiences: Trends, Tailoring, and Strategy, and then it’s off to the airport.

Earlier this week, Noel was in Eindhoven for OBUG Experience 2015. From the pictures I’ve seen, it was a fun event. Jeremy (@jrwashley) not only gave the keynote, but he found time to hang out with some robot footballers.


Check out the highlights:

Busy week, right? Next week is more of the same as Noel and Tony head to Modern CX in Las Vegas.

Maybe we’ll run into you at one of these conferences? Drop a comment.

In other news, as promised last week, I updated the feed name. Doesn’t look like that affected anything, but tell your friends just in case.

Update: Nope, changing the name totally borks the old feed, so update your subscription if you want to keep getting AppsLab goodness delivered to your feed reader or inbox.

Lifting the Lid on OBIEE Internals with Linux Diagnostics Tools

Rittman Mead Consulting - Fri, 2015-03-27 08:44

There comes a point in any sufficiently complex or difficult problem diagnosis when the log files in OBIEE alone are not sufficient for building up a complete picture of what’s going on. Even with the debug/trace data that Presentation Services and other components can be configured precisely to write, you’re sometimes just left having to guess what is going on inside the black box of each of the OBIEE system components.

Here we’re going to look at a couple of examples of lifting the lid just a little bit further on what OBIEE is up to, using standard Linux diagnostic tools. These are not something to be reaching for in the first instance, but more of a last resort. Almost always the problem is simpler than you think, and leaping for a network trace or stack trace is going to be missing the wood for the trees.

Diagnostics in action

At a client recently they had a problem with a custom skin deployment on a clustered (scaled-out) OBIEE deployment. Amongst other things the skin was setting the default palette for charts (viewui/chart/dvt-graph-skin.xml), and they were seeing only 50% of chart executions pick up the custom palette – the other 50% used the default. If either entire node was shut down, things were fine, but otherwise it was a 50:50 chance which colours you would get. Most odd….

When you configure a custom skin in OBIEE you should be setting CustomerResourcePhysicalPath in instanceconfig.xml, along with CustomerResourceVirtualPath. Both these are necessary so that Presentation Services knows:

  1. Logical – How to generate URLs for content requested by the user’s browser (eg logos, CSS files, etc).
  2. Physical – How to physically reference files on the file system that are read by OBIEE itself (eg XML files, language files)

The way the client had configured their custom skin was that it was on storage local to each node, and in a node-specific path, something like this:

  • /data/instance1/s_custom/
  • /data/instance2/s_custom/

Writing out the details in hindsight always makes a problem’s root cause a lot more obvious, but at the time this was a tricky problem. Let’s start with the basics. Java Host is responsible for rendering charts, and for some reason, it was not reading the custom colour scheme file from the custom skin correctly. Presentation Services uses all the available Java Hosts in a cluster to request charts, presumably on some kind of round-robin basis. An analysis request on NODE01 has a 50:50 chance of getting its chart rendered on Java Host on NODE01 or Java Host on NODE02:


Turning all the log files up to 11 didn’t yield anything useful. For some reason half the time Java Host would just “ignore” the custom skin. Shutting down each node proved that in isolation the custom skin configuration on each node was definitely correct, because then the colours started working just fine. It was only when multiple Java Hosts across the nodes were active that there was a problem.

How Java Host picks up the custom skin is entirely undocumented, and I ended up figuring out that it must get the path to the skin as part of the chart request from Presentation Services. Since Presentation Services on NODE01 has been configured with a CustomerResourcePhysicalPath of /data/instance1/s_custom/, Java Host on NODE02 would fail to find this path (since on NODE02 the skin is located at /data/instance2/s_custom/) and so fall back on the default. This was my hypothesis, which I then proved by making the path for each skin available on each node (a symlink or a standard path would also have worked, eg /data/shared/s_custom, or even better, a shared mount point), and from there everything worked just fine.
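As an illustration only (not necessarily what was done at the client), a pair of symlinks that make each node's configured path resolve to the locally deployed skin would be one way to achieve this:

# On NODE02, where the skin physically lives under /data/instance2/s_custom,
# make the path that NODE01's Presentation Services sends resolve locally:
$ ln -s /data/instance2/s_custom /data/instance1/s_custom

# And the mirror image on NODE01:
$ ln -s /data/instance1/s_custom /data/instance2/s_custom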

But a hypothesis and successful resolution alone wasn’t entirely enough. Sure the client was happy, but there was that little itch, that unknown “black box” system that appeared to behave how I had deduced, but could we know for sure?

tcpdump – network analysis

All of the OBIEE components communicate with each other and the outside world over TCP. When Presentation Services wants a chart rendered it does so by sending a request to Java Host – over TCP. Using the tcpdump tool we can see that in action, and inspect what gets sent:

$ sudo tcpdump -i venet0 -i lo -nnA 'port 9810'

The -A flag captures the ASCII representation of the packet; use -X if you want ASCII and hex. Port 9810 is the Java Host listen port.

The output looks like this:


You’ll note that in this case it’s intra-node communication, i.e. src and dest IP addresses are the same. The port for Java Host (9810) is clear, and we can verify that the src port (38566) is Presentation Services with the -p (process) flag of netstat:

$ sudo netstat -pn |grep 38566
tcp        0      0 192.168.10.152:38566        192.168.10.152:9810         ESTABLISHED 5893/sawserver

So now if you look in a bit more detail at the footer of the request from Presentation Services that tcpdump captured you’ll see loud and clear (relatively) the custom skin path with the graph customisation file:


Proof that Presentation Services is indeed telling Java Host where to go and look for the custom attributes (including colours)! (NB this is on a test environment, so the paths vary from the /data/instance... example above.)

strace – system call analysis

So tcpdump gives us the smoking gun, but can we find the corpse as well? Sure we can! strace is a tool for tracing system calls, and a fantastically powerful one, but here’s a very simple example:

$ strace -o /tmp/obijh1_strace.log -f -p $(pgrep -f obijh1)

-o means to write it to file, -f follows child processes as well, and -p passes the process id that strace should attach to. Having set the trace running, I run my chart and then go and pick through my trace file.

We know it’s the dvt-graph-skin.xml file that Java Host should be reading to pick up the custom colours, so let’s search for that:
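A simple grep over the trace file written above is enough for the search:

$ grep dvt-graph-skin /tmp/obijh1_strace.log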


Well there we go – Java Host went to look for the skin in the path that it was given by Presentation Services, and couldn’t find it. From there it’ll fall back on the product defaults.

Right Tool, Right Job

As I said at the top of this article, these diagnostic tools are not the kind of things you’d be using day to day. Understanding their output is not always easy and it’s probably easy to do more harm than good with false assumptions about what a trace is telling you. But, in the right situations, they are great for really finding out what is going on under the covers of OBIEE.

If you want to find out more about this kind of thing, this page is a great starting point.

Categories: BI & Warehousing

Oracle Cloud World: Modern Business in the Cloud

WebCenter Team - Fri, 2015-03-27 08:13

Join us at Oracle CloudWorld to learn how to solve real problems and modernize your business. If you're focused on evolving IT, you should be here.

Plan your day or register now.

Keynote Speakers

  • John R. Rymer, Vice President, Principal Analyst, Forrester Research
  • Thomas Kurian, President, Product Development, Oracle
  • Shawn Price, Senior Vice President, Oracle Cloud Go-to-Market and Product Business Groups, Oracle
  • Russell Pearson, Global Oracle Leader, PwC

Thursday, April 30, 2015 | 8:00 a.m. – 5:00 p.m.
The Fairmont San Jose, 170 South Market Street, San Jose, CA 95113

ANSI expansion

Jonathan Lewis - Fri, 2015-03-27 04:46

Here’s a quirky little bug that appeared on the OTN database forum in the last 24 hours which (in 12c, at least) produces an issue which I can best demonstrate with the following cut-n-paste:


SQL> desc purple
 Name                                Null?    Type
 ----------------------------------- -------- ------------------------
 G_COLUMN_001                        NOT NULL NUMBER(9)
 P_COLUMN_002                                 VARCHAR2(2)

SQL> select p.*
  2  from GREEN g
  3    join RED r on g.G_COLUMN_001 = r.G_COLUMN_001
  4    join PURPLE p on g.G_COLUMN_001 = p.G_COLUMN_001;
  join PURPLE p on g.G_COLUMN_001 = p.G_COLUMN_001
       *
ERROR at line 4:
ORA-01792: maximum number of columns in a table or view is 1000

SQL> select p.g_column_001, p.p_column_002
  2  from GREEN g
  3    join RED r on g.G_COLUMN_001 = r.G_COLUMN_001
  4    join PURPLE p on g.G_COLUMN_001 = p.G_COLUMN_001;

no rows selected

A query that requires “star-expansion” fails with ORA-01792, but if you explicitly expand the ‘p.*’ to list all the columns it represents, the optimizer is happy. (The posting also showed the same difference in behaviour when changing “select constant from {table join}” to “select (select constant from dual) from {table join}”.)

The person who highlighted the problem supplied code to generate the tables so you can repeat the tests very easily; one of the quick checks I did was to modify the code to produce tables with a much smaller number of columns and then expand the SQL to see what Oracle would have done with the ANSI. So, with only 3 columns each in table RED and GREEN, this is what I did:

set serveroutput on
set long 20000

variable m_sql_out clob

declare
    m_sql_in    clob :=
                        '
                        select p.*
                        from GREEN g
                        join RED r on g.G_COLUMN_001 = r.G_COLUMN_001
                        join PURPLE p on g.G_COLUMN_001 = p.G_COLUMN_001
                        ';
begin

    dbms_utility.expand_sql_text(
        m_sql_in,
        :m_sql_out
    );

end;
/

column m_sql_out wrap word
print m_sql_out

The dbms_utility.expand_sql_text() function is new to 12c, and you’ll need the execute privilege on the dbms_utility package to use it; but if you want to take advantage of it in 11g you can also find it (undocumented) in a package called dbms_sql2.
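For reference, the 11g variant of the earlier anonymous block only needs the package name changed - an untested sketch, on the assumption that dbms_sql2.expand_sql_text() takes the same two arguments:

variable m_sql_out clob

declare
    m_sql_in    clob :=
                        '
                        select p.*
                        from GREEN g
                        join RED r on g.G_COLUMN_001 = r.G_COLUMN_001
                        join PURPLE p on g.G_COLUMN_001 = p.G_COLUMN_001
                        ';
begin
        -- undocumented in 11g; assumed to match dbms_utility.expand_sql_text()
        dbms_sql2.expand_sql_text(
                m_sql_in,
                :m_sql_out
        );
end;
/

print m_sql_out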

Here’s the result of the expansion (you can see why I reduced the column count to 3):


M_SQL_OUT
--------------------------------------------------------------------------------
SELECT "A1"."G_COLUMN_001_6" "G_COLUMN_001","A1"."P_COLUMN_002_7" "P_COLUMN_002"
FROM  (SELECT "A3"."G_COLUMN_001_0" "G_COLUMN_001","A3"."G_COLUMN_002_1"
"G_COLUMN_002","A3"."G_COLUMN_003_2" "G_COLUMN_003","A3"."G_COLUMN_001_3"
"G_COLUMN_001","A3"."R_COLUMN__002_4" "R_COLUMN__002","A3"."R_COLUMN__003_5"
"R_COLUMN__003","A2"."G_COLUMN_001" "G_COLUMN_001_6","A2"."P_COLUMN_002"
"P_COLUMN_002_7" FROM  (SELECT "A5"."G_COLUMN_001"
"G_COLUMN_001_0","A5"."G_COLUMN_002" "G_COLUMN_002_1","A5"."G_COLUMN_003"
"G_COLUMN_003_2","A4"."G_COLUMN_001" "G_COLUMN_001_3","A4"."R_COLUMN__002"
"R_COLUMN__002_4","A4"."R_COLUMN__003" "R_COLUMN__003_5" FROM
"TEST_USER"."GREEN" "A5","TEST_USER"."RED" "A4" WHERE
"A5"."G_COLUMN_001"="A4"."G_COLUMN_001") "A3","TEST_USER"."PURPLE" "A2" WHERE
"A3"."G_COLUMN_001_0"="A2"."G_COLUMN_001") "A1"

Tidying this up:


SELECT
        A1.G_COLUMN_001_6 G_COLUMN_001,
        A1.P_COLUMN_002_7 P_COLUMN_002
FROM    (
        SELECT
                A3.G_COLUMN_001_0 G_COLUMN_001,
                A3.G_COLUMN_002_1 G_COLUMN_002,
                A3.G_COLUMN_003_2 G_COLUMN_003,
                A3.G_COLUMN_001_3 G_COLUMN_001,
                A3.R_COLUMN__002_4 R_COLUMN__002,
                A3.R_COLUMN__003_5 R_COLUMN__003,
                A2.G_COLUMN_001 G_COLUMN_001_6,
                A2.P_COLUMN_002 P_COLUMN_002_7
        FROM    (
                SELECT
                        A5.G_COLUMN_001 G_COLUMN_001_0,
                        A5.G_COLUMN_002 G_COLUMN_002_1,
                        A5.G_COLUMN_003 G_COLUMN_003_2,
                        A4.G_COLUMN_001 G_COLUMN_001_3,
                        A4.R_COLUMN__002 R_COLUMN__002_4,
                        A4.R_COLUMN__003 R_COLUMN__003_5
                FROM
                        TEST_USER.GREEN A5,
                        TEST_USER.RED A4
                WHERE
                        A5.G_COLUMN_001=A4.G_COLUMN_001
                ) A3,
                TEST_USER.PURPLE A2
        WHERE
                A3.G_COLUMN_001_0=A2.G_COLUMN_001
        ) A1

As you can now see, the A1 alias lists all the columns in GREEN, plus all the columns in RED, plus all the columns in PURPLE – totalling 3 + 3 + 2 = 8. (There is a little pattern of aliasing and re-aliasing that turns the join column RED.g_column_001 into G_COLUMN_001_3, making it look at first glance as if it has come from the GREEN table).

You can run a few more checks, increasing the number of columns in the RED and GREEN tables, but essentially when the total number of columns in those two tables goes over 998 then adding the two extra columns from PURPLE makes that intermediate inline view break the 1,000 column rule.

Here’s the equivalent expanded SQL if you identify the columns explicitly in the select list (even with several hundred columns in the RED and GREEN tables):


SELECT
        A1.G_COLUMN_001_2 G_COLUMN_001,
        A1.P_COLUMN_002_3 P_COLUMN_002
FROM    (
        SELECT
                A3.G_COLUMN_001_0 G_COLUMN_001,
                A3.G_COLUMN_001_1 G_COLUMN_001,
                A2.G_COLUMN_001 G_COLUMN_001_2,
                A2.P_COLUMN_002 P_COLUMN_002_3
        FROM    (
                SELECT
                        A5.G_COLUMN_001 G_COLUMN_001_0,
                        A4.G_COLUMN_001 G_COLUMN_001_1
                FROM
                        TEST_USER.GREEN A5,
                        TEST_USER.RED A4
                WHERE
                        A5.G_COLUMN_001=A4.G_COLUMN_001
                ) A3,
                TEST_USER.PURPLE A2
        WHERE
                A3.G_COLUMN_001_0=A2.G_COLUMN_001
        ) A1

As you can see, the critical inline view now holds only the original join columns and the columns required for the select list.

If you’re wondering whether this difference in expansion could affect execution plans, it doesn’t seem to; the 10053 trace file includes the following (cosmetically altered) output:


Final query after transformations:******* UNPARSED QUERY IS *******
SELECT
        P.G_COLUMN_001 G_COLUMN_001,
        P.P_COLUMN_002 P_COLUMN_002
FROM
        TEST_USER.GREEN   G,
        TEST_USER.RED     R,
        TEST_USER.PURPLE  P
WHERE
        G.G_COLUMN_001=P.G_COLUMN_001
AND     G.G_COLUMN_001=R.G_COLUMN_001

So it looks as if the routine to transform the syntax puts in a lot of redundant text, then the optimizer takes it all out again.

The problem doesn’t exist with traditional Oracle syntax, by the way; it’s an artefact of Oracle’s expansion of the ANSI syntax, and 11.2.0.4 is quite happy to handle the text generated by the ANSI transformation when there are well over 1,000 columns in the inline view.


Index on SUBSTR(string,1,n) - do you still need old index?

Yann Neuhaus - Fri, 2015-03-27 03:57

In a previous post I've shown that from 12.1.0.2, when you have an index on trunc(date), you don't need an additional index. If you need the column with full precision, then you can add it to the index on trunc(). A comment from Rainer Stenzel asked if that optimization is available for other functions. And Mohamed Houri has linked to his post where he shows that it's the same with a trunc() on a number.

Besides that, there is the same kind of optimization with SUBSTR(string,1,n) so here is the demo, with a little warning at the end.

I start with the same testcase as the previous post.

SQL> create table DEMO as select prod_id,prod_name,prod_eff_from +rownum/0.3 prod_date from sh.products,(select * from dual connect by level<=1000);
Table created.

SQL> create index PROD_NAME on DEMO(prod_name);
Index created.

SQL> create index PROD_DATE on DEMO(prod_date);
Index created.
string > 'Z'

I've an index on the PROD_NAME and I can use it with equality or inequality predicates:

SQL> set autotrace on explain
SQL> select distinct prod_name from DEMO where prod_name > 'Z';
no rows selected

Execution Plan
----------------------------------------------------------
Plan hash value: 72593368

--------------------------------------------------------
| Id  | Operation          | Name      | Rows  | Bytes |
--------------------------------------------------------
|   0 | SELECT STATEMENT   |           |     1 |    27 |
|   1 |  SORT UNIQUE NOSORT|           |     1 |    27 |
|*  2 |   INDEX RANGE SCAN | PROD_NAME |     1 |    27 |
--------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("PROD_NAME">'Z')

And I also can use it with a LIKE when there is no starting joker:
SQL> select distinct prod_name from DEMO where prod_name like 'Z%';
no rows selected

Execution Plan
----------------------------------------------------------
Plan hash value: 72593368

--------------------------------------------------------
| Id  | Operation          | Name      | Rows  | Bytes |
--------------------------------------------------------
|   0 | SELECT STATEMENT   |           |     1 |    27 |
|   1 |  SORT UNIQUE NOSORT|           |     1 |    27 |
|*  2 |   INDEX RANGE SCAN | PROD_NAME |     1 |    27 |
--------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("PROD_NAME" LIKE 'Z%')
       filter("PROD_NAME" LIKE 'Z%')

That optimization has been available for several releases (9.2 if I remember well, but I didn't check).

substr(string,1,n)

But sometimes, when we want to check if a column starts with a string, the application uses SUBSTR instead of LIKE:

SQL> select distinct prod_name from DEMO where substr(prod_name,1,1) = 'Z';
no rows selected

Execution Plan
----------------------------------------------------------
Plan hash value: 1665545956

--------------------------------------------------------
| Id  | Operation          | Name      | Rows  | Bytes |
--------------------------------------------------------
|   0 | SELECT STATEMENT   |           |     1 |    27 |
|   1 |  SORT UNIQUE NOSORT|           |     1 |    27 |
|*  2 |   INDEX FULL SCAN  | PROD_NAME |     1 |    27 |
--------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(SUBSTR("PROD_NAME",1,1)='Z')

But - as you see - there is no access predicate here. The whole index has to be read.

Of course, I can use a function based index for that:

SQL> create index PROD_NAME_SUBSTR on DEMO( substr(prod_name,1,1) );
Index created.

SQL> select distinct prod_name from DEMO where substr(prod_name,1,1) = 'Z';
no rows selected

Execution Plan
----------------------------------------------------------
Plan hash value: 4209586087

-------------------------------------------------------------------------
| Id  | Operation                    | Name             | Rows  | Bytes |
-------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                  |     1 |    31 |
|   1 |  HASH UNIQUE                 |                  |     1 |    31 |
|   2 |   TABLE ACCESS BY INDEX ROWID| DEMO             |     1 |    31 |
|*  3 |    INDEX RANGE SCAN          | PROD_NAME_SUBSTR |     1 |       |
-------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access(SUBSTR("PROD_NAME",1,1)='Z')

One index only?

Then, as in the previous post about TRUNC, I'll check if that new index is sufficient. Let's drop the first one.

SQL> drop index PROD_NAME;
Index dropped.
The previous index is dropped. Let's see if the index on SUBSTR can be used with an equality predicate:
SQL> select distinct prod_name from DEMO where prod_name = 'Zero';
no rows selected

Execution Plan
----------------------------------------------------------
Plan hash value: 953445334

-------------------------------------------------------------------------
| Id  | Operation                    | Name             | Rows  | Bytes |
-------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                  |     1 |    27 |
|   1 |  SORT UNIQUE NOSORT          |                  |     1 |    27 |
|*  2 |   TABLE ACCESS BY INDEX ROWID| DEMO             |     1 |    27 |
|*  3 |    INDEX RANGE SCAN          | PROD_NAME_SUBSTR |     1 |       |
-------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("PROD_NAME"='Zero')
   3 - access(SUBSTR("PROD_NAME",1,1)='Z')

Good. The index on the substring is used for an index range scan on the prefix, and then the filter occurs on the result. This is fine as long as the prefix is selective enough.

It is also available with inequality:
SQL> select distinct prod_name from DEMO where prod_name > 'Z';
no rows selected

...

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("PROD_NAME">'Z')
   3 - access(SUBSTR("PROD_NAME",1,1)>='Z')

And we can use it even when using a substring with a different number of characters:
SQL> select distinct prod_name from DEMO where substr(prod_name,1,4) = 'Zero';
no rows selected

...

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(SUBSTR("PROD_NAME",1,4)='Zero')
   3 - access(SUBSTR("PROD_NAME",1,1)='Z')

However, if we use the LIKE syntax:

SQL> select distinct prod_name from DEMO where prod_name like 'Z%';
no rows selected

Execution Plan
----------------------------------------------------------
Plan hash value: 51067428

---------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes |
---------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |    27 |
|   1 |  HASH UNIQUE       |      |     1 |    27 |
|*  2 |   TABLE ACCESS FULL| DEMO |     1 |    27 |
---------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("PROD_NAME" LIKE 'Z%')

The LIKE syntax does not allow filtering from the index on SUBSTR. So there are cases where we have to keep all indexes: the index on the full column for LIKE predicates, and the index on the substring for SUBSTR predicates.

Note that indexes on SUBSTR are mandatory when you have columns larger than your block size, which is probably the case if you allow extended datatypes (VARCHAR2 up to 32k).

ASH

Jonathan Lewis - Fri, 2015-03-27 03:41

There was a little conversation on Oracle-L about ASH (active session history) recently which I thought worth highlighting – partly because it raised a detail that I had got wrong until Tim Gorman corrected me a few years ago.

Once every second the dynamic performance view v$active_session_history copies information about active sessions from v$session. (There are a couple of exceptions to this rule – for example if a session has called dbms_lock.sleep() it will appear in v$session as state = ‘ACTIVE’, but it will not be recorded in v$active_session_history.) Each of these snapshots is referred to as a “sample” and may hold zero, one, or many rows.

The rows collected in every tenth sample are flagged for copying into the AWR where, once they’ve been copied into the underlying table, they can be seen in the view dba_hist_active_sess_history. This is where a common misunderstanding occurs: it is not every 10th row in v$active_session_history, it’s every 10th second; and if a sample happens to be empty that’s still the sample that is selected (which means there will be a gap in the output from dba_hist_active_sess_history). In effect dba_hist_active_sess_history holds copies of the information you’d get from v$session if you sampled it once every 10 seconds instead of once per second.

It’s possible to corroborate this through a fairly simple query, as the rows from v$active_session_history that are going to be dumped to the AWR are flagged as they are created:


select
        distinct case is_awr_sample when 'Y' then 'Y' end flag,
        sample_id,
        sample_time
from
        v$active_session_history
where
        sample_time > sysdate - 1/1440
order by
        2,1
;

F  SAMPLE_ID SAMPLE_TIME
- ---------- --------------------------------
     3435324 26-MAR-15 05.52.53.562 PM
     3435325 26-MAR-15 05.52.54.562 PM
     3435326 26-MAR-15 05.52.55.562 PM
     3435327 26-MAR-15 05.52.56.562 PM
     3435328 26-MAR-15 05.52.57.562 PM
     3435329 26-MAR-15 05.52.58.562 PM
     3435330 26-MAR-15 05.52.59.562 PM
     3435331 26-MAR-15 05.53.00.562 PM
Y    3435332 26-MAR-15 05.53.01.562 PM
     3435333 26-MAR-15 05.53.02.572 PM
     3435334 26-MAR-15 05.53.03.572 PM
     3435335 26-MAR-15 05.53.04.572 PM
     3435336 26-MAR-15 05.53.05.572 PM
     3435337 26-MAR-15 05.53.06.572 PM
     3435338 26-MAR-15 05.53.07.572 PM
     3435339 26-MAR-15 05.53.08.572 PM
     3435340 26-MAR-15 05.53.09.572 PM
     3435341 26-MAR-15 05.53.10.582 PM
Y    3435342 26-MAR-15 05.53.11.582 PM
     3435343 26-MAR-15 05.53.12.582 PM
     3435344 26-MAR-15 05.53.13.582 PM
     3435345 26-MAR-15 05.53.14.582 PM
     3435346 26-MAR-15 05.53.15.582 PM
     3435347 26-MAR-15 05.53.16.582 PM
     3435348 26-MAR-15 05.53.17.582 PM
     3435349 26-MAR-15 05.53.18.592 PM
     3435350 26-MAR-15 05.53.19.592 PM
     3435351 26-MAR-15 05.53.20.592 PM
Y    3435352 26-MAR-15 05.53.21.602 PM
     3435355 26-MAR-15 05.53.24.602 PM
     3435358 26-MAR-15 05.53.27.612 PM
     3435361 26-MAR-15 05.53.30.622 PM
     3435367 26-MAR-15 05.53.36.660 PM
     3435370 26-MAR-15 05.53.39.670 PM
     3435371 26-MAR-15 05.53.40.670 PM
     3435373 26-MAR-15 05.53.42.670 PM
     3435380 26-MAR-15 05.53.49.700 PM
     3435381 26-MAR-15 05.53.50.700 PM
Y    3435382 26-MAR-15 05.53.51.700 PM
     3435383 26-MAR-15 05.53.52.700 PM

40 rows selected.

As you can see at the beginning of the output the samples have a sample_time that increases one second at a time (with a little slippage), and the flagged samples appear every 10 seconds at 5.53.01, 5.53.11 and 5.53.21; but then the instance becomes fairly idle and there are several samples taken over the next 20 seconds or so where we don’t capture any active sessions; in particular there are no rows in the samples for 5.53.31 and 5.53.41; but eventually the instance gets a little busy again and we see that we’ve had active sessions in consecutive samples for the last few seconds, and we can see that we’ve flagged the sample at 5.53.51 for dumping into the AWR.

You’ll notice that I seem to be losing about 1/100th second every few seconds; this is probably a side effect of virtualisation and having a little CPU-intensive work going on in the background. If you see periods where the one second gap in v$active_session_history or 10 second gap in dba_hist_active_sess_history has been stretched by several percent you can assume that the CPU was under pressure over that period. The worst case I’ve seen to date reported gaps of 12 to 13 seconds in dba_hist_active_sess_history.  The “one second” algorithm is “one second since the last snapshot was captured” so if the process that’s doing the capture doesn’t get to the top of the runqueue in a timely fashion the snapshots slip a little.
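If you want to measure that slippage on your own system, a simple analytic query over the samples will show the gap between consecutive rows. This is just a sketch (and remember that empty samples produce no rows, so a large gap may simply mean the instance was idle rather than under CPU pressure):

select
        sample_id,
        sample_time,
        sample_time - lag(sample_time) over (order by sample_id)  gap
from
        v$active_session_history
where
        sample_time > sysdate - 1/1440
order by
        sample_id
;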

When the AWR snapshot is taken, the flagged rows from v$active_session_history are copied to the relevant AWR table. You can adjust the frequency of sampling for both v$active_session_history and dba_hist_active_sess_history, of course – there are hidden parameters to control both: _ash_sampling_interval (1,000 milliseconds) and _ash_disk_filter_ratio (10). There’s also a parameter controlling how much memory should be reserved in the shared pool to hold v$active_session_history: _ash_size (1048618 bytes per session in my case).  The basic target is to keep one hour’s worth of data in memory, but if there’s no pressure for memory you can find that the v$active_session_history holds more than the hour; conversely, if there’s heavy demand for memory and lots of continuously active sessions you may find that Oracle does “emergency flushes” of v$active_session_history between the normal AWR snapshots. I have heard of people temporarily increasing the memory and reducing the interval and ratio – but I haven’t yet felt the need to do it myself.
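If you want to check the current values of those hidden parameters on your own system you can use the usual x$ query. This is a sketch only, and it has to be run as SYS:

select
        p.ksppinm       parameter_name,
        v.ksppstvl      current_value
from
        x$ksppi         p,
        x$ksppcv        v
where
        v.indx = p.indx
and     p.ksppinm in ('_ash_sampling_interval', '_ash_disk_filter_ratio', '_ash_size')
;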

 


Oracle Priority Support Infogram for 26-MAR-2015

Oracle Infogram - Thu, 2015-03-26 15:12

Cloud
A Glance at Smartwatches in the Enterprise: A Moment in Time Experience, from the Usable Apps in the Cloud Blog.
Solaris
From Solaris 11 Maintenance Lifecycle: iSCSI improvements
OVM
Oracle VM Server for SPARC 3.2 - Live Migration, from Virtually All the Time.
NetBeans
From Geertjan’s Blog: Mobile Boilerplate and NetBeans IDE
Fusion
Finding "End of Support" Information for Fusion Middleware Applications, from Proactive Support - Java Development using Oracle Tools.
Live Virtual Training: Fast Track: Oracle Fusion Project Portfolio Management 2014 for PreSales, from Oracle PartnerNetwork News.
WebLogic
Oracle WebLogic Server Now Running on Docker Containers, from The WebLogic Server Blog.
BI
From the Oracle BI applications blog: How to Invoke Oracle Enterprise Data Quality(OEDQ) using ODI Client.
Hyperion
Hyperion Essbase Family 11.1.2.4 PSU post updated, from the Business Analytics - Proactive Support.
BPM
Video: Error Handling and Recovery in Oracle BPM12c, from ArchBeat.
Data Warehousing
From the Data Warehouse Insider:
Finding the Distribution Method in Adaptive Parallel Joins
Finding the Reason for DOP Downgrades
Security
86% of Data Breaches Miss Detection, How Do You Beat The Odds? from Security Inside Out.
RFID
IoB - Internet of Bees, from Hinkmond Wong's Weblog.
EBS
From the Oracle E-Business Suite Technology blog:
Oracle Database 12.1.0.2 Certified with EBS 12.1 on HP-UX Itanium, IBM AIX
From CN Support NEWS:

E-Business Suite 12.1 Premier Support Now Through Dec. 2016 and 11.5.10 Sustaining Support Exception Updates

Configuring Oracle #GoldenGate Monitor Agent

DBASolved - Thu, 2015-03-26 14:06

In a few weeks I’ll be talking about monitoring Oracle GoldenGate using Oracle Enterprise Manager 12c at IOUG Collaborate in Las Vegas.  This is one of the few presentations I will be giving that week (going to be a busy week).  Although this posting kinda mirrors a previous post on how to configure the Oracle GoldenGate JAgent, it is relevant because:

1. Oracle changed the name of the JAgent to Oracle Monitor Agent
2. Steps are a bit different with this configuration

Most people running Oracle GoldenGate who want to monitor the processes with EM12c will try to use the embedded JAgent.  This JAgent will work with the OGG Plug-in 12.1.0.1.  To get many of the new features and use the new plug-in (12.1.0.2), the new Oracle Monitor Agent (12.1.3.0) needs to be downloaded and installed.  Finding the binaries for this is not that easy though.  In order to get the binaries, download Oracle GoldenGate Monitor v12.1.3.0.0 from OTN.oracle.com.

Once downloaded, unzip the file to a temp location:

$ unzip ./fmw_12.1.3.0.0_ogg_Disk1_1of1.zip -d ./oggmonitor
Archive: ./fmw_12.1.3.0.0_ogg_Disk1_1of1.zip
 inflating: ./oggmonitor/fmw_12.1.3.0.0_ogg.jar

In order to install the agent, you need to have Java 1.8 installed somewhere that can be used; the 12.1.3.0.0 software is built using JDK 1.8.

$ ./java -jar ../../ggmonitor/fmw_12.1.3.0.0_ogg.jar

After executing the command, the OUI installer will start.  As you walk through the OUI, when the selection page comes up, select the option to install only the Oracle GoldenGate Monitor Agent.


Then proceed through the rest of the OUI and complete the installation.

After the installation is complete, the JAgent needs to be configured. To do this, navigate to the directory where the binaries were installed.

$ cd /u01/app/oracle/product/jagent/oggmon/ogg_agent

In this directory, look for a file called create_ogg_agent_instance.sh.  This file has to be run first to create the JAgent instance that will be associated with Oracle GoldenGate. In order to run this script, the $JAVA_HOME variable needs to point to the JDK 1.8 location as well.  Inputs that will need to be provided are the Oracle GoldenGate home and where to install the JAgent instance (this is different from where the OUI installed the binaries).

$ ./create_ogg_agent_instance.sh
Please enter absolute path of Oracle GoldenGate home directory : /u01/app/oracle/product/12.1.2.0/12c/oggcore_1
Please enter absolute path of OGG Agent instance : /u01/app/oracle/product/12.1.3.0/jagent
Sucessfully created OGG Agent instance.

Next, go to the directory for the OGG Agent instance (JAgent), then to the configuration (cfg) directory.  In this directory, the Config.properties file needs to be edited.  Just like with the old embedded JAgent, the same variables have to be changed.

$ cd /u01/app/oracle/product/12.1.3.0/jagent
$ cd ./cfg
$ vi ./Config.properties

Change the following or keep the defaults, then save the file:

jagent.host=fred.acme.com (default is localhost)
jagent.jmx.port=5555 (default is 5555)
jagent.username=root (default oggmajmxuser)
jagent.rmi.port=5559 (default is 5559)
agent.type.enabled=OEM (default is OGGMON)

Then create the password that will be stored in the wallet directory under $OGG_HOME.  

cd /u01/app/oracle/product/12.1.3.0/jagent
$ cd ./bin
$ ./pw_agent_util.sh -jagentonly
Please create a password for Java Agent:
Please confirm password for Java Agent:
Mar 26, 2015 3:18:46 PM oracle.security.jps.JpsStartup start
INFO: Jps initializing.
Mar 26, 2015 3:18:47 PM oracle.security.jps.JpsStartup start
INFO: Jps started.
Wallet is created successfully.

Now, enable monitoring in the GLOBALS file in $OGG_HOME.

$ cd /u01/app/oracle/product/12.1.2.0/12c/oggcore_1
$ vi ./GLOBALS
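As a sketch of what that edit normally looks like (check the documentation for your release), the GLOBALS file just needs the monitoring parameter added:

-- GLOBALS
ENABLEMONITORING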


After enabling monitoring, the JAgent should appear when doing an info all inside of GGSCI.


Before starting the JAgent, create a datastore.  What I’ve found works is to delete the existing datastore, restart GGSCI, and create a new one.

$ ./ggsci
Oracle GoldenGate Command Interpreter for Oracle
Version 12.1.2.1.0 OGGCORE_12.1.2.1.0_PLATFORMS_140727.2135.1_FBO
Linux, x64, 64bit (optimized), Oracle 12c on Aug  7 2014 10:21:34
Operating system character set identified as UTF-8.
Copyright (C) 1995, 2014, Oracle and/or its affiliates. All rights reserved.

GGSCI (fred.acme.com)> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt
MANAGER     RUNNING
JAGENT      STOPPED

GGSCI (fred.acme.com)> stop mgr!
Sending STOP request to MANAGER ...
Request processed.
Manager stopped.

GGSCI (fred.acme.com)> delete datastore
Are you sure you want to delete the datastore? yes
Datastore deleted.
GGSCI (fred.acme.com)> exit

$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 12.1.2.1.0 OGGCORE_12.1.2.1.0_PLATFORMS_140727.2135.1_FBO
Linux, x64, 64bit (optimized), Oracle 12c on Aug  7 2014 10:21:34
Operating system character set identified as UTF-8.
Copyright (C) 1995, 2014, Oracle and/or its affiliates. All rights reserved.

GGSCI (fred.acme.com)> create datastore
Datastore created.

GGSCI (fred.acme.com)> start mgr
Manager started.

GGSCI (fred.acme.com)> start jagent
Sending START request to MANAGER ...
JAGENT starting

GGSCI (fred.acme.com)> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt
MANAGER     RUNNING
JAGENT      RUNNING

With the JAgent running, now configure Oracle Enterprise Manager 12c to use the JAgent.

Note: In order to monitor Oracle GoldenGate with Oracle Enterprise Manager 12c, you need to deploy the Oracle GoldenGate Plug-in (12.1.0.2).

To configure discovery of the Oracle GoldenGate process, go to Setup -> Add Target -> Configure Auto Discovery

Select the Host where the JAgent is running.

Ensure that the Discovery Module for GoldenGate Discovery is enabled and then click Edit Parameters to provide the username and RMI port specified in the Config.properties file, and the password that was set up in the wallet. Then click OK.

At this point, force a discovery of any new targets that need to be monitored by using the Discover Now button.

If the discovery was successful, the Oracle GoldenGate Manager process should be visible and can be promoted for monitoring.

After promoting the Oracle GoldenGate processes, they can then be seen in the Oracle GoldenGate Interface within Oracle Enterprise Manager 12c (Target -> GoldenGate).

At this point, Oracle GoldenGate is being monitored by Oracle Enterprise Manager 12c.  The new plug-in for Oracle GoldenGate is way better than the previous one; however, there are still a few things that could be better.  More on that later.

Enjoy!

about.me: http://about.me/dbasolved


Filed under: Golden Gate
Categories: DBA Blogs

AOUG - Real World Performance Tour

Yann Neuhaus - Thu, 2015-03-26 13:26

This week, Tom Kyte, Graham Wood and Andrew Holdsworth were present in Europe for several dates. One of the events was organised by the Austrian Oracle User Group (AOUG) in collaboration with the German and Swiss user groups (DOAG and SOUG), and I had the chance to be there and attend one session of the Real World Performance Tour in Vienna.

Oracle Database 12c In-Memory Q&A Webinar

Pythian Group - Thu, 2015-03-26 09:21

Today I will be debating Oracle 12c’s In-Memory option with Maria Colgan of Oracle (aka optimizer lady, now In-Memory lady).

This will be in a debate form with lots of Q&A from the audience. Come ask the questions you always wanted to ask.

Link to register and attend:
https://attendee.gotowebinar.com/register/7874819190629618178

Starts at 12:00pm EDT.

Categories: DBA Blogs

12c MView refresh

Jonathan Lewis - Thu, 2015-03-26 07:19

Some time ago I wrote a blog note describing a hack for refreshing a large materialized view with minimum overhead by taking advantage of a single-partition partitioned table. This note describes how Oracle 12c now gives you an official way of doing something similar – the “out of place” refresh.

I’ll start by creating a materialized view and creating a couple of indexes on the resulting underlying table; then show you three different calls to refresh the view. The materialized view is based on all_objects so it can’t be made available for query rewrite (ORA-30354: Query rewrite not allowed on SYS relations), and I haven’t created any materialized view logs so there’s no question of fast refreshes – but all I intend to do here is show you the relative impact of a complete refresh.


create materialized view mv_objects nologging
build immediate
refresh on demand
as
select
        *
from
        all_objects
;

begin
	dbms_stats.gather_table_stats(
		ownname		 => user,
		tabname		 =>'mv_objects',
		method_opt 	 => 'for all columns size 1'
	);
end;
/

create index mv_obj_i1 on mv_objects(object_name) nologging compress;
create index mv_obj_i2 on mv_objects(object_type, owner, data_object_id) nologging compress 2;

This was a default install of 12c, so there were about 85,000 rows in the view. You’ll notice that I’ve created all the objects as “nologging” – this will have an effect on the work done during some of the refreshes.

Here are the three variants I used – all declared explicitly as complete refreshes:


begin
	dbms_mview.refresh(
		list			=> 'MV_OBJECTS',
		method			=> 'C',
		atomic_refresh		=> true
	);
end;
/

begin
	dbms_mview.refresh(
		list			=> 'MV_OBJECTS',
		method			=> 'C',
		atomic_refresh		=> false
	);
end;
/

begin
	dbms_mview.refresh(
		list			=> 'MV_OBJECTS',
		method			=> 'C',
		atomic_refresh		=> false,
		out_of_place		=> true
	);
end;
/

The first one (atomic_refresh=>true) is the one you have to use if you want to refresh several materialized views simultaneously and keep them self consistent, or if you want to ensure that the data doesn’t temporarily disappear if all you’re worried about is a single view. The refresh works by deleting all the rows from the materialized view then executing the definition to generate and insert the replacement rows before committing. This generates a lot of undo and redo – especially if you have indexes on the materialized view as these have to be maintained “row by row” and may leave users accessing and applying a lot of undo for read-consistency purposes. An example at a recent client site refreshed a table of 6.5M rows with two indexes, taking about 10 minutes to refresh, generating 7GB of redo as it ran, and performing 350,000 “physical reads for flashback new”. This strategy does not take advantage of the nologging nature of the objects – and as a side effect of the delete/insert cycle you’re likely to see the indexes grow to roughly twice their optimal size and you may see the statistic “recursive aborts on index block reclamation” climbing as the indexes are maintained.
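If you want to quantify that overhead on your own system, a simple check of the session statistics before and after the refresh call gives a rough picture. This is a sketch only, not part of the original test:

-- run before and after the dbms_mview.refresh call and compare the values
select
        sn.name, ms.value
from
        v$mystat        ms,
        v$statname      sn
where
        sn.statistic# = ms.statistic#
and     sn.name in ('redo size', 'undo change vector size')
;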

The second option (atomic_refresh => false) is quick and efficient – but may result in wrong results showing up in any code that references the materialized view (whether explicitly or by rewrite). The session truncates the underlying table, sets any indexes on it unusable, then reloads the table with an insert /*+ append */. The append means you get virtually no undo generated, and if the table is declared nologging you get virtually no redo. In my case, the session then dispatched two jobs to rebuild the two indexes – and since the indexes were declared nologging the rebuilds generated virtually no redo. (I could have declared them with pctfree 0, which would also have made them as small as possible).
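You can watch this second mechanism happen by querying the index status from another session while the refresh is in progress (a small sketch, assuming the indexes created above):

select  index_name, status
from    user_indexes
where   table_name = 'MV_OBJECTS'
;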

The final option is the 12c variant – the setting atomic_refresh => false is mandatory if we want  out_of_place => true. With these settings the session will create a new table with a name of the form RV$xxxxxx where xxxxxx is the hexadecimal version of the new object id, insert the new data into that table (though not using the /*+ append */ hint), create the indexes on that table (again with names like RV$xxxxxx – where xxxxxx is the index’s object_id). Once the new data has been indexed Oracle will do some name-switching in the data dictionary (shades of exchange partition) to make the new version of the materialized view visible. A quirky detail of the process is that the initial create of the new table and the final drop of the old table don’t show up in the trace file  [Ed: wrong, see comment #1] although the commands to drop and create indexes do appear. (The original table, though it’s dropped after the name switching, is not purged from the recyclebin.) The impact on undo and redo generation is significant – because the table is empty and has no indexes when the insert takes place the insert creates a lot less undo and redo than it would if the table had been emptied by a bulk delete – even though the insert is a normal insert and not an append; then the index creation honours my nologging definition, so produces very little redo. At the client site above, the redo generated dropped from 7GB to 200MB, and the time dropped to 200 seconds which was 99% CPU time.
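To confirm that last point you can look at the recyclebin after an out-of-place refresh; a sketch only:

select  object_name, original_name, type, droptime
from    user_recyclebin
;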

Limitations, traps, and opportunities

The manuals say that the out of place refresh can only be used for materialized views that are joins or aggregates and, surprisingly, you actually can’t use the method on a view that simply extracts a subset of rows and columns from a single table.  There’s a simple workaround, though – join the table to DUAL (or some other single row table if you want to enable query rewrite).
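Here’s a sketch of that workaround, reusing the mv_objects example from above; the join to dual doesn’t change the result set, it just changes how the view is classified:

create materialized view mv_objects nologging
build immediate
refresh on demand
as
select
        ao.*
from
        all_objects ao,
        dual        d
;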

Because the out of place refresh does an ordinary insert into a new table the resulting table will have no statistics – you’ll have to add a call to gather them. (If you’ve previously been using non-atomic refreshes this won’t be a new problem, of course). The indexes will have up to date statistics, of course, because they will have been created after the table insert.
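So a complete out-of-place refresh cycle would typically look something like this (a sketch, reusing the dbms_stats call from the start of the article):

begin
        dbms_mview.refresh(
                list            => 'MV_OBJECTS',
                method          => 'C',
                atomic_refresh  => false,
                out_of_place    => true
        );
        dbms_stats.gather_table_stats(
                ownname         => user,
                tabname         => 'mv_objects',
                method_opt      => 'for all columns size 1'
        );
end;
/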

The big opportunity, of course, is to change a very expensive atomic refresh into a much cheaper out of place refresh – in some special cases. My client had to use the atomic_refresh=>true option in 11g because they couldn’t afford to leave the table truncated (empty) for the few minutes it took to rebuild; but they might be okay using the out_of_place => true with atomic_refresh=>false in 12c because:

  • the period when something might break is brief
  • if something does go wrong the users won’t get wrong (silently missing) results, they’ll get an Oracle error (probably ORA-08103: object no longer exists)
  • the application uses this particular materialized view directly (i.e. not through query rewrite), and the query plans are all quick, light-weight indexed access paths
  • most queries will probably run correctly even if they run through the moment of exchange

I don’t think we could guarantee that last statement – and Oracle Corp. may not officially confirm it, and no matter how many times I show queries succeeding that doesn’t amount to a proof – but in practice it’s true. Thanks to “cross-DDL read-consistency”, as it was called in 8i when partition-exchange appeared, and because the old objects still exist in the data files, a running query can keep on using the old location for an object after it has been replaced by a newer version – provided the query doesn’t hit a block that has been overwritten by a new object, or request a space management block that was zero-ed out on the “drop”. If you want to make the mechanism as safe as possible you can help – put each relevant materialized view (along with its indexes) into its own tablespace so that the only thing that is going to overwrite an earlier version of the view is the stuff you create on the next refresh.
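A sketch of that isolation, with a hypothetical tablespace name (mv_ts):

create materialized view mv_objects nologging
tablespace mv_ts
build immediate
refresh on demand
as
select * from all_objects
;

create index mv_obj_i1 on mv_objects(object_name) nologging compress tablespace mv_ts;
create index mv_obj_i2 on mv_objects(object_type, owner, data_object_id) nologging compress 2 tablespace mv_ts;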