
Feed aggregator

Pythian’s Annual MySQL Community Dinner at Pedro’s

Pythian Group - Tue, 2015-02-17 08:54

Once again, Pythian is organizing an event that by now may be considered a tradition: The MySQL community dinner at Pedro’s! This dinner is open to all MySQL community members since many of you will be in town for Percona Live that week. Here are the details:

What: The MySQL Community Dinner

When: Monday April 13, 2015 –  7:00 PM at Pedro’s (You are welcome to show up later, too!)

Where: Pedro’s Restaurant and Cantina – 3935 Freedom Circle, Santa Clara, CA 95054

Cost: Tickets are $35 USD and include a Mexican buffet, non-alcoholic drinks, taxes, and gratuities (Pedro’s Dinner Menu 2015)

How: RSVP through Eventbrite

Attendees:

Laine Campbell

Derek Downey

Gillian Gunson

Miklos Szel

Marco Tusa

Mark Filipi

Alkin Tezuysal

Brian Cain

Brian Kelly

Joe Muraski

Patrick Pearson

Looking forward to seeing you all at the event!

 

Categories: DBA Blogs

Database Flashback -- 4

Hemant K Chitale - Tue, 2015-02-17 08:44
Continuing my series on Oracle Database Flashback.  This post demonstrates a FLASHBACK DATABASE when a Tablespace is set to  FLASHBACK OFF.

UPDATE 20-Feb-15 : When might I have FLASHBACK OFF for a Tablespace in a FLASHBACK ON Database ?  When the Tablespace contents are "throwaway-able".  I can avoid the overheads of Flashback for DML in the Tablespace and I am confident that on the rare occasion that I have to Flashback the Database, I can discard and rebuild the tablespace and its contents because they are not important for persistence.


[oracle@localhost Hemant]$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Tue Feb 17 22:30:53 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SYS>select sysdate from dual;

SYSDATE
---------
17-FEB-15

SYS>show parameter db_flashback_retention_target

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_flashback_retention_target integer 1440
SYS>select * from v$flashback_database_log;

OLDEST_FLASHBACK_SCN OLDEST_FL RETENTION_TARGET FLASHBACK_SIZE ESTIMATED_FLASHBACK_SIZE
-------------------- --------- ---------------- -------------- ------------------------
14573520 15-FEB-15 1440 190971904 0

SYS>

After my last post on 15/16-Feb (midnight), I had generated some transactions. The Flashback Size has grown from 24.576MB to 190.972MB.

Question : Does Database Flashback always apply to the whole database ?  Can we exclude a selected tablespace ?

SYS>create tablespace TEST_NOFB;

Tablespace created.

SYS>alter tablespace TEST_NOFB flashback OFF;

Tablespace altered.

SYS>

So, it is possible to set FLASHBACK OFF for a tablespace !  What are the implications ?  Oracle does NOT save Flashback information for this tablespace.  If I need to Flashback the Database, this tablespace or all the datafiles in this tablespace must be taken OFFLINE or DROPped.
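As a quick way to confirm which tablespaces are excluded from Flashback logging, the data dictionary exposes a FLASHBACK_ON column in DBA_TABLESPACES. A minimal sketch of such a check (added for illustration, not part of the original session):

-- Which tablespaces have Flashback logging disabled ?
select tablespace_name, flashback_on
from dba_tablespaces
order by tablespace_name;
-- TEST_NOFB should show FLASHBACK_ON = 'NO' while the others show 'YES'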

UPDATE : 18-FEB-15

Continuing the case.
Note : Subsequent to last night's post I had some more activity on the database and even a RESTORE and RESETLOGS.  The v$flashback_database_log contents are different now.

[oracle@localhost Hemant]$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Wed Feb 18 23:27:19 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SYS>select sysdate from dual;

SYSDATE
---------
18-FEB-15

SYS>show parameter db_flashback_retention_target

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_flashback_retention_target integer 1440
SYS>select * from v$flashback_database_log;

OLDEST_FLASHBACK_SCN OLDEST_FL RETENTION_TARGET FLASHBACK_SIZE ESTIMATED_FLASHBACK_SIZE
-------------------- --------- ---------------- -------------- ------------------------
14573526 17-FEB-15 1440 62783488 0

SYS>

Now, let me create a RESTORE POINT and then generate some activity in the TEST_NOFB tablespace.

SYS>create restore point FEB18_15;

Restore point created.

SYS>connect hemant/hemant
Connected.
HEMANT>create table test_nofb_tbl tablespace test_nofb
2 as select * from dba_objects
3 where 1=2;

Table created.

HEMANT>select tablespace_name
2 from user_tables
3 where table_name = 'TEST_NOFB_TBL'
4 /

TABLESPACE_NAME
------------------------------
TEST_NOFB

HEMANT>insert into test_nofb_tbl
2 select * from dba_objects
3 union all
4 select * from dba_objects
5 union all
6 select * from dba_objects
7 /

225138 rows created.

HEMANT>truncate table test_nofb_tbl;

Table truncated.

HEMANT>insert into test_nofb_tbl
2 select * from dba_objects
3 union all
4 select * from dba_objects
5 union all
6 select * from dba_objects
7 /

225138 rows created.

HEMANT>commit;

Commit complete.

HEMANT>connect / as sysdba
Connected.
SYS>select * from v$flashback_database_log;

OLDEST_FLASHBACK_SCN OLDEST_FL RETENTION_TARGET FLASHBACK_SIZE ESTIMATED_FLASHBACK_SIZE
-------------------- --------- ---------------- -------------- ------------------------
14573526 17-FEB-15 1440 62783488 646397952

SYS>

So, I have generated DML activity in the TEST_NOFB Tablespace.

Let me define another Restore Point and generate more DML.

SYS>create restore point FEB18_15_2nd;

Restore point created.

SYS>connect hemant/hemant
Connected.
HEMANT>insert into test_nofb_tbl
2 select * from dba_objects
3 union all
4 select * from dba_objects
5 union all
6 select * from dba_objects
7 /

225138 rows created.

HEMANT>commit;

Commit complete.

HEMANT>select count(*) from test_nofb_tbl;

COUNT(*)
----------
450276

HEMANT>

I now have two Restore Points and DML against the target table in a FLASHBACK OFF Tablespace between and after the Restore Points.

Let me try to Flashback to the 2nd Restore Point.  What should I see ?  225138 rows in the table ? Or no rows in the table ?

HEMANT>connect / as sysdba
Connected.
SYS>shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SYS>startup mount
ORACLE instance started.

Total System Global Area 456146944 bytes
Fixed Size 1344840 bytes
Variable Size 394267320 bytes
Database Buffers 54525952 bytes
Redo Buffers 6008832 bytes
Database mounted.
SYS>
SYS>flashback database to restore point FEB18_15_2nd;
flashback database to restore point FEB18_15_2nd
*
ERROR at line 1:
ORA-38753: Cannot flashback data file 6; no flashback log data.
ORA-01110: data file 6: '/home/oracle/app/oracle/oradata/orcl/ORCL/datafile/o1_mf_test_nof_bg9cq9bv_.dbf'


SYS>

AAHA ! It cannot flashback the datafile because no flashback log exists for it.

Let me quote again what I said yesterday :  If I need to Flashback the Database, this tablespace or all the datafiles in this tablespace must be taken OFFLINE or DROPped.

So, I must take the necessary action :

SYS>alter database datafile 6 offline;

Database altered.

SYS>flashback database to restore point FEB18_15_2nd;

Flashback complete.

SYS>alter database open read only;

Database altered.

SYS>
SYS>alter database datafile 6 online;
alter database datafile 6 online
*
ERROR at line 1:
ORA-01113: file 6 needs media recovery
ORA-01110: data file 6: '/home/oracle/app/oracle/oradata/orcl/ORCL/datafile/o1_mf_test_nof_bg9cq9bv_.dbf'


SYS>recover datafile 6;
ORA-00283: recovery session canceled due to errors
ORA-38798: Cannot perform partial database recovery
ORA-38797: Full database recovery required after a database has been flashed back


SYS>

I *cannot* selectively recover that datafile.  What options do I have ?
In the meantime, let's look at the alert.log.

Wed Feb 18 23:47:04 2015
flashback database to restore point FEB18_15_2nd
ORA-38753 signalled during: flashback database to restore point FEB18_15_2nd...
Wed Feb 18 23:49:42 2015
alter database datafile 6 offline
Completed: alter database datafile 6 offline
flashback database to restore point FEB18_15_2nd
Flashback Restore Start
Flashback Restore Complete
Flashback Media Recovery Start
Serial Media Recovery started
Warning: Datafile 6 (/home/oracle/app/oracle/oradata/orcl/ORCL/datafile/o1_mf_test_nof_bg9cq9bv_.dbf) is offline during full database recovery and will not be recovered
Recovery of Online Redo Log: Thread 1 Group 3 Seq 3 Reading mem 0
Mem# 0: /home/oracle/app/oracle/oradata/orcl/redo03.log
Recovery of Online Redo Log: Thread 1 Group 1 Seq 4 Reading mem 0
Mem# 0: /home/oracle/app/oracle/oradata/orcl/redo01.log
Incomplete Recovery applied until change 14577570 time 02/18/2015 23:40:29
Flashback Media Recovery Complete
Completed: flashback database to restore point FEB18_15_2nd
Wed Feb 18 23:50:08 2015
alter database open read only
AUDIT_TRAIL initialization parameter is changed to OS, as DB is NOT compatible for database opened with read-only access
ARCH: STARTING ARCH PROCESSES
.....
.....
Completed: alter database open read only
Wed Feb 18 23:50:10 2015
.....
.....
Wed Feb 18 23:50:36 2015
alter database datafile 6 online
ORA-1113 signalled during: alter database datafile 6 online...
Wed Feb 18 23:50:36 2015
Signalling error 1152 for datafile 6!
Checker run found 2 new persistent data failures
ALTER DATABASE RECOVER datafile 6
Media Recovery Start
Serial Media Recovery started
Media Recovery failed with error 38798
ORA-283 signalled during: ALTER DATABASE RECOVER datafile 6 ...

So, the datafile is unusable.
I can only drop the tablespace.

SYS>shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SYS>startup mount;
ORACLE instance started.

Total System Global Area 456146944 bytes
Fixed Size 1344840 bytes
Variable Size 394267320 bytes
Database Buffers 54525952 bytes
Redo Buffers 6008832 bytes
Database mounted.
SYS>
SYS>recover datafile 6;
ORA-00283: recovery session canceled due to errors
ORA-38798: Cannot perform partial database recovery
ORA-38797: Full database recovery required after a database has been flashed back


SYS>alter database open;
alter database open
*
ERROR at line 1:
ORA-01589: must use RESETLOGS or NORESETLOGS option for database open


SYS>alter database open resetlogs;

Database altered.

SYS>
SYS>recover datafile 6;
ORA-00283: recovery session canceled due to errors
ORA-19909: datafile 6 belongs to an orphan incarnation
ORA-01110: data file 6: '/home/oracle/app/oracle/oradata/orcl/ORCL/datafile/o1_mf_test_nof_bg9cq9bv_.dbf'


SYS>select tablespace_name from dba_data_files
2 where file_id=6;

TABLESPACE_NAME
------------------------------
TEST_NOFB

SYS>drop tablespace test_nofb;
drop tablespace test_nofb
*
ERROR at line 1:
ORA-01549: tablespace not empty, use INCLUDING CONTENTS option


SYS>drop tablespace test_nofb including contents and datafiles;

Tablespace dropped.

SYS>

There ! A Tablespace that was in FLASHBACK OFF mode cannot be accessed after a Flashback Database --- not even with respect to data that existed before the Restore Point that is used for the Flashback.
.
.
.

Categories: DBA Blogs

Did your data survive Juno? [VIDEO]

Chris Foot - Tue, 2015-02-17 08:09

Transcript

Hi, welcome to RDX! A couple of weeks ago, the Northeastern U.S. was hit by Juno, a blizzard that got us thinking about disaster recovery.

Take insanely cold weather, two feet of snow and strong winds, and you’ve got yourself the makings of an outage. There’s a chance all those tree limbs hanging over power lines may initiate a DR plan. Ensuring your data center’s generators are protected from the cold is an essential part of keeping systems online.

But what if the worst occurs? This is where a database and server replication strategy comes into play. GoldenGate, a solution that supports log-based bidirectional data replication, can help you replicate applicable information and migrate it to online servers.

Thanks for watching! If you want to learn about how RDX can help you develop a replication strategy, check out our services page in the transcript!

The post Did your data survive Juno? [VIDEO] appeared first on Remote DBA Experts.

Driving Productivity with Documents Cloud Service Powerful Mobile App

WebCenter Team - Tue, 2015-02-17 08:04

Author: Mark Paterson, Director, Oracle Documents Cloud Service Product Management

More and more users are relying on their mobile smartphones and tablets to continue working outside of the office. We want to help businesses--large and small--achieve flexibility and agility in the way their users can work for maximum productivity and success, anywhere at any time.

For users who rely on access to content this means giving them the means to browse all the content they have access to, ability to easily search for material, ability to discuss and act on content, and the ability to take content offline directly on their mobile devices.

Oracle Documents Cloud Service (DOCS) offers mobile users apps for both iOS and Android.


A Quick Look at Key Mobile Features:

Oracle Documents Cloud Service allows you to:

• Easily access, view and manage your files on your mobile device

• Take content offline, ensuring that you always have access to your most important files

• Upload pictures, documents and videos to quickly sync across devices and share with others

• Share folders with other colleagues directly from your mobile device

• Create public links that can be set to expire on a given date or protected with an access code

• Reserve files you are editing to avoid conflicts

• Open, review and edit files from other apps installed on your mobile device

• Set a Passcode for your application

It is worth noting that all content is encrypted during transmission and when downloaded to your mobile device's local storage.

Today’s post focuses on maximizing the protection of your enterprise’s content by ensuring you use application-level passcode protection.

Setting a Passcode

Much like our security codes on mobile devices, Passcodes are four-digit numbers that you set to lock the app when you're not using it.

If you leave your device unattended and don't return within a specified time, you must enter the passcode. This helps to prevent unauthorized access to the app if your device is left active where others could use it. After five unsuccessful attempts, any content you have stored locally will be wiped from the device.

On iOS:



1. Swipe the Browse screen to the right to show the navigation panel.

2. Tap to access your settings.

3. Tap Passcode Lock in the Security section.

4. Tap Turn Passcode On.

5. Enter the four digits to use as a passcode. Enter the digits again for confirmation.

6. Tap Require Passcode and choose the amount of time that has to pass before you need to enter the passcode. For example, if you choose After 1 minute, then you leave the app and return after more than one minute, you have to enter the passcode to use the app again.

On Android:

1. Tap to open the navigation panel, then tap to access your settings.

2. In the Security section, tap Enable Passcode.

3. Enter the four digits to use as a passcode, then re-enter the digits to confirm.

4. Tap Prompt for Passcode and choose the amount of time that has to pass before you need to enter the passcode.

Oracle Documents Cloud Service thus enables you to securely share and store your files using your mobile device, anywhere and at any time.

Hope you enjoyed this post. We will continue to explore the myriad of things you can do directly from your mobile device using Oracle Documents Cloud Service. Meanwhile, if there is a particular topic you would be interested in, do let us know.


NGDLE: The quest to eat your cake and have it too

Michael Feldstein - Tue, 2015-02-17 05:51

By Phil HillMore Posts (289)

And I’m going old school and sticking to the previous saying.

[Image: Google Ngram Viewer]

Today I’m participating in the EDUCAUSE meeting on Next Generation Digital Learning Environments, funded by the Bill & Melinda Gates Foundation[1]. From the invitation:

The purpose of the panel is to identify potential investment strategies that are likely to encourage and hasten the arrival of “next-generation digital learning environments,” online learning environments that take us beyond the LMS to fully support the needs of today’s students and instructors. [snip]

It is clear that to meet the needs of higher education and today’s learner, the NGDLE must support a much wider range of functionality than today’s LMS, including different instructional modes, alternative credit models, personalized learning, robust data and content exchange, real-time and iterative assessment, the social web, and contemporary software design and usability practices. The policy and cultural context at our colleges and universities must also adapt to a world in which all learning has a digital component.

As I’m making an ill-timed trip from sunny California to snow-ravaged DC for a reduced-attendance meeting, I should at least lay down some of my thoughts on the subject in writing[2].

There is potential confusion of language here, with the NGDLE implied to be an environment that replaces today’s LMS. Are we talking about new, monolithic systems that replace today’s LMS but also have a range of functionality to support new needs, or are we talking about an environment that allows reasonably seamless integration and navigation between multiple systems? Put another way, investing in what?

To get at that question we should consider the current LMS market.

Current Market

Unlike five years ago, market dynamics are now leading to systems that better meet the needs of students. Primarily driven by the entrance of the Canvas LMS, the end of the Blackboard – Desire2Learn patent lawsuit, and new ed tech investment, today’s systems are lower in costs than previous systems and have much better usability. Canvas changed the standard of what an LMS can be for traditional courses – competitors that view it as just the shiny new object and not a material difference in usability have done so at their own peril. Blackboard is (probably / eventually / gosh I hope) releasing an entirely new user experience this year that seems to remove much of the multiple-click clunkiness of the past. Moodle has eliminated most of the scroll of death. Sakai 10 introduced a new user interface that is far better than what they had in the past.

It seems at every school I visit and every report I read, students are asking for consistency of usage and navigation along with more usable systems. This is, in fact, what the market is finally starting to deliver. It’s not a perfect market, but there are real changes occurring.

I have already written about the trend of the LMS, particularly based on IMS standards, to go from a walled garden approach:

[Image: walled garden LMS diagram]

to an open garden approach that allows the coordination of the base system with external tools.

[Image: open garden LMS diagram]

 

Largely due to adoption of Learning Tools Interoperability (LTI) specifications from IMS Global, it is far easier today to integrate different applications with an LMS. Perhaps more importantly, the ability to move the integration closer to end users (from central IT to departments and faculty) is getting closer and closer to reality. Michael has also written about the potential of the Caliper framework to be even more significant in expanding interoperability.

The LMS is not going away, but neither is it going to be the whole of the online learning experience anymore. It is one learning space among many now. What we need is a way to tie those spaces together into a coherent learning experience. Just because you have your Tuesday class session in the lecture hall and your Friday class session in the lab doesn’t mean that what happens in one is disjointed from what happens in the other. However diverse our learning spaces may be, we need a more unified learning experience. Caliper has the potential to provide that.

At the same time there is a new wave of learning platforms designed specifically for this latter category. I have started to cover the CBE platforms recently, as Motivis, Helix, FlatWorld, LoudCloud Systems, and others have been introduced with radically different features and capabilities. At e-Literate TV we are learning more about adaptive and personalized systems such as ALEKS, Smart Sparrow, OLI, Cerego and others that are designed around the learner.

If you look at this new wave of learning environments, you’ll see that they are designed around the learner instead of the course and are focused on competencies or some other form of learning outcomes.

In a sense, the market is working. Better usability for traditional LMS, greater interoperability, and new learning platforms designed around the learner. There is a risk for NGDLE in that you don’t want to screw up the market when it’s finally moving in the right direction.

And Yet . . .

The primary benefit of today’s LMS remains administrative management of traditionally-designed courses. From last year’s ECAR report on the LMS, faculty and students rated their LMS satisfaction highest for the basic administrative functions.

[Chart: Faculty satisfaction with the LMS]

[Chart: Student satisfaction with the LMS]

Opponents of the traditional LMS are right to call out how its design can stifle creativity and prevent real classroom engagement. Almost all capabilities of the LMS are available on the free Internet, typically in better-designed tools.

This situation leads to three challenges:

  • The community has discussed the need for direct teaching and learning support for years, yet most courses only use the LMS for rosters, grade book and document sharing (syllabus, readings, assignments). The market changed en masse to call their systems Learning Management Systems in the late 2000s, but the systems mostly remain Course Management Systems as previously named. Yes, some schools and faculty – innovators and early adopters – have found ways to get learning benefits out of the systems, but that is secondary to managing the course.
  • New educational delivery models such as competency-based education (CBE) and personalized learning require a learner-centric design that is not just based on adding some features on top of the core LMS. It is worth noting that the new learning platforms tend to be wholesale replacements for the LMS in specific programs rather than expansions of its capabilities.
  • The real gains in learner-specific functionality have arisen from applications that don’t attempt to be all things to all people. In today’s world it’s far easier to create a new web and mobile-based application than ever before, and many organizations are taking this approach. Any attempt to push platforms into broader functionality creates the risk of pushing the market backwards into more feature bloat.
Back to the NGDLE

I won’t go into investment strategies for NGDLE, as that is the topic for the group discussions today. But I think it is worth calling out two seemingly incompatible needs that must both be supported.

  • Given the very real improvements in the LMS market, we should not abandon the gains made by institutions and faculty that have taken ~15 years to achieve.
  • The market should not just evolve – new educational models require new ground-up designs, and we need far more emphasis on learning support and student engagement.

Is it possible to eat your cake and have it, too? In my opinion, our best chance is through the encouragement and support of interoperability frameworks that allow a course or learner hub / aggregator – providing consistent navigation and support for faculty not looking to innovate with technology – along with an ecosystem of true learning applications and environments. This is the move to learning platforms, not just as marketing terms but as true support for an integrated world of applications.

  1. Disclosure: Our upcoming e-Literate TV series has also received a grant from the Gates Foundation.
  2. Now that I’ve gone down for breakfast, the 2-inch snowfall would be somewhat embarrassing if not for the city being shut down.

The post NGDLE: The quest to eat your cake and have it too appeared first on e-Literate.

BPM 12c BundelPatch1: InitiatorTask problem: Solved!

Darwin IT - Tue, 2015-02-17 05:23
Lately I wrote about the release of the BPM12c Bundelpatch 1 and the problems regarding Initiator tasks we were having: BPM 12c BundelPatch1: InitiatorTask problem.

I created a service request for it and now there is a patch for it: BPM 12c BP interim patch #20440332
This patch must be applied after applying the BundelPatch and its recommended prerequisite patch (p20163149).

Patch 20440332 is official and hence available for public download. With this patch it works for us.

Thanks to Gucci for his support.

Creating an IBM Bluemix Server Connection from Eclipse

Pas Apicella - Tue, 2015-02-17 04:52
The post below shows how to create an IBM Bluemix server connection into the USA public hosted PaaS using Eclipse. It assumes you already have an existing account setup with http://bluemix.net/

1. Ensure you have added "IBM Eclipse Tools for Bluemix" as shown in the marketplace installed items.


More info can be found here.

http://marketplace.eclipse.org/content/ibm-eclipse-tools-bluemix

2. In the "Servers" tab click the link to create a new server and click on IBM Bluemix as shown below.


3. Click Next
4. Enter your account details. You can target a public cloud instance depending on the location; the default is the USA-based PUBLIC cloud instance


5. Click Next
6. Select the space to target for deployed applications


7. Click Finish

By clicking on the connection you can view connection attributes, plus deployed application, services, routes, memory settings plus more.



The following YouTube video shows this in more detail.

https://www.youtube.com/watch?v=GpkXgklzOlk
IBM Eclipse Tools for Bluemix
http://feeds.feedburner.com/TheBlasFromPas
Categories: Fusion Middleware

Partner Webcast – Agile Business Intelligence in the Cloud for Oracle Partner Solutions

As the #1 vendor in business analytics with the industry’s most complete and integrated range of enterprise-class business intelligence (BI) solutions, Oracle leads the way in helping organizations...

We share our skills to maximize your revenue!
Categories: DBA Blogs

A new index on a small table makes a big difference

Bobby Durrett's DBA Blog - Mon, 2015-02-16 16:12

A few weeks back, on the weekend just before I went on call, we got a complaint about slowness on an important set of reports.  I worried that the slowness of these reports would continue during my support week so I tried to figure out why they were slow.  I reviewed an AWR report for the 24 hours when the reports were running and found a simple query against a tiny table at the top of the “SQL ordered by Elapsed Time” report:

   SQL Id         Elapsed (s)        Execs
-------------   ------------- ------------
77hcmt4kkr4b6      307,516.33 3.416388E+09

I edited the AWR report to show just elapsed seconds and number of executions.  3.4 billion executions totaling 307,000 seconds of elapsed time.  This was about 90 microseconds per execution.

The previous weekend the same query looked like this:

   SQL Id         Elapsed (s)        Execs
-------------   ------------- ------------
77hcmt4kkr4b6      133,143.65 3.496291E+09

So, about the same number of executions but less than half of the elapsed time.  This was about 38 microseconds per execution.  I never fully explained the change from week to week, but I found a way to improve the query performance by adding a new index.

The plan was the same both weekends so the increase in average execution time was not due to a plan change.  Here is the plan:

SQL_ID 77hcmt4kkr4b6
--------------------
SELECT DIV_NBR FROM DIV_RPT_GEN_CTL WHERE RPT_NM = :B1 AND
GEN_STAT = 1

Plan hash value: 1430621991

------------------------------------------------------------------
| Id  | Operation                   | Name               | Rows  |
------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                    |       |
|   1 |  TABLE ACCESS BY INDEX ROWID| DIV_RPT_GEN_CTL    |     1 |
|   2 |   INDEX RANGE SCAN          | DIV_RPT_GEN_CTL_U1 |     1 |
------------------------------------------------------------------

I found that the table only had 369 rows and 65 blocks so it was tiny.

The table’s only index was on columns RPT_NM and RPT_ID, but only RPT_NM was in the query.  For the given value of RPT_NM the index would look up all rows in the table with that value until it found those with GEN_STAT=1.  I suspect that on the weekend of the slowdown the number of rows being scanned for a given RPT_NM value had increased, but I cannot prove it.
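For reference, the column makeup of the existing index can be verified with a dictionary query along these lines (a sketch; the post does not show how this was checked):

-- Sketch: list the columns of each index on the table, in position order
select index_name, column_name, column_position
from dba_ind_columns
where table_name = 'DIV_RPT_GEN_CTL'
order by index_name, column_position;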

I did a count grouping by the column GEN_STAT and found that only 1 of the table’s 369 rows had GEN_STAT=1.

SELECT GEN_STAT,count(*)
FROM DIV_RPT_GEN_CTL
group by GEN_STAT;

  GEN_STAT   COUNT(*)
---------- ----------
         1          1
         2        339
         0         29

So, even though this table is tiny it made sense to add an index which included the selective column GEN_STAT.  Also, since the reports execute the query billions of times per day it made sense to include the one column in the select clause as well, DIV_NBR.  By including DIV_NBR in the index the query could get DIV_NBR from the index and not touch the table.  The new index was on the columns RPT_NM, GEN_STAT, and DIV_NBR in that order.
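The exact DDL is not shown in the post, but based on that description and the index name that appears in the new plan below, it would have been something along these lines (a sketch, not the actual statement used):

-- Sketch of the covering index described above, reconstructed from the
-- column list and the index name DIV_RPT_GEN_CTL_U2 seen in the new plan
create index DIV_RPT_GEN_CTL_U2 on DIV_RPT_GEN_CTL (RPT_NM, GEN_STAT, DIV_NBR);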

Here is the new plan:

SQL_ID 77hcmt4kkr4b6
--------------------
SELECT DIV_NBR FROM DIV_RPT_GEN_CTL WHERE RPT_NM = :B1 AND
GEN_STAT = 1

Plan hash value: 2395994055

-------------------------------------------------------
| Id  | Operation        | Name               | Rows  |
-------------------------------------------------------
|   0 | SELECT STATEMENT |                    |       |
|   1 |  INDEX RANGE SCAN| DIV_RPT_GEN_CTL_U2 |     1 |
-------------------------------------------------------

Note that it uses the new index and does not access the table.  Here is the part of the AWR report for the problem query for last weekend:

   SQL Id         Elapsed (s)        Execs
-------------   ------------- ------------
77hcmt4kkr4b6       84,303.02 4.837909E+09

4.8 billion executions and only 84,000 seconds elapsed.  That is about 17.4 microseconds per execution.  That is less than half of what the average execution time was the weekend before the problem started.

The first Monday after we put the index in we found that one of the slow reports had its run time reduced from 70 minutes to 50 minutes.  It was great that we could improve the run time so much with such a simple fix.

It was a simple query to tune.  Add an index using the columns in the where clause and the one column in the select clause.  It was a tiny table that normally would not even need an index.  But, any query that an application executes billions of times in a day needs to execute in the most efficient way possible so it made sense to add the best possible index.

– Bobby

Categories: DBA Blogs

Is CDB stable after one patchset and two PSU?

Yann Neuhaus - Mon, 2015-02-16 15:23

There has been the announcement that the non-CDB architecture is deprecated, and the reaction that CDB is not yet stable.

Well. Let's talk about the major issue I've encountered. Multitenant is there for consolidation. What is the major requirement of consolidation? It's availability. If you put all your databases onto one server, managed by one instance, then you don't expect a failure.

When 12c came out (and even earlier, as we were beta testers) - 12.1.0.1 - David Hueber encountered an important issue: when a SYSTEM datafile was lost, we could not recover it without stopping the whole CDB. That's bad, of course.

When Patchset 1 was out (and we were beta testers again) I tried to check whether that had been solved. I saw that they had introduced the undocumented "_enable_pdb_close_abort" parameter in order to allow a shutdown abort of a PDB. But that was worse: when I dropped a SYSTEM datafile the whole CDB instance crashed immediately. I opened an SR and Bug 19001390 'PDB system tablespace media failure causes the whole CDB to crash' was created for that. All of this is documented in that blog post.

Now the bug status is: fixed in 12.1.0.2.1 (Oct 2014) Database Patch Set Update

Good. I've installed the latest PSU, which is 12.1.0.2.2 (Jan 2015), and I test the most basic recovery situation: loss of a non-system datafile in one PDB.

Here it is:

 

RMAN> report schema;
Report of database schema for database with db_unique_name CDB

List of Permanent Datafiles
===========================
File Size(MB) Tablespace RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1 800 SYSTEM YES /u02/oradata/CDB/system01.dbf
3 770 SYSAUX NO /u02/oradata/CDB/sysaux01.dbf
4 270 UNDOTBS1 YES /u02/oradata/CDB/undotbs01.dbf
5 250 PDB$SEED:SYSTEM NO /u02/oradata/CDB/pdbseed/system01.dbf
6 5 USERS NO /u02/oradata/CDB/users01.dbf
7 490 PDB$SEED:SYSAUX NO /u02/oradata/CDB/pdbseed/sysaux01.dbf
11 260 PDB2:SYSTEM NO /u02/oradata/CDB/PDB2/system01.dbf
12 520 PDB2:SYSAUX NO /u02/oradata/CDB/PDB2/sysaux01.dbf
13 5 PDB2:USERS NO /u02/oradata/CDB/PDB2/PDB2_users01.dbf
14 250 PDB1:SYSTEM NO /u02/oradata/CDB/PDB1/system01.dbf
15 520 PDB1:SYSAUX NO /u02/oradata/CDB/PDB1/sysaux01.dbf
16 5 PDB1:USERS NO /u02/oradata/CDB/PDB1/PDB1_users01.dbf

List of Temporary Files
=======================
File Size(MB) Tablespace Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1 60 TEMP 32767 /u02/oradata/CDB/temp01.dbf
2 20 PDB$SEED:TEMP 32767 /u02/oradata/CDB/pdbseed/pdbseed_temp012015-02-06_07-04-28-AM.dbf
3 20 PDB1:TEMP 32767 /u02/oradata/CDB/PDB1/temp012015-02-06_07-04-28-AM.dbf
4 20 PDB2:TEMP 32767 /u02/oradata/CDB/PDB2/temp012015-02-06_07-04-28-AM.dbf


RMAN> host "rm -f /u02/oradata/CDB/PDB1/PDB1_users01.dbf";
host command complete


RMAN> alter system checkpoint;
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00601: fatal error in recovery manager
RMAN-03004: fatal error during execution of command
ORA-01092: ORACLE instance terminated. Disconnection forced
RMAN-03002: failure of sql statement command at 02/19/2015 22:51:55
ORA-03113: end-of-file on communication channel
Process ID: 19135
Session ID: 357 Serial number: 41977
ORACLE error from target database:
ORA-03114: not connected to ORACLE

 

Ok, but I have the PSU:

 

$ /u01/app/oracle/product/12102EE/OPatch/opatch lspatches
19769480;Database Patch Set Update : 12.1.0.2.2 (19769480)

 

Here is the alert.log:

 

Completed: alter database open
2015-02-19 22:51:46.460000 +01:00
Shared IO Pool defaulting to 20MB. Trying to get it from Buffer Cache for process 19116.
===========================================================
Dumping current patch information
===========================================================
Patch Id: 19769480
Patch Description: Database Patch Set Update : 12.1.0.2.2 (19769480)
Patch Apply Time: 2015-02-19 22:14:05 GMT+01:00
Bugs Fixed: 14643995,16359751,16870214,17835294,18250893,18288842,18354830,
18436647,18456643,18610915,18618122,18674024,18674047,18791688,18845653,
18849537,18885870,18921743,18948177,18952989,18964939,18964978,18967382,
18988834,18990693,19001359,19001390,19016730,19018206,19022470,19024808,
19028800,19044962,19048007,19050649,19052488,19054077,19058490,19065556,
19067244,19068610,19068970,19074147,19075256,19076343,19077215,19124589,
19134173,19143550,19149990,19154375,19155797,19157754,19174430,19174521,
19174942,19176223,19176326,19178851,19180770,19185876,19189317,19189525,
19195895,19197175,19248799,19279273,19280225,19289642,19303936,19304354,
19309466,19329654,19371175,19382851,19390567,19409212,19430401,19434529,
19439759,19440586,19468347,19501299,19518079,19520602,19532017,19561643,
19577410,19597439,19676905,19706965,19708632,19723336,19769480,20074391,
20284155
===========================================================
2015-02-19 22:51:51.113000 +01:00
db_recovery_file_dest_size of 4560 MB is 18.72% used. This is a
user-specified limit on the amount of space that will be used by this
database for recovery-related files, and does not reflect the amount of
space available in the underlying filesystem or ASM diskgroup.
Setting Resource Manager plan SCHEDULER[0x4446]:DEFAULT_MAINTENANCE_PLAN via scheduler window
Setting Resource Manager CDB plan DEFAULT_MAINTENANCE_PLAN via parameter
2015-02-19 22:51:54.892000 +01:00
Errors in file /u01/app/oracle/diag/rdbms/cdb/CDB/trace/CDB_ckpt_19102.trc:
ORA-63999: data file suffered media failure
ORA-01116: error in opening database file 16
ORA-01110: data file 16: '/u02/oradata/CDB/PDB1/PDB1_users01.dbf'
ORA-27041: unable to open file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
Errors in file /u01/app/oracle/diag/rdbms/cdb/CDB/trace/CDB_ckpt_19102.trc:
ORA-63999: data file suffered media failure
ORA-01116: error in opening database file 16
ORA-01110: data file 16: '/u02/oradata/CDB/PDB1/PDB1_users01.dbf'
ORA-27041: unable to open file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
USER (ospid: 19102): terminating the instance due to error 63999
System state dump requested by (instance=1, osid=19102 (CKPT)), summary=[abnormal instance termination].
System State dumped to trace file /u01/app/oracle/diag/rdbms/cdb/CDB/trace/CDB_diag_19090_20150219225154.trc
ORA-1092 : opitsk aborting process
2015-02-19 22:52:00.067000 +01:00
Instance terminated by USER, pid = 19102

 

You can see the bug number in the 'Bugs Fixed' list, and yet the instance still terminates after a media failure on a PDB datafile. That's bad news.

 

I've lost one datafile, and at the first checkpoint the whole CDB crashed. I'll have to open an SR again. But for sure, consolidation through the multitenant architecture is not yet ready for sensitive production systems.

Webcast Q&A: Delivering Next-Gen Digital Experiences

WebCenter Team - Mon, 2015-02-16 14:45


Oracle Corporation Digital Strategies For Customer Engagement Growth

In case you missed our webcast "Delivering Next-Gen Digital Experiences" last week, we had a great turnout and wanted to provide a Q&A summary for those questions that were asked.

Q. How do I download the slide presentation?
The slides are available by clicking on the folder icon at the bottom of the console.

Q. This covers a lot of different systems and processes; how can you help me understand where it all lives in my organization?
Great question. We actually have an Assessment, which we cover later in the presentation.
Q. Does Oracle offer an integrated tool, or does one need to purchase each separate tool (i.e. Eloqua, RightNow, etc.)?
The answer is "it depends" -- on your starting point and end-goals. Elements of the DX strategy are embodied in specific products, but the larger story is more of a "solution" which takes specific elements of the products mentioned. The Assessment will help determine the components.
Q. Can you please provide some examples wherever possible?
This white paper outlines some specific customer references along with some of their results.
Q. Does Eloqua provide an integrated solution? Or would we need to purchase multiple tools to build this (i.e., Eloqua, RightNow, etc.)?
The complete Digital Experience will include other elements beyond Eloqua depending on your specific needs. But the Marketing Cloud is a key element.
Q. What does the "assessment" cost?
There is no charge from Oracle, but there is time that has to be invested.
Please be sure to view the on demand version of the webcast in case you missed it, and check out the corresponding white paper "Delivering Next-Generation Digital Experiences!"

Next Generation Outline Extractor - New Version Available

Tim Tow - Mon, 2015-02-16 14:42
Today we released a new version of the Next Generation Outline Extractor, version 2.0.3.769.  Here are the release notes from this new version:

Version 2.0.3.769 supports the following Essbase versions:

9.3.1
9.3.1.1
9.3.1.2
9.3.3
11.1.1
11.1.1.1
11.1.1.2
11.1.1.3
11.1.1.4
11.1.2
11.1.2.1
11.1.2.1.102
11.1.2.1.103
11.1.2.1.104
11.1.2.1.105
11.1.2.1.106
11.1.2.2
11.1.2.2.102
11.1.2.2.103
11.1.2.2.104
11.1.2.3
11.1.2.3.001
11.1.2.3.002
11.1.2.3.003
11.1.2.3.500
11.1.2.3.501
11.1.2.3.502
11.1.2.3.505
11.1.2.4

Issues resolved in version 2.0.3.769:

2015.02.15 - Issue 1355 - All Writers - Add functionality to replace all line feeds, carriage returns, tabs, and extraneous spaces in formulas

2015.02.13 - Issue 1354 - RelationalWriter - Changed the default database name from dodeca to extractor

2015.02.13 - Issue 1353 - RelationalWriter - Added CONSOLIDATION_TYPE_SYMBOL, SHARE_FLAG_SYMBOL, TIME_BALANCE, TIME_BALANCE_SYMBOL, TIME_BALANCE_SKIP, TIME_BALANCE_SKIP_SYMBOL, EXPENSE_FLAG, EXPENSE_FLAG_SYMBOL, TWO_PASS_FLAG, and TWO_PASS_FLAG_SYMBOL columns to the CACHED_OUTLINE_MEMBERS table

2015.02.13 - Issue 1352 - RelationalWriter - Added Server, Application, and Cube columns to the CACHED_OUTLINE_VERSIONS table

2015.02.13 - Issue 1351 - Fixed issue with LoadFileWriter where UDA column headers were incorrectly written in the form UDAS0,DimName instead of UDA0,DimName

In addition, a number of fixes, etc, were put into 2.0.2 and earlier releases and those releases went unannounced.  Those updates included the following items:

  1. There is no longer a default .properties file for the Extractor.  This will force a user to specify a .properties file.  (2.0.2.601)
  2. Removed the "/" character as a switch for command line arguments as it causes problems in Linux. (2.0.2.605)
  3. Fixed issue when combining MaxL input with relational output where a "not supported" error message would appear because certain properties were not being read correctly from the XML file (2.0.2.601)
  4. Command line operations resulted in an error due to an improper attempt to interact with the GUI progress bar. (2.0.2.601)
  5. Shared member attributes were not being properly written, resulting in a delimiter/column count mismatch. (2.0.2.625)
  6. Added encoding options where a user can choose between UTF-8 and ANSI encodings.  The Extractor will attempt to detect the encoding from the selected outline and, if the detected outline encoding is different from the user-selected encoding, a warning message appears.
Categories: BI & Warehousing

12c Parallel Execution New Features: Hybrid Hash Distribution - Part 1

Randolf Geist - Mon, 2015-02-16 14:21
In this blog post I want to cover some aspects of the new HYBRID HASH adaptive distribution method that I haven't covered yet in my other posts.

As far as I know it serves two purposes for parallel HASH and MERGE JOINs: adaptive broadcast distribution and hybrid distribution for skewed join expressions. In the first part of this post I want to focus on the former (go to part 2).

1. Adaptive Broadcast Distribution For Small Left Row Sources
It allows the PX SEND / RECEIVE operation for the left (smaller estimated) row source of the hash join to decide dynamically at runtime, actually at each execution, whether it should use either a BROADCAST or a HASH distribution, and correspondingly whether the other row source should then use either a ROUND-ROBIN or a HASH distribution, too. This is described for example in the corresponding white paper by Maria Colgan here.

It's important to emphasize that this decision is really done at each execution of the same cursor, so the same cursor can do a BROADCAST distribution for the left row source at one execution and HASH distribution at another execution depending on whether the number of rows detected by the STATISTICS COLLECTOR operator exceeds the threshold or not. This is different from the behaviour of "adaptive joins" where the final plan will be resolved at first execution and from then on will be re-used, and therefore a STATISTICS COLLECTOR operator as part of an adaptive plan no longer will be evaluated after the first execution.

Here is a simple script demonstrating that the distribution method is evaluated at each execution:

define dop = 4

create table t_1
compress
as
select
rownum as id
, rpad('x', 100) as filler
from
(select /*+ cardinality(&dop*2) */ * from dual
connect by
level <= &dop*2) a
;

exec dbms_stats.gather_table_stats(null, 't_1', method_opt=>'for all columns size 1')

create table t_2
compress
as
select
rownum as id
, mod(rownum, &dop) + 1 as fk_id
, rpad('x', 100) as filler
from
(select /*+ cardinality(1e5) */ * from dual
connect by
level <= 1e5) a
;

exec dbms_stats.gather_table_stats(null, 't_2', method_opt=>'for all columns size 1')

alter table t_1 parallel &dop cache;

alter table t_2 parallel &dop cache;

select /*+ leading(t1) no_swap_join_inputs(t2) pq_distribute(t_2 hash hash) */ max(t_2.id) from t_1, t_2 where t_1.id = t_2.fk_id;

@pqstat

delete from t_1 where rownum <= 1;

select count(*) from t_1;

select /*+ leading(t1) no_swap_join_inputs(t2) pq_distribute(t_2 hash hash) */ max(t_2.id) from t_1, t_2 where t_1.id = t_2.fk_id;

@pqstat

rollback;
For table queue 0 (the distribution of T_1) the distribution for the first execution in the above script looks like this:

TQ_ID SERVER_TYP INSTANCE PROCESS NUM_ROWS % GRAPH
---------- ---------- ---------- -------- ---------- ---------- ----------
0 Producer 1 P004 8 100 ##########
P005 0 0
P006 0 0
P007 0 0
********** ********** ----------
Total 8

Consumer 1 P000 3 38 ##########
P001 1 13 ###
P002 2 25 #######
P003 2 25 #######
********** ********** ----------
Total 8
So the eight rows are distributed, presumably by hash. But for the second execution, with only seven rows in T_1, I get this output:

TQ_ID SERVER_TYP INSTANCE PROCESS NUM_ROWS % GRAPH
---------- ---------- ---------- -------- ---------- ---------- ----------
0 Producer 1 P004 28 100 ##########
P005 0 0
P006 0 0
P007 0 0
********** ********** ----------
Total 28

Consumer 1 P000 7 25 ##########
P001 7 25 ##########
P002 7 25 ##########
P003 7 25 ##########
********** ********** ----------
Total 28
So this time the seven rows were broadcast.

The "pqstat" script is simply a query on V$PQ_TQSTAT, which I've mentioned for example here.

So I run the same query twice, the first time the threshold is exceeded and a HASH distribution takes place. After deleting one row the second execution of the same cursor turns into a BROADCAST / ROUND-ROBIN distribution. You can verify that this is the same parent / child cursor via DBMS_XPLAN.DISPLAY_CURSOR / V$SQL. Real-Time SQL Monitoring also can provide more details about the distribution methods used (click on the "binoculars" icon in the "Other" column of the active report for the PX SEND HYBRID HASH operations).
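For reference, such a check might look like the following sketch, where '&sql_id' is a placeholder for the SQL_ID of the test query (not shown here):

-- Sketch: confirm the executions are sharing the same parent/child cursor
select sql_id, child_number, executions, plan_hash_value
from v$sql
where sql_id = '&sql_id';

-- and display the runtime plan of child 0
select * from table(dbms_xplan.display_cursor('&sql_id', 0, 'ALLSTATS LAST'));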

Note that the dynamic switch from HASH to BROADCAST unfortunately isn't the same as a decision of the optimizer at parse time to use BROADCAST distribution, because in such a case the other row source won't be distributed at all, which comes with some important side effects:

Not only can the redistribution of larger row sources take significant time and resources (CPU and, in the case of RAC, network), but due to the limitation (still existing in 12c) of Parallel Execution that only a single redistribution is allowed to be active concurrently, reducing the number of redistributions in the plan can, simply as a side effect, reduce the number of BUFFERED operations (mostly HASH JOIN BUFFERED, but there could be additional BUFFER SORTs, too), which are a threat to Parallel Execution performance in general.

Here is a very simple example showing the difference:


-- HYBRID HASH with possible BROADCAST distribution of T_1
----------------------------------------------------------------------------
| Id | Operation | Name | TQ |IN-OUT| PQ Distrib |
----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | |
| 1 | PX COORDINATOR | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10002 | Q1,02 | P->S | QC (RAND) |
|* 3 | HASH JOIN BUFFERED | | Q1,02 | PCWP | |
| 4 | PX RECEIVE | | Q1,02 | PCWP | |
| 5 | PX SEND HYBRID HASH | :TQ10000 | Q1,00 | P->P | HYBRID HASH|
| 6 | STATISTICS COLLECTOR | | Q1,00 | PCWC | |
| 7 | PX BLOCK ITERATOR | | Q1,00 | PCWC | |
| 8 | TABLE ACCESS FULL | T_1 | Q1,00 | PCWP | |
| 9 | PX RECEIVE | | Q1,02 | PCWP | |
| 10 | PX SEND HYBRID HASH | :TQ10001 | Q1,01 | P->P | HYBRID HASH|
| 11 | PX BLOCK ITERATOR | | Q1,01 | PCWC | |
| 12 | TABLE ACCESS FULL | T_2 | Q1,01 | PCWP | |
----------------------------------------------------------------------------

-- TRUE BROADCAST of T_1
-------------------------------------------------------------------------
| Id | Operation | Name | TQ |IN-OUT| PQ Distrib |
-------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | |
| 1 | PX COORDINATOR | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10001 | Q1,01 | P->S | QC (RAND) |
|* 3 | HASH JOIN | | Q1,01 | PCWP | |
| 4 | PX RECEIVE | | Q1,01 | PCWP | |
| 5 | PX SEND BROADCAST | :TQ10000 | Q1,00 | P->P | BROADCAST |
| 6 | PX BLOCK ITERATOR | | Q1,00 | PCWC | |
| 7 | TABLE ACCESS FULL| T_1 | Q1,00 | PCWP | |
| 8 | PX BLOCK ITERATOR | | Q1,01 | PCWC | |
| 9 | TABLE ACCESS FULL | T_2 | Q1,01 | PCWP | |
-------------------------------------------------------------------------
So even if in the first plan the T_1 row source really has less than 2*DOP rows and the HYBRID HASH distribution turns into a BROADCAST distribution, this doesn't change the overall plan shape generated by the optimizer. The second HYBRID HASH distribution won't be skipped and will turn into a ROUND-ROBIN distribution instead, which can be confirmed by looking at the output from V$PQ_TQSTAT for example. So the data of the second row source still needs to be distributed, and hence the HASH JOIN will be operating as BUFFERED join due to the plan shape and the limitation that only a single PX SEND / RECEIVE pair can be active at the same time.

In the second plan the BROADCAST distribution of T_1 means that T_2 will not be re-distributed, hence there is no need to operate the HASH JOIN in buffered mode.

So the only purpose of this particular adaptive HYBRID HASH distribution is obviously to avoid skew if there are only a couple of rows (and hence possible join key values) in the left row source, because a HASH distribution based on such a low number of distinct values doesn't work well. Oracle's algorithm needs a certain number of distinct values otherwise it can end up with a bad distribution. This probably also explains why the threshold of 2*DOP was chosen so low.

What Does Unizin Mean for Digital Learning?

Michael Feldstein - Mon, 2015-02-16 13:41

By Michael FeldsteinMore Posts (1013)

Speaking of underpants gnomes sales pitches, Phil and I spent a fair amount of time hearing about Unizin at the ELI conference. Much of that time was spent hearing friends that I know, trust, and respect talk about the project. At length, in some cases. On the one hand, it is remarkable that, after these long conversations, I am not much clearer on the purpose of Unizin than I was the week before. On the other hand, being reminded that some of my friends really believe in this thing helped me refill my reservoir of patience for the project, which had frankly run dry.

Alas, that reservoir was largely drained away again during a Unizin presentation with the same title as this blog post. I went there expecting the presenters to answer that question for the audience.

Alack.

The main presentation, given by Anastasia Morrone of IUPUI, was probably the most straightforward and least hype-filled presentation about Unizin that I have heard so far. It was also short. Just when I was warming to it and figuring we’d get to the real meat, her last slide came up:

Split into groups of 5-7 people and discuss the following:

How can faculty, teaching center consultants, and learning technologists contribute to best practices with the evolving Unizin services?

Wait. What?

That’s right. They wanted us to tell them what Unizin means for digital learning. That might have been a good question to ask before they committed to spend a million dollars each on the initiative.

I joined one of the groups, resolving to try as hard as I could to keep my tongue in check and be constructive (or, at least, silent) for as long as I could. The very first comment in my group—not by me, I swear—was, “Before I can contribute, can somebody please explain to me what Unizin is?” It didn’t get any better from there. At the end of the breakout session, our group’s official answer was essentially, “Yeah, we don’t have any suggestions to contribute, so we’re hoping the other groups come up with something.” None of them did, really. The closest they came were a couple of vague comments on inclusive governance. I understand from a participant in one of the other groups that they simply refused to even try to answer the question. It was brutal.


Still, in the spirit of the good intentions behind their request for collaborative input, I will list here some possible ways in which Unizin could provide value, in descending order of credibility.

I’ll start with the moderately credible:

  • Provide a layer of support services on top of and around the LMS: This barely even gets mentioned by Unizin advocates but it is the one that makes the most sense to me. Increasingly, in addition to your LMS, you have a bunch of connected tools and services. It might be something basic like help desk support for the LMS itself. It might be figuring out how an external application like Voicethread works best with your LMS. As the LMS evolves into the hub of a larger ecosystem, it is putting increasing strain on IT department in everything from procurement to integration to ongoing support. Unizin could be a way of pooling resources across institutions to address those needs. If I were a CIO in a big university with lots of demands for LMS plug-in services, I would want this.
  • Provide a university-controlled environment for open courses: Back when Instructure announced Canvas Network, I commented that the company had cannily targeted the issue that MOOC providers seemed to be taking over the branding, not to mention substantial design and delivery decisions, from their university “partners.” Canvas Network is marketed as “open courses for the rest of us.” By adopting Canvas as their LMS, Unizin gets this for free. Again, if I were a CIO or Provost at a school that was either MOOCing or MOOC-curious, I would want this.
  • Provide buying power: What vendor would not want to sew up a sales deal with ten large universities or university systems (and counting) through one sales process? So far it is unclear how much Unizin has gained in reality through group negotiations, but it’s credible that they could be saving significant money through group contracting.
  • Provide a technology-assisted vehicle for sharing course materials and possibly even course cross-registrations: The institutions involved are large, and most or all probably have specialty strengths in some curricular area or other. I could see them wanting to trade, say, an Arabic degree program for a financial technology degree program. You don’t need a common learning technology infrastructure to make this work, but having one would make it easier.
  • Provide a home for a community researching topics like learning design and learning analytics: Again, you don’t need a common infrastructure for this, but it would help, as would having courses that are shared between institutions.

Would all of this amount to a significant contribution to digital learning, as the title of the ELI presentation seems to ask? Maybe! It depends on what happens in those last two bullet points. But the rollout of the program so far does not inspire confidence that the Unizin leadership knows how to facilitate the necessary kind of community-building. Quite the opposite, in fact. Furthermore, the software has only ancillary value in those areas, and yet it seems to be what Unizin leaders want to talk about 90%+ of the time.

Would these benefits justify a million-dollar price tag? That’s a different question. I’m skeptical, but a lot depends on specific inter-institutional intentions that are not public. A degree program has a monetary value to a university, and some universities can monetize the value better than others depending on which market they can access with significant degrees of penetration. Throw in the dollar savings on group contracting, and you can have a relatively hard number for the value of the coalition to a member. I know that a lot of university folk hate to think like that, but it seems to be the most credible way to add the value of these benefits up and get to a million dollars.

Let’s see if we can sweeten the pot by adding in the unclear or somewhat dubious but not entirely absurd benefits that some Unizin folk have claimed:

  • Unizin will enable universities to “own” the ecosystem: This claim is often immediately followed by the statement that their first step in building that ecosystem was to license Canvas. The Unizin folks seem to have at least some sense that it seems contradictory to claim you are owning the ecosystem by licensing a commercial product, so they immediately start talking about how Canvas is open source and Unizin could take it their own way if they wanted to. Yet this flies in the face of Unizin’s general stated direction of mostly licensing products and building connectors and such when they have to. Will all products they license be open source? Do they seriously commit to forking Canvas should particular circumstances arise? If not, what does “ownership” really mean? I buy it in relation to the MOOC providers, because there they are talking about owning brand and process. But beyond that, the message is pretty garbled. There could be something here, but I don’t know what it is yet.
  • Unizin could pressure vendors and standards groups to build better products: In the abstract, this sounds credible and similar to the buying power argument. The trouble is that it’s not clear either that pressure on these groups will solve our most significant problems or that Unizin will ask for the right things. I have argued that the biggest reason LMSs are…what they are is not vendor incompetence or recalcitrance but that faculty always ask for the same things. Would Unizin change this? Indiana University used what I would characterize as a relatively progressive evaluation framework when they chose Canvas, but there is no sign that they were using the framework to push their faculty to fundamentally rethink what they want to do with a virtual learning environment and therefore what it needs to be. I don’t doubt the intellectual capacity of the stakeholders in these institutions to ask the right questions. I doubt the will of the institutions themselves to push for better answers from their own constituents. As for the standards, as I have argued previously, the IMS is doing quite well at the moment. They could always move faster, and they could always use more university members who are willing to come to the table with concrete use cases and a commitment to put in the time necessary to work through a standards development process (including implementation). Unizin could do that, and it would be a good thing if they did. But it’s still pretty unclear to me how much their collective muscle would be useful to solve the hard problems.

Don’t get me wrong; I believe that both of the goals articulated above are laudable and potentially credible. But Unizin hasn’t really made the case yet.

Instead, at least some of the Unizin leaders have made claims that are either nonsensical (in that they don’t seem to actually mean anything in the real world) or absurd:

  • “We are building common gauge rails:” I love a good analogy, but it can only take you so far. What rides on those rails? And please don’t just say “content.” Are we talking about courses? Test banks? Individual test questions? Individual content pages? Each of these has very different reuse characteristics. Content isn’t just a set of widgets that can be loaded up in rail cars and used interchangeably wherever they are needed. If it were, then reuse would have been a solved problem ten years ago. What problem are you really trying to solve here, and why do you think that what you’re building will solve it (and is worth the price tag)?
  • “Unizin will make migrating to our next LMS easier because moving the content will be easy.” No. No, no, no, no, no, no, no. This is the perfect illustration of why the “common gauge rails” statement is meaningless. All major LMSs today can import IMS Common Cartridge format, and most can export in that format. You could modestly enhance this capability by building some automation that takes the export from one system and imports it into the other. But that is not the hard part of migration. The hard part is that LMSs work differently, so you have to redesign your content to make best use of the design and features of the new platform. Furthermore, these differences are generally not ones that you want to stamp out—at least, not if you care about these platforms evolving and innovating. Content migration in education is inherently hard because context makes a huge difference. (And content reuse is exponentially harder for the same reason.) There are no widgets that can be neatly stacked in train cars. Your rails will not help here.
  • “Unizin will be like educational moneyball.” Again with the analogies. What does this mean? Give me an example of a concrete goal, and I will probably be able to evaluate the probability that you can achieve it, its value to students and the university, and therefore whether it is worth a million-dollar institutional investment. Unizin doesn’t give us that. Instead, it gives us statements like, “Nobody ever said that your data is too big.” Seriously? The case for Unizin comes down to “my data is bigger than yours”? Is this a well-considered institutional investment or a midlife crisis? The MOOC providers have gobs and gobs of data, but as HarvardX researcher Justin Reich has pointed out, “Big data sets do not, by virtue of their size, inherently possess answers to interesting questions….We have terabytes of data about what students clicked and very little understanding of what changed in their heads.” Tell us what kinds of research questions you intend to ask and how your investment will make it possible to answer them. Please. And also, don’t just wave your hands at PAR and steal some terms from their slides. I like PAR. It’s a Good Thing. But what new thing are you going to do with it that justifies a million bucks per institution?

I want to believe that my friends, who I respect, believe in Unizin because they see a clear justification for it. I want to believe that these schools are going to collectively invest $10 million or more doing something that makes sense and will improve education. But I need more than what I’m getting to be convinced. It can’t be the case that the people not in the inner circle have to convince themselves of the benefit of Unizin. One of my friends inside the Unizin coalition said to me, “You know, a lot of big institutions are signing on. More and more.” I replied, “That means that either something very good is happening or something very bad is happening.” Given the utter disaster that was the ELI session, I’m afraid that I continue to lean in the direction of badness.

 

The post What Does Unizin Mean for Digital Learning? appeared first on e-Literate.

node-oracledb 0.3.1 is on GitHub (Node.js driver for Oracle Database)

Christopher Jones - Mon, 2015-02-16 11:46

On behalf of the development team, I have merged some new features and fixes into node-oracledb.

Updates for node-oracledb 0.3.1

  • Added Windows build configuration. See Node-oracledb Installation on Windows. Thanks to Rinie Kervel for submitting a pull request, and thanks to all those that commented and tested.
  • Added Database Resident Connection Pooling (DRCP) support; a short usage sketch follows this list. See API Documentation for the Oracle Database Node.js Driver

    "Database Resident Connection Pooling enables database resource sharing for applications that run in multiple client processes or run on multiple middle-tier application servers. DRCP reduces the overall number of connections that a database must handle. DRCP is distinct from node-oracledb's local connection pool. The two pools can be used separately, or together."
  • Made an explicit connection release() do a rollback, to be consistent with the implicit release behavior.

  • Made install on Linux look for Oracle libraries in a search order:

    • Using install-time environment variables $OCI_LIB_DIR and $OCI_INC_DIR
    • In the highest version Instant Client RPMs installed
    • In $ORACLE_HOME
    • In /opt/oracle/instantclient
  • Added RPATH support on Linux, so LD_LIBRARY_PATH doesn't always need to be set. See Advanced installation on Linux

  • The directory name used by the installer for the final attempt at locating an Instant Client directory is now /opt/oracle/instantclient or C:\oracle\instantclient. This path may be used if OCI_LIB_DIR and OCI_INC_DIR are not set and the installer has to guess where the libraries are.

  • Added a compile error message "Oracle 11.2 or later client libraries are required for building" if attempting to build with older Oracle client libraries. This helps developers self-diagnose this class of build problem.

  • Fixed setting the isAutoCommit property.

  • Fixed a crash using pooled connections on Windows.

  • Fixed a crash querying object types.

  • Fixed a crash doing a release after a failed terminate. (The Pool is still unusable - this will be fixed later)

  • Clarified documentation that terminate() doesn't release connections. Doing an explicit release() of each pooled connection that is no longer needed is recommended to avoid resource leaks and maximize pool usage.

  • Updated version to 0.3.1 (surprise!)
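
For those who want to try the new DRCP support, here is a minimal sketch of pairing node-oracledb's local connection pool with a DRCP pooled server. This is not taken from the release notes: the credentials, host/service name and the ":pooled" Easy Connect suffix are placeholders, so adjust them for your environment and check the API documentation linked above for the exact options your version accepts.

var oracledb = require('oracledb');

oracledb.createPool(
  {
    user          : "hr",                   // placeholder credentials
    password      : "welcome",
    connectString : "dbhost/orcl:pooled"    // ":pooled" requests a DRCP pooled server
  },
  function (err, pool) {
    if (err) { console.error(err.message); return; }

    pool.getConnection(function (err, connection) {
      if (err) { console.error(err.message); return; }

      connection.execute("SELECT sysdate FROM dual", [], function (err, result) {
        if (err) { console.error(err.message); }
        else     { console.log(result.rows); }

        // Explicit release returns the connection to the local pool
        // (and, as of 0.3.1, also rolls back any uncommitted work).
        connection.release(function (err) {
          if (err) { console.error(err.message); }
        });
      });
    });
  });

Because an explicit release() now rolls back, remember to commit any work you want to keep before releasing the connection back to the pool.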

Log Buffer #409, A Carnival of the Vanities for DBAs

Pythian Group - Mon, 2015-02-16 10:29

This Log Buffer Edition sheds light on some of the nifty blog posts of the week from Oracle, SQL Server and MySQL.

Oracle:

Patch Set Update: Hyperion Data Relationship Management 11.1.2.3.504

The Hitchhiker’s Guide to the EXPLAIN PLAN Part 33: The mother of all SQL antipatterns?

MongoDB as a Glassfish Security Realm

E-Business Suite customers must ensure that their database remains on a level that is covered by Error Correction Support (ECS)

EM12c: How to Retrieve Passwords from the Named Credentials

SQL Server:

How does a View work on a Table with a Clustered Columnstore Index ?

How do you develop and deploy your database?

Microsoft Azure Storage Queues Part 3: Security and Performance Tips

Stairway to SQL Server Security Level 6: Execution Context and Code Signing

Centralize Your Database Monitoring Process

MySQL:

New Galera Cluster version is now released! It includes patched MySQL server 5.6.21 and Galera replication provider 3.9

Shinguz: Nagios and Icinga plug-ins for MySQL 1.0.0 have been released

The next release of MongoDB includes the ability to select a storage engine, the goal being that different storage engines will have different capabilities/advantages, and users can select the one most beneficial to their particular use-case. Storage engines are cool.

The MySQL grant syntax allows you to specify dynamic database names using the wildcard characters.

Oracle’s 10 commitments to MySQL – a 5 year review

Categories: DBA Blogs

Log Buffer #410, A Carnival of the Vanities for DBAs

Pythian Group - Mon, 2015-02-16 10:28

This Log Buffer Edition spreads the love of databases just before Valentine’s Day. Lovely blog posts from Oracle, SQL Server and MySQL are here for you to love.

Oracle:

Creating a Mobile-Optimized REST API Using Oracle Service Bus by Steven Davelaar.

GROUP BY – wrong results in 12.1.0.2

Using Edition-Based Redefinition to Bypass Those Pesky Triggers

It’s easy to make mistakes, or overlook defects, when constructing parallel queries – especially if you’re a developer who hasn’t been given the right tools to make it easy to test your code.

If you have a sorted collection of elements, how would you find index of specific value?

SQL Server:

How to use the IsNothing Inspection Function in SSRS

What better way to learn how to construct complex CHECK CONSTRAINTs, use the SQL 2012 window frame capability of the OVER clause and LEAD analytic function, as well as how to pivot rows into columns using a crosstab query?

SQL Server’s GROUP BY clause provides you a way to aggregate your SQL Server data and to group data on a single column, multiple columns, or even expressions. Greg Larsen discusses how to use the GROUP by clause to summarize your data.

Surely, we all know how T-SQL Control-of-flow language works? In fact it is surprisingly easy to get caught out.

Resilient T-SQL code is code that is designed to last, and to be safely reused by others.

MySQL:

NoSQL databases are gaining popularity. MongoDB, one of the most established among them, uses a JSON data model and offers great scalability and ease of use thanks to its dynamic data schemas.

Is upgrading RDS like a shit-storm that will not end?

Over the last few MySQL releases the MySQL package has increased in size, and it looks like the trend is continuing.

This article details the proper method of load balancing either a Percona XTRADB Cluster (PXC) or MariaDB Cluster.

One common job for a DBA is working with a Development Team member on a batch loading process that is taking more time than expected.

Categories: DBA Blogs

Using rsync to clone Goldengate installation

Michael Dinh - Mon, 2015-02-16 09:15

You may be thinking: why clone Goldengate, and why not just download it?
The exact version and patch level might not be available.
Too lazy to search for it and many other reasons you can come up with.

Why use rsync and not tar – scp? I wanted to refresh my memory of using rsync.
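
For comparison, a one-shot tar over ssh would also do the job. The command below is a rough sketch, not from the original setup (it assumes ssh access to arrow and creates /u01/app/ggs03 if needed); unlike rsync, it cannot be re-run to copy only what changed, and there is no dry-run to preview the transfer.

tar cf - -C /u01/app/ggs01 . | ssh arrow 'mkdir -p /u01/app/ggs03 && tar xf - -C /u01/app/ggs03'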

Commands used:

local source /u01/app/ggs01/ and remote target arrow:/u01/app/ggs03/

rsync -avh --delete --dry-run --exclude 'dirdatold' /u01/app/ggs01/ arrow:/u01/app/ggs03/
rsync -avh --delete --exclude 'dirdatold' /u01/app/ggs01/ arrow:/u01/app/ggs03/

Note:
/u01/app/ggs01/ (trailing slash) syncs the contents of the directory into the target
/u01/app/ggs01 (no trailing slash) creates a ggs01 directory under the target and syncs the contents into it

Demo:

Source: /u01/app/ggs01 (note that dirdat is a symbolic link)

oracle@arrow:las:/u01/app/ggs01
$ ls -ld dir*
drwxr-x---. 2 oracle oinstall 4096 Jan 13 13:12 dirchk
lrwxrwxrwx. 1 oracle oinstall   15 Feb 15 06:20 dirdat -> /oradata/backup
drwxr-x---. 2 oracle oinstall 4096 Jul 22  2014 dirdatold
drwxr-x---. 2 oracle oinstall 4096 Apr 26  2014 dirdef
drwxr-x---. 2 oracle oinstall 4096 Apr  4  2014 dirjar
drwxr-x---. 2 oracle oinstall 4096 Apr 26  2014 dirout
drwxr-x---. 2 oracle oinstall 4096 Feb 12 15:35 dirpcs
drwxr-x---. 2 oracle oinstall 4096 Jan 13 12:55 dirprm
drwxr-x---. 2 oracle oinstall 4096 Feb 12 15:36 dirrpt
drwxr-x---. 2 oracle oinstall 4096 Apr 26  2014 dirsql
drwxr-x---. 2 oracle oinstall 4096 Sep 25 08:56 dirtmp

Target: arrow:/u01/app/ggs03/

oracle@arrow:las:/u01/app/ggs01
$ ls -l /u01/app/ggs03/
total 0

Let’s do a dry run first.

oracle@arrow:las:/u01/app/ggs01

$ rsync -avh --delete --dry-run --exclude 'dirdatold' /u01/app/ggs01/ arrow:/u01/app/ggs03/
oracle@arrow's password:

sending incremental file list
./

.....

output omitted for brevity

dirout/
dirpcs/
dirprm/
dirprm/esan.prm
dirprm/jagent.prm
dirprm/mgr.prm
dirrpt/
dirrpt/ESAN.rpt
dirrpt/ESAN0.rpt
dirrpt/ESAN1.rpt
dirrpt/ESAN2.rpt
dirrpt/ESAN3.rpt
dirrpt/ESAN4.rpt
dirrpt/ESAN5.rpt
dirrpt/ESAN6.rpt
dirrpt/ESAN7.rpt
dirrpt/ESAN8.rpt
dirrpt/ESAN9.rpt
dirrpt/MGR.rpt
dirrpt/MGR0.rpt
dirrpt/MGR1.rpt
dirrpt/MGR2.rpt
dirrpt/MGR3.rpt
dirrpt/MGR4.rpt
dirrpt/MGR5.rpt
dirrpt/MGR6.rpt
dirrpt/MGR7.rpt
dirrpt/MGR8.rpt
dirrpt/MGR9.rpt
dirsql/
dirtmp/


sent 6.96K bytes  received 767 bytes  15.44K bytes/sec
total size is 237.10M  speedup is 30704.36 (DRY RUN)

oracle@arrow:las:/u01/app/ggs01
$ ls -l /u01/app/ggs03/
total 0

Perform actual rsync

oracle@arrow:las:/u01/app/ggs01

$ rsync -avh --delete --exclude 'dirdatold' /u01/app/ggs01/ arrow:/u01/app/ggs03/
oracle@arrow's password:

sending incremental file list
./

.....

output omitted for brevity

dirout/
dirpcs/
dirprm/
dirprm/esan.prm
dirprm/jagent.prm
dirprm/mgr.prm
dirrpt/
dirrpt/ESAN.rpt
dirrpt/ESAN0.rpt
dirrpt/ESAN1.rpt
dirrpt/ESAN2.rpt
dirrpt/ESAN3.rpt
dirrpt/ESAN4.rpt
dirrpt/ESAN5.rpt
dirrpt/ESAN6.rpt
dirrpt/ESAN7.rpt
dirrpt/ESAN8.rpt
dirrpt/ESAN9.rpt
dirrpt/MGR.rpt
dirrpt/MGR0.rpt
dirrpt/MGR1.rpt
dirrpt/MGR2.rpt
dirrpt/MGR3.rpt
dirrpt/MGR4.rpt
dirrpt/MGR5.rpt
dirrpt/MGR6.rpt
dirrpt/MGR7.rpt
dirrpt/MGR8.rpt
dirrpt/MGR9.rpt
dirsql/
dirtmp/

sent 237.14M bytes  received 4.40K bytes  31.62M bytes/sec
total size is 237.10M  speedup is 1.00

oracle@arrow:las:/u01/app/ggs01
$ ls -ld /u01/app/ggs03/dir*
drwxr-x---. 2 oracle oinstall 4096 Jan 13 13:12 /u01/app/ggs03/dirchk

lrwxrwxrwx. 1 oracle oinstall   15 Feb 15 06:20 /u01/app/ggs03/dirdat -> /oradata/backup

drwxr-x---. 2 oracle oinstall 4096 Apr 26  2014 /u01/app/ggs03/dirdef
drwxr-x---. 2 oracle oinstall 4096 Apr  4  2014 /u01/app/ggs03/dirjar
drwxr-x---. 2 oracle oinstall 4096 Apr 26  2014 /u01/app/ggs03/dirout
drwxr-x---. 2 oracle oinstall 4096 Feb 12 15:35 /u01/app/ggs03/dirpcs
drwxr-x---. 2 oracle oinstall 4096 Jan 13 12:55 /u01/app/ggs03/dirprm
drwxr-x---. 2 oracle oinstall 4096 Feb 12 15:36 /u01/app/ggs03/dirrpt
drwxr-x---. 2 oracle oinstall 4096 Apr 26  2014 /u01/app/ggs03/dirsql
drwxr-x---. 2 oracle oinstall 4096 Sep 25 08:56 /u01/app/ggs03/dirtmp
oracle@arrow:las:/u01/app/ggs01
$

Did it work?

oracle@arrow:las:/u01/app/ggs01
$ cd /u01/app/ggs03/
oracle@arrow:las:/u01/app/ggs03
$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 11.2.1.0.21 18343248 OGGCORE_11.2.1.0.0OGGBP_PLATFORMS_140404.1029_FBO
Linux, x64, 64bit (optimized), Oracle 11g on Apr  4 2014 15:18:36

Copyright (C) 1995, 2014, Oracle and/or its affiliates. All rights reserved.



GGSCI (arrow.localdomain) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     STOPPED
EXTRACT     STOPPED     ESAN        00:01:02      788:32:31
REPLICAT    STOPPED     RLAS_SAN    00:00:00      4989:14:29


GGSCI (arrow.localdomain) 2> exit

The above status came from a neglected test environment.
Delete the checkpoint files for the extract/replicat in dirchk so the cloned instance starts clean:

oracle@arrow:las:/u01/app/ggs03
$ cd dirchk/
oracle@arrow:las:/u01/app/ggs03/dirchk
$ ll
total 8
-rw-r-----. 1 oracle oinstall 2048 Jan 13 13:12 ESAN.cpe
-rw-r-----. 1 oracle oinstall 2048 Jul 22  2014 RLAS_SAN.cpr
oracle@arrow:las:/u01/app/ggs03/dirchk
$ rm *
oracle@arrow:las:/u01/app/ggs03/dirchk
$ cd ../dirpcs/
oracle@arrow:las:/u01/app/ggs03/dirpcs
$ ll
total 0
oracle@arrow:las:/u01/app/ggs03/dirpcs
$ cd ..
oracle@arrow:las:/u01/app/ggs03
$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 11.2.1.0.21 18343248 OGGCORE_11.2.1.0.0OGGBP_PLATFORMS_140404.1029_FBO
Linux, x64, 64bit (optimized), Oracle 11g on Apr  4 2014 15:18:36

Copyright (C) 1995, 2014, Oracle and/or its affiliates. All rights reserved.



GGSCI (arrow.localdomain) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     STOPPED


GGSCI (arrow.localdomain) 2> start mgr

Manager started.


GGSCI (arrow.localdomain) 3> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     STOPPED


GGSCI (arrow.localdomain) 4> exit

Now, what’s wrong?

oracle@arrow:las:/u01/app/ggs03
$ tail ggserr.log
2015-02-12 15:36:02  INFO    OGG-00987  Oracle GoldenGate Command Interpreter for Oracle:  GGSCI command (oracle): start mgr.
2015-02-12 15:36:02  ERROR   OGG-00664  Oracle GoldenGate Manager for Oracle, mgr.prm:  OCI Error during OCIServerAttach (status = 12541-ORA-12541: TNS:no listener).
2015-02-12 15:36:02  ERROR   OGG-01668  Oracle GoldenGate Manager for Oracle, mgr.prm:  PROCESS ABENDING.
2015-02-12 15:36:04  INFO    OGG-00987  Oracle GoldenGate Command Interpreter for Oracle:  GGSCI command (oracle): info all.
2015-02-15 09:44:51  INFO    OGG-00987  Oracle GoldenGate Command Interpreter for Oracle:  GGSCI command (oracle): info all.
2015-02-15 09:45:31  INFO    OGG-00987  Oracle GoldenGate Command Interpreter for Oracle:  GGSCI command (oracle): info all.
2015-02-15 09:45:41  INFO    OGG-00987  Oracle GoldenGate Command Interpreter for Oracle:  GGSCI command (oracle): start mgr.
2015-02-15 09:45:43  ERROR   OGG-00664  Oracle GoldenGate Manager for Oracle, mgr.prm:  OCI Error during OCIServerAttach (status = 12541-ORA-12541: TNS:no listener).
2015-02-15 09:45:43  ERROR   OGG-01668  Oracle GoldenGate Manager for Oracle, mgr.prm:  PROCESS ABENDING.
2015-02-15 09:45:44  INFO    OGG-00987  Oracle GoldenGate Command Interpreter for Oracle:  GGSCI command (oracle): info all.

oracle@arrow:las:/u01/app/ggs03
$ cat dirprm/mgr.prm
PORT 7901
DYNAMICPORTLIST 15100-15120

USERID ggs@san, PASSWORD *****

PURGEOLDEXTRACTS /u01/app/ggs01/dirdat/*, USECHECKPOINTS, MINKEEPDAYS 3
PURGEMARKERHISTORY MINKEEPDAYS 5, MAXKEEPDAYS 7, FREQUENCYHOURS 24
PURGEDDLHISTORY MINKEEPDAYS 5, MAXKEEPDAYS 7, FREQUENCYHOURS 24

AUTOSTART ER *
AUTORESTART ER *, RETRIES 5, WAITMINUTES 2, RESETMINUTES 60

CHECKMINUTES 1
LAGINFOMINUTES 0
LAGCRITICALMINUTES 1

oracle@arrow:las:/u01/app/ggs03
$ vi dirprm/mgr.prm

oracle@arrow:las:/u01/app/ggs03
$ cat dirprm/mgr.prm
PORT 7901
DYNAMICPORTLIST 15100-15120

-- USERID ggs@san, PASSWORD 888

PURGEOLDEXTRACTS /u01/app/ggs01/dirdat/*, USECHECKPOINTS, MINKEEPDAYS 3
PURGEMARKERHISTORY MINKEEPDAYS 5, MAXKEEPDAYS 7, FREQUENCYHOURS 24
PURGEDDLHISTORY MINKEEPDAYS 5, MAXKEEPDAYS 7, FREQUENCYHOURS 24

AUTOSTART ER *
AUTORESTART ER *, RETRIES 5, WAITMINUTES 2, RESETMINUTES 60

CHECKMINUTES 1
LAGINFOMINUTES 0
LAGCRITICALMINUTES 1
oracle@arrow:las:/u01/app/ggs03
$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 11.2.1.0.21 18343248 OGGCORE_11.2.1.0.0OGGBP_PLATFORMS_140404.1029_FBO
Linux, x64, 64bit (optimized), Oracle 11g on Apr  4 2014 15:18:36

Copyright (C) 1995, 2014, Oracle and/or its affiliates. All rights reserved.



GGSCI (arrow.localdomain) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     STOPPED


GGSCI (arrow.localdomain) 2> start mgr

Manager started.


GGSCI (arrow.localdomain) 3> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING


GGSCI (arrow.localdomain) 4> exit
oracle@arrow:las:/u01/app/ggs03
$

Don’t forget that Oracle client libraries are required to run GoldenGate:

oracle@arrow:las:/u01/app/ggs03
$ ldd ggsci
        linux-vdso.so.1 =>  (0x00007fff95ffa000)
        libdl.so.2 => /lib64/libdl.so.2 (0x00000039e6000000)
        libgglog.so => /u01/app/ggs03/./libgglog.so (0x00007f862ca8d000)
        libggrepo.so => /u01/app/ggs03/./libggrepo.so (0x00007f862c923000)
        libdb-5.2.so => /u01/app/ggs03/./libdb-5.2.so (0x00007f862c688000)
        libicui18n.so.38 => /u01/app/ggs03/./libicui18n.so.38 (0x00007f862c327000)
        libicuuc.so.38 => /u01/app/ggs03/./libicuuc.so.38 (0x00007f862bfee000)
        libicudata.so.38 => /u01/app/ggs03/./libicudata.so.38 (0x00007f862b012000)
        libpthread.so.0 => /lib64/libpthread.so.0 (0x00000039e6800000)
        libxerces-c.so.28 => /u01/app/ggs03/./libxerces-c.so.28 (0x00007f862aafa000)
        libantlr3c.so => /u01/app/ggs03/./libantlr3c.so (0x00007f862a9e4000)

        libnnz11.so => /u01/app/oracle/product/11.2.0/dbhome_1/lib/libnnz11.so (0x00007f862a616000)
        libclntsh.so.11.1 => /u01/app/oracle/product/11.2.0/dbhome_1/lib/libclntsh.so.11.1 (0x00007f8627ba0000)

        libstdc++.so.6 => /usr/lib64/libstdc++.so.6 (0x00000039f1c00000)
        libm.so.6 => /lib64/libm.so.6 (0x00000039e7400000)
        libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00000039f1800000)
        libc.so.6 => /lib64/libc.so.6 (0x00000039e6400000)
        /lib64/ld-linux-x86-64.so.2 (0x00000039e5c00000)
        libnsl.so.1 => /lib64/libnsl.so.1 (0x00000039f3400000)
        libaio.so.1 => /lib64/libaio.so.1 (0x00007f862799d000)

oracle@arrow:las:/u01/app/ggs03
$ env |egrep 'HOME|LD'
OLDPWD=/home/oracle
LD_LIBRARY_PATH=/u01/app/oracle/product/11.2.0/dbhome_1/lib:/lib:/usr/lib
HOME=/home/oracle
ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
oracle@arrow:las:/u01/app/ggs03
$ unset ORACLE_HOME
oracle@arrow:las:/u01/app/ggs03
$ export LD_LIBRARY_PATH=/lib:/usr/lib
oracle@arrow:las:/u01/app/ggs03
$ ./ggsci
./ggsci: error while loading shared libraries: libnnz11.so: cannot open shared object file: No such file or directory
oracle@arrow:las:/u01/app/ggs03
$

Proactive Analysis Center (PAC)

Joshua Solomin - Mon, 2015-02-16 08:38

Proactive Analysis Center (PAC) is a comprehensive system health reporting solution for proactive and reactive services, accessible through the My Oracle Support portal. It helps you to:

  • Manage risk by tracking and improving the Operational Risk Index (ORI).
  • Plan and manage downtime efficiently.

Watch the video below for more information.


The Operational Risk Index (ORI) rank-orders systems with vulnerabilities from high to low so that downtime can be planned efficiently. The subject matter experts behind PAC have detailed knowledge of these systems because they deploy and support them.

For more information, review Document 1634073.2, How to use the Proactive Analysis Center (PAC).

Bookmark this page to view currently available PAC Advisor Webcasts and to access previously recorded sessions: Advisor Webcasts for Oracle Solaris and Systems Hardware (Doc ID 1282218.1).

DBMS_STATS.PURGE_STATS

Dominic Brooks - Mon, 2015-02-16 08:05

Prior to 11.2.0.4, the optimizer history tables are unpartitioned and DBMS_STATS.PURGE_STATS has little choice but to do a slow delete of stats before the parameterised input timestamp.

Why might you be purging? Here’s one such illustration:

https://jhdba.wordpress.com/tag/dbms_stats-purge_stats/

This delete can be slow if these tables are large and there are a number of reasons why they might be so, notably if MMON cannot complete the purge within its permitted timeframe.
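
By way of illustration (a generic sketch of common usage, not taken from the case above), a routine targeted purge usually starts by checking how much history is kept and how far back it actually goes, then passes a timestamp to PURGE_STATS:

-- how long history is kept, and the oldest history actually available
select dbms_stats.get_stats_history_retention from dual;
select dbms_stats.get_stats_history_availability from dual;

-- purge history older than 30 days
exec DBMS_STATS.PURGE_STATS(SYSTIMESTAMP - INTERVAL '30' DAY);

-- optionally reduce the retention that the automatic MMON purge works to
exec DBMS_STATS.ALTER_STATS_HISTORY_RETENTION(14);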

But note that if you’re happy to purge all history, there is a special TRUNCATE option if you make the call with a magic timestamp:

exec DBMS_STATS.PURGE_STATS(DBMS_STATS.PURGE_ALL);

but Oracle Support emphasises that:

This option is planned to be used as a workaround on urgent cases and under the advice of Support…

Ah… the old magic value pattern / antipattern!

PURGE_ALL CONSTANT TIMESTAMP WITH TIME ZONE :=
 TO_TIMESTAMP_TZ('1001-01-0101:00:00-00:00','YYYY-MM-DDHH:MI:SSTZH:TZM');

As part of the upgrade to 11.2.0.4, one gotcha is that these history tables become partitioned.

I don’t have a copy of 11.2.0.4 to hand but I do have 12.1.0.2 and the tables here are daily interval partitioned so I presume this is the same.

One plus side of this newly partitioned table is that PURGE_STATS can now drop old partitions, which is quicker than a delete, but a minor downside is that the tables have global indexes, so the recursive/internal operations have to be done with UPDATE GLOBAL INDEXES.

One curiosity in the trace file from this operation was this statement:

delete /*+ dynamic_sampling(4) */ 
from   sys.wri$_optstat_histhead_history
where  savtime_date < to_date('01-01-1900', 'dd-mm-yyyy') 
and    savtime not in (select timestamp '0000-01-01 00:00:00 -0:0' + sid + serial#/86400
                       from   gv$session 
                       where  status = 'ACTIVE' 
                       and    con_id in (0, sys_context('userenv', 'con_id')))       
and    rownum <= NVL(:1, rownum)

This is deleting from the P_PERMANENT default partition, but why is this necessary, and what is that subquery all about, particularly the timestamp '0000-01-01 00:00:00 -0:0' + sid + serial#/86400 bit?