
Feed aggregator

Integration Is Hard

Floyd Teter - Tue, 2015-06-16 20:58
If you know me at all, you know I love services-based integration.  The whole idea of interfacing, moving and exchanging data, guided by industry standards...I'm an enthusiastic supporter.  The appeal of this idea made me an ardent supporter of Oracle's Fusion Applications.  And I still believe it's an important part of the potential for today's SaaS offerings.

So I'll share a secret with you...I really hate services-based integration.  It's hard.  Packaged integrations rarely work out of the box.  SaaS integrations are tough to implement.  Integration platforms are still in their infancy.  Data errors are frequent problems.  Documentation is either inaccurate or non-existent.  Building your own - oy!  Even simple integrations require large investments of blood, sweat, and tears.  And orchestrating service integrations into a business process...agony on a stick.  I personally believe that the toughest aspect of enterprise software is services integration.  SaaS, hybrid, on-premise, packaged applications, middleware...it does not matter, services integration is hard regardless of context.

I see SaaS integration as "hero ground":  there is nowhere to go but up, and even simple wins will create heroes.  Service integrations that really work, simple and easily understood documentation, design patterns, data templates and usable tools... I think we have a ton of work to do.  Because, even though it shouldn't be, integration is hard.

Mid-June Roundup

Oracle AppsLab - Tue, 2015-06-16 16:07

A busy June is half over now, but we still have miles to go before July.

We’ve been busy, which you know if you read here. Raymond went to Boston. Tony, Thao (@thaobnguyen), Ben and I were in Las Vegas at OHUG 15. John and Thao were in Minneapolis the week before that. Oh, and Anthony was at Google I/O.

The globetrotting continues this week, as John and Anthony (@anthonyslai) are in the UK giving a workshop on Visualizations at the OUAB meeting. Plus, Thao and Ben are attending the QS15 conference in San Francisco.

And next week, Noel (@noelportugal), Raymond, Mark (@mvilrokx) and I head to Hollywood, FL for Kscope15 (#kscope15).

Did you hear we’re collaborating with the awesome organizers (@odtug) to put on a super fun and cool Scavenger Hunt? If you’re going to Kscope15, you should register.

You can do it now, I’ll wait.

Back? Good. Check out the sweet infographic Tony C. on our team created for the big Hunt:

[Infographic: Kscope15 Scavenger Hunt poster layout]

Coincidentally, one of the tasks is to attend our OAUX session on Tuesday at 2pm, “Smart Things All Around.” Jeremy Ashley (@jrwashley), our GVP, and Noel will talk about the Scavenger Hunt, IoT, new experiences, design philosophies, all that good stuff.

Speaking of philosophies, VoX has a post on glance-scan-commit, the design philosophy that informs our research and development and, more importantly, how it trickles into product. You should read it.

And finally, Ultan (@ultan) and Mark collaborated on a post about partners, APIs, PaaS and IoT that you should also read, if only so you can drop a PaaS4SaaS into your next conversation.

If you’re attending any of these upcoming events, say hi to us, and look for updates here.

Preview Release 10 Oracle Applications Cloud Readiness Content!

Linda Fishman Hoyle - Tue, 2015-06-16 15:57

A Guest Post by Katrine Haugerud (pictured left), Senior Director, Oracle Product Management

To help you prepare for upcoming Release 10, we are pleased to offer a preview of its new, modern business-empowering features.

On the Release Readiness page, we have added content for HCM, Sales, ERP, and SCM, as well as Common Technologies for each.

Specifically, we have just introduced:

Spotlights: Presented by senior development staff, these webcasts highlight top-level messages and product themes, reinforced with a product demo.

Release Content Documents (RCDs): These provide a summary-level description of each new feature and product.

Next month we will add (and announce) more Release 10 readiness content including:

  • What's New: Learn about what's new in the upcoming release by reviewing expanded discussions of each new feature and product, including capability overviews, business benefits, setup considerations, usage tips, and more.
  • Release Training: Created by product management, these self-paced, interactive training sessions are deep dives into key new enhancements and products. Also referred to as Transfers of Information (TOIs).
  • Product Documentation: Oracle’s online documentation includes detailed product guides and training tutorials to ensure your successful implementation and use of the Oracle Applications Cloud.

Access is Simple

From cloud.oracle.com, click Menu > Discover > What's New.




The Pendulum Swings Back

Linda Fishman Hoyle - Tue, 2015-06-16 15:56

A Guest Post by Andy Campbell (pictured left), Oracle HCM Cloud Sales Evangelist

I am currently working on a white paper on the topic of ‘Living with the HR Cloud’, with a number of fascinating case studies. Therefore, I was delighted to come across the latest piece of research from Harvard Business Review entitled Cloud Computing Comes of Age.

This report assesses the maturity (and thereby the experience) of customers who have deployed cloud applications. The results are quite significant. Those organizations classified as ‘cloud leaders’ also achieved higher levels of business success. It reports a correlation between an organization’s cloud maturity and the health of its growth initiatives such as business expansion.

The benefits they realized included improved business agility, enhanced organisational flexibility, and faster speed of deployment. They also reported improved decision making through an increased ability to analyze and act upon data and information. For HR leaders, the natural consequence of this is the ability to offer a more proactive value-added service to the business, something that I think we all aspire to.

Anyway, perhaps of most interest to me was the fact that the cloud leaders took a more managed and enterprise-wide approach to their cloud applications, something that embodied a range of good practices. For example, cloud leaders are more likely to define the business value that they expect to get from their cloud initiatives (69 percent, in fact, compared to only 40 percent of novices). Similarly, only 53 percent of survey respondents overall had established policies for cloud security, a figure that rises to 79 percent amongst cloud leaders. Also, cloud leaders are more likely to have a strong partnership between IT and other parts of the business. Cloud technologies, including social, mobile, etc., have had a democratizing impact on IT, and enhanced collaboration with business users is, quite rightly, becoming the norm.

However, to me, one thing stands out. Evidently cloud leaders are more than twice as likely to have a CIO who leads the transformation agenda! Sure, IT and the business must work together, but somebody needs to be in charge, and that is the CIO.

Now if you had said such a thing a few years ago, you would probably have been strung up by a lynch mob to chants of ‘the business user is king’! The received wisdom at the time was that ultimate power was vested in the business and the user community.

However, things have changed and the pendulum has swung back again. As the adoption of cloud technology has become more mainstream, the experience of users is that to be truly successful both parties, IT and the business, need to work well with each other.

Overall I/O Query

Bobby Durrett's DBA Blog - Tue, 2015-06-16 14:57

I hacked together a query today that shows the overall I/O performance that a database is experiencing.

The output looks like this:

End snapshot time   number of IOs ave IO time (ms) ave IO size (bytes)
------------------- ------------- ---------------- -------------------
2015-06-15 15:00:59        359254               20              711636
2015-06-15 16:00:59        805884               16              793033
2015-06-15 17:00:13        516576               13              472478
2015-06-15 18:00:27        471098                6              123565
2015-06-15 19:00:41        201820                9              294858
2015-06-15 20:00:55        117887                5              158778
2015-06-15 21:00:09         85629                1               79129
2015-06-15 22:00:23        226617                2               10744
2015-06-15 23:00:40        399745               10              185236
2015-06-16 00:00:54       1522650                0               43099
2015-06-16 01:00:08       2142484                0               19729
2015-06-16 02:00:21        931349                0                9270

I’ve combined reads and writes and focused on three metrics – number of IOs, average IO time in milliseconds, and average IO size in bytes. I think it is a helpful way to compare how two systems perform. Here is the output from another, better-performing system:

End snapshot time   number of IOs ave IO time (ms) ave IO size (bytes)
------------------- ------------- ---------------- -------------------
2015-06-15 15:00:25        331931                1              223025
2015-06-15 16:00:40        657571                2               36152
2015-06-15 17:00:56       1066818                1               24599
2015-06-15 18:00:11        107364                1              125390
2015-06-15 19:00:26         38565                1               11023
2015-06-15 20:00:41         42204                2              100026
2015-06-15 21:00:56         42084                1               64439
2015-06-15 22:00:15       3247633                3              334956
2015-06-15 23:00:32       3267219                0               49896
2015-06-16 00:00:50       4723396                0               32004
2015-06-16 01:00:06       2367526                1               18472
2015-06-16 02:00:21       1988211                0                8818

Here is the query:

-- Overall I/O per AWR snapshot interval, reads and writes combined.
-- READTIM and WRITETIM are in centiseconds, hence the factor of 10 to get milliseconds.
-- The 1+ in the denominators avoids divide-by-zero on idle intervals.
select 
to_char(sn.END_INTERVAL_TIME,'YYYY-MM-DD HH24:MI:SS') "End snapshot time",
sum(after.PHYRDS+after.PHYWRTS-before.PHYWRTS-before.PHYRDS) "number of IOs",
trunc(10*sum(after.READTIM+after.WRITETIM-before.WRITETIM-before.READTIM)/
sum(1+after.PHYRDS+after.PHYWRTS-before.PHYWRTS-before.PHYRDS)) "ave IO time (ms)",
trunc((select value from v$parameter where name='db_block_size')*
sum(after.PHYBLKRD+after.PHYBLKWRT-before.PHYBLKRD-before.PHYBLKWRT)/
sum(1+after.PHYRDS+after.PHYWRTS-before.PHYWRTS-before.PHYRDS)) "ave IO size (bytes)"
from DBA_HIST_FILESTATXS before,  -- file statistics at the start of each interval
     DBA_HIST_FILESTATXS after,   -- file statistics at the end of each interval
     DBA_HIST_SNAPSHOT sn
where 
after.file#=before.file# and
after.snap_id=before.snap_id+1 and   -- pair each snapshot with the next one
before.instance_number=after.instance_number and
after.snap_id=sn.snap_id and
after.instance_number=sn.instance_number
group by to_char(sn.END_INTERVAL_TIME,'YYYY-MM-DD HH24:MI:SS')
order by to_char(sn.END_INTERVAL_TIME,'YYYY-MM-DD HH24:MI:SS');
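
If the columns wrap in your SQL*Plus session, formatting along these lines keeps the output readable. This is only a sketch; the linesize and column widths are arbitrary assumptions, not part of the query itself:

-- Assumed SQL*Plus formatting, adjust the widths to taste.
set pagesize 1000 linesize 120
column "End snapshot time"   format a19
column "number of IOs"       format 9999999999999
column "ave IO time (ms)"    format 9999999999999999
column "ave IO size (bytes)" format 9999999999999999999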

I hope this is helpful.

– Bobby

Categories: DBA Blogs

Oracle and Adaptive Case Management: Part 2

Jan Kettenis - Tue, 2015-06-16 14:24

This posting is the second of a series about Oracle Adaptive Case Management. The first one can be found here. I discuss the different options to define an activity, and the settings you can use to configure when and how activities are started.

There are two ways to implement an activity in ACM. The first one is by creating a Human Task and then "promote" it (as it is called) to an activity. The other way is to create a business process and promote that as an activity. As far as I know there are also plans to use a BPEL process to implement an activity, but that option is not there yet.

When using a Human Task the limitations (obviously) are those of a human task, meaning that the means to do some pre- or post-processing for the activity are very limited. There are only a few hooks for Java call-outs and XPath expressions, but as that processing happens on the Human Workflow Engine it won't show up in Enterprise Manager, and error handling will be hard if not impossible. So when you need to call a service before or after a human task (like sending a notification email), you had better use a process.


So unless you are sure that such pre- or post-processing will not be necessary, the safest option is to use a process with a human task instead. That will give you all the freedom you have with a BPMN process. The disadvantage is that you will not be able to expose the UI of the task on the Case tab in workspace. However, any case management application of reasonable size will probably have one or more human activities in a process anyway, and from a user experience perspective it is probably confusing to have tasks on the Task tab and some of them also on the Case tab, so I don't expect this to be a practical issue in most cases. In practice you will probably handle all tasks from the Task tab only and show just some overview screen on the Case tab.

In ACM activities can be Manually Activated or Automatically Activated. Furthermore you can specify if an activity is Required, Repeated, and/or Conditionally available.


The difference between manually and automatically activated is that in the first case the user explicitly starts an activity by choosing it from a list of available activities. Automatically activated activities are, for example, used for some case pre- and post-processing, and for activities that always have to start at some point, optionally given some specific conditions (like some milestone being reached or some other activity being completed). An example is that once a claim has been entered, it has to be reviewed before anything else can happen.

Required activities should be completed before a stage is completed. Be careful though, as nothing is preventing you from closing the stage even though a required activity has not yet finished. If the user has the proper rights, he/she can complete an activity even when no actual work has been done. There is no option to prevent that. However, in case of an automatically activated activity you can use business rules to reschedule it. For example, if the Review Complaint activity is required, meaning the complaint must have been given a specific status by the Complaints Manager, you can use a rule to reactivate the activity if the user tries to close it without having set the status.

Repeatable activities can be started by the user more than once. There is no point in marking automatically activated activities as repeatable. An example of a repeatable activity is one where the Complaints Manager invites some Expert to provide input for a complaint, and he/she may need to be able to involve any number of experts.

Conditionally available activities are triggered by some rule. Both manually and automatically activated activities can be conditional. If automatically activated, the activity will start as soon as the rule conditions are satisfied. In case of manually activated activities, the rule conditions determine whether or not the user can choose the activity from the list of available activities.

SQLcl , yet again

Kris Rice - Tue, 2015-06-16 13:54
By the Numbers

There's a new SQLcl out for download. In case there are too many to keep track of, the build numbers make it quite easy to tell if you have the latest. The build posted today is sqlcl-4.2.0.15.167.0827-no-jre.zip. Here's what we are doing:

  • 4.2.0 <- doesn't matter at all
  • 15 <- the year
  • 167 <- the day of the year (Julian day)
  • 0827 <- the time the build was done

So yes, this build was done today at 8am.

A Database Wordfile…

Marco Gralike - Tue, 2015-06-16 09:08
It is not often that something like the following happens on Google while searching for…

The Byte-Anniversary

Darwin IT - Tue, 2015-06-16 05:15
I was looking into my blog entries and found that my previous post was number 254. So by entering this nonsense blog entry, I reach number 255, which makes it my 8-bit, or byte, anniversary:


And apparently my articles have been read as well... The nice thing is that I've reached this number in nearly 8 years. So on to the next byte. Unfortunately, in this case 2 bytes do not make up a Word...

Can Better Visual Design Impact User Engagement?

Rittman Mead Consulting - Tue, 2015-06-16 04:44
Background

For every dashboard succinctly displaying key business metrics, there’s another that is a set of unconnected graphs providing no insight to its viewers.

In order for your users to get value from your business intelligence and analytics systems, those systems need to be engaging; they need to tell a story.

As part of its User Engagement initiative Rittman Mead has created a User Engagement Service. A key part of this is a Visual Redesign process. Through this process, we review an organisation’s existing dashboards and reports and transform them into something meaningful and engaging.

This service focuses on the user interface and user experience; here we will use our expertise in data visualisation to deliver high value OBIEE dashboards.

The process starts by prioritising your dashboards and then, taking one at a time, rebuilds them. There are 3 key concepts that lie behind this process.

Create a guided structure of information

The layout of information on a dashboard should tell the user a story. This makes it much easier for data to be consumed because users can identify related data and instantly see what is relevant to them. If a user can consume the data they need easily, they’re more likely to come back for more. ‘The founder of modern management’, Peter Drucker, said “If you can’t measure it, you can’t manage it.” When users are in touch with their data, they will be more engaged with the business.


To achieve this, first we must consider the audience. We need to know who will be consuming the dashboards and how they will be used. Then we can begin to create a design that will satisfy the users’ needs. Secondly, we need to think about what we exclude as much as what we include on the dashboard. There is limited space, so it needs to be used effectively. Only information that adds value should be displayed on a dashboard and everything should be there for a reason. Finally, we need to consider what questions the users want to answer and what decisions will be based on this. This will enable us to guide the user to the information that will help them take action.

Choose the right visuals

One common design mistake is overcrowding of dashboards. Dashboards often develop over time to become a jumbled array of graphs and tables with no consideration of the visual design.

The choice of graphs will determine how readable the information being displayed on a dashboard is. We constantly ask ourselves “What is the best graph for the data?” Understanding how different types of graphs answer different questions allows us to make the best visual choices. This is a vital tool for communicating messages to the users and providing them with the ability to identify patterns and relationships in the data more efficiently.

Thoughtful use of colour

The use of colour is an effective way to draw attention to something, connect related objects and evoke users’ emotions. Thoughtful use of colour can have a big impact on user engagement. To be sure we choose the best colours throughout dashboard design, the key question we need to ask ourselves is, “how will these colours make the user feel?”

Like the charts themselves, every different colour used on a dashboard should be there for a reason. Intentional use of colour could determine how a user will feel whilst consuming the information being displayed to them. Bright, unnatural colours will alarm users and attract their attention. Cool colours will give a restful, calming feel to the user and are most effective for displaying sustained trends. Through taking into consideration the most effective way to use colour, we can work towards creating an attractive visual design, which is engaging and enjoyable to use.

Applying these 3 concepts through Rittman Mead’s visual redesign process has proven to result in engaging OBIEE dashboards. Users are equipped to make the most of their data, allowing them to make informed business decisions.

Rittman Mead’s Visual Redesign process is a key part of Rittman Mead’s User Engagement Service; for more info see http://www.rittmanmead.com/user-engagement-service/.

If you are interested in hearing more about User Engagement please sign up to our mailing list below.



Categories: BI & Warehousing

Index variables in Replace/Insert/Delete: Bug or not a bug?

Darwin IT - Tue, 2015-06-16 04:01
At my current customer I need to process attachments in Oracle Service Bus (11g).
I get a SOAP message in which several documents are registered to be processed in a Content Server.
For each document the content is delivered as a SOAP/MIME attachment.

Because of some requirements I need to store the message, complete with the attachments Base64 encoded, in the database. So I have to pick each attachment, Base64 encode it and then insert the content at the corresponding document in the SOAP message. That means I need to do an Insert or Replace of a specific element of the body variable, based on an index variable.

It turns out that you can perfectly well do an Assign with an expression like:
$body/stag:StageDocumentsRequestMessage/Payload/stag:documents/stag:document[contentId/text()=$contentId]
to a variable, for instance called document.

I can do an Insert of the Base64-encoded content into that document variable. But that does not end up in the body variable, since, apparently, document is a copy of, and not a reference to, the particular node.

So let's do a Replace with the XPath expression:
$body/stag:StageDocumentsRequestMessage/Payload/stag:documents/stag:document[contentId/text()=$contentId]
in the variable body. But this gives the error:
[PL_MyPipeLine, Request Pipeline, HandleAttachments, Delete action] XPath expression validation failed: An error was reported compiling the XPath expression: XQuery exception: line 34, column 91: {err}XP0008 [{bea-err}XP0008a]: Variable "$contentId" used but not declared for expression: declare namespace stag = 'http://www.darwin-it.nl/CS/StageDoc';
declare namespace jca = 'http://www.bea.com/wli/sb/transports/jca';
declare namespace wsp = 'http://schemas.xmlsoap.org/ws/2004/09/policy';
...


The same goes for Insert and Delete: I thought of inserting a new version of the node in the list and deleting the old one, but that would not work either.

I've googled around and found several occurrences of basically the same problem, but no satisfactory solution.

At support.oracle.com I found the following bug:
"Bug 17940786 : CANNOT USE INDEX VARIABLE IN THE REPLACE ACTION WITHIN FOR-EACH LOOP" with the following description:

The customer uses 2 for-each loops with index variables ($i, $j).
In the Replace action, in the Xpath expression buider, they want to use
"./entity1[$i]/entity2[$j]". This is not permitted by the editor. The problem
also occurs with only 1 variable like "./entity1[$i]/entity2[1]".

However, for no apparent reason, this "bug" has the status "92 - Closed, Not a Bug". So apparently Oracle regards it as "functioning as designed". But why can't I modify or delete a particular node indexed by a variable?
Apparently I'm now stuck with building the document list document-by-document and doing a replace of the complete document list...

Feedback from the Oracle documentation team

Tim Hall - Tue, 2015-06-16 03:36

I got some feedback from the Oracle documentation team, based on my recent post.

GUIDs

One of the concerns I raised was about how the GUIDs would be used in different releases of the documentation. Although I don’t like the look of the GUIDs, I can understand why they might be more convenient than trying to think of a neat, descriptive, human-readable slug. My concern was that the GUID might be unique for every incarnation of the same page. That is, a new GUID for the same page for each patchset, DB version and/or minor text correction. That would make it really hard to flick between versions, as you couldn’t predict what the page was called in each variant.

It seems my worries were unfounded. The intention is the GUID of a specific page will stay the same, regardless of patchset, DB version or document correction. That’s great news!

Broken Links

The team are trying to put some stuff in place to correct the broken links. I think I might know who is developing this solution. :)

The quick fix will be to direct previously broken links to the table of contents page of the appropriate manual. Later, they will attempt to provide topic-to-topic links. No promises here, but it sounds promising.

Conclusion

I’m going to continue to fix the broken links on my site as I want to maintain the direct topic links in the short term, but this sounds like really good news going forward.

It also sounds like the documentation team are feeling our pain and putting stuff in place to prevent this happening in future, which is fantastic news! :)

Note to self: It’s much better to engage with the right people and discuss the issue, rather than just bitch about stuff.

Cheers

Tim…


Oracle Enterprise Manager Cloud Control 12c Release 5 (12.1.0.5) – Just Born

Tim Hall - Tue, 2015-06-16 00:49

Oracle Enterprise Manager Cloud Control 12c Release 5 (12.1.0.5) was announced a few days ago. I woke up today and checked the interwebs and it’s actually available for download.

I must admit I’m a little nervous about the upgrade. I had a few bad times with upgrades in the early days of Grid Control and Cloud Control and that has left me with a little bit of voodoo lurking in the back of my mind. The last couple of upgrades have been really easy, so I’m sure it will be fine, but that voodoo…

I’ll download it now and do a clean install. Then do a couple of practice upgrades. If all that goes well, I’ll schedule a date to sacrifice a chicken, raise a zombie from the dead to do my bidding, then do the real upgrade.

Cheers

Tim…

Update. Looking at the certification matrix, the repository is now certified on 12.1.0.2, as well as 11.2.0.4 and 11.2.0.3.

Update 2. Pete mentioned in the comments that 12.1.0.2 has been certified for the Cloud Control repository since March, with some restrictions. So it’s not new to this release. See the comments for details.

Update 3. Remember to download from edelivery.oracle.com (in a couple of days) for your production installations. Apparently there is a difference in the license agreement.


Indexing and Transparent Data Encryption Part III (You Can’t Do That)

Richard Foote - Tue, 2015-06-16 00:28
In Part II of this series, we looked at how we can create a B-Tree index on an encrypted column, provided we do not apply salt during encryption. However, this is not the only restriction with regard to indexing an encrypted column using column-based encryption. If we attempt to create an index that is not a […]
Categories: DBA Blogs

Dynamic Sampling

Jonathan Lewis - Mon, 2015-06-15 14:41

Following on from an OTN posting about dynamic sampling difficulties, I had planned to write a blog post about the difference between “not sampling when hinted” and “ignoring the sample” – but Mohamed Houri got there before me.

It’s just worth highlighting a little detail that is often overlooked, though: there are two versions of the dynamic_sampling() hint, the cursor level and the table level, and the number of blocks sampled at a particular level is dependent on which version you are using. Level 4 at the cursor level, for example, will sample 64 blocks if and only if a certain condition is met, but at the table level it will sample 256 blocks unconditionally.

So try to be a little more specific when you say “I told the optimizer to use dynamic sampling …”; it’s either:

“I told the optimizer to use cursor level dynamic sampling at level X …”

or

“I told the optimizer to use table level dynamic sampling at level Y for table A and …”
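
For reference, the two variants of the hint look like this in a query (a minimal sketch; t1, the predicate and level 4 are just placeholders):

-- Cursor-level hint: no table name, applies to the statement as a whole.
select /*+ dynamic_sampling(4) */ count(*)
from   t1
where  owner = 'SYS';

-- Table-level hint: names the table (or its alias in the query).
select /*+ dynamic_sampling(t1 4) */ count(*)
from   t1
where  owner = 'SYS';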

Note – apart from the changes to dynamic sampling that allow for a level 11, there’s also a change introduced (I think) in 10g for the sample() clause applied to the table during sampling – it’s the addition of a seed() clause which ensures that when you repeat the same level you generate the same set of random rows.
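
If you want to see the seed() behaviour for yourself, the sample clause can be used directly in a query. A minimal sketch (the 10 percent sample size and the seed values are arbitrary):

-- Repeating the same seed returns the same set of sampled blocks.
select count(*) from t1 sample block (10) seed (42);
select count(*) from t1 sample block (10) seed (42);  -- same result as the previous query

-- A different seed (or omitting the seed) gives a different random sample.
select count(*) from t1 sample block (10) seed (7);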

Addendum

Here’s a little code I wrote some time ago to check the effect of the two options at different levels. I started by creating a (nologging) table from the first 50,000 rows of all_objects, then doubled it up a few times to 400,000 rows total, and ensured that there were no stats on the table. Then executed in turn each variant of the following anonymous pl/sql block (note that I have the execute privilege on the dbms_system package):


declare
	m_ct number;
begin
	execute immediate 'alter session set events ''10053 trace name context forever''';
	for i in 1..10 loop
		sys.dbms_system.ksdwrt(1,'=============');
		sys.dbms_system.ksdwrt(1,'Level ' || i);
		sys.dbms_system.ksdwrt(1,'=============');

		execute immediate 
			'select /*+ dynamic_sampling('    || i || ') */ count(*) from t1 ' ||
--			'select /*+ dynamic_sampling(t1 ' || i || ') */ count(*) from t1 ' ||
			'where owner = ''SYS'' and object_type = ''SYNONYM'''
			into m_ct;
	end loop;
end;
/

Obviously I could examine the resulting trace file to pick out bits of each optimisation, but for a quick check a simple grep for “sample block cnt” is almost all I need to do – with the following (slightly decorated) results from 11.2.0.4:


Table level
===========
Level 1
    max. sample block cnt. : 32
    sample block cnt. : 31
    max. sample block cnt. : 64
    sample block cnt. : 63
    max. sample block cnt. : 128
    sample block cnt. : 127
    max. sample block cnt. : 256
    sample block cnt. : 255
    max. sample block cnt. : 512
    sample block cnt. : 511
    max. sample block cnt. : 1024
    sample block cnt. : 1023
    max. sample block cnt. : 2048
    sample block cnt. : 2047
    max. sample block cnt. : 4096
    sample block cnt. : 4095
    max. sample block cnt. : 8192
    sample block cnt. : 8191
Level 10
    max. sample block cnt. : 4294967295
    sample block cnt. : 11565

Cursor level
============
No sampling at level 1
Level 2
    max. sample block cnt. : 64
    sample block cnt. : 63
    max. sample block cnt. : 64
    sample block cnt. : 63
    max. sample block cnt. : 64
    sample block cnt. : 63
    max. sample block cnt. : 64
    sample block cnt. : 63
    max. sample block cnt. : 128
    sample block cnt. : 127
    max. sample block cnt. : 256
    sample block cnt. : 255
    max. sample block cnt. : 1024
    sample block cnt. : 1023
    max. sample block cnt. : 4096
    sample block cnt. : 4095
Level 10
    max. sample block cnt. : 4294967295
    sample block cnt. : 11565


You’ll notice that the cursor level example didn’t do any sampling at level 1. Although the manual doesn’t quite make it clear, sampling will only occur if three conditions are met:

  • The table has no statistics
  • The table has no indexes
  • The table is involved in a join so that a sample could affect the join order and method

If only the first two conditions are met then the execution path will be a full tablescan whatever the sample looks like and the number of rows returned has no further impact as far as the optimizer is concerned – hence the third requirement (which doesn’t get mentioned explicitly in the manuals). If you do have a query that meets all three requirements then the sample size is 32 (31) blocks.
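
As a sketch of the kind of query that does meet all three conditions, assuming t1 and t2 both have no statistics and no indexes (the table names and predicates are mine, not from the original test):

-- Two unanalysed, unindexed tables involved in a join:
-- cursor-level dynamic sampling can kick in even at level 1.
select /*+ dynamic_sampling(1) */ count(*)
from   t1, t2
where  t1.object_id = t2.object_id
and    t1.owner     = 'SYS';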

 


CBO Series

Jonathan Lewis - Mon, 2015-06-15 14:19

About a year ago I came across a couple of useful articles from Stefan Koehler, which is when I added his name to my blog roll. As an introduction for other readers I’ve compiled an index for a series of articles he wrote about the CBO viewed, largely, from the perspective of using Oracle to run SAP. Today I realised I hadn’t got around to publishing it, and there’s been a couple of additions since I first started to compile the list.

 


CRS-4995: The command ‘Modify resource’ is invalid in crsctl. Use srvctl for this command.

Oracle in Action - Mon, 2015-06-15 09:40


Today, in my 12.1.0.2 cluster, I encountered the above error message when I was trying to modify the ACL of an ASM cluster file system created on volume VOL1 in the DATA disk group, as follows:

[root@host01 ~]# crsctl modify resource ora.data.vol1.acfs -attr "ACL='owner:root:rwx,pgrp:dba:rwx,other::r--'"

CRS-4995: The command 'Modify resource' is invalid in crsctl. Use srvctl for this command.

I resolved the problem by using the -unsupported flag, as follows:

[root@host01 ~]# crsctl modify resource ora.data.vol1.acfs -attr "ACL='owner:root:rwx,pgrp:dba:rwx,other::r--'" -unsupported

 

Hope it helps!!

References:
Oracle Issue running 12.1.0.2 clusterware with 11.2.0.2 database


Categories: DBA Blogs

What is more efficient: arrays or single column values? - Oracle

Yann Neuhaus - Mon, 2015-06-15 06:38
In the last post on this topic it turned out that using an array as a column type needs more space than using a column per value in PostgreSQL. Now I'll do the same test case in Oracle.

What is more efficient: arrays or single column values?

Yann Neuhaus - Mon, 2015-06-15 05:59
In PostgreSQL (as well as in other RDBMS) you can define columns as arrays. What I wondered is: what is more efficient when it comes to space, creating several columns or just creating one column as an array? The result, at least for me, is rather surprising.

refhost.xml kludge is fixed

Frank van Bortel - Mon, 2015-06-15 05:50
No More Missing Packages

I wrote several times about manually editing refhost.xml. There's no need for it; just apply Patch 18231786.

Frank