
Feed aggregator

Oracle Database RAC Diagnostics and Tuning

Oracle Real Application Clusters (Oracle RAC) is a clustered version of Oracle Database based on a comprehensive high-availability stack that can be used as the foundation of a database cloud system...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Space used by objects

DBA Scripts and Articles - Thu, 2014-08-07 12:35

Calculate the space used by a single object: this script will help you calculate the size of a single object. Calculate the space used by a whole schema: if you want the space used by a whole schema, then here is a variation of the first query.
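
A common way to write both queries is against DBA_SEGMENTS; the sketch below is only illustrative (the owner and object names are placeholders), not necessarily the post's exact scripts:

-- Space used by a single object
SELECT owner, segment_name, segment_type,
       ROUND(SUM(bytes)/1024/1024, 2) AS size_mb
  FROM dba_segments
 WHERE owner = 'SCOTT'
   AND segment_name = 'EMP'
 GROUP BY owner, segment_name, segment_type;

-- Variation for a whole schema
SELECT owner, ROUND(SUM(bytes)/1024/1024, 2) AS size_mb
  FROM dba_segments
 WHERE owner = 'SCOTT'
 GROUP BY owner;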

The post Space used by objects appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs

Whistler, Microsoft and how far cloud has come

Steve Jones - Thu, 2014-08-07 09:00
In six years Microsoft has come from almost zero corporate knowledge about how cloud computing works to cloud being an integral part of its strategy.  Sure, back in early 2008 there were some pieces of Microsoft that knew about cloud, but that really wasn't a corporate view; it was what a very few people inside the company knew. How do I know this? Well back in 2008 I was sitting on the top of a
Categories: Fusion Middleware

Oracle WebCenter Case Study: Improving Invoice to Cash Process

WebCenter Team - Thu, 2014-08-07 08:51

 Kevin is the IT Director for a top-quality Less-than-Truckload carrier servicing eight Midwestern states. 

A recent industry survey showed the company’s website was falling short of customer expectations in the following three areas: 

  • Ease of use 
  • Providing useful information, and 
  • Utilizing effective technology and tracking systems

Kevin looked to Redstone Content Solutions to improve the website’s functionality utilizing award-winning Oracle WebCenter technologies. 

Actian Vector Hadoop Edition

DBMS2 - Thu, 2014-08-07 05:12

I have a small blacklist of companies I won’t talk with because of their particularly unethical past behavior. Actian is one such; they evidently made stuff up about me that Josh Berkus gullibly posted for them, and I don’t want to have conversations that could be dishonestly used against me.

That said, Peter Boncz isn’t exactly an Actian employee. Rather, he’s the professor who supervised Marcin Zukowski’s PhD thesis that became Vectorwise, and I chatted with Peter by Skype while he was at home in Amsterdam. I believe his assurances that no Actian personnel sat in on the call. :)

In other news, Peter is currently working on and optimistic about HyPer. But we literally spent less than a minute talking about that.

Before I get to the substance, there’s been a lot of renaming at Actian. To quote Andrew Brust,

… the ParAccel, Pervasive and Vectorwise technologies are being unified under the Actian Analytics Platform brand. Specifically, the ParAccel technology … is being re-branded Actian Matrix; Pervasive’s technologies are rechristened Actian DataFlow and Actian DataConnect; and Vectorwise becomes Actian Vector.

and

Actian … is now “one company, with one voice and one platform” according to its John Santaferraro

The bolded part of the latter quote is untrue — at least in the ordinary sense of the word “one” — but the rest can presumably be taken as company gospel.

All this is by way of preamble to saying that Peter reached out to me about Actian’s new Vector Hadoop Edition when he blogged about it last June, and we finally talked this week. Highlights include: 

  • Vectorwise, while being proudly multi-core, was previously single-server. The new Vector Hadoop Edition is the first version with node parallelism.
  • Actian’s Vector Hadoop edition uses HDFS (Hadoop Distributed File System) and YARN to manage an Actian-proprietary file format. There is currently no interoperability whereby Hadoop jobs can read these files. However …
  • … Actian’s Vector Hadoop edition relies on Hadoop for cluster management, workload management and so on.
  • Peter thinks there are two paying customers, both too recent to be in production, who between them paid what I’d call a remarkable amount of money.*
  • Roadmap futures* include:
    • Being able to update and indeed trickle-update data. Peter is very proud of Vectorwise’s Positional Delta Tree updating.
    • Some elasticity they’re proud of, both in terms of nodes (generally limited to the replication factor of 3) and cores (not so limited).
    • Better interoperability with Hadoop.

Actian actually bundles Vector Hadoop Edition with DataFlow — the old Pervasive DataRush — into what it calls “Actian Analytics Platform – Hadoop SQL Edition”. DataFlow/DataRush has been working over Hadoop since the latter part of 2012, based on a visit with my then clients at Pervasive that December.

*Peter gave me details about revenue, pipeline, roadmap timetables etc. that I’m redacting in case Actian wouldn’t like them shared. I should say that the timetable for some — not all — of the roadmap items was quite near-term; however, pay no attention to any phrasing in Peter’s blog post that suggests the roadmap features are already shipping.

The Actian Vector Hadoop Edition optimizer and query-planning story goes something like this:

  • Vectorwise started with the open-source Ingres optimizer. After a query is optimized, it is rewritten to reflect Vectorwise’s columnar architecture. Peter notes that these rewrites rarely change operator ordering; they just add column-specific optimizations, whatever that means.
  • Now there are rewrites for parallelism as well.
  • These rewrites all seem to be heuristic/rule-based rather than cost-based.
  • Once Vectorwise became part of the Ingres company (later renamed to Actian), Ingres engineers helped them modify the base optimizer so that it wasn’t just the “stock” Ingres one.

As with most modern MPP (Massively Parallel Processing) analytic RDBMS, there doesn’t seem to be any concept of a head-node to which intermediate results need to be shipped. This is good, because head nodes in early MPP analytic RDBMS were dreadful bottlenecks.

Peter and I also talked a bit about SQL-oriented HDFS file formats, such as Parquet and ORC. He doesn’t like their lack of support for columnar compression. Further, in Parquet there seems to be a requirement to read the whole file, to an extent that interferes with Vectorwise’s form of data skipping, which it calls “min-max indexing”.

Frankly, I don’t think the architectural choice “uses Hadoop for workload management and administration” provides a lot of customer benefit in this case. Given that, I don’t know that the world needs another immature MPP analytic RDBMS. I also note with concern that Actian has two different MPP analytic RDBMS products. Still, Vectorwise and indeed all the stuff that comes out of Martin Kersten and Peter’s group in Amsterdam has always been interesting technology. So the Actian Vector Hadoop Edition might be worth taking a look at before you redirect your attention to products with more convincing track records and futures.

Categories: Other

Update on 2U: First full quarterly earnings and insight into model

Michael Feldstein - Wed, 2014-08-06 19:09

2U, the online service provider that went public in the spring, just released its financial report for the first full quarter of operations as a public company. The company beat estimates on total revenue and also lost less money than expected. Overall, it was a strong performance (see WSJ for basic summary or actual quarterly report for more details). The basics:

  • Revenue of $24.7 million for the quarter and $51.1 m for the past six months, which represents year-over-year increases of 32% and 35%;
  • EBITDA losses of $7.1 m for the quarter and $10.9 m for the past six months, which represents year-over-year increases of -2% and 12%; and
  • Enrollment growth of 31 – 34% year-over-year.

Per the WSJ coverage of the conference call:

“I’m very pleased with our second quarter results, and that we have both the basis and the visibility to increase all of our guidance measures for 2014,” said Chip Paucek, 2U’s Chief Executive Officer and co-founder. “We’ve reached a turning point where, even with continued high investment for growth, our losses have stopped accelerating. At the midpoint of our new guidance range, we now expect our full year 2014 adjusted EBITDA loss to improve by 17% over 2013. Further, we’ve announced a schedule that meets our stated annual goal for new program launches through 2015.”

The company went public in late March at $14 / share and is still in that range ($14.21 before the quarterly earnings release – it might go up tomorrow). As one of only three ed tech companies to have gone public in the US over the past five years, 2U remains worth watching both for its own news and as a bellwether of the IPO market for ed tech.

Notes

The financials provide more insight into the world of Online Service Providers (OSP, aka Online Program Management, School-as-a-Service, Online Enablers, the market with no name). On the conference call 2U’s CEO Chip Paucek reminded analysts that they typically invest (money spent – revenue) $4 – $9 million per program in the early years and do not start to break even until years 3 – 4. 2U might be on the high side of these numbers given their focus on small class sizes at big-name schools, but this helps explain why the OSP market typically focuses on long-term contracts of 10+ years. Without such a long-term revenue-sharing contract, it would be difficult for an OSP to ever break even.

As the market matures – with more competitors and with schools developing their own experiences in online programs – it will become more and more difficult for companies to maintain these commitments from schools. We have already seen signs over the past year of changes in institutional expectations.

2U, meanwhile, has positioned itself at the high-end of the market, relying on high tuitions and brand-name elite schools with small classes. The company for the most part will not even compete in a Request for Proposal process, avoiding direct competition with Embanet, Deltak, Academic Partnerships and others. Their prospects seem much stronger than the more competitive mainstream of OSP providers.

See the posts here at e-Literate for more background.

2U has changed one aspect of their strategy, as noted by Donna Murdoch on G+. At least through 2012 the company positioned itself as planning to work with one school per discipline (or vertical in their language). Pick one school for Masters of Social Work, one for MBA, etc. As described in Jan 2012:

“As we come into a new vertical, 2tor basically partners with one great school per vertical. We find one partner, one brand that is world-class. We partner with that brand over a long time period to create the market leader in that space for that discipline.”

2U now specifically plans for additional schools within the same verticals, as can be seen in their press release put out today:

Programs Aug 2014

Note the duplication of Social Work between USC and Simmons, Nursing between Georgetown and Simmons, and Data Science between Berkeley and SMU. Note the new approach from page 20 of the quarterly report:

As described above, we have added, and we intend to continue to add, degree programs in a number of new academic disciplines each year, as well as to expand the delivery of existing degree programs to new clients.

View Into Model

Along with the first quarter release (which was not based on a full quarter of operations as a public company), 2U released some interesting videos that give a better view into their pedagogical approach and platform. In this video they describe their “Bi-directional Learning Tool (BLT)”:

This image is from a page on the 2U website showing their approach, with a view of the infamous Brady Bunch layout for live classes (synchronous).

Live Courses

We’ll keep watching 2U and share significant developments as we see them.

The post Update on 2U: First full quarterly earnings and insight into model appeared first on e-Literate.

OWB to ODI 12c Migration in action

Antonio Romero - Wed, 2014-08-06 11:00

The OWB to ODI 12c migration utility provides an easy to use on-ramp to Oracle's strategic data integration tool. The utility was designed and built by the same development group that produced OWB and ODI. 

Here's a screenshot from the recording below showing a project in OWB and what it looks like in ODI 12c:


There is a useful webcast that you can play to watch the migration utility in action. It takes an OWB implementation and uses the migration utility to move it into ODI 12c.

http://oracleconferencing.webex.com/oracleconferencing/ldr.php?RCID=df8729e0c7628dde638847d9511f6b46

It's worth having a read of the following OTN article from Stewart Bryson, which gives an overview of the capabilities and options OWB customers have moving forward.
http://www.oracle.com/technetwork/articles/datawarehouse/bryson-owb-to-odi-2130001.html

Check it out and see what you think!

Oracle WebCenter Imaging: Avoid the Accounts Payable Zombie Apocalypse

WebCenter Team - Tue, 2014-08-05 06:43

Author: Jane Shirley, Senior Business Analyst for Aurionpro

A World without Oracle WebCenter Imaging
The Client’s Situation: Living an AP Nightmare

“I’m going to have nightmares about this for the rest of my life.” That’s what our client said as she described the company’s paper and spreadsheet-based Accounts Payable (AP) process. Her department processed about 10,000 invoices a month and employees were beginning to resemble the cast from a horror movie – complete with zombie-like AP processors shuffling through cubicle aisles in search of active invoices.

The Issues:

1) Zombification does not support ROI goals or process improvement…
Thankfully, the Aurionpro WebCenter team got to her in time!  We met our client during her company’s financial transformation planning process and spent time with her and the team analyzing requirements and prioritizing processing needs, providing insight on additional ROI realization, and identifying areas for process improvement.  While we found some opportunities for modifications, we kept the client’s Oracle WebCenter Imaging implementation timeline to a bare minimum in order to accelerate the transformation process and reduce development time.

“All implementations require at least some level of configuration and modification,” we explained.  “The trick is to identify the areas where customization is truly required – the areas that support a faster time to ROI and make the most sense for the client’s business. As for other aspects, while still important, we recommend putting them into a ‘Phase II Brain Fungus Antidote’ that inhibits the zombification of the project and its driving processes.” This proven approach helps organizations get the biggest bang for their buck.

2) Zombies have no requirements (and are usually missing a limb or two…)
We’d advised our client that all of us needed to understand the company’s existing AP System before getting started.  The Aurionpro team’s first step is always requirements gathering. This part of the process has three key objectives:
  • Train the Financial Transformation team in the details of WebCenter Imaging both through system demonstration and generic workflow presentations.
  • Understand the client’s business requirements and document those verbally and visually with reconfirmation.
  • Confirm our understanding of the client’s goals.  We’ve learned the most important part of requirements gathering is ensuring that we heard our client’s concerns and expressed them accurately.
3) People who live in zombie societies hide at night and barricade themselves in…
Aurionpro understood that team leads like to do things their own way. Our client, for her part, wanted to be left alone to contemplate things and assess whether the plan was going to get the job done on time and on budget.

“I just like taking the documentation home and studying it,” our client said. “The matrix format that Aurionpro provides allowed me to sort and re-sort the individual requirements so I could understand how each one fits with our goals and also our budget. I could even send the functional flows included in the requirements to my change management team so they could start their part of the process.”

4) Zombies can only be killed with proven methods… All of our clients have given high praise to the matrix format that Aurionpro services teams use to ensure that requirements are clear and that focus is maintained consistently throughout the lifecycle of the project.  Our deployment methodology allows our clients to literally check off each item on their original requirements list so that they have a visualization of how they’ve come full circle from idea to reality!  Our post-UAT development phase then covers the team by ensuring that any post-development adjustments can be made. Lastly, the Aurionpro implementation process for WebCenter Imaging supports companies through the “go-live” period and works to ensure that any zombification of their process is completely resolved.
5) Shufflers have no clear roadmap … “You made it so much easier to support my progress reports to my senior management,” the client told us after the project was complete.  “The weekly status reports that Aurionpro provided gave a crystal-clear snapshot of our project status against the overall timeline.  I always knew where I stood and what was going to happen next. What was a very scary process for me became quite manageable, or at least less terrifying,” she concluded.

The really good news was that once she had her Oracle WebCenter Imaging solution in place, our client’s nightmare world receded and the zombies morphed back into accessible, productive, and engaged colleagues.

About the Author:
Jane Shirley is a Senior Business Analyst for Aurionpro. She has worked in the Oracle WebCenter space on both the customer and the consulting side for large corporate enterprise-wide implementations. After serving time as a Marketing Manager, she spent several years managing invoice processing for a Fortune 50 company. She can be reached at jane.shirley@aurionpro.com.

Get Up Offa That Thing

Doug Burns - Tue, 2014-08-05 04:00
No, no, no ... not *that* JB!
As regular readers will know, the JB who tends to get mentioned most often around these parts is John Beresniewicz who, up until recently, worked at Oracle on all the cool OEM Performance Pages and related instrumentation (alongside others, of course, such as Graham Wood, Uri Shaft, Kyle Hailey and probably a cast of thousands for all I know). Over recent years, JB has become a friend and has always posted deeply insightful comments whenever I’ve blogged about DB Time, Top Activity, Load Maps, ASH Analytics or Adaptive Thresholds. There can be few people who understand those subjects as well and he also has a great way of communicating how such powerful tools can be used to actually make things a lot simpler. (Click On The Big Stuff!, to pick one example) There can be a lot of unexpected complexity behind simplicity, believe me ;-)
So when the opportunity comes to learn from the best, I feel it’s only right I share it. The ASH Architecture and Advanced Usage presentation is a collective effort between Graham Wood, Uri Shaft and JB that has been refined over a number of years. This is the version that JB and Graham delivered at the RMOUG Training Days 2014. It’s excellent stuff from some true Oracle Performance experts.
JB might try to play the grumpiest man in California at times but, after all the work he has contributed to improving the performance analysis tools available to jobbing DBAs, I for one hope he sticks around on this Oracle Performance stuff and isn't distracted by Big Data or Anything-as-a-Service because he’d be too much of a loss (although I wouldn't have to listen to his whining so much, so maybe every cloud etc .... ;-))
Anyway – check out the presentation. It’s well worth your time. Better still, give that man a job so he doesn't have too much time on his hands and isn't reduced to starting to use Social Media!

PeopleTools 8.54: Accessibility

PeopleSoft Technology Blog - Mon, 2014-08-04 15:46

This entry is one of a series of posts that will introduce readers to important features of this landmark release.

PeopleTools has long supported accessibility, and with PeopleTools 8.54 we are able to conform to WCAG 2.0 AA standards.  This means that upcoming PeopleSoft applications that are built on PeopleTools 8.54 will be designed to conform to those standards.  Customers that build pages, add-on applications, or customizations can also create conforming pages using PeopleTools 8.54. Most enterprises, especially those with a global footprint, are requiring WCAG conformance for the software they use.  In fact, many companies that operate in the U.S. only are using the WCAG standards as the basis for their requirements.

Web Content Accessibility Guidelines (WCAG) are part of a series of guidelines published by the W3C's Web Accessibility Initiative. Their goal is to make content accessible, primarily for disabled users but also for all user agents, including highly limited devices, such as mobile phones. The current version (2.0) of the guidelines is also an ISO standard: ISO/IEC 40500:2012. WCAG 2.0 is an international standard used in most countries that enforce accessibility regulations. The new standard provides additional requirements to address new web technologies and has become the standard by which many organizations monitor software compliance. WCAG 2.0 extends standards described in WCAG 1.0 as well as the U.S.-specific regulations described in section 508 of Federal access standards.

Avoid UTL_FILE_DIR Security Weakness – Use Oracle Directories Instead

Eddie Awad - Mon, 2014-08-04 07:00

Integrigy:

The UTL_FILE database package is used to read from and write to operating system directories and files. By default, PUBLIC is granted execute permission on UTL_FILE. Therefore, any database account may read from and write to files in the directories specified in the UTL_FILE_DIR database initialization parameter […] Security considerations with UTL_FILE can be mitigated by removing all directories from UTL_FILE_DIR and using the Directory functionality instead.
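
In practice the mitigation comes down to creating directory objects and granting them only to the accounts that need file access; a minimal sketch, where the path, directory name and grantee are illustrative assumptions:

CREATE DIRECTORY app_out_dir AS '/u01/app/output';
GRANT READ, WRITE ON DIRECTORY app_out_dir TO app_user;

-- The application then opens files through the directory object
-- rather than through a path listed in UTL_FILE_DIR:
DECLARE
  l_file UTL_FILE.FILE_TYPE;
BEGIN
  l_file := UTL_FILE.FOPEN('APP_OUT_DIR', 'report.txt', 'w');
  UTL_FILE.PUT_LINE(l_file, 'sample output line');
  UTL_FILE.FCLOSE(l_file);
END;
/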

© Eddie Awad's Blog, 2014.


#GoldenGate Bound Recovery

DBASolved - Sun, 2014-08-03 19:40

Every once in a while when I restart an extract, I see entries in the report file that reference “Bounded Recovery”.  What exactly is “Bounded Recovery”?

First, keep in mind that “Bounded Recovery” is only for Oracle databases!

Second, according to the documentation, “Bounded Recovery” is a component of the general extract checkpointing facility.  This component of the extract guarantees an efficient recovery after an extract stops for any reason, no matter how many uncommitted transactions are currently outstanding.  The Bounded Recovery parameter sets an upper boundary for the maximum amount of time that an extract needs to recover to the point where it stopped before continuing normal processing.

The default setting for “Bounded Recovery” is 4 hours, and the needed recovery information is cached in the OGG_HOME/BR/<extract name> directory.  This is verified when I look at the report file for my extract named EXT.


2014-07-21 17:26:30 INFO OGG-01815 Virtual Memory Facilities for: BR
 anon alloc: mmap(MAP_ANON) anon free: munmap
 file alloc: mmap(MAP_SHARED) file free: munmap
 target directories:
 /oracle/app/product/12.1.2/oggcore_1/BR/EXT.

Bounded Recovery Parameter:
BRINTERVAL = 4HOURS
BRDIR = /oracle/app/product/12.1.2/oggcore_1

According to the documentation, the default settings for “Bounded Recovery” should be sufficient for most environments.  It is also noted that the “Bounded Recovery” settings shouldn’t be changed without the guidance of Oracle Support.

Now that the idea of a “Bounded Recovery” has been established, let’s try to understand a bit more about how a transaction is recovered in Oracle GoldenGate with the “Bounded Recovery” feature.

At the start of a transaction, Oracle GoldenGate must cache the transaction (even if it contains no data).  The reason for this is the need to support future operations of the transaction.  If the extract hits a committed transaction, then the cached transaction is written to the trail file and cleared from memory.  If the extract hits a rollback, then the cached transaction is discarded from memory.  As long as an extract is processing a transaction, before a commit or rollback, the transaction is considered an open transaction and will be collected.  If the extract is stopped before it encounters a commit or rollback, the extract needs all of the cached transaction information recovered before it can start.  This approach applies to all transactions that were open at the time the extract was stopped.

There are three ways that an extract performs recovery:

  1. No open transactions when the extract is stopped: recovery begins at the current extract read checkpoint (normal recovery)
  2. Open transactions whose start points in the log were very close in time to when the extract was stopped: the extract begins its recovery by re-reading the logs from the beginning of the oldest open transaction (considered a normal recovery)
  3. One or more open transactions that the extract qualified as long-running open transactions: the extract begins recovery using Bounded Recovery

What defines a long-running transaction for Oracle GoldenGate?

Transactions in Oracle GoldenGate are long-running if the transaction has been open longer than one (1) “Bounded Recovery” interval.

A “bounded recovery interval” is the amount of time between “Bounded Recovery checkpoints”, which persist the current state and data of the extract to disk.  “Bounded Recovery checkpoints” are used to identify a recovery position between two “Bounded Recovery intervals”.  The extract will pick up from the last “bounded recovery checkpoint”, instead of processing from the log position where the open long-running transaction first appeared.

What is the maximum Bounded Recovery time?

The maximum bounded recovery time is no more than twice the current “Bounded Recovery checkpoint” interval.  However, the actual recovery time will be dictated by the following:

  1. The time from the last valid Bounded Recovery interval to when the extract was stopped
  2. Utilization of the extract in that period
  3. The percentage of utilization for transactions that were previously written to the trail

Now that the basic details of “Bounded Recovery” have been discussed, how can the settings for “Bounded Recovery” be changed?

“Bounded Recovery” can be changed by updating the extract parameter file with the following parameter:


BR
[, BRDIR directory]
[, BRINTERVAL number {M | H}]
[, BRKEEPSTALEFILES]
[, BROFF]
[, BROFFONFAILURE]
[, BRRESET]

As noted, there are a few options that can be set with the BR parameter.  If I wanted to shorten my “Bounded Recovery” time and change the directory where the cached information is stored, I can do something similar to this:


--Bound Recovery
BR BRDIR ./dirbr BRINTERVAL 20M

In the example above, I’m changing the directory to a new directory called DIRBR (created manually as part of subdirs).  I also changed the interval from 4 hours to 20 minutes.

Note: 20 minutes is the smallest accepted time for the BRINTERVAL parameter.

After adding the BR parameter with options to the extract, the extract needs to be restarted.  Once the extract is up and running, the report file for the extract can be checked to verify that the new parameters have been taken.


2014-08-02 22:20:54  INFO    OGG-01815  Virtual Memory Facilities for: BR
    anon alloc: mmap(MAP_ANON)  anon free: munmap
    file alloc: mmap(MAP_SHARED)  file free: munmap
    target directories:
    /oracle/app/product/12.1.2/oggcore_1/dirbr/BR/EXT.

Bounded Recovery Parameter:
BRINTERVAL = 20M
BRDIR      = ./dirbr

Hopefully, this post provided a better understanding of one of the least understood options within Oracle GoldenGate.

Enjoy!!

About me: http://about.me/dbasolved


Filed under: Golden Gate
Categories: DBA Blogs

Full Disclosure

Michael Feldstein - Sat, 2014-08-02 12:41

As you probably know, we run a consulting business (MindWires Consulting) and sometimes work with the companies and schools that we write about here. Consequently, we periodically remind you and update you on our conflict of interest policies. We do our best to avoid or minimize conflicts of interest where we can, but since our system isn’t perfect, we want you to understand how we handle them when they arise so that you can consider our analysis with the full context in mind. We value your trust and don’t take it for granted.

We talk a lot with each other about how to deal with conflicts of interest because we run into them a lot. On the one hand, we find that working with the vendors and schools that we write about provides us with insight that is helpful to a wide range of clients and readers. There just aren’t too many people who have the benefit of being able to see how all sides of the ed tech relationships work. But along with that perspective comes an inevitable and perpetual tension with objectivity. When we started our business together 18 months ago, we didn’t have a clear idea where these tensions would show up or how big of an issue they might turn out to be. We originally thought that our blogging was going to remain an addiction that was subsidized but somewhat disconnected from our consulting. But it turns out that more than 90% of our business comes from readers of the blog, and a significant portion of it comes out of conversations stimulated by a specific post. Now that we understand that relationship better, we’re getting a better handle on the kinds of conflict of interest that can arise and how best to mitigate them. Our particular approach in any given situation depends on lot on whether the client wants analysis or advice.

Disclosure

In many cases, clients want us to provide deeper, more heavily researched, and more tailored versions of the analysis that we’ve provided publicly on this blog. In this situation, there isn’t a strong direct conflict of interest between providing them with what they are asking for and writing public analysis about various aspects of their business. That said, no matter how hard we try to write objectively about an organization that is, was, or could be a client, human nature being what it is, we can’t guarantee that we will never be even subconsciously influenced in our thinking. That is why we have a policy to always disclose when we are blogging about a client. We have done this in various ways in the past. Going forward, we are standardizing on an approach in which we will insert a disclosure footnote at the end of the first sentence in the post in which the client is named. It will look like this.[1] (We are not fully satisfied that the footnote is prominent enough, so we will be investigating ways to make it a little more prominent.) We will insert these notices in all future posts on the blog, whether or not we are the authors of those posts. In cases where the company in question is not currently a client but was recently and could be again in the near future, we will note that the company “was recently a client of MindWires Consulting”.

Recusal

Sometimes the client wants not only analysis but also strategic advice. Those situations can be trickier. We want to avoid cases in which we blog in praise (or condemnation) of a company for taking an action that they paid us to tell them to take. Our policy is that we don’t blog about any decisions that a company might make based on our advice. There are some theoretical situations in which we might consider making an exception to that rule, but if they ever do come up in reality, then the disclosure principle will apply. We will let you know if, when, and why we would make the exception. Aside from that currently theoretical exception, we recuse ourselves from blogging about the results of our own consulting advice. Furthermore, when potential clients ask us for advice that we think will put us into a long-term conflict of interest regarding one of our core areas of analysis, we turn down that work. Analysis takes precedence over advice.

Getting Better at This

We’re going to continue thinking about this and refining our approach as we learn more. We also have some ideas about business models that could further minimize potential conflicts in the future. We’ll share the details with you if and when we get to the point where we’re ready to move forward on them. In the meantime, we will continue to remind you of our current policy periodically so that you are in a better position to judge our analysis. And as always, we welcome your feedback.

 

  1. Full disclosure: Acme Ed Tech Company is a client of MindWires Consulting, the sponsor of e-Literate.

The post Full Disclosure appeared first on e-Literate.

RMAN Pet Peeves

Michael Dinh - Sat, 2014-08-02 12:38

Do you validate your backup and what command do you use?

Lately, I have been using restore database validate preview summary to kill 2 birds with 1 stone.
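
For reference, the combined check is a single command in an RMAN session connected to the target database; the intent is to both report which backups a restore would need and check that they are usable:

RMAN> RESTORE DATABASE VALIDATE PREVIEW SUMMARY;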

The issue is that RMAN will skip validation of the archived log backupsets when the archived logs themselves still exist.

Does this seem wrong to you?

Please take a look at a test case here

What do you think?


Are You Using BULK COLLECT and FORALL for Bulk Processing Yet?

Eddie Awad - Sat, 2014-08-02 12:01

Steven Feuerstein was dismayed when he found in a PL/SQL procedure a cursor FOR loop that contained an INSERT and an UPDATE statement.

That is a classic anti-pattern, a general pattern of coding that should be avoided. It should be avoided because the inserts and updates are changing the tables on a row-by-row basis, which maximizes the number of context switches (between SQL and PL/SQL) and consequently greatly slows the performance of the code. Fortunately, this classic anti-pattern has a classic, well-defined solution: use BULK COLLECT and FORALL to switch from row-by-row processing to bulk processing.
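
A minimal sketch of such a rewrite, using hypothetical table and column names (one BULK COLLECT fetch, then one FORALL per DML statement, so each statement crosses the SQL/PL/SQL boundary once per batch instead of once per row):

DECLARE
  TYPE t_ids IS TABLE OF orders.order_id%TYPE;
  l_ids t_ids;
BEGIN
  -- Fetch the driving rows in a single round trip
  SELECT order_id
    BULK COLLECT INTO l_ids
    FROM orders
   WHERE status = 'NEW';

  -- Apply each DML statement to the whole collection at once
  FORALL i IN 1 .. l_ids.COUNT
    INSERT INTO order_audit (order_id, processed_date)
    VALUES (l_ids(i), SYSDATE);

  FORALL i IN 1 .. l_ids.COUNT
    UPDATE orders
       SET status = 'PROCESSED'
     WHERE order_id = l_ids(i);

  COMMIT;
END;
/

For very large result sets, an explicit cursor with FETCH ... BULK COLLECT ... LIMIT keeps memory use bounded while preserving the bulk behavior.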

© Eddie Awad's Blog, 2014.


Linking of Bugs, Notes and SRs now available in SRs

Joshua Solomin - Fri, 2014-08-01 18:01

We have extended the linking capability within the body of an SR. Because of security concerns and issues with embedded HTML, we don't let SRs contain HTML directly.

But we now allow a variety of formats to link to Bugs, Documents and other SRs from within the body of an SR.

Screen shot of links that work in SR updates

So now you can link directly to these items when a support engineer gives you a bug or doc to follow, or you can update the SR yourself using one of these formats. Hopefully they are not too tough to follow.

Knowledge Documents Formats
note 1351022.2
doc id 1351022.2
document id 1351022.2

Bug Formats
bug 1351022.2

Service Request Formats
SR 3-8777412995
SR Number 3-8777412995
Service Request 3-8777412995

Hope this helps!


Best of OTN - Week of July 27th

OTN TechBlog - Fri, 2014-08-01 13:13
Systems Community - Rick Ramsey, OTN Systems Community Manager -

Tech Article -  Playing with ZFS Snapshots, by ACE Alexandre Borges -
Alexandre creates a ZFS pool, loads it with files, takes a snapshot, verifies that the snapshot worked, removes files from the pool, and finally reverts back to the snapshot. Then he shows you how to work with snapshot streams. A great way to do backups.

From OTN Garage FB - Recently a DBA at an IOUG event complained to Tales from the Data Center that they were unable to install from the Solaris 11.2 ISO. They had seen OpenStack a few weeks ago, and wanted to know how to install Solaris 11.2 in a VM. So guys… here is a step by step for you - Tales from the Datacenter.

Java Community - Tori Wieldt, OTN Java Community Manager

Tech Article: Learning Java Programming with BlueJ IDE https://blogs.oracle.com/java/entry/tech_article_learning_java_programming

The Java Source Blog - The Java Hub at JavaOne! Come see the Oracle Technology Network team and check out cool demos, interviews, etc.

Friday Funny : "An int and an int sometimes love each other very much and decide to make a long." @asz #jvmls Thanks @stuartmarks !

Database Community - Laura Ramsey, OTN Database Community Manager

OTN DBA/DEV Watercooler Blog - Oracle Database 12c Release 12.1.0.2 is Here! ..with the long-awaited In-Memory option, plus 21 new features. Oracle Database 12c Release 12.1.0.2 supports Linux and Oracle Solaris (SPARC and x86 64 bit).  Read More!

Architect Community - Bob Rhubart, OTN Architect Community Manager
Top 3 Playlists on the OTN ArchBeat YouTube Channel

July Security Alert

Paul Wright - Thu, 2014-07-31 15:25
Hi Oracle Security Folks, The July Oracle Security Alert is out. My part is smaller than last quarter's, as it is just an In-Depth Credit, but Mr David Litchfield makes a triumphal return with some excellent new research. http://www.oracle.com/technetwork/topics/security/cpujul2014-1972956.html There is a CVSS 9 and a remote unauthenticated issue in this patch, so it is worth installing this one. [...]

Correction: PeopleSoft Interaction Hub Support Plans

PeopleSoft Technology Blog - Thu, 2014-07-31 12:25
In a recent post, we said that Extended Support for the PeopleSoft Interaction Hub was ending in October 2014.  To be clearer, Extended Support for release 9.0 is ending on that date.  Extended Support for release 9.1 and its revisions will be available until at least October 2018.  Sustaining support will be available for those releases beyond the Extended Support dates.  Look for the release of Revision 3 of the Interaction Hub soon.

MySQL 5.6.20-4 and Oracle Linux DTrace

Wim Coekaerts - Thu, 2014-07-31 09:57
The MySQL team just released MySQL 5.6.20. One of the cool new things for Oracle Linux users is the addition of MySQL DTrace probes. If you use Oracle Linux 6 or 7 with UEKr3 (3.8.x) and the latest DTrace utils/tools, you can make use of this. MySQL 5.6 is available for install through ULN or from public-yum. You can just install it using yum.

# yum install mysql-community-server

Then install dtrace utils from ULN.

# yum install dtrace-utils

As root, enable DTrace and allow normal users to record trace information:

# modprobe fasttrap
# chmod 666 /dev/dtrace/helper

Start MySQL server.

# /etc/init.d/mysqld start

Now you can try out various dtrace scripts. You can find the reference manual for MySQL DTrace support here.

Example 1

Save the script below as query.d.

#!/usr/sbin/dtrace -qws
#pragma D option strsize=1024


mysql*:::query-start /* using the mysql provider */
{

  self->query = copyinstr(arg0); /* Get the query */
  self->connid = arg1; /*  Get the connection ID */
  self->db = copyinstr(arg2); /* Get the DB name */
  self->who   = strjoin(copyinstr(arg3),strjoin("@",
     copyinstr(arg4))); /* Get the username */

  printf("%Y\t %20s\t  Connection ID: %d \t Database: %s \t Query: %s\n", 
     walltimestamp, self->who ,self->connid, self->db, self->query);

}

Run it, in another terminal, connect to MySQL server and run a few queries.

# dtrace -s query.d 
dtrace: script 'query.d' matched 22 probes
CPU     ID                    FUNCTION:NAME
  0   4133 _Z16dispatch_command19enum_server_commandP3THDPcj:query-start 2014 
    Jul 29 12:32:21 root@localhost	  Connection ID: 5 	 Database:  	 
    Query: select @@version_comment limit 1

  0   4133 _Z16dispatch_command19enum_server_commandP3THDPcj:query-start 2014 
    Jul 29 12:32:28 root@localhost	  Connection ID: 5 	 Database:  	 
    Query: SELECT DATABASE()

  0   4133 _Z16dispatch_command19enum_server_commandP3THDPcj:query-start 2014 
    Jul 29 12:32:28 root@localhost	  Connection ID: 5 	 Database: database 	 
    Query: show databases

  0   4133 _Z16dispatch_command19enum_server_commandP3THDPcj:query-start 2014 
    Jul 29 12:32:28 root@localhost	  Connection ID: 5 	 Database: database 	 
    Query: show tables

  0   4133 _Z16dispatch_command19enum_server_commandP3THDPcj:query-start 2014 
    Jul 29 12:32:31 root@localhost	  Connection ID: 5 	 Database: database 	 
    Query: select * from foo

Example 2

Save the script below as statement.d.

#!/usr/sbin/dtrace -s

#pragma D option quiet

dtrace:::BEGIN
{
   printf("%-60s %-8s %-8s %-8s\n", "Query", "RowsU", "RowsM", "Dur (ms)");
}

/* Record the query text and start time when a statement begins */
mysql*:::update-start, mysql*:::insert-start,
mysql*:::delete-start, mysql*:::multi-delete-start,
mysql*:::select-start, mysql*:::insert-select-start,
mysql*:::multi-update-start
{
    self->query = copyinstr(arg0);
    self->querystart = timestamp;
}

mysql*:::insert-done, mysql*:::select-done,
mysql*:::delete-done, mysql*:::multi-delete-done, mysql*:::insert-select-done
/ self->querystart /
{
    this->elapsed = ((timestamp - self->querystart)/1000000);
    printf("%-60s %-8d %-8d %d\n",
           self->query,
           0,
           arg1,
           this->elapsed);
    self->querystart = 0;
}

mysql*:::update-done, mysql*:::multi-update-done
/ self->querystart /
{
    this->elapsed = ((timestamp - self->querystart)/1000000);
    printf("%-60s %-8d %-8d %d\n",
           self->query,
           arg1,
           arg2,
           this->elapsed);
    self->querystart = 0;
}

Run it and do a few queries.

# dtrace -s statement.d 
Query                                                        RowsU    RowsM    Dur (ms)
select @@version_comment limit 1                             0        1        0
SELECT DATABASE()                                            0        1        0
show databases                                               0        6        0
show tables                                                  0        2        0
select * from foo                                            0        1        0