Feed aggregator

$APPLTMP directory in R12.2 Multi Node

Senthil Rajendran - Fri, 2015-01-02 07:15

If the $APPLTMP directory in R12.2 is placed in a shared location in a multi-node environment, there are known complications during multi-node cutover. So it is best to leave it under the INST_TOP of each node.

Want to learn Exadata?

Amardeep Sidhu - Fri, 2015-01-02 03:19

Many people have asked me how they can learn Exadata. It sounds even more difficult because a lot of people don't have access to Exadata environments. So I thought about writing a small post on the subject.

It actually is not as difficult as it sounds. There are a lot of really good resources available from which you can learn about Exadata architecture and the things that work differently from any non-Exadata platform. You might be able to do a lot more RnD if you have access to an Exadata environment, but don't worry if you haven't: even without one there is a lot you can explore. So here we go:

  1. I think the best reference to start with is the Expert Oracle Exadata book by Tanel Poder, Kerry Osborne and Randy Johnson. A traditional book covers the subject topic by topic from the ground up, which makes for a fun read, and this book is no different. It will teach you a lot. They are already working on the second edition. (See here).
  2. Next you can jump to the whitepapers on the Oracle website's Exadata page, blog posts (keep an eye on OraNA.info) and whitepapers written by other folks. There is a lot of useful material out there. You just need to Google a bit.
  3. Exadata documentation (not public yet) should be your next stop if you have access to it: Patch 10386736 on MOS.
  4. Try to attend an Oracle Users Group conference if there is one happening in your area. Most likely someone will be presenting on Exadata, so you can use that opportunity to learn about it and get a chance to ask them questions.
  5. Lastly, if you have an Exadata machine available, do all the RnD you can.

Happy New Year and Happy Learning !

Categories: BI & Warehousing

Four Secrets of Success

FeuerThoughts - Thu, 2015-01-01 09:56
More than a few people think that I am pretty good at what I do, that I am successful. I respect their judgement, so I thought about what has contributed to my success and came up with four things that form a foundation for (my) success. Since it is possible that others will find them helpful, I have decided to share my Four Secrets of Success (book in the works, film rights sold to Branjolina Films).

Follow these four recommendations, and you will be more successful in anything and everything you seek to accomplish.

1. Drink lots of water.

If you are dehydrated, nothing about you is operating optimally. By the time you realize you are thirsty, you are depleted. You are tired and listless. You think about getting another cup of coffee, but your stomach complains at the thought.

No problem. Just get yourself a big glass of water, room temperature, no ice, and drink it down. You will feel the very substance of life trickle into your body and bring you back to life. Then drink another glass.

Couldn’t hurt to try, right?

2. Work your abs.

You know what they say about a strong core? It's all true. Strengthen your abdominal muscles and you will be amazed at the change in your life. I can vouch for it from my own experience.

I’m not talking about buying an Ab-Roller or going nuts with crazy crunches. Just do something every day, and see if you can do a little more every day. 

Couldn’t hurt to try, right?

3. Go outside. 

Preferably amongst trees, in a forest. 

We did not evolve to sit in front of a screen, typing. Our bodies do not like what we force them to do. Go outside and you will make your body happy. And seeing how your brain is inside your body, it will make you happy, too. Then when you get back to the screen, you will be energized, creative and ready to solve problems.

Couldn’t hurt to try, right?

How do I know these three things will make a difference? Because whenever I stop doing any of them for very long, I start to feel bad, ineffective, unfocused. 

Oh, wait a minute. I said “Four Secrets of Success”. So there’s one more. This one’s different from the others. The above three are things I suggest you do. Number Four is, in contrast, something I suggest you stop doing:

4. Turn off your TV.

By which I mean: stop looking at screens for sources of information about the world. Rely on direct experience as much as possible.

Not only is television bad for humans physically, but you essentially turn off your brain when you watch it. If, instead, you turn off the TV, you will find that you have more time (objectively and subjectively) to think about things (and go outside, and work your abs, and...).

Couldn’t hurt to try, right?

Well, actually, you might find it kind of painful to turn off your TV. It depends on how comfortable you are living inside your own mind. 

And if you are not comfortable, well, how does that make you feel?

Wishing you the best in 2015,
Steven Feuerstein
Categories: Development

Fun: 2015!

Jean-Philippe Pinte - Wed, 2014-12-31 17:01
Wishing you an excellent 2015!

Dynamically add components to an Oracle MAF AMX page & show and hide components

Shay Shmeltzer - Wed, 2014-12-31 14:18

A question I saw a couple of times about Oracle MAF AMX pages is "how can I add a component to the page at runtime?".

In this blog entry I'm going to show you a little trick that will allow you to dynamically "add" components to an AMX page at runtime, even though right now there is no API that allows you to add a component to an AMX page by coding.

Let's suppose you want to add a bunch of buttons to a page at runtime. All you'll need is an array that contains an entry for every button you want to add to the page.

We are going to use the amx:iterator component, which is bound to the above array and simply goes over the records, rendering a component for each one.

Going one step beyond that, I'm going to show how to control which components from that array actually show up, based on another value in the array.

So this is another thing you get to see in this example: how to dynamically show or hide a component in an AMX page with conditional EL. Usually you'd use this EL in the rendered property of a component, but in the iterator situation we need another approach, using the inlineStyle property that you change dynamically.

You can further refine this approach to control which type of component you render - see for example this demo I did for regular ADF Faces apps and apply a similar approach. 

By the way - this demo is done with Eclipse using OEPE - but if you are using JDeveloper it should be just as easy :-) 


Here is the relevant code from the AMX page:

<amx:iterator value="#{bindings.emps1.collectionModel}" var="row" id="i2">
  <amx:commandButton id="c1" text="#{row.name}" inlineStyle="#{row.salary > 4000 ? 'display: none;' : 'display: inline;'}">
    <amx:setPropertyListener id="s1" from="#{row.name}" to="#{viewScope.title}"/>
  </amx:commandButton>
</amx:iterator>

Categories: Development

UKOUG Annual Conference (Tech 2014 Edition)

Andrew Clarke - Wed, 2014-12-31 12:37
The conference

This year the UKOUG's tour of Britain's post-industrial heritage brought the conference to Liverpool. The Arena & Convention Centre is based in Liverpool docklands, formerly the source of the city's wealth and now a touristic playground of museums, souvenir shops and bars. Still, at least the Pumphouse functions as a decent pub, which is one more decent pub than London Docklands can boast. The weather was not so much cool in the 'Pool as flipping freezing, with the wind coming off the Mersey like a chainsaw that had been kept in a meat locker for a month. Plus rain. And hail. Which is great: nothing we Brits like more than moaning about the weather.

After last year's experiment with discrete conferences, Apps 2014 was co-located with Tech 2014; each was still a separate conference with its own exclusive agenda (and tickets) but with shared interests (Exhibition Hall, social events). Essentially DDD's Bounded Context pattern. I'll be interested to know how many delegates purchased the Rover ticket which allowed them to cross the borders. The conferences were colour-coded, with the Apps team in Blue and the Tech team in Red; I thought this was an, er, interesting decision in a footballing city like Liverpool. Fortunately the enforced separation of each team's supporters kept violent confrontation to a minimum.

The sessions

This is not all of the sessions I attended, just the ones I want to comment on.

There's no place like ORACLE_HOME

I started my conference by chairing Niall Litchfield's session on Monday morning. Niall experienced every presenter's nightmare: switch on the laptop, nada, nothing, completely dead. Fortunately it turned out to be the fuse in the charger's plug, and a marvellous tech support chap was able to find a spare kettle cable. Niall coped well with the stress and delivered a wide-ranging and interesting introduction to some of the database features available to developers. It's always nice to hear a DBA acknowledge how difficult the task of developers is these days. I'd like to hear more of them acknowledge it, and more importantly be helpful rather than becoming part of the developer's burden :)

The least an Oracle DBA needs to know about Linux

Turns out "the least" is still an awful lot. Martin Nash started with installing a distro and creating a file system, and moved on from there. As a developer I find I'm rarely allowed OS access to the database server these days; I suspect many enterprise DBAs also spend most of their time in OEM rather than at a shell prompt. But Linux falls into that category of things which, when you need to know them, you need to know them in the worst possible way. So Martin has given me a long list of commands with which to familiarize myself.

Why solid SQL still delivers the best performance

Robyn Sands began her session with the shocking statement that the best database performance requires good application design. Hardware improvements won't save us from the consequences of our shonky code. From her experience in Oracle's Real World Performance team, the top three causes of database slowness are:
  • People not using the database the way it was designed to be used
  • Sub-optimal architecture or code
  • Sub-optimal algorithm (my new favourite synonym for "bug")

The bulk of her session was devoted to some demos, racing different approaches to DML:
  • Row-by-row processing
  • Array (bulk) processing
  • Manual parallelism i.e. concurrency
  • Set-based processing i.e. pure SQL
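To make the contrast concrete, here is a hedged sketch of the two extremes, using hypothetical, identically-shaped src and dst tables (this is not Robyn's actual demo code):

    -- Row-by-row: fetch and insert one row at a time.
    BEGIN
      FOR r IN (SELECT id, val FROM src) LOOP
        INSERT INTO dst (id, val) VALUES (r.id, r.val);
      END LOOP;
      COMMIT;
    END;
    /

    -- Set-based: a single pure-SQL statement doing the same work.
    INSERT INTO dst (id, val) SELECT id, val FROM src;
    COMMIT;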
There were a series of races, starting with a simple copying of data from one table to another and culminating in a complex transformation exercise. If you have attended any Oracle performance session in the last twenty years you'll probably know the outcome already, but it was interesting to see how much faster pure SQL was compared to the other approaches. In fact the gap between the set-based approach and the row-based approach widened with each increase in the complexity of the task. What probably surprised many people (including me) was how badly manual parallelism fared: concurrent threads have a high impact on system resource usage, because of things like index contention.

Enterprise Data Warehouse Architecture for Big Data

Dai Clegg was at Oracle for a long time and has since worked for a couple of startups which used some of the new-fangled Big Data/NoSQL products. This mix of experience has given him a breadth of insight which is not common in the Big Data discussion.

His first message is one of simple economics: these new technologies solve the problem of linear scale-out at a price-point below that of Oracle. Massively parallel programs using cheap or free open source software on commodity hardware. Commodity hardware is more failure prone than enterprise tin (and having lots of the blighters actually reduces the MTTF) but these distributed frameworks are designed to handle node failures; besides, commodity hardware has gotten a lot more reliable over the years. So, it's not that we couldn't implement most Big Data applications using relational databases, it's just cheaper not to.

Dai's other main point addressed the panoply of products in the Big Data ecosystem. Even in just the official Hadoop stack there are lots of products with similar or overlapping capabilities: do we need Kafka or Flume or both? There is no one Big Data technology which is cheaper and better for all use cases. Therefore it is crucial to understand the requirements of the application before starting on the architecture. Different applications will demand different permutations from the available options. Properly defined use cases (which don't need to be heavyweight - Dai hymned the praises of the Agile-style "user story") will indicate which kinds of products are required. Organizations are going to have to cope with heterogeneous environments. Let's hope they save enough on the licensing fees to pay for the application wranglers.

How to write better PL/SQL

After last year's fiasco with shonky screen rendering and failed demos I went extremely low tech: I could have run my presentation from a PDF on a thumb-drive. Fortunately that wasn't necessary. My session was part of the Beginners' Track: I'm not sure how many people in the audience were actual beginners; I hope the grizzled veterans got something out of it.

One member of the audience turned out to be a university lecturer; he was distressed by my advice to use pure SQL rather than PL/SQL whenever possible. Apparently his students keep doing this and he has to tell them to use PL/SQL features instead. I'm quite heartened to hear that college students are familiar with the importance of set-based programming. I'm even chuffed to have my prejudice confirmed that it is university lecturers who teach people to write what is, in the real world, bad code. I bet he tells them to use triggers as well :)

Oracle Database 12c New Indexing Features

I really enjoy Richard Foote's presenting style: it is breezily Aussie in tone, chatty and with the occasional mild cuss word. If anybody can make indexes entertaining it is Richard (and he did).

His key point is that indexes are not going away. Advances in caching and fast storage will not remove the need for indexed reads, and the proof is Oracle's commitment to adding further capabilities. In fact, there are so many new indexing features that Richard's presentation was (for me) largely a list of things I need to go away and read about. Some of these features are quite arcane: an invisible index? on an invisible column? Hmmmm. I'm not sure I understand when I might want to implement partial indexing on a partitioned table. What I'm certain about is that most DBAs these days are responsible for so many databases that they don't have the time to acquire the requisite understanding of individual applications and their data; so it seems to me unlikely that they will be able to decide which partitions need indexing. This is an optimization for the consultants.

Make your data models sing

It was one of the questions in the Q&A section of Susan Duncan's talk which struck me. The questioner talked about their "legacy" data warehouse. How old did that make me feel? I can remember when Data Warehouses were new and shiny and going to solve every enterprise's data problems.

The question itself dealt with foreign keys: as is common practice, the data warehouse had no defined foreign keys. Over the years it had sprawled across several hundred tables, without the documentation keeping up. Is it possible, the petitioner asked, to reverse engineer the data model with foreign keys in the database? Of course the short answer is No. While it might be possible to infer relationships from common column names, there isn't any tool we were aware of which could do this. Another reminder that disabled foreign keys are better than no keys at all.

Getting started with JSON in the Database

Marco Gralike has a new title: he is no longer Mr XMLDB, he is now Mr Unstructured Data in the DB. Or at least his bailiwick has been extended to cover JSON. JSON (JavaScript Object Notation) is a lightweight data transfer mechanism: basically it's XML without the tags. All the cool kids like JSON because it's the basis of RESTful web interfaces. Now we can store JSON in the database (which probably means all the cool kids will wander off to find something else now that fusty old Oracle can do it).
The biggest surprise for me is that Oracle haven't introduced a JSON data type (apparently there were so many issues around the XMLType nobody had the appetite for another round). So that means we store JSON in VARCHAR2, CLOB, BLOB or RAW. But like XML there are operators which allow us to include JSON documents in our SQL. The JSON dot notation works pretty much like XPath, and we can use it to build function-based indexes on the stored documents. However, we can't (yet) update just part of a JSON doc: it is wholesale replacement only.
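To give a flavour of what this looks like in practice, here is a hedged sketch using a hypothetical orders table (the JSON_VALUE operator and the IS JSON condition are real 12c features; the table and field names are mine):

    -- Store JSON in a VARCHAR2 column guarded by an IS JSON check constraint.
    CREATE TABLE orders (
      id  NUMBER PRIMARY KEY,
      doc VARCHAR2(4000) CONSTRAINT orders_doc_json CHECK (doc IS JSON)
    );

    -- Build a function-based index on a field inside the documents.
    CREATE INDEX orders_customer_ix ON orders (JSON_VALUE(doc, '$.customer'));

    -- The dot notation works much like XPath (a table alias is required).
    SELECT o.doc FROM orders o WHERE o.doc.customer = 'Smith';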

Error handling is cute: by default, invalid JSON syntax in a query produces a null in the result set rather than an exception. Apparently that's how the cool kids like it. For those of us who prefer our exceptions hurled rather than swallowed, there is an option to override this behaviour.

SQL is the best development language for Big Data

This was Tom Kyte giving the obverse presentation to Dai Clegg's: Oracle can do all this Big Data stuff, and has been doing it for some time. He started with two historical observations:
  • XML data stores were going to kill off relational databases. Which didn't happen.
  • Before relational databases and SQL there was NoSQL, literally no SQL. Instead there were things like PL/1, which was a key-value data store.
Tom had a list of features in Oracle which support Big Data applications. They were:
  • Analytic functions which have enabled ordered array semantics in SQL since the last century.
  • SQL Developer's support for Oracle Data Mining.
  • The MODEL clause (for those brave enough to use it).
  • Advanced pattern matching with the MATCH RECOGNIZE clause in 12c
  • External tables with their support for extracting data from flat files, including from HDFS (with the right connectors)
  • Support for JSON documents (see above).
He could also have discussed document storage with XMLType and Oracle Text, Enterprise R, In-Memory columnar storage, and so on. We can even do Map/Reduce in PL/SQL if we feel so inclined. All of these are valid assertions; the problem is (pace Dai Clegg) simply one of licensing. Too many of the Big Data features are chargeable extras on top of Enterprise Edition licenses. Big Data technology is suited to a massively parallel world where all processors are multi-core, and Oracle's licensing policy isn't.

Five hints for efficient SQL

This was an almost philosophical talk from Jonathan Lewis, in which he explained how he uses certain hints to fix poorly performing queries. The optimizer takes a left-deep approach, which can lead to a bad choice of transformation, bad estimates (but check your stats as well!) and bad join orders. His strategic solution is to shape the query with hints so that Oracle's execution plan meets our understanding of the data.

So his top five hints are:
  • (NO_)MERGE
  • (NO_)PUSH_PRED
  • (NO_)UNNEST
  • (NO_)PUSH_SUBQ
  • DRIVING_SITE

Jonathan calls these strategic hints, because they advise the optimizer how to join tables or how to transform a sub-query. They don't hard-code paths in the way that, say, the INDEX hint does.
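As a quick illustration of the first of these, here is a hedged sketch with hypothetical tables (not one of Jonathan's own examples): NO_MERGE stops the optimizer folding an inline view into the outer query, so the view is evaluated as written.

    SELECT /*+ NO_MERGE(v) */ d.dept_name, v.avg_sal
      FROM departments d,
           (SELECT dept_id, AVG(salary) avg_sal
              FROM employees
             GROUP BY dept_id) v
     WHERE v.dept_id = d.dept_id;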

Halfway through the presentation Jonathan's laptop slid off the lectern and slammed onto the stage floor. End of presentation? Luckily not. Apparently his laptop is made of the same stuff they use for black box flight recorders, because after a few anxious minutes it rebooted successfully and he was able to continue with his talk. I was struck by how unflustered he was by the situation (even though he didn't have a backup due to last minute tweaking of the slides). A lovely demonstration of grace under pressure.

I have an idea for an app, what's next?

Bradley Brown - Tue, 2014-12-30 18:44
This is a question that I get asked quite often.  My first thoughts are:

1. An app isn't necessarily a business.
2. Can you generate enough revenue to pay for the development?
3. There is usually more to an app than just the app.
4. Which platforms?

I thought I'd take this opportunity to address each of these points in more detail.  Before I do this, I think it's important to say that I don't consider myself to be the world's leading authority on apps, so I should explain why I get asked this question.

In 2008, when I saw my first Android phone, I was very intrigued by the ability to write an app for a phone.  I had this thought - what if I could develop an app that I would sell and it paid for my lunch every day?  How cool would that be?  I was a Java developer (not a great one, but I could write Java code) and Android devices used a Java development stack as their base.  So the learning curve wasn't huge for me.  More importantly, I only had to invest time, not money, to write an app.

I was very much into Web Services at the time and Yahoo had (still has actually) some really cool Web Services that are available for public use.  These are based on what they call YQL (Yahoo Query Language) and since I'm a SQL (Structured Query Language) guy at heart, YQL and Web Services were right up my alley.

One of the uses of YQL included providing a location and getting all of the local events in a specified radius from that location.  So I thought I should create an app that would allow anyone to find the local events they were interested in.  I created my first "Local Events" app and put it in the market.  Not many people downloaded the app (it wasn't paying for lunch), so I started thinking about how people searched for apps.  I figured they would search for the events they were interested in - singles, beer, crafting, technical, etc.  So I created "Local Beer Events" and "Local Singles Events" and many other apps.

Another YQL search that Yahoo provides is for local businesses - again, from a specific location.  So my "second" app was centered around local businesses.  Once again, I thought about how people searched for apps, and I created a local Starbucks app, a local Panera app, a local Noodles app, etc.  The downside of this approach was that Starbucks and many others didn't like me using their names in app names due to trademark infringement.

Back to my story of paying for lunch - I quickly paid for lunch each day, and my goal became to generate $100 a day, then $1000 a day.  I did generate over $1000 on many days.  I experimented with pricing and learned a lot.

In the end, I ended up taking all of those apps off the market...or actually Google took them off the market for me.  Likely due to my app names or because I had spammed the market with over 500 apps or who knows why.

I wrote a book for Apress on the business of writing Android apps and I spoke at numerous conferences on the topic.

It was at that point that I decided to rethink my app strategy.  What could I build that would actually be a business?  Could I charge for the app or did I need to offer an entire service?

So back to my questions above:
An app isn't necessarily a business

If you're a developer and you can develop 100% of the app with no costs, this may not apply to you.  Most people have to pay for developers and servers to deploy an app.  A business is typically defined as an entity that makes a profit.  So income minus expenses is profit.  What will your income be from your app?  Do people actually pay for apps today?  I believe they do, but not often...i.e. there must be a LOT of value to pay for an app...especially to have enough people paying for your app.  Let's say you price your app at $2.  How many copies do you have to sell just to pay for the development?  What about the ongoing support costs?  If you paid $20k to develop the app, you would have to sell 10,000 copies just to "break even."  But...you'll have to support the app, keep it running, upgrade it, etc.  Most apps (like books) never sell 10,000 copies.  So...just creating an app isn't necessarily a business.
Can you generate enough revenue to pay for the development?

Like I said above, generating revenue from an app is tough.  Paying for the development of the app is tough.  Maybe you can generate revenue other ways?  Think about this a LOT before you decide to proceed with developing your app.
There is usually more to an app than just the app

Most apps aren't standalone apps.  Sure, my "Local Starbucks" app was "standalone" in some regards, but it wasn't in other regards.  It relied on the Yahoo Web Service to deliver current Starbucks locations.  I had someone approach me about a "Need a Loo" (find a local bathroom in London) app.  They had the data for all of the bathrooms...but this changed frequently.  Could I have built the app and had the data be included in the app?  Yes, but...when the Loo locations changed, I would have had to update the app, which isn't an ideal solution.  So I had to build a database and a Web application that allowed them to maintain Loo locations.  Then I had to build web services that looked up the current Loo locations from the database.  In other words, most apps involve databases, web services, and back end systems to maintain the data.  All of these imply additional costs...which imply additional revenue that must be generated to sustain your business.

I wrote 5 books for Oracle Press on the topic of web applications, web services and the like.  I know how to build the backend of apps; this was the easy part for me!
Which Platforms?

When you think about an app, you might be thinking of an iPhone app if you have an iPhone or an iPad.  You might be thinking of an Android app if you have an Android phone or tablet.  There are SO many development platforms today: iOS, Android, Mac, Windows, Apple TV, Kindle Fire TV, and literally about 100 more.  There are cross-platform development tools, but they tend to be what I call "least common denominator" solutions.  In other words, they will alienate someone.  If it looks like an iOS (iPhone/iPad) app, it's going to alienate the Android users...or vice versa.  For this reason, native apps are in vogue now.

Every platform is about $30k or more in our world.  Again, all of these are expenses...that must be recouped.
Why InteliVideo?
I thought long and hard about my next generation of apps that I wanted to create.  That's when I determined that I needed to create a business...that had apps, not an app that was a business.  The video business was a natural progression for me.  I wanted to have the ability to sell my educational material (Oracle training) and deliver it in an app.  We have a LOT more than an app.  We have an entire business - that has apps.  So when you think about developing an app...think about the business, not the app.

Can A Background Process Impact A Foreground Process And Its Database Time?

Have you ever heard someone say, "Background processes do not impact foreground processes because they run in the background and in parallel with foreground processes." I've heard this hundreds of times!

While doing some performance research I came across a great example of how an Oracle Database background process can directly and significantly impact a foreground process.

The above quote represents a masterfully constructed lie; it contains both a lie and a truth. The mix of a truth and a lie makes understanding the reality of the situation difficult. In this post, I'll explain the truth, delve into the lie and relate it all to foreground process database time.

By the way, I am in no way saying there is something wrong with or incorrect about DB Time. I want to ensure this is clear from the very beginning of this post.

Just so there is no confusion: an Oracle foreground process is sometimes also called a server process or a shadow process. These terms are used interchangeably in this post.

The Truth
Clearly background and foreground processes operate in parallel. I don't think any DBA would deny this. As I frequently say, "serialization is death and parallelism is life!" A simple "ps" command will visually show both Oracle background and foreground processes at work. But this in no way implies they do not impact each other's activity and performance.

In fact, we hope they do impact each other! Can you imagine what performance would be with the background processes NOT running in parallel?! What a performance nightmare that would be. But this is where the "no impact" lie lives.

The Lie
Most senior DBAs can point to a specific situation where Oracle cache buffer chain latch contention affected multiple foreground sessions. In this situation, foreground sessions were frantically trying to acquire a popular cache buffer chain latch. But that is a foreground session versus foreground session situation. While this example is important, this post is about when a background process impacts a foreground process.

Have you ever committed a transaction and had it hang while the foreground process waits on the "log file switch (checkpoint incomplete)" or, even worse, the "log file switch (archiving needed)" event? All the foreground process knows is that its statement can't finish because a required log switch has not occurred because a checkpoint is incomplete. What the server process does not know is that the checkpoint (CKPT), the database writer (DBWR) and the log writer (LGWR) background processes are involved. There is a good chance the database writer is frantically writing dirty buffers to the database (dbf) files so the LGWR can safely overwrite the associated redo in the next online redo log.

For example, if a server process issues a commit during the checkpoint, it will wait until the checkpoint is complete and the log writer has switched into and can write to the next redo log. So, while the log writer background process is probably waiting on "log file parallel write" and the database writer is burning CPU and waiting on "db file parallel write", the foreground processes are effectively hung.

This is a classic example of how a background process can impact the performance of a foreground process.
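If you want to catch this happening live, here is a minimal sketch (assuming SELECT privilege on v$session; this is not one of Craig's tools) that lists the foreground sessions stuck behind the log switch:

    -- Which sessions are currently waiting on the log switch events?
    SELECT sid, serial#, program, event, seconds_in_wait
      FROM v$session
     WHERE event IN ('log file switch (checkpoint incomplete)',
                     'log file switch (archiving needed)');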

A Demonstration Of The Lie
Here's a quick demonstration of the above situation. On an existing database in my lab, I created two 4MB redo logs and dropped all the other redo logs. I started a DML intensive workload. According to the alert.log file, the redo logs were switching every couple of seconds! Take a look at this:
$ tail -f /home/oracle/base/diag/rdbms/prod30/prod30/trace/alert*log
Thread 1 cannot allocate new log, sequence 2365
Checkpoint not complete
Current log# 4 seq# 2364 mem# 0: /home/oradata/prod30/redoA1.log
Mon Dec 29 11:02:09 2014
Thread 1 advanced to log sequence 2365 (LGWR switch)
Current log# 5 seq# 2365 mem# 0: /home/oradata/prod30/redoA2.log
Thread 1 cannot allocate new log, sequence 2366
Checkpoint not complete
Current log# 5 seq# 2365 mem# 0: /home/oradata/prod30/redoA2.log
Thread 1 advanced to log sequence 2366 (LGWR switch)
Current log# 4 seq# 2366 mem# 0: /home/oradata/prod30/redoA1.log
Thread 1 cannot allocate new log, sequence 2367
Checkpoint not complete
Current log# 4 seq# 2366 mem# 0: /home/oradata/prod30/redoA1.log
Thread 1 advanced to log sequence 2367 (LGWR switch)
Current log# 5 seq# 2367 mem# 0: /home/oradata/prod30/redoA2.log
Thread 1 cannot allocate new log, sequence 2368
Checkpoint not complete
Current log# 5 seq# 2367 mem# 0: /home/oradata/prod30/redoA2.log
Mon Dec 29 11:02:20 2014

Obviously not what you want to see on a production Oracle system! (But my guess is many of you have.)
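For reference, a lab setup like this could be built roughly as follows. This is a hedged sketch: the group numbers and file paths match my system above, the assumption that groups 1-3 were the original logs is mine, and a group must be INACTIVE before it can be dropped. Never do this on a production system!

    ALTER DATABASE ADD LOGFILE GROUP 4 ('/home/oradata/prod30/redoA1.log') SIZE 4M;
    ALTER DATABASE ADD LOGFILE GROUP 5 ('/home/oradata/prod30/redoA2.log') SIZE 4M;
    ALTER SYSTEM SWITCH LOGFILE;   -- repeat until the old groups go INACTIVE
    ALTER DATABASE DROP LOGFILE GROUP 1;
    ALTER DATABASE DROP LOGFILE GROUP 2;
    ALTER DATABASE DROP LOGFILE GROUP 3;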

Using my OSM realtime session sampler tool (rss.sql - related blog posting HERE) I sampled the log writer every half a second. (There is only one log writer background process because this is an Oracle 11g database, not an Oracle Database 12c system.) If the log writer session showed up in v$session as an active session, it would be picked up by rss.sql.  Both "ON CPU" and "WAIT" states are collected. Here is a sample of the output.


It's very obvious the log writer is doing some writing. But we can't tell from the above output if the process is impacting other sessions. It would also have been very interesting to sample the database writer, but I didn't do that. To determine if the background processes were impacting other sessions, I needed to find a foreground session that was doing some commits. I noticed that session 133, a foreground process, was busy doing some DML and committing as it processed its work. Just as with the log writer background process, I sampled this foreground process once every 0.5 seconds. Here's a sample of the output.


Wow. The foreground process is waiting a lot for the current checkpoint to be completed! So... this means the foreground process is being effectively halted until the background processes involved with the checkpoint have finished their work.

This is a great example of how Oracle background processes can impact the performance of an Oracle foreground process.

But let's be clear. Without the background processes, performance would be even worse. Why? Because all that work done in parallel and in the background would have to be done by each foreground process, AND all that work would have to be closely controlled and coordinated. And that would be a performance nightmare!

DB Time Impact On The Foreground Process

Just for the fun of it, I wrote a script to investigate DB Time, CPU consumption, non-idle wait time and the wait time for the "log file switch (checkpoint incomplete)" wait event for the foreground process mentioned above (session 133). The script simply gathers some session details, sleeps for 120 seconds, gathers the session details again, calculates the differences and displays the results. You can download the script HERE. Below is the output for the foreground process, session 133.
SQL> @ckpttest.sql 133

Table dropped.

Table created.

PL/SQL procedure successfully completed.

CPU_S_DELTA NIW_S_DELTA DB_TIME_S_DELTA CHECK_IMPL_WAIT_S
----------- ----------- --------------- -----------------
2.362 117.71 119.973692 112.42

1 row selected.

Here is a quick description of the output columns.

  • CPU_S_DELTA is the CPU seconds consumed by session 133, which is the time model statistic DB CPU.
  • NIW_S_DELTA is the non-idle wait time for session 133, in seconds.
  • DB_TIME_S_DELTA is the DB Time statistic for session 133, which is the time model statistic DB Time.
  • CHECK_IMPL_WAIT_S is the wait time only for event "log file switch (checkpoint incomplete)" for session 133, in seconds.
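The essence of such a script is just sampling the session-level views twice and subtracting. Here is a rough sketch of a single sample (this is not the actual ckpttest.sql; it assumes session 133 and SELECT privilege on the v$ views):

    -- One point-in-time sample for session 133; take two of these 120
    -- seconds apart and subtract to get the deltas shown above.
    SELECT (SELECT value/1000000 FROM v$sess_time_model
             WHERE sid = 133 AND stat_name = 'DB CPU')  cpu_s,
           (SELECT SUM(time_waited_micro)/1000000 FROM v$session_event
             WHERE sid = 133 AND wait_class != 'Idle')  niw_s,
           (SELECT value/1000000 FROM v$sess_time_model
             WHERE sid = 133 AND stat_name = 'DB time') db_time_s,
           (SELECT time_waited_micro/1000000 FROM v$session_event
             WHERE sid = 133
               AND event = 'log file switch (checkpoint incomplete)') check_impl_wait_s
      FROM dual;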

Does the time fit together as we expect? The "log file switch..." wait time is part of the non-idle wait time. The DB Time total is very close to the CPU time plus the non-idle wait time. Everything seems to add up nicely.

To summarize: Oracle background processes directly impacted the database time for a foreground process.

In Conclusion...
First, for sure, Oracle foreground and background processes impact each other...by design, for increased performance. Sometimes on real production Oracle Database systems things get messy, and work that we hoped would be done in parallel must become momentarily serialized. The log file switch example above is an example of this.

Second, the next time someone tells you that an Oracle background process does not impact the performance of a foreground process, ask them if they have experienced a "log file switch (checkpoint incomplete)" situation. Pause until they say, "Yes." Then just look at them and don't say a word. After a few seconds you may see an "oh... I get it" look on their face. But if not, simply point them to this post.

Thanks for reading and enjoy your work!

Craig.




Categories: DBA Blogs

Compliance and File Monitoring in EM12c

Fuad Arshad - Mon, 2014-12-29 14:36
I was recently asked to help a customer set up file monitoring in Enterprise Manager, and I thought that since I haven't blogged in a while, this could be a good way to start back up again.
Enterprise Manager 12c provides a very nice Compliance and File Monitoring framework. There are many built-in frameworks, including PCI DSS and STIG, but this how-to will focus only on a custom file monitoring framework.
Prior to setting up the Compliance features, ensure that Privilege Delegation is set to sudo (or whatever privilege delegation provider you are using) and that credentials for Real-time Monitoring are set up for the hosts. All the prerequisites are explained here: http://docs.oracle.com/cd/E24628_01/em.121/e27046/install_realtime_ccc.htm#EMLCM12307
Also important in the above link is how each OS interacts with these features.


Go to Enterprise → Compliance → Library

Create a New Compliance Standard



Name and Describe the Framework


You will see  the Framework Created


Now let's add some facets to monitor. In this example I selected a tnsnames.ora from my RDBMS home.


Below is a finished facet


Next let's create a rule that uses that facet.

After selecting the right rule type, let's add more color.

Let's add the facet that defines what file(s) will be monitored.

For this example I will select all aspects for testing, but make sure that you have sized your repository and that you understand the consequences of each aspect.





After defining the monitoring actions, you have the option to filter and create monitoring rules based on specific events.
I will skip this for now.
As we inch towards the end, we can authorize changes for each event manually or incorporate a Change Management system that has a connector available in EM12c.

After we have completed this, we have an opportunity to review the settings and then make this rule production.
Now let's create a standard. We are creating a custom File Monitoring standard, with a Real-time Monitoring (RTM) standard type, applicable to hosts.

We will add rules to the File Monitoring standard. In this case we will add the tnsnames rule we created. You can add standards as well as rules to a standard.

Next let's associate targets with this standard.
You will be asked to confirm.

Optionally, you can now add this to the compliance framework for one-stop monitoring.

Now that we have set everything up, let's test it. Here is the original tnsnames.ora.
Let's add another TNS entry.

Prior to the change, here is what the Compliance Results page looks like. As you can see, the evaluation was successful and we are 100% compliant.



Now if I go to Compliance → Real-time Observations, I can see that I didn't install the kernel module needed for granular control, so I cannot use certain functionality.

So I’m going to remove these from my rule for now.
Now I have made a whole bunch of changes including even moving the file. It is all captured .

There are many changes here, and we can actually compare what changed.
If you select "unauthorized" as the audited event for the change, the compliance score drops, and you can use it to see how many violations of a given rule happen.

In summary, EM12c provides a very robust framework for monitoring built-in compliance standards as well as custom-created frameworks, to ensure your auditors and IT managers are happy.


Oracle Audit Vault Oracle Database Plug-In

The Oracle Audit Vault uses plug-ins to define data sources.  The following table summarizes several important facts about the Oracle Audit Vault database plug-in for Oracle databases:

Oracle Database Plug-In for the Oracle Audit Vault

  • Plug-in directory: AGENT_HOME/av/plugins/com.oracle.av.plugin.oracle
  • Secured Target Versions: Oracle 10g, 11g, 12c Release 1 (12.1)
  • Secured Target Platforms: Linux/x86-64, Solaris/x86-64, Solaris/SPARC64, AIX/Power64, Windows/x86-64, HP-UX Itanium
  • Secured Target Location (Connect String): jdbc:oracle:thin:@//hostname:port/service
  • AVDF Audit Trail Types: TABLE, DIRECTORY, TRANSACTION LOG, SYSLOG (Linux only), EVENT LOG (Windows only), NETWORK
  • Audit Trail Location:
    For TABLE audit trails: sys.aud$, sys.fga_log$, dvsys.audit_trail$, unified_audit_trail
    For DIRECTORY audit trails: full path to the directory containing AUD or XML files
    For SYSLOG audit trails: full path to the directory containing the syslog file
    For TRANSACTION LOG, EVENT LOG and NETWORK audit trails: no trail location required
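For example, a filled-in connect string might look like this (the host, port and service name here are hypothetical): jdbc:oracle:thin:@//dbhost.example.com:1521/PROD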

If you have questions, please contact us at info@integrigy.com

Reference
Auditing, Oracle Audit Vault, Oracle Database
Categories: APPS Blogs, Security Blogs

Packt - The $5 eBook Bonanza is here!

Surachart Opun - Tue, 2014-12-23 00:38
 The $5 eBook Bonanza is here!
Spreading the news for people who are interested in reading IT books: the $5 eBook Bonanza is here! You will be able to get any Packt eBook or video for just $5 until January 6th, 2015.
Categories: DBA Blogs

Is Oracle Database Time Correct? Something Is Not Quite Right.


Oracle Database tuning and performance analysis is usually based on time. As I blogged HERE, the Oracle "database time" statistic is more interesting than simply "time spent in the database." It is the sum of CPU consumption and non-idle wait time. And Elapsed Time is the sum of all the database time related to perhaps a session or a SQL statement execution. However...

If you do the fundamental math, you'll notice the numbers don't always add up. In fact, they rarely match. In this posting, I want to demonstrate this mismatch and I want you to see this on your systems!

I'll include experimental data from a number of different systems, the statistical analysis (including pictures) and provide a tool you can download for free from OraPub.com to check out the situation on your systems.

Checking DB Time Math

DB Time is defined as "time spent in the database." This is the sum of Oracle process CPU consumption plus non-idle wait time. Usually we don't derive DB Time. The Time Model Statistics view v$sys_time_model contains the DB Time statistic. But this view also contains the DB CPU statistic. Since there is no sum of non-idle wait time, most people just assume everything is fine.

However, if you run the simple query below on your system, it could look something like this:
SQL> l
1 select db_time_s, db_cpu_s, tot_ni_wt_s
2 from (select value/1000000 db_time_s from v$sys_time_model where stat_name = 'DB time' ),
3 (select value/1000000 db_cpu_s from v$sys_time_model where stat_name = 'DB CPU' ),
4* (select sum(TIME_WAITED_MICRO_FG)/1000000 tot_ni_wt_s from v$system_event where wait_class != 'Idle' )
SQL> /

DB_TIME_S DB_CPU_S TOT_NI_WT_S
---------- ---------- -----------
330165.527 231403.925 119942.952

1 row selected.
If you add up the DB CPU and the total non-idle wait time, the value is 351,346.877. Woops! 351K does not equal 330K. What happened on my Oracle Database 12c (12.1.0.2.0)? As I have demonstrated in this POSTING (which contains videos of this) and in my online seminar training HERE, many times DB Time does nearly equal DB CPU plus the non-idle wait time. But clearly in the above situation something does not seem quite right.

Checking DB Time On Your Systems
To demonstrate the possibility of a DB Time mismatch, I created a simple PL/SQL tool. You can download this free tool or do an OraPub.com search for "db time tool". The tool, which is easily configurable, takes a number of samples over a period of time and displays the output.
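The core idea is simple. Here is a minimal sketch of a single sample (this is not the actual OraPub code; it assumes execute privilege on DBMS_LOCK, SELECT on the v$ views, and serveroutput on):

    -- Take two system-level snapshots 180 seconds apart, then compare the
    -- reported DB Time delta with the DB CPU delta + non-idle wait delta.
    DECLARE
      t0_time NUMBER; t0_cpu NUMBER; t0_wait NUMBER;
      t1_time NUMBER; t1_cpu NUMBER; t1_wait NUMBER;
      reported NUMBER; derived NUMBER;
      PROCEDURE snap (p_time OUT NUMBER, p_cpu OUT NUMBER, p_wait OUT NUMBER) IS
      BEGIN
        SELECT value/1000000 INTO p_time FROM v$sys_time_model WHERE stat_name = 'DB time';
        SELECT value/1000000 INTO p_cpu  FROM v$sys_time_model WHERE stat_name = 'DB CPU';
        SELECT SUM(time_waited_micro_fg)/1000000 INTO p_wait
          FROM v$system_event WHERE wait_class != 'Idle';
      END snap;
    BEGIN
      snap(t0_time, t0_cpu, t0_wait);
      DBMS_LOCK.SLEEP(180);
      snap(t1_time, t1_cpu, t1_wait);
      reported := t1_time - t0_time;
      derived  := (t1_cpu - t0_cpu) + (t1_wait - t0_wait);
      DBMS_OUTPUT.PUT_LINE('reported='||ROUND(reported,3)||
                           ' derived='||ROUND(derived,3)||
                           ' diff_pct='||ROUND(100*(reported-derived)/reported,1));
    END;
    /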


Here is an example of the output.

OraPub DB Time Test v1a 26-Sep-2014. Enjoy but use at your own risk.
.
Starting to collect 11 180 second samples now...
All displayed times are in seconds.
.
PL/SQL procedure successfully completed.
..........................................................................
... RAW OUTPUT (keep the output for your records and analysis)
..........................................................................
.
sample#, db_time_delta_v , db_cpu_delta_v, tot_ni_wait_delta_v, derived_db_time_delta_v, diff_v, diff_pct_v
.
1, 128,4, 128,254, ,103, 128,357266, ,043, 0
2, 22,014, 3,883, 17,731, 21,614215, ,399, 1,8
3, 1,625, 1,251, ,003, 1,253703, ,371, 22,8
4, 13,967, 12,719, 1,476, 14,194999, -,228, -1,6
5, 41,086, 41,259, ,228, 41,486482, -,4, -1
6, 36,872, 36,466, ,127, 36,593884, ,278, ,8
7, 38,545, 38,71, ,137, 38,847459, -,303, -,8
8, 37,264, 37,341, ,122, 37,463525, -,199, -,5
9, 22,818, 22,866, ,102, 22,967141, -,149, -,7
10, 30,985, 30,614, ,109, 30,723831, ,261, ,8
11, 5,795, 5,445, ,513, 5,958586, -,164, -2,8
.
The test is complete.
.
All displayed times are in seconds.

The output is formatted to make it easy to statistically analyze. The far right column is percent difference between the reported DB Time and the calculated DB Time. In the above example, they are pretty close. Get the tool and try it out on your systems.

Some Actual Examples
I want to quickly show you four examples from a variety of systems. You can download all the data in the "analysis pack" HERE. The data, for each of the four systems, contains the raw DB Time Test output (like in the section above), the statistical numeric analysis output from the statistical package "R", the actual "R" script and the visual analysis using "smooth histograms" also created using "R."

Below is the statistical numeric summary:


About the columns: only the "craig" system is mine; the others are real production or DEV/QA systems. The statistical columns all reference the far right column of the DB Time Test Tool's output, which is the percent difference between the reported DB Time and the calculated DB Time. Each sample set consists of eleven 180 second samples. A P-Value greater than 0.05 means the reported and calculated DB Time differences are normally distributed. This is not important in this analysis, but gives me clues if there is a problem with the data collection.

As you can easily see, for two of the systems the "DB Time" difference is greater than 10%, and for one of them it is over 20%. The data collected shows that something is not quite right... but that's about it.

What Does This Mean In Our Work?
Clearly something is not quite right. There are a number of possible reasons, and this will be the focus of my next few articles.

However, I want to say that even though the numbers don't match perfectly, and sometimes they are way off, this does not negate the value of a time based analysis. Remember, we're not trying to land a man on the moon. We diagnose performance to derive solutions that (usually) aim to reduce the database time. I suspect that in all four cases I show, we would not be misled.

But this does highlight the requirement to also analyze performance from a non-Oracle database centric perspective. I always look at the performance situation from an operating system perspective, an Oracle centric perspective and an application (think: SQL, processes, user experience, etc.) perspective. This "3 circle" analysis reduces the likelihood of making a tuning diagnosis mistake. So in case DB Time is completely messed up, by diagnosing performance from the other two "circles" you will know something is not right.

If you want to learn more about my "3-Circle" analysis, here are two resources:
  1. Paper. Total Performance Management. Do an OraPub search for "3 circle" and you'll find it.
  2. Online Seminar: Tuning Oracle Using An AWR Report. I go DEEP into an Oracle Time Based Analysis while keeping it practical for day-to-day production systems.
In my next few articles I will drill down into why there can be a "DB Time mismatch," what to do about it and how to use this knowledge to our advantage.

Enjoy your work! There is nothing quite like analyzing performance and tuning Oracle database systems!!

Craig.





Categories: DBA Blogs


New features in ksplice uptrack-upgrade tools for Oracle Linux

Wim Coekaerts - Mon, 2014-12-22 14:03
We have many, many happy Oracle Linux customers that use and rely on the Oracle Ksplice service to keep their kernels up to date with all the critical CVEs/bugfixes that we release as zero downtime patches.

There are 2 ways to use the Ksplice service:

  • Online edition/client: The uptrack tools (the Ksplice utilities you install on an Oracle Linux server to start applying ksplice updates) connect directly with the Oracle server to download updates. This model gives the most flexibility in terms of providing information on patches and detail of what is installed, because we have a website on which you can find your servers and detailed patch status.

  • Offline edition/client: Many companies cannot or do not register all servers remotely with our system, so they can rely on the offline client to apply updates. In this mode, the ksplice patches are packaged in RPMs for convenience. For each kernel that is shipped by Oracle for Oracle Linux, we provide a corresponding uptrack-update RPM for that specific kernel version. This RPM contains all the updates that have been released since that kernel version was released.

The RPM is updated whenever a new ksplice patch becomes available. So you always have one RPM installed for a given kernel, and this RPM gets updated. This way standard yum/rpm commands can be used to update your server(s) with ksplice patches as well, and everything is nicely integrated.

The standard model is that an uptrack-upgrade command will apply all updates, up to current/latest, on your server. This is of course the preferred way of applying security fixes on your running system; it's best to be on the latest version. However, in some cases, customers want more fine-grained control than latest.

We just did an update of the ksplice offline tools to add support for updating to a specific "kernel version". This way, if you are on kernel version x and you would like to go to kernel version y (effective patches/security fixes) while latest is kernel version z, you can tell uptrack-upgrade to go to kernel version y. Let me give a quick and simple example below. I hope this is a useful addition to the tools.

Happy holidays and happy ksplicing!

To install the tools, make sure that your server(s) has access to the ol6_x86_64_ksplice channel (if it's OL6):

    $ yum install uptrack-offline
    

    Now, in my example, I have Oracle Linux 6 installed with the following version of UEK3 :

    $ uname -r
    3.8.13-44.1.1.el6uek.x86_64
    

    Let's check if updates are available :

    $ yum search uptrack-updates-3.8.13-44.1.1
    Loaded plugins: rhnplugin, security
    This system is receiving updates from ULN.
    =========== N/S Matched: uptrack-updates-3.8.13-44.1.1.el6uek.x86_64 ===========
    uptrack-updates-3.8.13-44.1.1.el6uek.x86_64.noarch : Rebootless updates for the
         ...: Ksplice Uptrack rebootless kernel update service
    

    As I mentioned earlier, for each kernel there's a corresponding ksplice update RPM. Just install that. In this case, I'm running 3.8.13-44.1.1.

    $ yum install uptrack-updates-3.8.13-44.1.1.el6uek.x86_64.noarch
    Loaded plugins: rhnplugin, security
    This system is receiving updates from ULN.
    Setting up Install Process
    Resolving Dependencies
    --> Running transaction check
    ---> Package uptrack-updates-3.8.13-44.1.1.el6uek.x86_64.noarch 0:20141216-0 will be installed
    --> Finished Dependency Resolution
    
    Dependencies Resolved
    
    ================================================================================
     Package                             Arch   Version    Repository          Size
    ================================================================================
    Installing:
     uptrack-updates-3.8.13-44.1.1.el6uek.x86_64
                                         noarch 20141216-0 ol6_x86_64_ksplice  39 M
    
    Transaction Summary
    ================================================================================
    Install       1 Package(s)
    
    Total download size: 39 M
    Installed size: 40 M
    Is this ok [y/N]: y
    Downloading Packages:
    uptrack-updates-3.8.13-44.1.1.el6uek.x86_64-20141216-0.n |  39 MB     00:29     
    Running rpm_check_debug
    Running Transaction Test
    Transaction Test Succeeded
    Running Transaction
      Installing : uptrack-updates-3.8.13-44.1.1.el6uek.x86_64-20141216-0.noa   1/1 
    The following steps will be taken:
    Install [b9hqohyk] CVE-2014-5077: Remote denial-of-service in SCTP on simultaneous connections.
    ...
    ...
    Installing [vtujkei9] CVE-2014-6410: Denial of service in UDF filesystem parsing.
    Your kernel is fully up to date.
    Effective kernel version is 3.8.13-55.1.1.el6uek
      Verifying  : uptrack-updates-3.8.13-44.1.1.el6uek.x86_64-20141216-0.noa   1/1 
    
    Installed:
      uptrack-updates-3.8.13-44.1.1.el6uek.x86_64.noarch 0:20141216-0               
    
    Complete!
    

    There have been a ton of updates released since 44.1.1, and the above update gets me to effectively running 3.8.13-55.1.1. Of course, without a reboot.

    $ uptrack-uname -r
    3.8.13-55.1.1.el6uek.x86_64
    

    Now we get to the new feature. There's a new option in uptrack-upgrade that lists all effective kernel versions from the installed kernel to the latest based on the ksplice rpm installed.

    $ uptrack-upgrade --list-effective
    Available effective kernel versions:
    
    3.8.13-44.1.1.el6uek.x86_64/#2 SMP Wed Sep 10 06:10:25 PDT 2014
    3.8.13-44.1.3.el6uek.x86_64/#2 SMP Wed Oct 15 19:53:10 PDT 2014
    3.8.13-44.1.4.el6uek.x86_64/#2 SMP Wed Oct 29 23:58:06 PDT 2014
    3.8.13-44.1.5.el6uek.x86_64/#2 SMP Wed Nov 12 14:23:31 PST 2014
    3.8.13-55.el6uek.x86_64/#2 SMP Mon Dec 1 11:32:40 PST 2014
    3.8.13-55.1.1.el6uek.x86_64/#2 SMP Thu Dec 11 00:20:49 PST 2014
    

    So as an example, let's say I want to update from 44.1.1 to 44.1.5 instead of to 55.1.1 (for whatever reason I might have). All I have to do is run uptrack-upgrade with that effective kernel version.

    Let's start by removing the installed updates to go back from 55.1.1 to 44.1.1, and then upgrade again to 44.1.5:

    $ uptrack-remove --all
    ...
    $ uptrack-upgrade --effective="3.8.13-44.1.5.el6uek.x86_64/#2 SMP Wed Nov 12 14:23:31 
    PST 2014"
    ...
    ...
    Effective kernel version is 3.8.13-44.1.5.el6uek
    
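    As a quick sanity check (the output below is what you would expect to see, not captured from the run, assuming the remove/upgrade steps above completed cleanly), uptrack-uname should now report the chosen effective version:

    $ uptrack-uname -r
    3.8.13-44.1.5.el6uek.x86_64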

    And that's it.

    eDVD

    Bradley Brown - Mon, 2014-12-22 11:27
    We've struggled to figure out what to call this next generation of video delivery. Is it "video on demand?" The industry insiders are very specific in calling it TVOD and SVOD, which stand for transactional and subscription video on demand respectively. Transactional video on demand means that consumers can buy or rent a video and watch it on their device. Subscription video on demand means that consumers buy a subscription and can watch a grouping of videos as part of that subscription.

    But what does all of this have to do with consumers and how they talk about "video on demand?" I certainly don't hear consumers using that term. In fact, when my son recently posted a video on Facebook, my mother-in-law (his grandma) said "that was a really cool DVD Austin." Later she asked "that was a DVD, right?" You could hear her questioning herself about the use of the term DVD. Austin was a little taken aback by the question, paused and said "yes grandma." He didn't want to get into the details that a DVD is a physical implementation of storage, not a method of playing a video on Facebook.

    That got me thinking about the label I originally gave this new technology: an eDVD. This would allow people to continue referring to online videos as DVDs - specifically eDVDs. So how do we change the world's view of these terms and get everyone to start calling them eDVDs? Now that I've declared it, the world knows, right? Well...not quite yet, but I'm sure very soon :) Spread the word!

    Happy Holidays and a Prosperous New Year from VitalSoftTech!

    VitalSoftTech - Sun, 2014-12-21 09:01
    We at VST want to thank you, our prestigious members, for making 2014 a memorable year for us! We are ever so grateful for your continuous support, participation and words of encouragement. As technological mavens, you help us sustain this community with quality feedback that drives continued success for us all! How about mastering a […]
    Categories: DBA Blogs

    Digital Delivery "Badge"

    Bradley Brown - Sun, 2014-12-21 00:44
    At InteliVideo we have come to understand that we need to do everything we can to help our clients sell more digital content. It seems obvious that consumers want to watch videos on devices like their phones, tablets, laptops, and TVs, but it's not so obvious to everyone. They have been using DVDs for a number of years - and likely VHS tapes before that. We believe it's important for your customers to understand why they would want to purchase a digital product rather than a physical product (i.e. a DVD).

    Better buttons drive sales. Across all our apps and clients, we know we need to really nail our asset delivery process, with split tests and our button and banner catalog. We've simplified the addition of a badge to a client's page: they effectively have to add 4 lines of HTML in order to add our digital delivery badge.

    Our clients can use any of the images that InteliVideo provides or we’re happy to provide an editable image file (EPS format) so they can make their own image.  Here are some of our badges that we created:

    [Screenshot: sample digital delivery badges]

    On our client's web page, it looks something like this:

    [Screenshot: the badge as it appears on a client's web page]

    The image above (Watch Now on Any Device) is the important component - it's what our clients place somewhere on their web page(s). When it is clicked, the existing page is dimmed and a lightbox pops up displaying the "Why Digital" message:

    [Screenshot: the "Why Digital" lightbox]

    What do your clients' customers need to know in order to help you sell more?

    Log Buffer #402, A Carnival of the Vanities for DBAs

    Pakistan's First Oracle Blog - Sat, 2014-12-20 18:39
    This Log Buffer edition hits the ball out of the park, surfacing yet another unique collection of blog posts from various database technologies. Enjoy!!!

    Oracle:

    EM12c and the Optimizer Statistics Console.
    SUCCESS and FAILURE Columns in DBA_STMT_AUDIT_OPTS.
    OBIEE and ODI on Hadoop : Next-Generation Initiatives To Improve Hive Performance.
    Oracle 12.1.0.2 Bundle Patching.
    Performance Issues with the Sequence NEXTVAL Call.

    SQL Server:

    GUIDs GUIDs everywhere, but how is my data unique?
    Questions About T-SQL Transaction Isolation Levels You Were Too Shy to Ask.
    Introduction to Advanced Transact SQL Stairway and Using the CROSS JOIN Operator.
    Introduction to Azure SQL Database Scalability.
    What To Do When the Import and Export Wizard Fails.

    MySQL:

    Orchestrator 1.2.9 GA released.
    Making HAProxy 1.5 replication lag aware in MySQL.
    Monitor MySQL Performance Interactively With VividCortex.
    InnoDB’s multi-versioning handling can be Achilles’ heel.
    Memory summary tables in Performance Schema in MySQL 5.7.

    Also published here.
    Categories: DBA Blogs
