Feed aggregator

UKOUG Annual Conference (Tech 2014 Edition)

Andrew Clarke - Wed, 2014-12-31 12:37
The conference

This year the UKOUG's tour of Britain's post-industrial heritage brought the conference to Liverpool. The Arena & Convention Centre is based in Liverpool docklands, formerly the source of the city's wealth and now a touristic playground of museums, souvenir shops and bars. Still at least the Pumphouse functions as a decent pub, which is one more decent pub than London Docklands can boast. The weather was not so much cool in the 'Pool as flipping freezing, with the wind coming off the Mersey like a chainsaw that had been kept in a meat locker for a month. Plus rain. And hail. Which is great: nothing we Brits like more than moaning about the weather.

After last year's experiment with discrete conferences, Apps 2014 was co-located with Tech 2014; each was still a separate conference with its own exclusive agenda (and tickets) but with shared interests (Exhibition Hall, social events). Essentially DDD's Bounded Context pattern. I'll be interested to know how many delegates purchased the Rover ticket which allowed them to cross the borders. The conferences were colour-coded, with the Apps team in Blue and the Tech team in Red; I thought this was an, er, interesting decision in a footballing city like Liverpool. Fortunately the enforced separation of each team's supporters kept violent confrontation to a minimum.

The sessions

This is not all of the sessions I attended, just the ones I want to comment on.

There's no place like ORACLE_HOME

I started my conference by chairing Niall Litchfield's session on Monday morning. Niall experienced every presenter's nightmare: switch on the laptop, nada, nothing, completely dead. Fortunately it turned out to be the fuse in the charger's plug, and a marvellous tech support chap was able to find a spare kettle cable. Niall coped well with the stress and delivered a wide-ranging and interesting introduction to some of the database features available to developers. It's always nice to hear a DBA say how difficult the task of developers is these days. I'd like to hear more of them acknowledge it, and more importantly be helpful rather than becoming part of the developer's burden :)

The least an Oracle DBA needs to know about Linux

Turns out "the least" is still an awful lot. Martin Nash started with installing a distro and creating a file system, and moved on from there. As a developer I find I'm rarely allowed OS access to the database server these days; I suspect many enterprise DBAs also spend most of their time in OEM rather than at a shell prompt. But Linux falls into that category of things which, when you need to know them, you need to know them in the worst possible way. So Martin has given me a long list of commands with which to familiarize myself.

Why solid SQL still delivers the best performance

Robyn Sands began her session with the shocking statement that the best database performance requires good application design. Hardware improvements won't save us from the consequences of our shonky code. From her experience in Oracle's Real World Performance team, the top three causes of database slowness are:
  • People not using the database the way it was designed to be used
  • Sub-optimal architecture or code
  • Sub-optimal algorithm (my new favourite synonym for "bug")

The bulk of her session was devoted to some demos, racing different approaches to DML (a sketch of the two extremes follows the list):
  • Row-by-row processing
  • Array (bulk) processing
  • Manual parallelism i.e. concurrency
  • Set-based processing i.e. pure SQL
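
To make the contrast concrete, here is a minimal sketch of the two extremes (my own illustration, not Robyn's actual demo code), assuming a source table big_source and a target big_target with matching columns:

-- Row-by-row: one INSERT and one PL/SQL-to-SQL context switch per row
begin
  for r in (select id, payload from big_source) loop
    insert into big_target (id, payload) values (r.id, r.payload);
  end loop;
  commit;
end;
/

-- Set-based: one statement, one pass; the approach that wins the races
insert into big_target (id, payload)
select id, payload from big_source;
commit;
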
There were a series of races, starting with a simple copying of data from one table to another and culminating in a complex transformation exercise. If you have attended any Oracle performance session in the last twenty years you'll probably know the outcome already, but it was interesting to see how much faster pure SQL was compared to the other approaches. In fact the gap between the set-based approach and the row-based approach widened with each increase in the complexity of the task. What probably surprised many people (including me) was how badly manual parallelism fared: concurrent threads have a high impact on system resource usage, because of things like index contention.

Enterprise Data Warehouse Architecture for Big Data

Dai Clegg was at Oracle for a long time and has since worked for a couple of startups which used some of the new-fangled Big Data/NoSQL products. This mix of experience has given him a breadth of insight which is not common in the Big Data discussion.

His first message is one of simple economics: these new technologies solve the problem of linear scale-out at a price-point below that of Oracle. Massively parallel programs using cheap or free open source software on commodity hardware. Commodity hardware is more failure prone than enterprise tin (and having lots of the blighters actually reduces the MTTF) but these distributed frameworks are designed to handle node failures; besides, commodity hardware has gotten a lot more reliable over the years. So, it's not that we couldn't implement most Big Data applications using relational databases, it's just cheaper not to.

Dai's other main point addressed the panoply of products in the Big Data ecosystem. Even in just the official Hadoop stack there are lots of products with similar or overlapping capabilities: do we need Kafka or Flume or both? There is no one Big Data technology which is cheaper and better for all use cases. Therefore it is crucial to understand the requirements of the application before starting on the architecture. Different applications will demand different permutations from the available options. Properly defined use cases (which don't have to be heavyweight - Dai sang the praises of the Agile-style "user story") will indicate which kinds of products are required. Organizations are going to have to cope with heterogeneous environments. Let's hope they save enough on the licensing fees to pay for the application wranglers.

How to write better PL/SQL

After last year's fiasco with shonky screen rendering and failed demos I went extremely low tech: I could run my presentation from a PDF on a thumb-drive. Fortunately that wasn't necessary. My session was part of the Beginners' Track: I'm not sure how many people in the audience were actual beginners; I hope the grizzled veterans got something out of it.

One member of the audience turned out to be a university lecturer; he was distressed by my advice to use pure SQL rather than PL/SQL whenever possible. Apparently his students keep doing this and he has to tell them to use PL/SQL features instead. I'm quite heartened to hear that college students are familiar with the importance of set-based programming. I'm even chuffed to have my prejudice confirmed that it is university lecturers who teach people to write what in the real world is bad code. I bet he tells them to use triggers as well :)

Oracle Database 12c New Indexing Features

I really enjoy Richard Foote's presenting style: it is breezily Aussie in tone, chatty and with the occasional mild cuss word. If anybody can make indexes entertaining it is Richard (and he did).

His key point is that indexes are not going away. Advances in caching and fast storage will not remove the need for indexed reads, and the proof is Oracle's commitment to adding further capabilities. In fact, there are so many new indexing features that Richard's presentation was (for me) largely a list of things I need to go away and read about. Some of these features are quite arcane: an invisible index? on an invisible column? Hmmmm. I'm not sure I understand when I might want to implement partial indexing on a partitioned table. What I'm certain about is that most DBAs these days are responsible for so many databases that they don't have the time to acquire the requisite understanding of individual applications and their data; so it seems to me unlikely that they will be able to decide which partitions need indexing. This is an optimization for the consultants.

Make your data models sing

It was one of the questions in the Q&A section of Susan Duncan's talk which struck me. The questioner talked about their "legacy" data warehouse. How old did that make me feel? I can remember when data warehouses were new and shiny and going to solve every enterprise's data problems.

The question itself dealt with foreign keys: as is common practice, the data warehouse had no defined foreign keys. Over the years it had sprawled across several hundred tables, without the documentation keeping up. Is it possible, the petitioner asked, to reverse engineer the data model when there are no foreign keys in the database? Of course the short answer is No. While it might be possible to infer relationships from common column names, there isn't any tool we were aware of which could do this. Another reminder that disabled foreign keys are better than no keys at all.
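
That said, the data dictionary can at least generate candidate relationships to review by hand. A rough sketch of mine (it only finds name coincidences, not proven relationships): look for columns that share a name with another table's primary key column:

select c.table_name, c.column_name, pk.table_name as referenced_table
from   user_tab_columns  c
join   user_cons_columns pkc on pkc.column_name = c.column_name
join   user_constraints  pk  on pk.constraint_name = pkc.constraint_name
                            and pk.constraint_type = 'P'
where  c.table_name != pk.table_name;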

Getting started with JSON in the Database

Marco Gralike has a new title: he is no longer Mr XMLDB, he is now Mr Unstructured Data in the DB. Or at least his bailiwick has been extended to cover JSON. JSON (JavaScript Object Notation) is a lightweight data transfer mechanism: basically it's XML without the tags. All the cool kids like JSON because it's the basis of RESTful web interfaces. Now we can store JSON in the database (which probably means all the cool kids will wander off to find something else now that fusty old Oracle can do it).
The biggest surprise for me is that Oracle haven't introduced a JSON data type (apparently there were so many issues around the XMLType that nobody had the appetite for another round). So that means we store JSON in VARCHAR2, CLOB, BLOB or RAW. But as with XML, there are operators which allow us to include JSON documents in our SQL. The JSON dot notation works pretty much like XPath, and we can use it to build function-based indexes on the stored documents. However, we can't (yet) update just part of a JSON doc: it is wholesale replacement only.
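
As a sketch of how the pieces fit together (my own example against a hypothetical orders table, not one of Marco's): the IS JSON check constraint marks a plain column as JSON, the dot notation queries it, and JSON_VALUE can back a function-based index:

create table orders (
  id  number primary key,
  doc clob check (doc is json)  -- ordinary CLOB, constrained to valid JSON
);

-- Dot notation (note: it requires a table alias)
select o.doc.customer.name from orders o;

-- Function-based index on one JSON attribute
create index orders_cust_ix
  on orders (json_value(doc, '$.customer.name'));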

Error handling is cute: by default invalid JSON syntax in a query produces a null in the result set rather than an exception. Apparently that's how the cool kids like it. For those of us who prefer our exceptions hurled rather than swallowed, there is an option to override this behaviour.

SQL is the best development language for Big Data

This was Tom Kyte giving the obverse presentation to Dai Clegg's: Oracle can do all this Big Data stuff, and has been doing it for some time. He started with two historical observations:
  • XML data stores were going to kill off relational databases. Which didn't happen.
  • Before relational databases and SQL there was NoSQL - literally, no SQL. Instead there were things like PL/1 programs working against key-value data stores.
Tom had a list of features in Oracle which support Big Data applications. They were:
  • Analytic functions, which have enabled ordered array semantics in SQL since the last century (a quick sketch follows this list).
  • SQL Developer's support for Oracle Data Mining.
  • The MODEL clause (for those brave enough to use it).
  • Advanced pattern matching with the MATCH_RECOGNIZE clause in 12c.
  • External tables with their support for extracting data from flat files, including from HDFS (with the right connectors).
  • Support for JSON documents (see above).
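
To illustrate the first item, here is a trivial sketch (mine, not Tom's) of the ordered-array semantics analytic functions provide - a running total over a hypothetical sales table:

select order_date,
       amount,
       sum(amount) over (order by order_date) as running_total
from   sales;
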
He could also have discussed document storage with XMLType and Oracle Text, Enterprise R, In-Memory columnar storage, and so on. We can even do Map/Reduce in PL/SQL if we feel so inclined. All of these are valid assertions; the problem is (pace Dai Clegg) simply one of licensing. Too many of the Big Data features are chargeable extras on top of Enterprise Edition licenses. Big Data technology is suited to a massively parallel world where all processors are multi-core, and Oracle's licensing policy isn't.

Five hints for efficient SQL

This was an almost philosophical talk from Jonathan Lewis, in which he explained how he uses certain hints to fix poorly performing queries. The optimizer takes a left-deep approach, which can lead to a bad choice of transformation, bad estimates (but check your stats as well!) and bad join orders. His strategic solution is to shape the query with hints so that Oracle's execution plan meets our understanding of the data.

So his top five hints are:
  • (NO_)MERGE
  • (NO_)PUSH_PRED
  • (NO_)UNNEST
  • (NO_)PUSH_SUBQ
  • DRIVING_SITE

Jonathan calls these strategic hints, because they advise the optimizer how to join tables or how to transform a sub-query. They don't hard-code paths in the way that, say, the INDEX hint does.
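
As a hedged illustration (my own example, not one of Jonathan's): NO_MERGE tells the optimizer to keep an inline view as its own query block instead of merging it into the parent, but says nothing about which access paths to use inside it:

select /*+ no_merge(v) */ e.ename, v.avg_sal
from   emp e,
       (select deptno, avg(sal) avg_sal
        from   emp
        group  by deptno) v
where  v.deptno = e.deptno;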

Halfway through the presentation Jonathan's laptop slid off the lectern and slammed onto the stage floor. End of presentation? Luckily not. Apparently his laptop is made of the same stuff they use for black box flight recorders, because after a few anxious minutes it rebooted successfully and he was able to continue with his talk. I was struck by how unflustered he was by the situation (even though he didn't have a backup due to last minute tweaking of the slides). A lovely demonstration of grace under pressure.

I have an idea for an app, what's next?

Bradley Brown - Tue, 2014-12-30 18:44
This is a question that I get asked quite often.  My first thoughts are:

1. An app isn't necessarily a business.
2. Can you generate enough revenue to pay for the development?
3. There is usually more to an app than just the app.
4. Which platforms?

I thought I'd take this opportunity to address each of these points in more detail.  Before I do this, I think it's important to say that I don't consider myself to be the world's leading authority on apps, so I should explain why I get asked this question.

In 2008, when I saw my first Android phone, I was very intrigued by the ability to write an app for a phone.  I had this thought - what if I could develop an app that I would sell and that paid for my lunch every day?  How cool would that be?  I was a Java developer (not a great one, but I could write Java code) and the Android devices used a Java development stack as their base.  So the learning curve wasn't huge for me.  More importantly, I only had to invest time, not money, to write an app.

I was very much into Web Services at the time and Yahoo had (still has actually) some really cool Web Services that are available for public use.  These are based on what they call YQL (Yahoo Query Language) and since I'm a SQL (Structured Query Language) guy at heart, YQL and Web Services were right up my alley.

One of the uses of YQL included providing a location and getting all of the local events in a specified radius from that location.  So I thought I should create an app that would allow anyone to find the local events they were interested in.  I created my first "Local Events" app and put it in the market.  Not many people downloaded the app (it wasn't paying for lunch), so I started thinking about how people searched for apps.  I figured they would search for the events they were interested in - singles, beer, crafting, technical, etc.  So I created "Local Beer Events" and "Local Singles Events" and many other apps.

Another YQL search that Yahoo provides is for local businesses - again, from a specific location.  So my "second" app was centered around local businesses.  Once again, I thought about how people searched for apps, and I created a local Starbucks app, a local Panera app, a local Noodles app, etc.  The downside of these apps was that Starbucks and many others didn't like me using their names in app names due to trademark infringement.

Back to my story of paying for lunch - I quickly paid for lunch each day, and my goal became to generate $100 a day, then $1000 a day.  I did generate over $1000 on many days.  I experimented with pricing and learned a lot.

In the end, I ended up taking all of those apps off the market...or actually Google took them off the market for me.  Likely due to my app names or because I had spammed the market with over 500 apps or who knows why.

I wrote a book for Apress on the business of writing Android apps and I spoke at numerous conferences on the topic.

It was at that point that I decided to rethink my app strategy.  What could I build that would actually be a business?  Could I charge for the app or did I need to offer an entire service?

So back to my questions above:
An app isn't necessarily a business

If you're a developer and you can develop 100% of the app with no costs, this may not apply to you.  Most people have to pay for developers and servers to deploy an app.  A business is typically defined as an entity that makes a profit.  So income minus expenses is profit.  What will your income be from your app?  Do people actually pay for apps today?  I believe they do, but not often...i.e. there must be a LOT of value to pay for an app...especially to have enough people paying for your app.  Let's say you price your app at $2.  How many copies do you have to sell just to pay for the development?  What about the ongoing support costs?  If you paid $20k to develop the app, you would have to sell 10,000 copies to "break even."  But...you'll have to support the app, keep it running, upgrade it, etc.  Most apps (like books) never sell 10,000 copies.  So...just creating an app isn't necessarily a business.
Can you generate enough revenue to pay for the development?

Like I said above, generating revenue for an app is tough.  Paying for the development of the app is tough.  Maybe you can generate revenue other ways?  Think about this a LOT before you decide to proceed with developing your app.
There is usually more to an app than just the app

Most apps aren't standalone apps.  Sure, my "Local Starbucks" app was "standalone" in some regards, but it wasn't in other regards.  It relied on the Yahoo Web Service to deliver current Starbucks locations.  I had someone approach me about a "Need a Loo" (find a local bathroom in London) app.  They had the data for all of the bathrooms...but this changed frequently.  Could I have built the app and had the data be included in the app?  Yes, but...when the Loo locations changed, I would have had to update the app, which isn't an ideal solution.  So I had to build a database and a Web application that allowed them to maintain Loo locations.  Then I had to build web services that looked up the current Loo locations from the database.  In other words, most apps involve databases, web services, and back end systems to maintain the data.  All of these imply additional costs...which imply additional revenue that must be generated to sustain your business.
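
For a flavour of that backend piece, here is a minimal sketch of the kind of query such a web service might run, assuming a hypothetical loo_locations table with latitude and longitude columns (crude planar distance, purely for illustration; a real service would use proper geospatial functions):

select *
from  (select name, address
       from   loo_locations
       order  by (latitude  - :user_lat) * (latitude  - :user_lat)
               + (longitude - :user_lng) * (longitude - :user_lng))
where  rownum <= 10;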

I wrote 5 books for Oracle Press on the topic of web applications, web services and the like.  I know how to build the backend of apps; this was the easy part for me!
Which platforms?

When you think about an app, you might be thinking of an iPhone app if you have an iPhone or an iPad.  You might be thinking of an Android app if you have an Android phone or tablet.  There are SO many development platforms today: iOS, Android, Mac, Windows, Apple TV, Kindle Fire TV, and literally about 100 more.  There are cross-platform development tools, but they tend to be what I call "least common denominator" solutions.  In other words, they will alienate someone.  If it looks like an iOS (iPhone/iPad) app, it's going to alienate the Android users...or vice versa.  For this reason, native apps are in vogue now.

Every platform is about $30k or more in our world.  Again, all of these are expenses...that must be recouped.
Why InteliVideo?
I thought long and hard about my next generation of apps that I wanted to create.  That's when I determined that I needed to create a business...that had apps, not an app that was a business.  The video business was a natural progression for me.  I wanted to have the ability to sell my educational material (Oracle training) and deliver it in an app.  We have a LOT more than an app.  We have an entire business - that has apps.  So when you think about developing an app...think about the business, not the app.

Can A Background Process Impact A Foreground Process And Its Database Time?

Have you ever heard someone say, "Background processes do not impact foreground processes because they run in the background and in parallel with foreground processes." I've heard this hundreds of times!

While doing some performance research I came across a great example of how an Oracle Database background process can directly and significantly impact a foreground process.

The above quote represents a masterfully constructed lie; it contains both a lie and a truth. The mix of a truth and a lie makes understanding the reality of the situation difficult. In this post, I'll explain the truth, delve into the lie and relate it all to foreground process database time.

By the way, I am in no way saying there is something wrong with or incorrect about DB Time. I want to ensure this is clear from the very beginning of this post.

Just so there is no confusion: an Oracle foreground process is sometimes also called a server process or a shadow process. These terms are used interchangeably in this post.

The Truth
Clearly background and foreground processes operate in parallel. I don't think any DBA would deny this. As I frequently say, "serialization is death and parallelism is life!" A simple "ps" command will visually show both Oracle background and foreground processes at work. But this in no way implies they do not impact each other's activity and performance.

In fact, we hope they do impact each other! Can you imagine what performance would be with the background processes NOT running in parallel?! What a performance nightmare that would be. But this is where the "no impact" lie lives.

The Lie
Most senior DBAs can point to a specific situation where Oracle cache buffer chain latch contention affected multiple foreground sessions. In this situation, foreground sessions were frantically trying to acquire a popular cache buffer chain latch. But this is a foreground session versus foreground session situation. While this example is important, this post is about when a background process impacts a foreground process.

Have you ever committed a transaction and had it hang while the foreground process waits on the "log file switch (checkpoint incomplete)" or, even worse, the "log file switch (archiving needed)" event? All the foreground process knows is that its statement can't finish because a required log switch has not occurred because a checkpoint is incomplete. What the server process does not know is that the checkpoint (CKPT), the database writer (DBWR) and the log writer (LGWR) background processes are involved. There is a good chance the database writer is frantically writing dirty buffers to the database (dbf) files so the LGWR can safely overwrite the associated redo in the next online redo log.

For example, if a server process issues a commit during the checkpoint, it will wait until the checkpoint is complete and the log writer has switched and can write into the next redo log. So, while the log writer background process is probably waiting on "log file parallel write" and the database writer is burning CPU and waiting on "db file parallel write", the foreground processes are effectively hung.
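
A quick way to see this happening (a sketch of mine, assuming you can query v$session) is to look for foreground sessions stuck on the log-file-switch events while the checkpoint grinds on:

select sid, event, seconds_in_wait
from   v$session
where  event in ('log file switch (checkpoint incomplete)',
                 'log file switch (archiving needed)')
and    state = 'WAITING';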

This is a classic example of how a background process can impact the performance of a foreground process.

A Demonstration Of The Lie
Here's a quick demonstration of the above situation. On an existing database in my lab, I created two 4MB redo logs and dropped all the other redo logs. I started a DML-intensive workload. According to the alert.log file, the redo logs were switching every couple of seconds! Take a look at this:
$ tail -f /home/oracle/base/diag/rdbms/prod30/prod30/trace/alert*log
Thread 1 cannot allocate new log, sequence 2365
Checkpoint not complete
Current log# 4 seq# 2364 mem# 0: /home/oradata/prod30/redoA1.log
Mon Dec 29 11:02:09 2014
Thread 1 advanced to log sequence 2365 (LGWR switch)
Current log# 5 seq# 2365 mem# 0: /home/oradata/prod30/redoA2.log
Thread 1 cannot allocate new log, sequence 2366
Checkpoint not complete
Current log# 5 seq# 2365 mem# 0: /home/oradata/prod30/redoA2.log
Thread 1 advanced to log sequence 2366 (LGWR switch)
Current log# 4 seq# 2366 mem# 0: /home/oradata/prod30/redoA1.log
Thread 1 cannot allocate new log, sequence 2367
Checkpoint not complete
Current log# 4 seq# 2366 mem# 0: /home/oradata/prod30/redoA1.log
Thread 1 advanced to log sequence 2367 (LGWR switch)
Current log# 5 seq# 2367 mem# 0: /home/oradata/prod30/redoA2.log
Thread 1 cannot allocate new log, sequence 2368
Checkpoint not complete
Current log# 5 seq# 2367 mem# 0: /home/oradata/prod30/redoA2.log
Mon Dec 29 11:02:20 2014

Obviously not what you want to see on a production Oracle system! (But my guess is many of you have.)

Using my OSM realtime session sampler tool (rss.sql - related blog posting HERE) I sampled the log writer every half a second. (There is only one log writer background process because this is an Oracle 11g database, not an Oracle Database 12c system.) If the log writer session showed up in v$session as an active session, it would be picked up by rss.sql.  Both "ON CPU" and "WAIT" states are collected. Here is a sample of the output.


It's very obvious the log writer is doing some writing. But we can't tell from the above output if the process is impacting other sessions. It would have been very interesting to sample the database writer as well, but I didn't do that. To determine if the background processes were impacting other sessions, I needed to find a foreground session that was doing some commits. I noticed that session 133, a foreground process, was busy doing some DML and committing as it processed its work. Just as with the log writer background process, I sampled this foreground process once every 0.5 second. Here's a sample of the output.


Wow. The foreground process is waiting a lot for the current checkpoint to be completed! So... this means the foreground process is being effectively halted until the background processes involved with the checkpoint have finished their work.

This is a great example of how Oracle background processes can impact the performance of an Oracle foreground process.

But let's be clear. Without the background processes, performance would be even worse. Why? Because all that work done in parallel and in the background would have to be done by each foreground process, AND all that work would have to be closely controlled and coordinated. And that would be a performance nightmare!

DB Time Impact On The Foreground Process

Just for the fun of it, I wrote a script to investigate DB Time, CPU consumption, non-idle wait time and the wait time for the "log file switch (checkpoint incomplete)" wait event for the foreground process mentioned above (session 133). The script simply gathers some session details, sleeps for 120 seconds, again gathers some session details, calculates the differences and displays the results. You can download the script HERE. Below is the output for the foreground process, session 133.
SQL> @ckpttest.sql 133

Table dropped.

Table created.

PL/SQL procedure successfully completed.

CPU_S_DELTA NIW_S_DELTA DB_TIME_S_DELTA CHECK_IMPL_WAIT_S
----------- ----------- --------------- -----------------
2.362 117.71 119.973692 112.42

1 row selected.

Here is a quick description of the output columns.

  • CPU_S_DELTA is the CPU seconds consumed by session 133, which is the time model statistic DB CPU.
  • NIW_S_DELTA is the non-idle wait time for session 133, in seconds.
  • DB_TIME_S_DELTA is the DB Time statistic for session 133, which is the time model statistic DB Time.
  • CHECK_IMPL_WAIT_S is the wait time only for event "log file switch (checkpoint incomplete)" for session 133, in seconds.
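
For reference, here is a minimal sketch of the kind of queries such a script can snapshot before and after the sleep (assuming session 133 and access to the v$ views; the downloadable script is the authoritative version):

select stat_name, value/1000000 seconds
from   v$sess_time_model
where  sid = 133
and    stat_name in ('DB time', 'DB CPU');

select sum(time_waited_micro)/1000000 niw_seconds
from   v$session_event
where  sid = 133
and    wait_class != 'Idle';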

Does the time fit together as we expect? The "log file switch..." wait time is part of the non-idle wait time. The DB Time total is very close to the CPU time plus the non-idle wait time (2.362 + 117.71 = 120.072, against a reported 119.974). Everything adds up nicely.

To summarize: Oracle background processes directly impacted the database time for a foreground process.

In Conclusion...
First, for sure Oracle foreground and background processes impact each other...by design, for increased performance. Sometimes on real production Oracle Database systems things get messy, and work that we hoped would be done in parallel must become momentarily serialized. The log file switch example above is an example of this.

Second, the next time someone tells you that an Oracle background process does not impact the performance of a foreground process, ask them if they have experienced a "log file switch (checkpoint incomplete)" situation. Pause until they say, "Yes." Then just look at them and don't say a word. After a few seconds you may see an "oh... I get it" look on their face. But if not, simply point them to this post.

Thanks for reading and enjoy your work!

Craig.




Categories: DBA Blogs

Compliance and File Monitoring in EM12c

Fuad Arshad - Mon, 2014-12-29 14:36
I was recently asked to help a customer set up file monitoring in Enterprise Manager, and since I haven't blogged in a while I thought this could be a good way to start back up again.
Enterprise Manager 12c provides a very nice Compliance and File Monitoring framework. Many built-in frameworks are included, among them PCI DSS and STIG, but this how-to will focus only on a custom file monitoring framework.
Prior to setting up the compliance features, ensure that Privilege Delegation is set to sudo (or whatever privilege delegation provider you are using) and that credentials for Real-time Monitoring are set up for your hosts. All the prerequisites are explained here: http://docs.oracle.com/cd/E24628_01/em.121/e27046/install_realtime_ccc.htm#EMLCM12307
Also important in the above link is how each OS interacts with these features.


Go to Enterprise → Compliance → Library

Create a New Compliance Standard



Name and Describe the Framework


You will see the Framework created


Now let's add some facets to monitor. In this example I selected a tnsnames.ora from my RDBMS home


Below is a finished facet


Next let's create a rule that uses that facet

After selecting the right rule, let's add more color

Let's add the facet that defines which file(s) will be monitored

For this example I will select all aspects for testing, but ensure that you have sized your repository appropriately and that you understand the consequences of each aspect





After defining the monitoring actions, you have the option to filter and create monitoring rules based on specific events.
I will skip this for now.
As we inch towards the end, we can authorize changes for each event manually or incorporate a Change Management System that has a connector available in EM12c.

After we have completed this, we have an opportunity to review the settings and then make this rule production.
Now let's create a standard. We are creating a custom File Monitoring standard, with a standard type of RTM, applicable to hosts.

We will add rules to the File Monitor. In this case we will add the tnsnames rule we created to the standard. You can add standards as well as rules to a standard.

Next let's associate targets with this standard.
You will be asked to confirm.

Optionally you can now add this to the compliance framework for one-stop monitoring.

Now that we have set everything up, let's test it. Here is the original tnsnames.ora.
Let's add another tns entry.

Prior to the change, here is what the Compliance Results page looks like. As you can see the evaluation was successful and we are 100% compliant.



Now if I go to Compliance → Real-time Observations, I can see that I didn't install the kernel module needed for granular control, so certain functionality cannot be used.

So I'm going to remove these from my rule for now.
Now I have made a whole bunch of changes, including even moving the file. It is all captured.

There are many changes here, and we can actually compare what changed.
If you select "unauthorized" as the audited event for a change, the compliance score drops, and you can use it to see how many violations occur for a given rule.

In summary, EM12c provides a very robust framework for monitoring built-in compliance standards as well as custom-created frameworks, to ensure your auditors and IT managers are happy.


Oracle Audit Vault Oracle Database Plug-In

The Oracle Audit Vault uses plug-ins to define data sources.  The following table summarizes several of the important facts about the Oracle Audit Vault database plug-in for Oracle databases:

Oracle Database Plug-In for the Oracle Audit Vault

Plug-in directory:
AGENT_HOME/av/plugins/com.oracle.av.plugin.oracle

Secured Target Versions:
Oracle 10g, 11g, 12c Release 1 (12.1)

Secured Target Platforms:
Linux/x86-64, Solaris/x86-64, Solaris/SPARC64, AIX/Power64, Windows/x86-64, HP-UX Itanium

Secured Target Location (Connect String):
jdbc:oracle:thin:@//hostname:port/service

AVDF Audit Trail Types:
TABLE, DIRECTORY, TRANSACTION LOG, SYSLOG (Linux only), EVENT LOG (Windows only), NETWORK

Audit Trail Location:
  • For TABLE audit trails: sys.aud$, sys.fga_log$, dvsys.audit_trail$ and unified_audit_trail
  • For DIRECTORY audit trails: full path to the directory containing AUD or XML files
  • For SYSLOG audit trails: full path to the directory containing the syslog file
  • For TRANSACTION LOG, EVENT LOG and NETWORK audit trails: no trail location required

If you have questions, please contact us at info@integrigy.com

Reference
Auditing, Oracle Audit Vault, Oracle Database
Categories: APPS Blogs, Security Blogs

Packt - The $5 eBook Bonanza is here!

Surachart Opun - Tue, 2014-12-23 00:38
 The $5 eBook Bonanza is here!
Spreading the news for people who are interested in reading IT books: the $5 eBook Bonanza is here! You can get any Packt eBook or video for just $5 until January 6th, 2015.
Categories: DBA Blogs

Is Oracle Database Time Correct? Something Is Not Quite Right.


Oracle Database tuning and performance analysis is usually based on time. As I blogged HERE, the Oracle "database time" statistic is more interesting than simply "time spent in the database." It is the sum of CPU consumption and non-idle wait time. And Elapsed Time is the sum of all the database time related to, say, a session or a SQL statement execution. However...

If you do the fundamental math, you'll notice the numbers don't always add up. In fact, they rarely match. In this posting, I want to demonstrate this mismatch and I want you to see this on your systems!

I'll include experimental data from a number of different systems, the statistical analysis (including pictures) and provide a tool you can download for free from OraPub.com to check out the situation on your systems.

Checking DB Time Math

DB Time is defined as "time spent in the database." This is the sum of Oracle process CPU consumption plus non-idle wait time. Usually we don't derive DB Time. The Time Model Statistics view v$sys_time_model contains the DB Time statistic. But this view also contains the DB CPU statistic. Since there is no sum of non-idle wait time in the view, most people just assume everything is fine.

However, if you run the simple query below on your system, it could look something like this:
SQL> l
1 select db_time_s, db_cpu_s, tot_ni_wt_s
2 from (select value/1000000 db_time_s from v$sys_time_model where stat_name = 'DB time' ),
3 (select value/1000000 db_cpu_s from v$sys_time_model where stat_name = 'DB CPU' ),
4* (select sum(TIME_WAITED_MICRO_FG)/1000000 tot_ni_wt_s from v$system_event where wait_class != 'Idle' )
SQL> /

DB_TIME_S DB_CPU_S TOT_NI_WT_S
---------- ---------- -----------
330165.527 231403.925 119942.952

1 row selected.
If you add up the DB CPU and the total non-idle wait time, the value is 351,346.877. Whoops! 351K does not equal 330K. What happened on my Oracle Database 12c (12.1.0.2.0)? As I have demonstrated in this POSTING (which contains videos of this) and in my online seminar training HERE, many times DB Time does nearly equal DB CPU plus the non-idle wait time. But clearly in the above situation something is not quite right.

Checking DB Time On Your Systems
To demonstrate the possibility of a DB Time mismatch, I created a simple PL/SQL tool. You can download this free tool or do an OraPub.com search for "db time tool". The tool, which is easily configurable, takes a number of samples over a period of time and displays the output.


Here is an example of the output.

OraPub DB Time Test v1a 26-Sep-2014. Enjoy but use at your own risk.
.
Starting to collect 11 180 second samples now...
All displayed times are in seconds.
.
anonymer Block abgeschlossen
..........................................................................
... RAW OUTPUT (keep the output for your records and analysis)
..........................................................................
.
sample#, db_time_delta_v , db_cpu_delta_v, tot_ni_wait_delta_v, derived_db_time_delta_v, diff_v, diff_pct_v
.
1, 128,4, 128,254, ,103, 128,357266, ,043, 0
2, 22,014, 3,883, 17,731, 21,614215, ,399, 1,8
3, 1,625, 1,251, ,003, 1,253703, ,371, 22,8
4, 13,967, 12,719, 1,476, 14,194999, -,228, -1,6
5, 41,086, 41,259, ,228, 41,486482, -,4, -1
6, 36,872, 36,466, ,127, 36,593884, ,278, ,8
7, 38,545, 38,71, ,137, 38,847459, -,303, -,8
8, 37,264, 37,341, ,122, 37,463525, -,199, -,5
9, 22,818, 22,866, ,102, 22,967141, -,149, -,7
10, 30,985, 30,614, ,109, 30,723831, ,261, ,8
11, 5,795, 5,445, ,513, 5,958586, -,164, -2,8
.
The test is complete.
.
All displayed times are in seconds.

The output is formatted to make it easy to analyze statistically. The far-right column is the percent difference between the reported DB Time and the calculated DB Time. In the above example, they are pretty close. Get the tool and try it out on your systems.

Some Actual Examples
I want to quickly show you four examples from a variety of systems. You can download all the data in the "analysis pack" HERE. The data, for each of the four systems, contains the raw DB Time Test output (like in the section above), the statistical numeric analysis output from the statistical package "R", the actual "R" script and the visual analysis using "smooth histograms" also created using "R."

Below is the statistical numeric summary:


About the columns: only the "craig" system is mine; the others are real production or DEV/QA systems. The statistical columns all reference the far-right column of the DB Time Test tool's output, which is the percent difference between the reported DB Time and the calculated DB Time. Each sample set consists of eleven 180 second samples. A P-Value greater than 0.05 means the reported and calculated DB Time differences are normally distributed. This is not important in this analysis, but it gives me clues if there is a problem with the data collection.

As you can easily see, for two of the systems the "DB Time" difference is greater than 10%, and for one of them it is over 20%. The data collected shows that something is not quite right... but that's about it.

What Does This Mean In Our Work?
Clearly something is not quite right. There are a number of possible reasons, and this will be the focus of my next few articles.

However, I want to say that even though the numbers don't match perfectly, and sometimes they are way off, this does not negate the value of a time based analysis. Remember, we're not trying to land a man on the moon. We are diagnosing performance to derive solutions that (usually) aim to reduce the database time. I suspect that in all four cases I show, we would not be misled.

But this does highlight the requirement to also analyze performance from a non-Oracle-database-centric perspective. I always look at the performance situation from an operating system perspective, an Oracle-centric perspective and an application (think: SQL, processes, user experience, etc.) perspective. This "3 Circle" analysis will reduce the likelihood of making a tuning diagnosis mistake. So in case DB Time is completely messed up, by diagnosing performance from the other two "circles" you will know something is not right.

If you want to learn more about my "3-Circle" analysis, here are two resources:
  1. Paper. Total Performance Management. Do an OraPub search for "3 circle" and you'll find it.
  2. Online Seminar: Tuning Oracle Using An AWR Report. I go DEEP into an Oracle Time Based Analysis while keeping it practical for day-to-day production systems.
In my next few articles I will drill down into why there can be a "DB Time mismatch," what to do about it and how to use this knowledge to our advantage.

Enjoy your work! There is nothing quite like analyzing performance and tuning Oracle database systems!!

Craig.





Categories: DBA Blogs

New features in ksplice uptrack-upgrade tools for Oracle Linux

Wim Coekaerts - Mon, 2014-12-22 14:03
We have many, many happy Oracle Linux customers that use and rely on the Oracle Ksplice service to keep their kernels up to date with all the critical CVEs/bugfixes that we release as zero downtime patches.

There are 2 ways to use the Ksplice service:

  • Online edition/client
  • The uptrack tools (the Ksplice utilities you install on an Oracle Linux server to start applying ksplice updates) connect directly with the Oracle server to download updates. This model gives the most flexibility in terms of providing information of patches and detail of what is installed because we have a website on which you can find your servers and detailed patch status.

  • Offline edition/client
  • Many companies cannot or do not register all servers remotely with our system so they can rely on the offline client to apply updates. In this mode, the ksplice patches are packaged in RPMs for convenience. For each kernel that is shipped by Oracle for Oracle Linux, we provide a corresponding uptrack-update RPM for that specific kernel version. This RPM contains all the updates that have been released since that version was released.

The RPM is updated whenever a new ksplice patch becomes available. So you always have 1 RPM installed for a given kernel, and this RPM gets updated. This way, standard yum/rpm commands can be used to update your server(s) with ksplice patches as well, and everything is nicely integrated.

The standard model is that an uptrack-upgrade command will apply all updates to current/latest on your server. This is of course the preferred way of applying security fixes on your running system; it's best to be on the latest version. However, in some cases, customers want more fine-grained control than latest.

We just did an update of the ksplice offline tools to add support for updating to a specific "kernel version". This way, if you are on kernel version x and you would like to go to kernel version y (effective patches/security fixes) but latest is kernel version z, you can tell uptrack-upgrade to go to kernel version y. Let me give a quick and simple example below. I hope this is a useful addition to the tools.

Happy holidays and happy ksplicing!

To install the tools, make sure that your server(s) has access to the ol6_x86_64_ksplice channel (if it's OL6):

$ yum install uptrack-offline

Now, in my example, I have Oracle Linux 6 installed with the following version of UEK3:

$ uname -r
3.8.13-44.1.1.el6uek.x86_64

Let's check if updates are available:

$ yum search uptrack-updates-3.8.13-44.1.1
Loaded plugins: rhnplugin, security
This system is receiving updates from ULN.
=========== N/S Matched: uptrack-updates-3.8.13-44.1.1.el6uek.x86_64 ===========
uptrack-updates-3.8.13-44.1.1.el6uek.x86_64.noarch : Rebootless updates for the
     ...: Ksplice Uptrack rebootless kernel update service

As I mentioned earlier, for each kernel there's a corresponding ksplice update RPM. Just install that. In this case, I run 3.8.13-44.1.1.

$ yum install uptrack-updates-3.8.13-44.1.1.el6uek.x86_64.noarch
Loaded plugins: rhnplugin, security
This system is receiving updates from ULN.
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package uptrack-updates-3.8.13-44.1.1.el6uek.x86_64.noarch 0:20141216-0 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package                             Arch   Version    Repository          Size
================================================================================
Installing:
 uptrack-updates-3.8.13-44.1.1.el6uek.x86_64
                                     noarch 20141216-0 ol6_x86_64_ksplice  39 M

Transaction Summary
================================================================================
Install       1 Package(s)

Total download size: 39 M
Installed size: 40 M
Is this ok [y/N]: y
Downloading Packages:
uptrack-updates-3.8.13-44.1.1.el6uek.x86_64-20141216-0.n |  39 MB     00:29
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : uptrack-updates-3.8.13-44.1.1.el6uek.x86_64-20141216-0.noa   1/1
The following steps will be taken:
Install [b9hqohyk] CVE-2014-5077: Remote denial-of-service in SCTP on simultaneous connections.
...
...
Installing [vtujkei9] CVE-2014-6410: Denial of service in UDF filesystem parsing.
Your kernel is fully up to date.
Effective kernel version is 3.8.13-55.1.1.el6uek
  Verifying  : uptrack-updates-3.8.13-44.1.1.el6uek.x86_64-20141216-0.noa   1/1

Installed:
  uptrack-updates-3.8.13-44.1.1.el6uek.x86_64.noarch 0:20141216-0

Complete!

There have been a ton of updates released since 44.1.1, and the above update gets me to effectively running 3.8.13-55.1.1. Of course, without a reboot.

$ uptrack-uname -r
3.8.13-55.1.1.el6uek.x86_64

Now we get to the new feature. There's a new option in uptrack-upgrade that lists all effective kernel versions from the installed kernel to the latest, based on the ksplice rpm installed.

$ uptrack-upgrade --list-effective
Available effective kernel versions:

3.8.13-44.1.1.el6uek.x86_64/#2 SMP Wed Sep 10 06:10:25 PDT 2014
3.8.13-44.1.3.el6uek.x86_64/#2 SMP Wed Oct 15 19:53:10 PDT 2014
3.8.13-44.1.4.el6uek.x86_64/#2 SMP Wed Oct 29 23:58:06 PDT 2014
3.8.13-44.1.5.el6uek.x86_64/#2 SMP Wed Nov 12 14:23:31 PST 2014
3.8.13-55.el6uek.x86_64/#2 SMP Mon Dec 1 11:32:40 PST 2014
3.8.13-55.1.1.el6uek.x86_64/#2 SMP Thu Dec 11 00:20:49 PST 2014

So as an example, let's say I want to update from 44.1.1 to 44.1.5 instead of to 55.1.1 (for whatever reason I might have). All I have to do is run uptrack-upgrade to go to that effective kernel version.

Let's start with removing the installed updates to go back from 55.1.1 to 44.1.1, and then upgrade again to 44.1.5:

$ uptrack-remove --all
...
$ uptrack-upgrade --effective="3.8.13-44.1.5.el6uek.x86_64/#2 SMP Wed Nov 12 14:23:31 PST 2014"
...
...
Effective kernel version is 3.8.13-44.1.5.el6uek

    And that's it.

eDVD

Bradley Brown - Mon, 2014-12-22 11:27
We've struggled to figure out what to call this next generation of video delivery.  Is it "video on demand?"  The industry insiders are very specific in calling it TVOD and SVOD, which stand for transactional and subscription video on demand respectively.  Transactional video on demand means that consumers can buy or rent a video and watch it on their device.  Subscription video on demand means that consumers buy a subscription and can watch a group of videos as part of their subscription.

But what does all of this have to do with consumers and how they talk about "video on demand?"  I certainly don't hear consumers using that term.  In fact, when my son recently posted a video on Facebook, my mother-in-law (his grandma) said "that was a really cool DVD Austin."  Later she asked "that was a DVD, right?"  You could hear her questioning herself about the use of the term DVD.  Austin was a little taken aback by the question, paused and said "yes grandma."  He didn't want to get into the details that a DVD is a physical storage medium, not a method of playing a video on Facebook.

That got me thinking about the label I originally gave this new technology - the eDVD.  This would allow people to continue referring to online videos as DVDs - specifically eDVDs.  So how do we change the world's view of these terms and get everyone to start calling them eDVDs?  Now that I've declared it, the world knows, right?  Well...not quite yet, but I'm sure very soon :)  Spread the word!

Happy Holidays and a Prosperous New Year from VitalSoftTech!

VitalSoftTech - Sun, 2014-12-21 09:01
We at VST want to thank you, our prestigious members, for making 2014 a memorable year for us!  We are ever so grateful for your continuous support, participation and words of encouragement.  As technological mavens, you help us sustain this community with quality feedback that drives continued success for us all! How about mastering a […]
Categories: DBA Blogs

    Digital Delivery "Badge"

    Bradley Brown - Sun, 2014-12-21 00:44
    At InteliVideo we have come to understand that we need to do everything we can to help our clients sell more digital content. It seems obvious that consumers want to watch videos on devices like their phones, tablets, laptops, and TVs, but it's not so obvious to the everyone. They have been using DVDs for a number of years - and likely VHS tapes before that. We believe it’s important for your customers to understand why they would want to purchase a digital product rather than a physical product (i.e. a DVD).

    Better buttons drive sales.  Across all our apps and clients we know we are going to need to really nail our asset delivery process with split tests and our button and banner catalog.  We've simplified the addition of a badge on a client's page. They effectively have to add 4 lines of HTML in order to add our digital delivery badge.

    Our clients can use any of the images that InteliVideo provides or we’re happy to provide an editable image file (EPS format) so they can make their own image.  Here are some of our badges that we created:

    [Image: sample digital delivery badges]

    On our client's web page, it looks something like this:

    [Image: the badge on a client's web page]

    The image above (Watch Now on Any Device) is the important component.  This is the component that our clients place somewhere on their web page(s).  When it is clicked, the existing page is dimmed and a lightbox pops up displaying the “Why Digital” message:

    [Image: the “Why Digital” lightbox]

    What do your clients' customers need to know about in order to help you sell more?

    Log Buffer #402, A Carnival of the Vanities for DBAs

    Pakistan's First Oracle Blog - Sat, 2014-12-20 18:39
    This Log Buffer edition hits the ball out of the park, surfacing yet another unique collection of blog posts from various database technologies. Enjoy!!!

    Oracle:

    EM12c and the Optimizer Statistics Console.
    SUCCESS and FAILURE Columns in DBA_STMT_AUDIT_OPTS.
    OBIEE and ODI on Hadoop : Next-Generation Initiatives To Improve Hive Performance.
    Oracle 12.1.0.2 Bundle Patching.
    Performance Issues with the Sequence NEXTVAL Call.

    SQL Server:

    GUIDs GUIDs everywhere, but how is my data unique?
    Questions About T-SQL Transaction Isolation Levels You Were Too Shy to Ask.
    Introduction to Advanced Transact SQL Stairway and Using the CROSS JOIN Operator.
    Introduction to Azure SQL Database Scalability.
    What To Do When the Import and Export Wizard Fails.

    MySQL:

    Orchestrator 1.2.9 GA released.
    Making HAProxy 1.5 replication lag aware in MySQL.
    Monitor MySQL Performance Interactively With VividCortex.
    InnoDB’s multi-versioning handling can be Achilles’ heel.
    Memory summary tables in Performance Schema in MySQL 5.7.

    Also published here.
    Categories: DBA Blogs

    What does an App Cost?

    Bradley Brown - Sat, 2014-12-20 17:59
    People commonly ask me this question, and the answer has a very wide range.  You can get an app built on oDesk for nearly free - i.e. $2000 or less.  Will it provide the functionality you need?  It might!  Do you need a website that does the same thing?  Do you need a database (i.e. something beyond the app) to store your data for your customers?

    Our first round of apps at InteliVideo cost us $2,000-10,000 each to develop.  We spent a LOT of money on the backend server code.  Our first versions were pretty fragile (i.e. broke fairly easily) and weren't very sexy.  We decided that we needed to revamp our apps from stem to stern...APIs to easy branding to UI.

    Here's a look at our prior version.  Our customers (people who buy videos) aren't typically buying from more than 1 of our clients - yet.  But in the prior version I saw a list of all of the products I had purchased.  It's not a very sexy UI - just a simple list of videos:


    When I drilled into a specific product, again I see a list of videos within the product:


    I can download or play a video in a product:


    Here's what it looks like for The Dailey Method:



    Here's the new version demonstrating the branding for Chris Burandt.  I've purchased a yearly subscription that currently includes 73 videos.  I scroll (right, not down) through those 73 videos here:


    Or if I click on the title, I get to see a list of the videos in more detail:


    Notice the colors (branding) are shown everywhere here.  I scrolled up to look through those videos:


    Here's a specific video about a technique to get your sled unstuck:


    Here's what the app looks like when I'm a customer of The Dailey Method.  Again, notice the branding everywhere:


    Looking at a specific video and its details:


    We built a native app for iOS (iPad, iPhone, iPod), Android, Windows and Mac that has all of the same look, feel, functionality, etc.  This was a MAJOR undertaking!

    The good news is that if you want to start a business and build an MVP (Minimum Viable Product) to see if there is actually a market for your product, you don't have to spend hundreds of thousands to do so...but you might have to later!


    PeopleTools 8.54 Feature: Support for Oracle Database Materialized Views

    Javier Delgado - Fri, 2014-12-19 17:04
    One of the new features of PeopleTools 8.54 is support for Oracle Database Materialized Views. In a nutshell, a Materialized View can be seen as a snapshot of a given view. When you query a Materialized View, the data is not necessarily accessed online; instead it is retrieved from the latest snapshot. This can greatly improve query performance, particularly for complex SQL or Pivot Grids.

    Materialized Views Features
    Apart from the performance benefits associated with them, one of the most interesting features of Materialized Views is how the data refresh is handled. Oracle Database supports two ways of refreshing data:


    • On Commit: data is refreshed whenever a commit takes place in any of the underlying tables. In a way, this method is equivalent to maintaining a staging table (the Materialized View) through triggers whenever a source table changes, but all this complexity is hidden from the developer. Unfortunately, this method is only available for join-based or single-table aggregate views.

    Although it has the benefit of retrieving nearly online information, you would normally use On Commit only for views based on tables that do not change very often: since the Materialized View is refreshed every time a commit is made, insert, update and delete performance on the source tables will be affected. Hint: you would normally use the On Commit method for views based on Control tables, not Transactional tables.
    • On Demand: data is refreshed on demand. This option is valid for all types of views, and implies that the Materialized View data is only refreshed when requested by the administrator. PeopleTools 8.54 includes a page named Materialized View Maintenance where on-demand refreshes can be configured to run periodically.




    If you choose the On Demand method, the data refresh can follow two different approaches (see the SQL sketch after this list):


    • Fast, which refreshes only the rows in the Materialized View affected by the changes made to the source records.


    • Full, which fully recalculates the Materialized View contents. This method is preferable when large-volume changes are usually performed against the source records between refreshes. It is also required after certain types of updates on the source records (i.e. INSERT statements using the APPEND hint), and when one of the source records is itself a Materialized View that has been refreshed using the Full method.
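
    Under the covers this maps to standard Oracle DDL. Here is a minimal sketch against a hypothetical SALES table; it is illustrative only, not the exact DDL that Application Designer generates:

    -- Materialized view log required for FAST refresh on SALES
    CREATE MATERIALIZED VIEW LOG ON sales
      WITH SEQUENCE, ROWID (region, amount) INCLUDING NEW VALUES;

    -- On Commit: single-table aggregate, refreshed at every commit on SALES
    CREATE MATERIALIZED VIEW mv_sales_by_region
      REFRESH FAST ON COMMIT
      AS SELECT region, SUM(amount) AS total_amount,
                COUNT(amount) AS cnt_amount, COUNT(*) AS cnt
           FROM sales
          GROUP BY region;

    -- On Demand: refreshed only when explicitly requested
    CREATE MATERIALIZED VIEW mv_sales_copy
      REFRESH FORCE ON DEMAND
      AS SELECT * FROM sales;

    -- Manual refresh: 'F' = Fast, 'C' = Complete (Full)
    EXEC DBMS_MVIEW.REFRESH('MV_SALES_COPY', 'C');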


    How can we use them in PeopleTools?
    Before PeopleTools 8.54, Materialized Views could be used as an Oracle Database feature, but the DBA would be responsible for editing the Application Designer build scripts to include the specific syntax for this kind of view. On top of that, the DBA would need to schedule the data refresh directly in the database.

    PeopleTools 8.54 introduces support within the PeopleSoft tools. First of all, Application Designer now shows new options for View records:



    We have already seen what Refresh Mode and Refresh Method mean. The Build Options indicate to Application Designer whether the Materialized View data needs to be calculated when the build is executed or whether it can be delayed until the first refresh is requested from the Materialized View Maintenance page.

    This page is used to determine when to refresh the Materialized Views. The refresh can be executed for multiple views at once and scheduled using the usual PeopleTools Process Scheduler recurrence features. Alternatively, the Refresh Interval [seconds] may be used to indicate to the database that the view needs to be refreshed every n seconds.

    Limitations
    The main disadvantage of Materialized Views is that they are specific to Oracle Database. On any other platform the record acts as a normal view, which keeps similar functional behaviour but loses the performance advantages of Materialized Views.

    Conclusions
    All in all, Materialized Views are a very interesting feature for improving system performance while keeping the information reasonably up to date. Personally, I wish I'd had this feature available for many of the reports I've built over the years... :-)

    Do You Really Need a Content Delivery Network (CDN)?

    Bradley Brown - Fri, 2014-12-19 10:39
    When I first heard about Amazon's offering called CloudFront I really didn't understand what it offered and who would want to use it.  I don't think they initially called it a content delivery network (CDN), but I could be wrong about that.  Maybe it was just something I didn't think I needed at that time.

    Amazon states it well today (as you might expect).  The offering "gives developers and businesses an easy way to distribute content to end users with low latency, and high data transfer speeds."

    So when you hear the word "content", what is it that you think about?  What is content?  First off, it's digital content.  So...website pages?  That's what I initially thought of.  But it's really any digital content.  Audio books, videos, PDFs - files of any type, any size.

    When it comes to distributing this digital content, why would you need to do this with low latency and/or high transfer speeds?  Sure, this is important if your website traffic scales up from 1-10 concurrent viewers to millions overnight.  How realistic is that for your business?  What about the other types of content - such as videos?  Yep, now I'm referring to what we do at InteliVideo!

    A CDN allows you to scale up to any number of customers viewing or downloading your content concurrently.  Latency translates to "slowness": when you try to download a video from Japan, the file has to move across the ocean.  The way Amazon handles this is to move the file between their data centers over their fast pipes (high-speed internet), so the customer effectively downloads the file directly from Japan.

    Imagine that you have this amazing set of videos that you want to bundle up and sell to millions of people.  You don't know when your sales will go viral, but when it happens you want to be ready!  So how do you implement a CDN for your videos, audios, and other content?  Leave that to us!

    So back to the original question.  Do you really need a content delivery network?  Well...what if you could get all of the benefits of having one without having to lift a finger?  Would you do it then?  Of course you would!  That's exactly what we do for you.  We make it SUPER simple - i.e. it's done 100% automatically for our clients and their customers.  Do you really need a CDN?  It depends on how many concurrent people are viewing your content and where they are located.

    For my Oracle training classes that I offer through BDB Software, I have customers from around the world, which I personally find so cool!  Does BDB Software need a CDN?  It absolutely makes for a better customer experience and I have to do NOTHING to get this benefit!

    What Do Oracle Audit Vault Collection Agents Do?

    The Oracle Audit Vault is installed on a server, and collector agents are installed on the hosts running the source databases.  These collector agents communicate with the audit vault server. 

    If the collection agents are not active, no audit data is lost, as long as the source database continues to collect the audit data.  When the collection agent is restarted, it will capture the audit data that the source database had collected during the time the collection agent was inactive.

    There are three types of agent collectors for Oracle databases.  There are other collectors for third-party database vendors such as SAP Sybase, Microsoft SQL Server, and IBM DB2.

    Audit Vault Collectors for Oracle Databases*

    DBAUD - Database audit trail
    For standard audit records: the AUDIT_TRAIL initialization parameter is set to DB or DB, EXTENDED.
    For fine-grained audit records: the audit_trail parameter of the DBMS_FGA.ADD_POLICY procedure is set to DBMS_FGA.DB or DBMS_FGA.DB + DBMS_FGA.EXTENDED.

    OSAUD - Operating system audit trail
    For standard audit records: the AUDIT_TRAIL initialization parameter is set to OS, XML, or XML, EXTENDED.
    For syslog audit trails: AUDIT_TRAIL is set to OS and the AUDIT_SYS_OPERATIONS parameter is set to TRUE.  In addition, the AUDIT_SYSLOG_LEVEL parameter must be set.
    For fine-grained audit records: the audit_trail parameter of the DBMS_FGA.ADD_POLICY procedure is set to DBMS_FGA.XML or DBMS_FGA.XML + DBMS_FGA.EXTENDED.

    REDO - Redo log files
    The table that you want to audit must be eligible.  See "Creating Capture Rules for Redo Log File Auditing" for more information.

    * Note: if using Oracle 12c, the assumption is that Mixed Mode Unified Auditing is being used.
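
    For context, enabling these source-side trails is ordinary Oracle configuration.  A hedged sketch follows; the schema, table, and policy names are hypothetical, and the AUDIT_TRAIL change only takes effect after a restart:

    -- Standard auditing to the database audit trail (restart required)
    ALTER SYSTEM SET audit_trail = DB, EXTENDED SCOPE = SPFILE;

    -- Fine-grained auditing to the database audit trail
    BEGIN
      DBMS_FGA.ADD_POLICY(
        object_schema => 'HR',           -- hypothetical schema
        object_name   => 'EMPLOYEES',    -- hypothetical table
        policy_name   => 'WATCH_SALARY', -- hypothetical policy
        audit_column  => 'SALARY',
        audit_trail   => DBMS_FGA.DB + DBMS_FGA.EXTENDED);
    END;
    /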

    If you have questions, please contact us at info@integrigy.com.

    Categories: APPS Blogs, Security Blogs

    Elephants and Tigers - V8 of the Website

    Bradley Brown - Thu, 2014-12-18 21:54
    It's amazing how much work goes into a one-page website these days!  We've been working on the new version of our website (which is basically one page) for the last month or so.  The content is the "easy" part; the look and feel / experience is the time-consuming part.  To put it another way, it's all about the entire experience, not just the text/content.

    Since we're a video company, it's important that the first page show some video...which required production and editing.  We're hunting elephants, so we need to tell the full story of the implementations we've done for our large clients.  What all can you sell on our platform?  A video?  Audio books?  Movies?  TV shows?  What else?  We needed to talk about our onboarding process for the big guys.  What does the shopping cart integration look like?  We have an entirely new round of apps coming out soon, so we need to show those off.  We need to answer the question "What do our apps look like?"  Everybody wants analytics, right?  You want to know who watched what - for how long, when and where!  What about all of the ways you can monetize - subscriptions (SVOD), transactional (TVOD) - rentals and purchases, credit-based purchases, and more?  What about those enterprises who need to restrict (or allow) viewing based on location?

    Yes, it's quite a story that we've learned over the past few years.  Enterprises (a.k.a. Elephants) need it all.  We're "enterprise guys" after all.  It's natural for us to hunt Elephants.

    Let's walk through this step by step.  In some ways it's like producing a movie: a lot of moving parts, a lot of post editing, and ultimately it comes down to the final cut.

    What is it that you want to deliver?  Spoken word?  TV shows?  Training?  Workouts?  Maybe you want to jump right into why digital, how to customize or other topics...


    Let's talk about why to go digital.  Does it seem obvious to you?  It's not obvious to everyone.  Companies are still selling a lot of DVDs.


    Any device, anywhere, any time!  That's how your customers want the content.


    We have everything from APIs to Single Sign On, and SO much more...we are in fact an enterprise solution.


    It's time to talk about the benefits.  We have these awesome apps that we've spent a fortune developing, and they allow our clients a full branding experience, as you see here for UFC FIT.


    We integrate with most of our large customers' existing shopping carts.  We simply receive an instant payment notification from them to authorize a new customer.


    I'm a data guy at heart, so we track everything about who's watching what, where they are watching from and so much more.  Our analytics reporting shows you this data.  Ultimately this leads to strategic upselling to existing customers.  It's always easier to sell to someone who's already purchased than to a new customer.


    What website would be complete without a full list of client testimonials?


    If you can dream up a way to monetize your content, we can implement it.  From credit-based subscription systems to straight-out purchases...we have it all!


    What if you want to sell through affiliates?  How about selling the InteliVideo platform as an affiliate?  Our founders came from ClickBank, so we understand Affiliate payments and how to process them.


    Do you need a step-by-step guide to our implementation process?  Well...if so, here you have it!  It's as simple as 5 steps.  For some customers this is a matter of hours and for others it's months.  The first step is simply signing up for an InteliVideo account at: http://intelivideo.com/sign-up/ 


    We handle payment processing for you if you would like.  But...most big companies have already negotiated their merchant processing rates AND they typically already have a shopping cart.  So we integrate as needed.


    Loading up your content is pretty easy with our platform.  Then again, we have customers with as few as one product and others with thousands of products and tens of thousands of assets (videos, audio files, etc.).  Most of our big customers simply send us a drive.  We have a bulk upload process where you give us your drive, all of the metadata (descriptions) and the mapping of each...and we load it all up for you.


    Our customers can use our own sales pages and/or membership area...or we have a template engine that allows for comprehensive redesign of the entire look and feel.  Out-of-the-box implementations are simple...


    Once our clients sign off on everything and our implementation team does as well...it's time to buy your media, promote your products and start selling.  We handle the delivery.


    For those who would like to sign up or need more information, what website would be complete without a contact me page?  There are other pages (like our blog, about us, etc), but this page has a lot of information.  It's a story.  At the bottom of the page there is a "Small Business" link, which takes you to the prior version of our website...for small businesses.


    As I said at the beginning of this blog post...it's amazing how much thought goes into a new web page!  We're very excited about our business.  Hopefully this post helped you think through how you want to tell the stories about your business.  How should you focus on your elephants and tigers?  How often should you update your website?  Go forth and crush it!

    This new version of our website should be live in the next day or so...as always, I'd love to hear your feedback!
