
Feed aggregator

First ORCLAPEX New York City Meetup

Marc Sewtz - 6 hours 21 min ago
We’re excited to announce the first ever Oracle Application Express Meetup in New York City. Join us on May 23rd at the Oracle office at 120 Park Ave – right across from Grand Central. Meet other APEX developers working in the area and see what’s coming in APEX 5.0. New to APEX? Don’t worry, we’ll get you up to speed and show you what this product is all about.
As a special guest speaker, we’ll have Peter Raganitsch, from Click Click IT Solution in Vienna, Austria - known in the community for the APEXlib framework and the FOEX plug-in - show us how to use friendly URLs with your APEX applications.
Learn more about our upcoming ORCLAPEX NYC meetup and sign up to join us!


SQL Server Transaction Log Basics

Chris Foot - 7 hours 15 min ago

Here at RDX, our internal and external clients can have varying levels of SQL Server knowledge. Explaining technical details to clients is, in my opinion, one of the most rewarding tasks we perform. Not only does this allow us to provide client-specific recommendations, it allows us to evolve from ‘support personnel’ into trusted team members. In my second blog post, I am addressing one of the topics that come up often – SQL Server transaction logs.

Transaction Logs and Their Internal Structure

Each database within SQL Server has at least one transaction log file. The default file extension for these is “LDF”. The transaction log files are internally split into virtual log files (VLFs). VLFs are created when a transaction log file is created and when the transaction log file grows. With improper autogrowth settings or within large transaction log files, the number of VLFs can grow to the point that performance degrades. You can identify the number of VLFs in a database by running this command and noting the number of rows returned:

DBCC LogInfo

Recovery Models and Their Effects

Some data is transient or can easily be rolled forward through your application. For these databases, the Simple recovery model can be used. Full and differential backups are the available backup types, and records in the transaction log file are cleared once the transactions they describe complete.

In other situations, you need to minimize data loss, also referred to as ‘point-in-time’ recovery. Using the Full recovery model in conjunction with full, transaction log, and, at times, differential backups allows you to meet this need. Records in the transaction log remain in the file until a transaction log backup is taken. Without transaction log backups, databases using the Full recovery model will experience continual transaction log file growth.

During large bulk operations, the Bulk-logged recovery model can be used. This minimizes the amount of logging taking place in the transaction log and, therefore, your disk space requirements. Point-in-time recovery is not supported under this recovery model, so if point-in-time recovery is required, transaction log backups should be taken immediately before changing the database to Bulk-logged recovery and again after the database is changed back to Full recovery.
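The recovery model is a per-database setting changed with ALTER DATABASE. As a minimal sketch (YourDatabase is a placeholder name, not a real database), switching between the three models looks like this:

```sql
-- Placeholder database name; requires ALTER permission on the database.
ALTER DATABASE YourDatabase SET RECOVERY SIMPLE;       -- log cleared as transactions complete
ALTER DATABASE YourDatabase SET RECOVERY FULL;         -- enables point-in-time recovery
ALTER DATABASE YourDatabase SET RECOVERY BULK_LOGGED;  -- reduced logging for bulk operations

-- Confirm the current setting for each database:
SELECT name, recovery_model_desc
FROM   sys.databases;
```

Remember that after switching back to Full, the log chain does not restart until a full (or differential) backup is taken.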

How Transactions are Tracked

Within the VLFs are records of all transactions that occur within the database, marked with log sequence numbers (LSNs). These LSNs are important for recovering your data to a point in time if a disaster or failure occurs. All backups begin with creating a header containing the LSN details. This information can be used to ensure you have all of the backups you need to recover. Restoring backups out of order, skipping a restore of a transaction log backup, and restoring only transaction log backups are not supported. If a recent transaction log backup is deleted, it will affect your ability to recover, and a new full backup should be taken as soon as possible. If you suspect a transaction log backup has been deleted, you can compare the header details below:

  • FirstLSN - the LSN at the beginning of the backup

  • LastLSN - the LSN at the end of the backup; for transaction log backups, the LastLSN can be compared to the FirstLSN in the suspected next transaction log backup to confirm continuity

  • DatabaseBackupLSN - the FirstLSN of the last full database backup

  • DifferentialBaseLSN - the FirstLSN of the full database backup on which a differential backup is based
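As a sketch, these same header values can also be read from the backup history that SQL Server keeps in msdb.dbo.backupset, which makes a continuity check easy to script (this assumes SQL Server 2012 or later for the LAG window function; ‘YourDatabase’ is a placeholder):

```sql
-- Compare each log backup's FirstLSN with the previous backup's LastLSN.
-- A mismatch suggests a missing (possibly deleted) transaction log backup.
SELECT  b.backup_finish_date,
        b.first_lsn,
        b.last_lsn,
        CASE WHEN b.first_lsn = LAG(b.last_lsn)
                  OVER (ORDER BY b.backup_finish_date)
             THEN 'continuous'
             ELSE 'possible gap'
        END AS chain_status
FROM    msdb.dbo.backupset AS b
WHERE   b.database_name = N'YourDatabase'   -- placeholder name
AND     b.type = 'L'                        -- 'L' = transaction log backup
ORDER BY b.backup_finish_date;
```

The first log backup after a full backup will naturally show a ‘possible gap’ here, since its FirstLSN follows the full backup rather than a prior log backup.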

    Transaction logs are a critical concept to master when working with SQL Server. Without proper attention, they can impact the performance of your databases or affect your ability to recover from a disaster situation. Discussions between DBAs (‘accidental’ or otherwise) and application or business decision makers need to occur periodically to ensure that minimal data is lost outside of the SLA.

    I hope this post has provided you with useful insight into your SQL Server transaction logs. In the coming weeks, Matt Nelson will be explaining how to manage the number of VLFs in your databases. Stay tuned!

    Estimate Index Size With Explain Plan (I Can’t Explain)

    Richard Foote - 14 hours 58 min ago
    I discussed recently an updated MOS note that details the needs vs. the implications of rebuilding indexes. Following is a neat little trick if you want to very quickly and cheaply estimate the size of an index if it were to be rebuilt, or of a new index before you actually create the thing. I meant to blog about this sometime […]
    Categories: DBA Blogs

    Oracle CloudWorld Arrives in the Windy City

    Linda Fishman Hoyle - 16 hours 18 min ago

    A Guest Post by Natalia Rachelson, Senior Director, Oracle Applications 

    Oracle CloudWorld Chicago was buzzing with amazing energy last week. The event drew 800+ customers and prospects from near and far. Some were already running Oracle Applications Cloud. Some were working on plans to move their organizations to the Cloud. Others were interested in just learning more about the Cloud. All were attentive and engaged.

    Three themes rang true throughout all sessions: 1) datafication (e.g. explosion of data, Big Data), 2) Generation C (i.e. connected generation), and 3) the importance of customers’ experiences.

    GM’s New Customer Engagement Centers


    David Mingle, Executive Director of Customer Experience at General Motors, shared how GM changed its approach to Call (Complain) Centers and transformed them into Customer Engagement Centers. The Engagement Centers are no longer cost centers to the company; each one is hyper strategic. Engagement Center agents are trained to:

    • Show empathy
    • Have a high degree of product knowledge
    • Turn a seemingly bad situation into a win for GM or a dealer


    GM’s Use of Social: Mingle said the key is connecting with customers and engaging them over the long term. To do that, GM replaced hundreds of different social tools and standardized on a single platform, Oracle Social Cloud, to drive GM’s PR, Marketing, Communications, and Customer Care divisions.

    Engage Fully: “Social is not about Likes and Tweets; it is about engagement,” Mingle said more than once. He was referring to engagement across all customer touch points―in customer care, sales, or marketing, on the phone, on the web, and even in print.

    Dell’s Success with Oracle Talent Management Cloud

    Diane Paquet, HR Technology Director at Dell, spoke about Dell’s need to reinvent itself. Instead of a hardware company, it wanted to become a technology solution provider to enterprise- and medium-size businesses, which required a different skill set from its workforce. To make the transition, Dell adopted Oracle Talent Management Cloud.

    The Benefits of Cloud: During the sales cycle, the company was looking for a functionally rich, global solution. It did not favor one software delivery model over another; however, the decision makers saw that the cloud offered deployment agility, configurability, ease of use, and a steady stream of new features delivered via frequent upgrades. They were sold on the benefits.

    Modern Technology: To stay relevant in a rapidly changing environment, Paquet said HR professionals need to stay ahead of business needs and always be enablers. That involves having a technology that is agile enough to allow for frequent configuration changes to accommodate new business processes. In addition, the technology has to keep up with the latest trends of mobile and social.

    Results with Social Sourcing: Paquet commented that the most dynamic changes for HR professionals are coming in the talent acquisition space. Candidate behaviors are changing and evolving at a rapid rate. LinkedIn and social media are becoming cornerstones for how candidates seek out new job opportunities. Recruiters are limited by the size of their own networks. By leveraging Oracle’s Social Sourcing tool, Dell has been able to extend the reach of its recruiters and engage Dell’s entire employee network. Within the first four months of using Social Sourcing, Dell experienced 7,900 job shares and 3,800 candidate applications.

    Coming up to Speed with Oracle Onboarding:
      The Oracle Onboarding tool made Paquet gush. She said the tool allows Dell to deliver a consistent and high-quality experience to all hires. Dell uses Oracle Onboarding in 75 countries around the world and has more than 200 forms configured within it. Last year alone, 20,800 new hires completed their HR “paperwork” in the tool. Using Oracle’s amazing onboarding tool instills even more confidence in new hires about Dell as an employer.

    A Word About Personal Smart Devices: Paquet said that more and more employees are using personal mobile devices for work. Companies and systems need to accommodate the BYOD trend.

    What’s Good About Oracle? To close, Paquet stated that what makes Oracle stand out from the pack is the fact that Oracle offers the cloud in bites. People can move as little or as much to the cloud as they want.

    Good information! Good insight! Go Oracle!

    The Art of Easy: Moving at the Speed of Easy (Part 4 of 6)

    Linda Fishman Hoyle - 16 hours 28 min ago

    A Guest Post by Chris Omland, Director of Product Management, Oracle Service Cloud

    The speed of change IS the speed of Easy. To meet the demands of what it takes to exceed your customers’ expectations, you have to be like Superman. “Faster than a speeding bullet! More powerful than a locomotive! Able to leap tall buildings at a single bound!” The velocity of easy isn’t stopping or slowing down; catching up isn’t going to be good enough.

    The real question is, “How do you move ahead of the speed of easy?” You can’t see into the future, but you can be ready for constant, fast moving change. Agility starts from the ground up, as the foundation will ultimately dictate your ability to change—the degrees of what can change and the speed that change can be applied.

    So under your shirt, do you have an “S”? Do you have a platform that has the tools to enable agile innovation of customer and agent experiences, the extensibility and integration options to meet the unique needs of every business and customer, and the proven reliability and security demanded by today’s modern customers? To get ahead, you need to stick an “S” on it.

    Here are five platform ‘super powers’ that you need to consider to move at the speed of easy:

    1. Be Agile
    2. Be Unique
    3. Have ONE Master
    4. Own Your Schedule
    5. Prove Reliability


    1. Be Agile: Meaning “create, deliver, and fix stuff… fast, right?” It does if that is what the business demands of you to meet the expectations of your customers. If you don’t deliver to scope, to scale, and on time, you fail at serving the business and customer needs. No pressure, right? And certainly not easy, especially with a spaghetti foundation, with complexity that has matured over time. The answer for most businesses is not “rip it out and rebuild,” even if the new CIO says that’s what is needed.

    Modern Customer Service requires you to develop, test, deploy, maintain, measure, and refine processes that help differentiate customer service experiences when they are needed most. So “agile innovation” is really about your ability to establish a foundational layer that allows you to effectively deliver at the speed of change. To do that, you need a platform that is purpose built with the tools, designers, analytics, and operations that overlay all that complexity to make it easy again.

    2. Your Footprint Is Unique—Respect it! Let’s face it, your business is unique, which means you must interact with your customers in a unique way, too. You have unique systems, which are an essential part of your business processes—and your environment is constantly changing.

    This means you need solutions that can work with your unique footprint. Solutions that can integrate seamlessly into your environment and processes—and are not just bolted on. These solutions need to work with, not replace, your existing systems. They need to work on your schedule, giving you flexibility in your upgrades. They need to provide you with the ability to tailor the solution through configuration, not code. And for those times when you need to go beyond the configuration options, you need open and standards-based APIs that your developers, existing solutions, and tooling can work with on day one without learning proprietary languages and protocols.

    3. Have ONE Master: Your business already has customer data, product data, processes, and systems that you use every day. The trick is not to duplicate and reconcile later. It’s not to create another silo.

    Working with existing systems can mean synchronizing data, loading data in real-time, integrating functionality to create a unified business process, or bringing together user interfaces. The requirements and implementation will be different for every business, but what is common is the need to bring systems together to meet customer needs and the support processes you have to deliver on those needs.

    4. Own Your Schedule: You set your plans according to your business needs, because only you know when your business is ready to make changes in your solutions. You know when your support request volumes will be at a low point, when you can retrain agents, and when you can deploy new functionality. Why should any vendor tell you when you should upgrade your solution? And if you have more than one cloud vendor, how do you juggle all of the timing conflicts and disruptive schedules?

    Consolidate, standardize and have the freedom to set your own schedule. You did it with your on-premise solutions, so why shouldn’t you expect the same from your cloud solutions? Especially with solutions that support your most critical asset—your customers!

    5. Don’t TRUST It – Prove It: In the end, Modern Customer Service is about an experience and a relationship you develop with your customers. All great relationships are built upon one fundamental concept—trust. Trust means reliability, performance, and availability. Trust is not established and left alone. It’s earned, it’s proven, and it’s continually put to the test. Your customers are trusting you to protect the relationship they have with your brand. That means you have to be always available, responsive, and secure.

    To build, prove, and protect this trust with your customer, you need a solution that offers you the same level of trust. It’s easy to say a solution is proven, secure, and can deliver on your customer expectations. But it’s another to have proven it through reference customer examples, reference implementations, the highest level of security and compliance accreditations, and a global enterprise network with 24×7 support.

    Put an ‘S’ under your shirt and transform to Modern Customer Service. Start with a platform that enables agile innovation, respects your unique needs, and has proven reliability to help you protect your customer relationships. Learn why all Clouds are not equal. The Oracle Service Cloud platform is built from the ground up to help your business move at the speed of easy.

    Partial Transcript: Richard Levin (new Coursera CEO) on Charlie Rose

    Michael Feldstein - 18 hours 28 min ago

    I have written two posts recently about Coursera’s appointment of the former president of Yale as the company’s new CEO, with the implicit argument that this move represents a watershed moment for commercial MOOCs. In particular, Coursera seems likely to become the third generation of Richard Levin’s dream, following AllLearn and Open Yale Courses. I’ve also argued that Levin is embellishing the history by making Internet bandwidth a primary factor in the demise of AllLearn when the lack of a viable business model was the more important issue, with even Levin arguing this point.

    Richard Levin was just interviewed by Charlie Rose, and I am including a transcript of most of the segment (starting around 3:15), highlighting some key points in bold. This interview should give us further insight into the future of commercial MOOCs, especially as we have the first non-founder CEO in one of the big three commercial MOOC providers. Follow this link to watch on CharlieRose.com and avoid the annoying Hulu ad.

    Rose: You could have gotten a government job, as an ambassador or something; maybe been Secretary of the Treasury as far as I know . . . you could have done a lot of things. But you’re out running some online education company (laughs).

    Levin: It’s a fantastic mission, it’s really the perfect job for me and for following a university president’s [job].

    Rose: Why’s that?

    Levin: One, I like running things, so it’s an opportunity to run something. But most important it’s so much an extension of what I’ve tried to do. It’s to take Yale to the world, and this is an opportunity to take 108 of the world’s greatest educational institutions (and there’ll probably be some more) and teach the planet.

    Rose: Before you go there, let’s get the landscape. At Yale you tried some online education. A couple of others have had . . . and there is a checkered past.

    Levin: Well, there was a time of experimentation in learning. When we started in 2000 with Stanford and Oxford as partners, we thought our market was our own alumni, so we sort of narrowcast over the Internet. Then we opened it to the public, but the bandwidth wasn’t there.

    Rose: You mean the technical bandwidth?

    Levin: Yes, this was still the era when your videos were jerking around, you remember that? So it had that problem, and we just didn’t have the right model for making it work. And also it didn’t have a high degree of interactivity. Basically you watched a lecturer give a lecture, and maybe there were some visuals, but that was it.

    And then the next thing we did were “Open Yale Courses”, which basically were videos of 42 of our best lecture courses put out for free over the Internet, with support of the Hewlett Foundation. They were great, but very few people watched them from beginning to end. They were free. The materials of the course were distributed, but there were no quizzes, no exercises.

    Now what Coursera has done has sort of recognized that first of all, we have greater bandwidth, we can support lots of people at once – taking quizzes, reacting to the material, getting feedback; having professors look at the data to see what parts students are having a hard time and improving their courses. So it’s a constant feedback loop.

    It’s really terrific and the scale is immense. It’s amazing, we’ve had 7 million different people.

    Rose: Separate the landscape for me. There’s edX, there’s Sebastian Thern’s [Thrun’s] thing, Udacity. How are the three different?

    Levin: They’re all a little bit different, but those are three that are involved in this MOOC space [uses square quotes]. There’s lots of other things online. Many schools have had things online with closed enrollments for substantial tuition dollars for some time now.

    What these three are trying to do is go to a wide open public, and putting courses out for free and getting hundreds of thousands of people to sign up for them.

    Our approach and edX’s are pretty similar.

    Rose: edX is Harvard and MIT?

    Levin: edX is ‘Harvard and MIT want to do their own thing and not sign up with Coursera’ (laughs). At this time we have about three times as many partner institutions and three or four times the audience. It [edX] is a worthy effort, they’re doing a good job, and so are we, and we’re competing on what are the features offered for students. edX is open source software, which some of the computer science types like – it means they can play with it, they can add to the features on their own.

    But we’re developing interfaces that will allow faculty to add features as well.

    I think it’s good there’s competition. I’ve studied innovative industries; before I became president of Yale it was my field. Competition is good for innovation, the products will get better.

    Rose: But is the mission the same?

    Levin: I think that edX and Coursera have very similar missions. It’s to have great universities as the partners . . . the universities develop the courses. We’re not a university, Coursera’s not a university. Coursera is a platform and a technology company that serves universities.

    Rose: Backed by venture capital?

    Levin: Yeah, but I think the key lesson here is the scale.

    The post Partial Transcript: Richard Levin (new Coursera CEO) on Charlie Rose appeared first on e-Literate.

    Kicking the Remote Presence Device Tires with Beam

    Oracle AppsLab - 19 hours 20 min ago


    Remote Presence Devices, or RPDs, are finally becoming mainstream with products such as the Beam from Suitable Technologies. Today I kicked the tires of one (virtually, of course) thanks to friend of the lab Dan Kildahl. I toured the newly renovated marketing offices at Oracle HQ. My first impression was really good. Around family and friends I am known as the clumsy game player. Yeah, I’m the one that constantly gets stuck against walls during first-person video games. But with the Beam interface I was able to easily navigate around the floor. I didn’t hit any walls, and that is good news.

    I asked Dan how it was received around the office. He mentioned mixed opinions, which is completely understandable. All these new technologies are for sure changing social norms (see Google Glass). But as a technologist I just can’t help but feel excited.

    What are your thoughts?

    Lead DBA Position Phoenix Arizona with PeopleSoft

    Bobby Durrett's DBA Blog - 21 hours 8 min ago

    My company has posted a Lead Oracle DBA position located in Phoenix, Arizona which is where I also live.

    You have to apply through our web site using this link:

    https://usfood.taleo.net/careersection/USF_EXTERNAL/jobdetail.ftl?lang=en&job=14002202

    We would love to get someone who has PeopleSoft skills.

    You would be joining a friendly and experienced team of Oracle and SQL Server DBAs who support a wide variety of applications. I’ve been here eight years, and in that time my scope has expanded through exposure to data warehouse and customer-facing web applications that I had not previously supported. It’s a good position for a qualified person.

    - Bobby

    Categories: DBA Blogs

    NL History

    Jonathan Lewis - Wed, 2014-04-23 11:43

    Even the simplest things change – here’s a brief history of nested loop joins, starting from 8i, based on the following query (with some hints):

    select
    	t2.n1, t1.n2
    from
    	t2,t1
    where
    	t2.n2 = 45
    and	t2.n1 = t1.n1
    ;
    
    

    There’s an index to support the join from t2 to t1, and I’ve forced an (unsuitable) index scan for the predicate on t2.

    Basic plan for 8i (8.1.7.4)

    As reported by $ORACLE_HOME/rdbms/admin/utlxpls.sql.
    Note the absence of a Predicate Information section.

    Plan Table
    --------------------------------------------------------------------------------
    | Operation                 |  Name    |  Rows | Bytes|  Cost  | Pstart| Pstop |
    --------------------------------------------------------------------------------
    | SELECT STATEMENT          |          |   225 |    3K|   3038 |       |       |
    |  NESTED LOOPS             |          |   225 |    3K|   3038 |       |       |
    |   TABLE ACCESS BY INDEX RO|T2        |    15 |  120 |   3008 |       |       |
    |    INDEX FULL SCAN        |T2_I1     |    15 |      |      8 |       |       |
    |   TABLE ACCESS BY INDEX RO|T1        |     3K|   23K|      2 |       |       |
    |    INDEX RANGE SCAN       |T1_I1     |     3K|      |      1 |       |       |
    --------------------------------------------------------------------------------
    
    Basic plan for 9i (9.2.0.8)

    As reported by a call to a home-grown version of dbms_xplan.display_cursor() with statistics_level set to all.

    Note the “prefetch” shape of the body of the plan, but also the inconsistency in the numbers reported for Rows, Bytes, and Cost, which seem to be the “traditional” 8i values transposed to match the new arrangement of the operations. There’s also a little oddity in the A-Rows column in line 2, which looks as if it is the sum of its children plus 1 when the size of the rowsource is (presumably) the 225 rowids used to access the table.

    -----------------------------------------------------------------------------------------------------------
    | Id  | Operation                     |  Name       | Rows  | Bytes | Cost  | Starts  | A-Rows  | Buffers |
    -----------------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT              |             |   225 |  3600 |  3038 |         |         |         |
    |   1 |  TABLE ACCESS BY INDEX ROWID  | T1          |    15 |   120 |     2 |     1   |    225  |   3061  |
    |   2 |   NESTED LOOPS                |             |   225 |  3600 |  3038 |     1   |    241  |   3051  |
    |*  3 |    TABLE ACCESS BY INDEX ROWID| T2          |    15 |   120 |  3008 |     1   |     15  |   3017  |
    |   4 |     INDEX FULL SCAN           | T2_I1       |  3000 |       |     8 |     1   |   3000  |     17  |
    |*  5 |    INDEX RANGE SCAN           | T1_I1       |    15 |       |     1 |    15   |    225  |     34  |
    -----------------------------------------------------------------------------------------------------------
    
    Predicate Information (identified by operation id):
    ---------------------------------------------------
       3 - filter("T2"."N2"=45)
       5 - access("T2"."N1"="T1"."N1")
    
    
    Basic plan for 10g (10.2.0.5)

    As reported by a call to dbms_xplan.display_cursor() with statistics_level set to all.

    No change from 9i.

    -------------------------------------------------------------------------------------------------
    | Id  | Operation                     | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
    -------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT              |       |      0 |        |      0 |00:00:00.01 |       0 |
    |   1 |  TABLE ACCESS BY INDEX ROWID  | T1    |      1 |     15 |    225 |00:00:00.03 |    3061 |
    |   2 |   NESTED LOOPS                |       |      1 |    225 |    241 |00:00:00.03 |    3051 |
    |*  3 |    TABLE ACCESS BY INDEX ROWID| T2    |      1 |     15 |     15 |00:00:00.03 |    3017 |
    |   4 |     INDEX FULL SCAN           | T2_I1 |      1 |   3000 |   3000 |00:00:00.01 |      17 |
    |*  5 |    INDEX RANGE SCAN           | T1_I1 |     15 |     15 |    225 |00:00:00.01 |      34 |
    -------------------------------------------------------------------------------------------------
    
    Predicate Information (identified by operation id):
    ---------------------------------------------------
       3 - filter("T2"."N2"=45)
       5 - access("T2"."N1"="T1"."N1")
    
    
    Basic plan for 11g (11.2.0.4)

    As reported by a call to dbms_xplan.display_cursor() with statistics_level set to all.

    Note how the nested loop has now turned into two NESTED LOOP operations – potentially opening the way for a complete decoupling of index access and table access. This has an interesting effect on the number of starts of the table access by rowid for t1, of course. The number of buffer gets for this operation looks surprisingly low (given that it started 225 times) but can be explained by the pattern of the data distribution – and cross-checked by looking at the “buffer is pinned count” statistic which accounts for most of the table visits.

    
    -------------------------------------------------------------------------------------------------
    | Id  | Operation                     | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
    -------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT              |       |      1 |        |    225 |00:00:00.01 |    3048 |
    |   1 |  NESTED LOOPS                 |       |      1 |    225 |    225 |00:00:00.01 |    3048 |
    |   2 |   NESTED LOOPS                |       |      1 |    225 |    225 |00:00:00.01 |    3038 |
    |*  3 |    TABLE ACCESS BY INDEX ROWID| T2    |      1 |     15 |     15 |00:00:00.01 |    3013 |
    |   4 |     INDEX FULL SCAN           | T2_I1 |      1 |   3000 |   3000 |00:00:00.01 |      13 |
    |*  5 |    INDEX RANGE SCAN           | T1_I1 |     15 |     15 |    225 |00:00:00.01 |      25 |
    |   6 |   TABLE ACCESS BY INDEX ROWID | T1    |    225 |     15 |    225 |00:00:00.01 |      10 |
    -------------------------------------------------------------------------------------------------
    
    Predicate Information (identified by operation id):
    ---------------------------------------------------
       3 - filter("T2"."N2"=45)
       5 - access("T2"."N1"="T1"."N1")
    
    

    There is, however, a second possible plan for 11g. The one above is the “NLJ Batching” plan, but I could have hinted the “NLJ prefetch” strategy, which takes us back to the 9i execution plan (with a very small variation in buffer visits).

    -------------------------------------------------------------------------------------------------
    | Id  | Operation                     | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
    -------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT              |       |      0 |        |      0 |00:00:00.01 |       0 |
    |   1 |  TABLE ACCESS BY INDEX ROWID  | T1    |      1 |     15 |    225 |00:00:00.01 |    3052 |
    |   2 |   NESTED LOOPS                |       |      1 |    225 |    241 |00:00:00.01 |    3042 |
    |*  3 |    TABLE ACCESS BY INDEX ROWID| T2    |      1 |     15 |     15 |00:00:00.01 |    3017 |
    |   4 |     INDEX FULL SCAN           | T2_I1 |      1 |   3000 |   3000 |00:00:00.01 |      17 |
    |*  5 |    INDEX RANGE SCAN           | T1_I1 |     15 |     15 |    225 |00:00:00.01 |      25 |
    -------------------------------------------------------------------------------------------------
    
    Predicate Information (identified by operation id):
    ---------------------------------------------------
       3 - filter("T2"."N2"=45)
       5 - access("T2"."N1"="T1"."N1")
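
    As a sketch, the choice between the two 11g shapes can be made explicit with hints. This assumes the nlj_batching and nlj_prefetch hints (which take the alias of the inner table); the other hints simply pin the join order, join method, and index that the plans above already use:

```sql
-- Request the "NLJ prefetch" shape (the 9i-style plan):
select
        /*+ leading(t2) use_nl(t1) index(t1) nlj_prefetch(t1) */
        t2.n1, t1.n2
from
        t2, t1
where
        t2.n2 = 45
and     t2.n1 = t1.n1
;

-- Request the "NLJ batching" shape (the default 11g plan shown above):
select
        /*+ leading(t2) use_nl(t1) index(t1) nlj_batching(t1) */
        t2.n1, t1.n2
from
        t2, t1
where
        t2.n2 = 45
and     t2.n1 = t1.n1
;
```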
    
    
    Basic plan for 12c (12.1.0.1)

    As reported by a call to dbms_xplan.display_cursor() with statistics_level set to all.
    Note that the table access to t2 in line 3 is described as “batched” (a feature that can be disabled by the /*+ no_batch_table_access_by_rowid(alias) */ hint); otherwise the plan matches the 11g plan.

    
    ---------------------------------------------------------------------------------------------------------
    | Id  | Operation                             | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
    ---------------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT                      |       |      1 |        |    225 |00:00:00.01 |    3052 |
    |   1 |  NESTED LOOPS                         |       |      1 |        |    225 |00:00:00.01 |    3052 |
    |   2 |   NESTED LOOPS                        |       |      1 |    225 |    225 |00:00:00.01 |    3042 |
    |*  3 |    TABLE ACCESS BY INDEX ROWID BATCHED| T2    |      1 |     15 |     15 |00:00:00.01 |    3017 |
    |   4 |     INDEX FULL SCAN                   | T2_I1 |      1 |   3000 |   3000 |00:00:00.01 |      17 |
    |*  5 |    INDEX RANGE SCAN                   | T1_I1 |     15 |     15 |    225 |00:00:00.01 |      25 |
    |   6 |   TABLE ACCESS BY INDEX ROWID         | T1    |    225 |     15 |    225 |00:00:00.01 |      10 |
    ---------------------------------------------------------------------------------------------------------
    
    Predicate Information (identified by operation id):
    ---------------------------------------------------
       3 - filter("T2"."N2"=45)
       5 - access("T2"."N1"="T1"."N1")
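As a sketch (the query text is my assumption, based on the predicates shown above), disabling the feature for t2 alone should return line 3 to a plain TABLE ACCESS BY INDEX ROWID:

```sql
select /*+ no_batch_table_access_by_rowid(t2) */
       t1.v1
from   t2, t1
where  t2.n2 = 45
and    t1.n1 = t2.n1;
```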
    
    

    Of course 12c also has the “prefetch” version of the plan available; and again “batched” access appears – for both tables in this case – and again the feature can be disabled individually by hints addressed at the tables:

    
    ---------------------------------------------------------------------------------------------------------
    | Id  | Operation                             | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
    ---------------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT                      |       |      0 |        |      0 |00:00:00.01 |       0 |
    |   1 |  TABLE ACCESS BY INDEX ROWID BATCHED  | T1    |      1 |     15 |    225 |00:00:00.01 |    3052 |
    |   2 |   NESTED LOOPS                        |       |      1 |    225 |    225 |00:00:00.01 |    3042 |
    |*  3 |    TABLE ACCESS BY INDEX ROWID BATCHED| T2    |      1 |     15 |     15 |00:00:00.01 |    3017 |
    |   4 |     INDEX FULL SCAN                   | T2_I1 |      1 |   3000 |   3000 |00:00:00.01 |      17 |
    |*  5 |    INDEX RANGE SCAN                   | T1_I1 |     15 |     15 |    225 |00:00:00.01 |      25 |
    ---------------------------------------------------------------------------------------------------------
    
    Predicate Information (identified by operation id):
    ---------------------------------------------------
       3 - filter("T2"."N2"=45)
       5 - access("T2"."N1"="T1"."N1")
    
    

    In these examples the difference in work done by the different variations and versions is negligible, but there may be cases where the pattern of data distribution changes the pattern of logical I/Os and buffer pins – which may, in turn, affect the physical I/O. In this light it’s interesting to note the /*+ cluster_by_rowid(alias) */ hint, which was introduced in 11.2.0.4 but had disappeared by 12c; it changes the 11g plan as follows:

    
    ----------------------------------------------------------------------------------------------------------------------------
    | Id  | Operation                     | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
    ----------------------------------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT              |       |      0 |        |      0 |00:00:00.01 |       0 |       |       |          |
    |   1 |  TABLE ACCESS BY INDEX ROWID  | T1    |      1 |     15 |    225 |00:00:00.01 |     134 |       |       |          |
    |   2 |   NESTED LOOPS                |       |      1 |    225 |    241 |00:00:00.01 |     124 |       |       |          |
    |*  3 |    TABLE ACCESS BY INDEX ROWID| T2    |      1 |     15 |     15 |00:00:00.01 |      99 |       |       |          |
    |   4 |     SORT CLUSTER BY ROWID     |       |      1 |   3000 |   3000 |00:00:00.01 |       8 |   142K|   142K|  126K (0)|
    |   5 |      INDEX FULL SCAN          | T2_I1 |      1 |   3000 |   3000 |00:00:00.01 |       8 |       |       |          |
    |*  6 |    INDEX RANGE SCAN           | T1_I1 |     15 |     15 |    225 |00:00:00.01 |      25 |       |       |          |
    ----------------------------------------------------------------------------------------------------------------------------
    
    Predicate Information (identified by operation id):
    ---------------------------------------------------
       3 - filter("T2"."N2"=45)
       6 - access("T2"."N1"="T1"."N1")
    
    

    Note the effect appearing at line 4 – and the extraordinary effect this has on the buffer visits (so significant that I did a follow-up check on v$mystat to see if the figures were consistent). This type of rowid sorting is, of course, an important fix for an Exadata issue I described some time ago, and I had assumed that the “batched” concept in the 12c plan was in some way enabling it – although the 12c rowsource execution stats don’t seem to bear that idea out.
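For completeness, here is roughly how the hint might be applied in 11.2.0.4 to produce the SORT CLUSTER BY ROWID step above – a sketch, with the query text assumed from the earlier plans:

```sql
select /*+ leading(t2) use_nl(t1) cluster_by_rowid(t2) */
       t1.v1
from   t2, t1
where  t2.n2 = 45
and    t1.n1 = t2.n1;
```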

     


    Windows XP switching to Linux?

    Chris Foot - Wed, 2014-04-23 11:17

    To the chagrin of Microsoft executives, a number of business professionals still using the now unsupported Windows XP operating system are considering switching to Linux, which offers free software produced through the open source development process. Though not necessarily the most popular OS solution, Linux is now being praised by database experts as both usable and effective.

    Just make the upgrade
    InfoWorld contributor and avid Microsoft enthusiast Peter Bruzzese wrote that, although he understands why XP users are reluctant to move to Windows 8, there's little reason why they should avoid transitioning to Windows 7. For one thing, the user interfaces of the latter deployment and XP are virtually the same – it appears as if an XP user made a simple, rudimentary update. Essentially, there's already going to be a level of familiarity with Windows 7, and companies don't have to worry about employees getting acclimated to a new level of operability.

    Although a number of database administration and IT professionals have lauded modern Linux distributions for their vast improvements, Bruzzese isn't convinced. Those used to working with Windows-compatible software would have to switch to unfamiliar replacements, meaning that training in specific programs may be required – an investment a number of companies are unable to make.

    Taking a chance?
    Yet there are some professionals, such as San Francisco Gate contributor David Einstein, who believe there's untapped opportunity in Linux. Despite the fact that the OS runs many large enterprise and Web servers and is the basis for Android, Linux hasn't gained much traction as a desktop OS, primarily due to two factors:

    1. Past models weren't as easy to use as Mac OS or Windows
    2. Not enough software was produced to run on it.

    However, Einstein noted that a particular version of Linux is customized to replace both Windows XP and 7. Under the brand name Zorin, the OS mimics the user interfaces of those two systems, countering Bruzzese's conclusion that XP users won't be able to adapt easily to any Linux OS. Prospective users can download Zorin to their hard disks for free. Furthermore, the system offers user-friendly software such as a Microsoft-compatible office suite, a photo editing program and the Chrome browser.

    Given the free pricing and familiar interface, there's a good chance enterprises will choose Zorin to replace their unprotected XP deployments.

    The Evolution of Enterprise Content in the Mobile Era

    WebCenter Team - Wed, 2014-04-23 09:47
    Q&A interview with Mitchell Palski, Oracle WebCenter Sales Consultant.
    Q: It seems like the term “mobile” is becoming increasingly popular with the use of mobile devices in our personal and business lives. Can you briefly describe how enterprise content has evolved in this new mobile era?
    A: This may seem like an obvious statement, but I think it’s important that we establish that when we talk about the term “mobile” we are talking about mobile access to all enterprise resources. From the top down, those resources include business intelligence, applications, processes, content and data.
    What’s changing (and what will continue to change) is which types of devices our users access content and metadata from. The availability and the popularity of mobile devices can influence how enterprise content is captured, stored, viewed, and retrieved.
    The goal for any organization should be to identify how the use of mobile access to its content can add value to their business and help them reach their objectives.
    Q: And how can mobile access to enterprise content help an organization reach its goals?
    A: There are a couple of ways that mobility can help your organization if implemented correctly:
    • Geographical data can be captured and analyzed.
      • Where was a user when they stored content?
      • Where was a user when they viewed content?
    • Deliver content based on geographical location
      • Content delivery engines can leverage geographical data to make determinations on what types of content to display
      • Customized experience for citizens or mobile-employees
    • Improve accuracy of metadata
      • Users can upload, access, and modify content in real-time rather than be forced to go back to their workstation
    • Improve the accessibility of content for users
      • By leveraging an Enterprise Content Management system your users can ensure that they are accessing a “single source of truth”
      • Coupling mobile access with an Enterprise Content Management system allows users to ensure that they are accessing the right content at the right time, and enables access from any location
    Q: Who really benefits from a mobile-enabled Enterprise Content Management system?
    A: The key to serving citizens is to create an optimized user experience. Citizens don’t just interact in the public sector space; they also visit commercial webpages and use content sharing products that are specifically tailored for their mobile devices. Their age, backgrounds, and technical expertise will range beyond the scope of your employees and are much more difficult to capture. If your organization is primarily serving citizens, you are probably already surfacing your content through a mobile-friendly user interface.
    Where cost savings and efficiencies are actually realized are when mobile capabilities are extended to employees:
    • Consultants
    • Case workers
    • Delivery drivers
    • Field service employees
    • Assistants
    With citizens you are addressing needs; with employees you are addressing wants. Mobile-enabled ECM systems:
    1. Enhance the productivity and readiness of your mobile workforce
    2. Provide your organization with real-time updates to content
    3. Allow for greater and newer types of metrics to be captured
    Q: And what would you say are the keys to successfully implementing a mobility-enabled content management system?
    A: Emphasize the user experience!
    • Access
      • Browser-based web interface with responsive design
      • Browser-based web interface with device-specific design
      • Mobile application
    • Security
      • Force authentication
      • Local file encryptions
      • Ensure access restrictions are enforced through mobile interface
    • Simplicity of design
      • Less screen space to work with
      • Be conscious of page-load times
    • Metadata
      • Index for searching
      • Categorize for filtering
    Thank you, Mitchell for sharing your thoughts on The Evolution of Enterprise Content in the Mobile Era.  For more information on this topic, we invite you to listen to Mitchell provide an overview in this podcast.

    Data toons: Cirque du DBA

    Pythian Group - Wed, 2014-04-23 09:09

    It’s not uncommon for database administrators (DBAs) to feel like ring masters at the circus. But what happens when you free up in-house DBAs by outsourcing database management?
    Cirque cartoon

    This work is licensed under a Creative Commons Attribution-NoDerivs 3.0 Unported License. Based on a work at pythian.com/blog.

    Categories: DBA Blogs

    RHEL7 and Oracle Linux 7 beta

    Tim Hall - Wed, 2014-04-23 03:15

    Nearly two weeks ago, Oracle announced the Oracle Linux 7 Beta 1. Being the Linux fanboy I am, I downloaded it straight away from here.

    Oracle Linux is a clone of the Red Hat Enterprise Linux (RHEL) distribution. The RHEL7 beta, and therefore OL7 beta, distro is based on a cut of Fedora 19, although depending on who you ask, it’s possibly more a mix of Fedora 18, 19 and 20… Suffice to say, there are a lot of changes compared to the RHEL6/OL6 distribution.

    As I’ve mentioned several times before, my desktop at home is running Fedora 20, so I’m pretty used to most of the changes, but I’ve not written much about them, apart from the odd blog post. It’s not a high priority for me, since I’m not a sysadmin, but I’ll be updating/rewriting a few of the Linux articles on the site to include the new stuff.

    When Surachart Opun mentioned having to look at systemd and firewalld, it seemed like the perfect opportunity to update my firewall and services articles. You can see the new versions here.

    RHEL7/OL7 is only in beta, and even after the production release I’m sure it will be a long time before Oracle actually certify any products against it, but if you are not a Fedora user, it’s probably worth you having a play around with this stuff.

    Cheers

    Tim…

    RHEL7 and Oracle Linux 7 beta was first posted on April 23, 2014 at 10:15 am.
    ©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

    Oracle seeks to build Oregon health insurance exchange

    Chris Foot - Wed, 2014-04-23 01:28

    Among a list of other reforms, the United States Affordable Care Act requires each state to construct a health insurance exchange to broker and manage care packages, measure eligibility, automate enrollment and electronically transfer patient information between participating entities. Oregon officials have relied on Oracle's database experts to build such a system, but may be considering a more consultative approach.

    According to a report released by Oracle, Oregon's Department of Human Services perceived ACA mandates as an opportunity to upgrade its IT infrastructure and improve the organization's delivery of services. Before the health care legislation was signed, DHS was already in the process of upgrading programs that assisted impoverished families with dependent children, but was using an inadequate legacy system to determine citizen eligibility for Medicaid.

    Oracle's assistance
    In addition, the federal legislation obligated the DHS to connect patients with coordinated care organizations – networks of physicians, dentists, mental health professionals and other treatment providers who have collaborated to support consumers receiving benefits from insurance exchanges. It was evident that such an operation required the expertise of database administration professionals.

    As a result, the DHS used the Oracle Enterprise Architecture Framework, as well as the IT developer's Architecture Development Process. Both of these elements provided a structure for updating the state's legacy infrastructure and ensured that any future developments would be adequately supported. Bob Ladouceur, information systems applications manager at Oregon DHS, claimed that the enterprise architecture enabled professionals coordinating multiple activities within a wide-ranging initiative to stay focused.

    Mitigating contention 
    The result of the project was a health insurance exchange website, appropriately named Cover Oregon, which according to Oracle officials isn't fully operational. According to the Statesman Journal, the state has blamed the database support services company for not meeting contract expectations, even though Oracle had stated that Cover Oregon would not be ready for launch until October 1. Despite warnings from Oracle President and Chief Financial Officer Safra Catz, the state launched the website's services several months before the scheduled release date.

    Although Oregon officials claimed that Oracle continued to reassure DHS that the website was complete, Oracle responded by noting that the state rushed the process. This contention has led DHS to sever its partnership with the database architecture company, a move that many IT professionals perceive as unwise. Rather than resolving the issue, Oregon is now searching for other database administration services to help it proceed.

    Missing Password for Database Link Bug

    Michael Dinh - Tue, 2014-04-22 21:50

    So there I was, working on another database duplication project, the requirement is to save the existing database links.

    Sounds pretty easy, right?

    SELECT OWNER, DB_LINK, DBMS_METADATA.GET_DDL('DB_LINK',DB_LINK,OWNER) as DDL FROM DBA_DB_LINKS;
    

    Wrong – and now I know why I'm going bald. Pulling my hair out.

    After searching for hours, I found DBMS_METADATA.GET_DDL database link password missing

    Another 11.2.0.4 Bug.
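One thing worth ruling out before blaming the version (my own sanity check, not part of the bug report): GET_DDL returns a CLOB, and SQL*Plus truncates CLOB display at the LONG setting, which defaults to 80 characters – far too short for database link DDL.

```sql
-- Raise SQL*Plus display limits before re-running the query,
-- so a truncated display isn't mistaken for missing DDL.
set long 20000
set longchunksize 20000
SELECT OWNER, DB_LINK, DBMS_METADATA.GET_DDL('DB_LINK', DB_LINK, OWNER) as DDL
FROM   DBA_DB_LINKS;
```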

    I believe the problem appeared when I modified the user's password as shown below, since everything was working fine just hours ago.

    alter user SCOTT identified by values 'S:EDCCC6A91707D978B7D49476BCA228BC7D702C135557F41154ACBF744645;F894844C34402B67';
    

    Got desperate and restored the database which did not help.

    Now what? How is one supposed to save database link info?

    Find out more here


    Securing ObFormLoginCookie in OAM 10g

    Online Apps DBA - Tue, 2014-04-22 20:13
    We usually secure ObSSOCookie so that the cookie is only passed in SSL environments and cannot be accessed by non-SSL applications. This is a very good feature for improving security in OAM. However, if you also want to secure ObFormLoginCookie – even though you won't find any sensitive information in this cookie – you can do so. Securing ObFormLoginCookie will allow [...]

    This is a content summary only. Visit my website http://onlineAppsDBA.com for full links, other content, and more!
    Categories: APPS Blogs

    Presenting at ODTUG Kscope14 Conference in Seattle June 22-26 2014

    Richard Foote - Tue, 2014-04-22 19:21
      Just a short note to say I’ll be presenting at the Oracle Development Tools User Group (ODTUG) Kaleidoscope 14 Conference this year in beautiful Seattle, Washington on June 22-26 2014. I had a fantastic time when I attended this conference a few years ago when it was held in Monterey so I’m really looking forward to […]
    Categories: DBA Blogs

    Result Cache concept and benefits

    DBA Scripts and Articles - Tue, 2014-04-22 15:46

    This feature was first introduced in Oracle 11g and is meant to increase the performance of repetitive queries returning the same data. The feature is interesting if your application repeatedly looks up static data, or data that is rarely updated; for these reasons, it is primarily intended for data warehouse (OLAP) databases, as many users will [...]
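As a minimal illustration (the table and column names here are made up), a query opts in to the server result cache with the RESULT_CACHE hint; the first execution populates the cache and subsequent identical executions read the stored result instead of re-executing:

```sql
SELECT /*+ RESULT_CACHE */ region, SUM(amount) AS total_sales
FROM   sales
GROUP  BY region;
```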

    The post Result Cache concept and benefits appeared first on Oracle DBA Scripts and Articles (Montreal).

    Categories: DBA Blogs