Feed aggregator

Questions about the upper limit of records in one database block

Tom Kyte - Tue, 2016-08-16 19:26
Hi, team I was told that the last 3 characters of ROWID stand for the row number within a database block. Theoretically, every database block could store as many as 65536 rows. I was wondering how many records could be stored in one block (with the s...
Categories: DBA Blogs

Performance tuning for report query

Tom Kyte - Tue, 2016-08-16 19:26
Dear Tom, Thanks for your valuable info which is being provided by you for various questions on oracle to all the users which is also helping us in lot of ways... Now we have a problem with a report query which is almost taking 2 minutes 40 sec...
Categories: DBA Blogs

Fit for Work? Wearables in the Workplace Research Conclusions and Recommendations

Usable Apps - Tue, 2016-08-16 18:54

Oracle PartnerNetwork members innovating with Oracle Cloud services, emerging tech, and the Oracle Applications User Experience (OAUX) Glance, Scan, Commit design philosophy need to be aware of the factors that drive adoption of new ideas and how to best plan with their customers for a successful rollout of a solution.

“Wearables in the Workplace” is an outstanding piece of research now available in report form, thanks to postgrad students in the London School of Economics and Political Science.

Wearables in the Workplace Research Report (click to read).

The shaping of the research was guided by HCM Cloud strategy evangelist and UX champ Andy Campbell (@axcampbe), who told me how Oracle provides Masters in Management candidates with ideas, advice, access to information, and so on, for their final project. A project theme is agreed that is of use and interest to all parties, and then off they go!

This year's project theme was wearable technology in the workplace.

The resulting report based on qualitative and quantitative research methods offers some valuable insights. I encourage you to read the entire report, but for now, the main findings about wearables in the workplace are:

  • Yes, there is huge potential for wearable technology in the enterprise, but whereas the intention to use such devices by employees may be strong, the actual perseverance in doing so is weaker.
  • Most focus is on fitness and health-related (or "wellness" as our dear U.S. friends like to put it) uses for wearable technology in work.
  • People in work are very guarded about the data privacy issue and how their personal data gathered from such technology might be used. 
  • Incentives are a must for adoption of the technology. For example, employees respond positively to flexible working arrangements, benefits, and so on. 

 And the report's recommendations should be borne in mind too:  

  • Build upfront awareness amongst the workforce about what wearable technology can do. Yep, it takes time (Diffusion Theory). 
  • Employees should be offered a choice of whether they want to opt in or out of any wearable technology program in work. 
  • Health and wellbeing use cases are the spearhead to a wider adoption of wearables. Plus, there are direct HCM and business benefits related to these areas. 
  • Understand those users. Then, pick an incentive approach that fits their profile.

Wearable technology is an important part of the OAUX mobility strategy. The technology offers users the means to perform micro-transactions on-the-go using data in the Oracle Cloud that can then be used across other devices as the context suits, or to automate what they hate doing and augment what they love. Partners and customers together need to understand the stakeholders who will use the solution and the adoption dynamics of the proposed solution so that they can design for it accordingly. And so, it was great to see this report presented to partners and customers at the HCM Cloud update day in Oracle's London office recently. The lessons were well received. 

Looking forward to more such collaborative research in the future!

If you have any feedback on the conclusions, recommendations, or adoption of wearable or emerging technology in the workplace, then find the comments box! 

Oracle JET and Application Container Cloud (with MCS Integration)

Andrejus Baranovski - Tue, 2016-08-16 13:44
I have deployed a JET application with Mobile Cloud Service (MCS) integration into the Oracle Application Container Cloud service. It works perfectly well; you can try to access it yourself (until my trial cloud account expires on the 15th of August) at this URL: https://rsjetnodejsapp-deoracleem99369.apaas.us2.oraclecloud.com.

Follow the excellent blog article written by Lucas Jellema for the details of how to deploy JET into Application Container Cloud - Deploying an Oracle JET application to Application Container Cloud and running on Node.js.

The first step is to generate an empty Node.js application. You can do this from the command line (install Node.js and the Express generator beforehand):

express RSJETNodeJSApp

Navigate into the generated application folder and add the dependencies using npm:

npm install

The Node.js application is now created and you can run it with:

npm start

You can test the JET app locally with the same command, as soon as you add the JET content into the Node.js app.

Once you access the deployed JET application at https://rsjetnodejsapp-deoracleem99369.apaas.us2.oraclecloud.com, you should land on the Home tab, where static chart data is rendered (no integration with MCS yet):


I did a test to check how the same screen looks on a mobile device. JET renders perfectly; the template is adjusted, as is the chart itself:


Go to the People tab; this is where the integration with MCS is done. I have implemented OAuth security authentication in JET against the MCS service (I will describe it in the next post). You can log in with the redsam/Wel_come1 user:


The same login screen is rendered nicely on a mobile device:


Once login is successful, the JET app executes a REST call to the MCS endpoint and fetches data (originally returned by an ADF BC REST service deployed on Java Cloud and accessed by an MCS Connector):


The mobile device view doesn't disappoint either; the same page UI looks great:


The JET implementation code is copied into the public folder of the Node.js structure. You can open the app in NetBeans, which simplifies editing:


I have deployed this app to Application Container Cloud by creating a zip archive (make sure to create a manifest.json file). Don't include the root folder in the archive; include the application folders/files directly. Deployment is straightforward: create a new Node.js application in Application Container Cloud:


Upload the archive, give the application a name, and wait a few minutes for it to initialize. All done:


Here you can download the complete example of the Node.js app with JET content (and all libraries) that I used to deploy to Application Container Cloud - RSJETNodeJSApp.zip.

rlwrap with #GoldenGate GGSCI

DBASolved - Tue, 2016-08-16 10:45

Since I published posts on how to retrieve command history within GGSCI using the Oracle provided commands of “history” and “fc”, I was pinged a couple of times by some friends asking about “rlwrap” usage with GoldenGate. The answer is a simple yes, rlwrap can be used with Oracle GoldenGate.

What exactly is “rlwrap”?

According to the readme file at http://utopia.knoware.nl/~hlub/uck/rlwrap/#rlwrap, rlwrap is a 'readline wrapper'. Basically, it allows for the editing of keyboard input for any command.

How to install?

There are two ways to install “rlwrap”. The first way is manually, which requires you to run the configure and make commands; however, the easiest way I’ve found is by using a yum repository. Tim Hall (a.k.a Oracle-Base) documents this process quite effortlessly over at his blog. Thanks Tim!

Usage?

After you have "rlwrap" installed, you simply have to create an alias in your profile to use it. You can use it from the command line as well; however, the alias approach ensures that it is executed every time you run ggsci. The below example is what I have in my .bash_profile for the Oracle user.

alias ggsci='rlwrap $OGG_HOME/ggsci'

The next time I log in to use ggsci, rlwrap will automatically be run, and I will be able to scroll through the commands I typed while in ggsci. Another nice thing about "rlwrap" is that when I log out of ggsci and back in, my command history is still available.

Although the “history” and “fc” commands are handy, it would be nice to see Oracle include “rlwrap” into the core product of Oracle GoldenGate …

Enjoy!

@dbasolved
http://about.me/dbasolved


Filed under: Golden Gate
Categories: DBA Blogs

Cloud Marketplace part 2

Pat Shuff - Tue, 2016-08-16 09:31
The Oracle Cloud Marketplace is a location where customers can discover partner-provided applications and services that complement the Oracle Public Cloud services. It is also a location where partners can list and promote cloud-based applications and services that extend, integrate with, or build on Oracle's Public Cloud services. This is an interesting concept that has various ways of joining customers to partners and giving partners a view into what customers are looking at, downloading, and executing. There are over 1000 apps, both commercial and public domain compiled source, available from over 1500 partners and system integrators. Some applications are helper apps, like CloudBerry Explorer, that assist you in consuming cloud storage services. Other applications are Oracle-provided deployments of E-Business Suite running on infrastructure as a service for development and test. The Marketplace also lists system integrators that specialize in cloud services. Deloitte is an example of a Cloud Elite Systems Integrator that provides a variety of consulting services to help build custom cloud solutions for customers.

For partners, there are tools that allow you to develop, market, and sell applications. The development tools allow you to build binaries that you can download to use as tools to access cloud services, bundles to upload to the cloud, or images that can be launched in the cloud. Tools also exist to help you market your application once it is built and uploaded. You can look at page views, who downloaded your application, and geographic data about the customers to help target marketing campaigns. There are also tools to help with revenue capture and making money from your application, either through the Marketplace page or by redirection to your own page. There are lead generation integrations into the partner portal to help with follow-up and calling campaigns for customers that are downloading and using applications. Partners must be gold level or above and get this service for free by signing up through the Oracle Partner Portal. Partners also have access to BI reports to look at trending, usage, and utilization of the applications that they have listed. These reports are designed with partners like Bitnami, which lists hundreds of public domain compiled images on the Oracle Cloud for compute resources. The reports help them look at the most popular downloads, packages that are trending, feedback from customers both positive and negative, as well as the age of packages and ones that might need updating.

Customers can search for applications based on key words, categories of applications, and company names. You do need to enable the Compute_Operations role before you deploy an image into the Oracle Cloud, as well as go into the Properties tab at the top right of the Cloud Console and create a linkage between the Marketplace and your cloud account.

In summary, tools exist for customers and partners to help link people who want to create applications with people who want to use applications. There is a review mechanism to help with feedback as well as notifications of updates and changes to applications. This tool is constantly being updated and changing so don't be surprised if the screen shots that you see here are a little different when you visit the pages.

HttpSessionListener with Spring Boot Application

Pas Apicella - Tue, 2016-08-16 07:55
I had a requirement to implement an HttpSessionListener in my Spring Boot application, which has no web.xml. To achieve this, I did the following:

1. My HttpSessionListener was defined as follows
 
package com.pivotal.pcf.mysqlweb.utils;

import javax.servlet.http.HttpSession;
import javax.servlet.http.HttpSessionEvent;
import javax.servlet.http.HttpSessionListener;

import org.apache.log4j.Logger;

public class SessionListener implements HttpSessionListener
{
    protected static Logger logger = Logger.getLogger("controller");
    private HttpSession session = null;

    public void sessionCreated(HttpSessionEvent event)
    {
        // no need to do anything here as the connection may not have been established yet
        session = event.getSession();
        logger.info("Session created for id " + session.getId());
    }

    public void sessionDestroyed(HttpSessionEvent event)
    {
        session = event.getSession();
        /*
         * Need to ensure the Connection is closed via the ConnectionManager
         */
        ConnectionManager cm = null;

        try
        {
            cm = ConnectionManager.getInstance();
            cm.removeConnection(session.getId());
            logger.info("Session destroyed for id " + session.getId());
        }
        catch (Exception e)
        {
            logger.info("SessionListener.sessionDestroyed Unable to obtain Connection", e);
        }
    }
}
2. Register the listener from a @Configuration class as shown below:
  
package com.pivotal.pcf.mysqlweb;

import com.pivotal.pcf.mysqlweb.utils.SessionListener;
import org.springframework.boot.context.embedded.ServletListenerRegistrationBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import javax.servlet.http.HttpSessionListener;

@Configuration
public class ApplicationSessionConfiguration
{
    @Bean
    public ServletListenerRegistrationBean<HttpSessionListener> sessionListener()
    {
        return new ServletListenerRegistrationBean<HttpSessionListener>(new SessionListener());
    }
}
That's all you have to do to achieve this.


Categories: Fusion Middleware

Month End

Jonathan Lewis - Tue, 2016-08-16 07:04

A question about parallel query and cardinality estimates appeared on OTN a little while ago that prompted me to write this note about helping the optimizer do the best job with the least effort.  (A critical point in the correct answer to the original question is that parallel query may lead to “unexpected” dynamic sampling, which can make a huge difference to the choice of execution plans, but that’s another matter.)

The initial cardinality error in the plan came from the following predicate on a “Date dimension” table:


      AR_DATE.CALENDAR_DT   = AR_DATE.MONTH_END_DT 
AND   AR_DATE.CALENDAR_DT  >= ADD_MONTHS(TRUNC(SYSDATE,'fmyy'), -36) 
AND   AR_DATE.MONTH_END_DT >= ADD_MONTHS(TRUNC(SYSDATE,'fmyy'), -36)

In the parallel plan the estimated number of rows on a full tablescan of the table was 742, while on the serial plan the same tablescan produced a cardinality of 1. You will appreciate that having an estimate of 1 (or less) that is nearly three orders of magnitude wrong is likely to lead to a very bad execution plan.

My first thought when I saw this was (based on a purely intuitive interpretation): "there's one day every month that's the last day of the month and we're looking at roughly the last 36 months so we might predict a cardinality of about 36". That's still a long way off the 742 estimate and 1,044 actual for the parallel query, but it's a warning flag that the serial estimate is probably an important error – it's also an example of the very simple "sanity checking" mental exercises that can accompany almost any execution plan analysis.

My second thought (which happened to be wrong, and would only have been right some time well before version 10.2.0.5) was that the optimizer would treat the add_months() expressions as unknown values and assign a selectivity of 5% to each of the predicates, reducing the combined selectivity to 1/400th of the selectivity it gave to the first predicate. In fact the optimizer evaluates the expressions and would have used the normal (required range / total range) calculation for those two predicates.

It’s the first predicate that I want to examine, though – how does the optimizer calculate a selectivity for it ? Here’s some code to generate sample data to use for testing.


rem
rem     Script:         month_end.sql
rem     Author:         Jonathan Lewis
rem     Dated:          June 2016
rem

create table t1
nologging
as
select
        rownum                                                   id,
        to_date('01-Jan-2010','dd-mon-yyyy') + rownum - 1       calendar_date,
        add_months(
                trunc(to_date('01-Jan-2010','dd-mon-yyyy') + rownum - 1 ,'MM' ),
                1
        ) - 1                                                   month_end_date
from
        dual
connect by
        level <= trunc(sysdate) - to_date('01-jan_2010','dd-mon-yyyy') + 1
;

execute dbms_stats.gather_table_stats(user,'t1',method_opt=>'for all columns size 1')

This clunky bit of code gives me consecutive dates from 1st Jan 2010 up to "today" with the month_end_date column holding the month end date corresponding to the row's calendar_date. So now we can check what the optimizer makes of the predicate calendar_date = month_end_date:


set autotrace on explain

select count(*) from t1 where calendar_date = month_end_date;

  COUNT(*)
----------
        79

Execution Plan
----------------------------------------------------------
Plan hash value: 3724264953

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |    16 |     3   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE    |      |     1 |    16 |            |          |
|*  2 |   TABLE ACCESS FULL| T1   |     1 |    16 |     3   (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("CALENDAR_DATE"="MONTH_END_DATE")

Looking at operation 2 we can see that, in effect, the optimizer has considered two independent predicates “calendar_date = {unknown}” and “month_end_date = {unknown}” and taken the lower of the two selectivities – which means the cardinality estimate is 1 because the calendar_date column is unique across this table.
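
As a quick sanity check of that reasoning, you can look at the gathered column statistics yourself; here's a minimal sketch against the standard dictionary view (1/num_distinct being the selectivity used for an equality against an unknown value):

select
        column_name, num_distinct, round(1/num_distinct,7) selectivity
from
        user_tab_col_statistics
where
        table_name = 'T1'
and     column_name in ('CALENDAR_DATE','MONTH_END_DATE')
;

Since calendar_date is unique, its 1/num_distinct equals 1/num_rows, which is why the combined equality ends up with a cardinality estimate of 1.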

There are various ways to work around problems like this. One of the simplest would be to tell Oracle to sample this table with the (table-level) hint /*+ dynamic_sampling(t1 1) */; in fact, since this predicate is effectively treated as an examination of two predicates the (cursor-level) hint /*+ dynamic_sampling(4) */ would also cause sampling to take place – note that level 4 or higher is required to trigger sampling for “multiple” predicates on a single table. As a general guideline we always try to minimise the risk of side effects so if this problem were embedded in a much larger query I would prefer the table-level hint over the cursor-level hint.
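
For illustration, this is roughly what the two hinted variants might look like against the demo query (a sketch only - in a larger statement the table-level form would go into the relevant query block):

select  /*+ dynamic_sampling(t1 1) */
        count(*)
from    t1
where   calendar_date = month_end_date;

select  /*+ dynamic_sampling(4) */
        count(*)
from    t1
where   calendar_date = month_end_date;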

There are other options, though, that would allow you to bypass sampling – provided you can modify the SQL. The script I used to create this table also included the following statement:


alter table t1 add (
        date_offset1 generated always as (calendar_date - month_end_date) virtual,
        date_flag generated always as (case when calendar_date - month_end_date = 0 then 'Y' end) virtual
);

In 12c I would declare these virtual columns to be invisible to avoid problems with any SQL that didn’t use explicit column lists. For demonstration purposes I’ve set up two options – I can find the rows I want with one of two obvious predicates:

    date_offset1 = 0
    date_flag = 'Y'

In fact there’s a third predicate I could use that doesn’t need to know about the virtual columns:

    calendar_date - month_end_date = 0

Unfortunately I can’t arbitrarily swap the order of the two dates in the last predicate, and the optimizer won’t spot that it is also equivalent to “calendar_date = month_end_date”. Here are a few execution plans – for which the only significant bit is the cardinality estimate of the full tablescans:


select count(*) from t1 where date_flag = 'Y';

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |     2 |     4  (25)| 00:00:01 |
|   1 |  SORT AGGREGATE    |      |     1 |     2 |            |          |
|*  2 |   TABLE ACCESS FULL| T1   |    79 |   158 |     4  (25)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("DATE_FLAG"='Y')



select count(*) from t1 where date_offset1 = 0;

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |     4 |     3   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE    |      |     1 |     4 |            |          |
|*  2 |   TABLE ACCESS FULL| T1   |    78 |   312 |     3   (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("DATE_OFFSET1"=0)



select count(*) from t1 where calendar_date - month_end_date = 0;

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |     4 |     3   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE    |      |     1 |     4 |            |          |
|*  2 |   TABLE ACCESS FULL| T1   |    78 |   312 |     3   (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("T1"."DATE_OFFSET1"=0)

It’s interesting to note that the optimizer has transformed the last predicate into the equivalent virtual column expression to do the arithmetic. You might also note that the date_flag option is slightly more accurate, but that’s because it’s based on an expression which is null for the rows we don’t want while the date_offset1 column has a value for every row and a little bit of arithmetical rounding comes into play. You might also note that there’s a small cost difference – which I’d ascribe to the CPU cost that the optimizer has added for the CASE expression being applied on top of the simple date arithmetic.

Of course, whatever else you might play around with when working around a cardinality problem like this, I think the strategic aim for a data warehouse system would be to get a REAL flag column on the table and populate it at data loading time if month-end dates played an important part in the proceedings – though I have to say that the virtual flag column is one I do rather fancy.
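
For reference, a minimal sketch of the invisible variant mentioned earlier (12c syntax; the column name date_flag2 is purely illustrative):

alter table t1 add (
        date_flag2 invisible
        generated always as (case when calendar_date - month_end_date = 0 then 'Y' end) virtual
);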

 


dbms_parallel_execute.run_task error when parallel_level > 1

Tom Kyte - Tue, 2016-08-16 01:06
Hi, I am trying to use dbms_parallel_execute to execute a procedure in multiple threads. when the value of parallel_level = 1, the below code just works fine, but when the value of parallel_level > 1 then this below code fails with the error shown b...
Categories: DBA Blogs

Eliminating rows on condition

Tom Kyte - Tue, 2016-08-16 01:06
I have a table for Vehicle owners as follows. The table has 3 columns, Customer id, Vehicle Identification number, and whether the customer is a Primary or Secondary driver on the vehicle. Customer_ID, Vehicle_VIN, Relationship 0001, 12345678, P...
Categories: DBA Blogs

Function rendom_id

Tom Kyte - Tue, 2016-08-16 01:06
HI We do have store procedure and CURSOR is looking for random id per month.We can see result like one visit from 08/may/2016 2 - 09/may/2016 5 - 11/may/2016 20 - 12/may/2016 2 - 18/may/2016 Can you tell me why 20 visits get selected from...
Categories: DBA Blogs

Can I send messages on mobile phone from Oracle database?

Tom Kyte - Tue, 2016-08-16 01:06
Hi TOM, We are running an application on Oracle 8.1.5 on solaris and oas 4.081. We are using basic oracle users for authentication. We want to develop a new functionality in our system by which we will be able to send messages on the user's mobile...
Categories: DBA Blogs

Lost all Redo log file

Tom Kyte - Tue, 2016-08-16 01:06
Hi Tom, I am practising Recovery . CASE 1)I have Development TEST DB in NON-ARCHIVE Mode. NEVER backed up. All Redo log files including the active one ,are dropped from the OS. How to recover such a DB. I dont mind loosing Transacti...
Categories: DBA Blogs

5 Stages of the Software Development Cycle

Complete IT Professional - Mon, 2016-08-15 16:46
This is a guest post by Saurab Prabhakar. The Software Development Lifecycle (SDLC) defines a revolution. It is fundamentally a sequence of steps that display a model for the development and lifecycle management of certain software or even an application. The procedure can differ from business to business, but the standards remain firm. The software development […]
Categories: Development

Getting to know Bob Rhubart, OTN Architect Community Manager

OTN TechBlog - Mon, 2016-08-15 11:58

Oracle ACE Mike Riley asks OTN Architect Community Manager Bob Rhubart about his podcasts, 2MTT videos, and about community.oracle.com.


Edit #GoldenGate commands from GGSCI

DBASolved - Mon, 2016-08-15 10:45

Ever get tired of typing the same command over and over again in Oracle GoldenGate GGSCI? Well, Oracle has provided a history recall function in GGSCI, which I talked about in an earlier post. That post on history recall can be found here.

In this post, let's take the command recall and look at how these commands can be edited. In Oracle GoldenGate 12c, Oracle has provided a GGSCI command called "FC". The "FC" command is used to display/edit a previously issued GGSCI command and then execute it again. This command leverages the memory buffer the same way as the history command does within GGSCI.

Now the syntax for using this command is as follows:

FC [ n | -n | string ]

n – Displays the command from the line number provided

-n – Displays the command that was issued n lines before the current line

string – Displays the last command that starts with the specified string

Let’s take a look at an example of using “FC” within GGSCI.

In the below output, I have issued a series of commands and then listed these commands using this history function.

GGSCI (fred.acme.com) 6> history

GGSCI Command History

    1: info all
    2: stats extract egg12c
    3: info mgr
    4: info pgg12c
    5: info all
    6: history

Using the “FC” command, I can edit and execute the edited command. If you take a look at command 4 (info pgg12c), I want to replace “pgg12c” with “egg12c”. this is done by using the “r” edit command as follows.

GGSCI 7> fc 4
GGSCI 7> info pgg12c
GGSCI 7..     regg12c
GGSCI 7> info egg12c
GGSCI 7..

EXTRACT    EGG12C    Last Started 2016-08-12 17:06   Status RUNNING
Description          'Integrated Extract'
Checkpoint Lag       00:00:08 (updated 00:00:05 ago)
Process ID           24082
Log Read Checkpoint  Oracle Integrated Redo Logs
                     2016-08-15 10:14:45
                     SCN 0.51017268 (51017268)

By executing "fc 4" (fc n), I'm telling GGSCI to pull the fourth command from the memory buffer for editing. Then I use the "r" command to provide the text that I want to replace in the command. Notice the position of the "r"; it is lined up directly under the string I want to replace. Then I provide the replacement string. Before execution, GGSCI provides you with an output of the changed command. After verifying the command, it can be executed, providing the output for the correct process.

Note: For more editing options with FC, you can find these here.

If you want to know what command was executed n commands before the previous command, you can use the -n option to “FC”. This makes the “FC” command act like the history command but only displays the command at the -n position. This can be seen if you do a history command prior to a fc -n command.

GGSCI (fred.acme.com) 11> history

GGSCI Command History

    2: stats extract egg12c
    3: info mgr
    4: info pgg12c
    5: info all
    6: history
    7: info egg12c
    8: info mgr
    9: history
   10: info egg12c
   11: history

GGSCI (fred.acme.com) 12> fc -4
GGSCI (fred.acme.com) 12> info mgr
GGSCI (fred.acme.com) 12..

Manager is running (IP port fred.acme.com.15000, Process ID 12377).

You will notice that I have eleven commands in the command history. By using "fc -4", I'm able to retrieve the info mgr command and then execute it simply by hitting return. Before hitting return, I could still edit the command, as was shown previously.

Lastly, you can retrieve a previous command by searching for a specific string. The string has to be at the beginning of the command. In the below example, I'm searching for the last stats command that was run.

GGSCI (fred.acme.com) 16> history

GGSCI Command History

    7: info egg12c
    8: info mgr
    9: history
   10: info egg12c
   11: history
   12: info mgr
   13: info all
   14: history
   15: stats extract egg12c
   16: history


GGSCI (fred.acme.com) 17> fc sta
GGSCI (fred.acme.com) 17> stats extract egg12c
GGSCI (fred.acme.com) 17..

Sending STATS request to EXTRACT EGG12C ...

No active extraction maps
DDL replication statistics (for all trails):

*** Total statistics since extract started     ***
	Operations                		           2.00
	Mapped operations         		           2.00
	Unmapped operations         		           0.00
	Other operations         		           0.00
	Excluded operations         		           0.00

As you can see, there is only one stats command in the command history. The “FC” command pulls this command and provides an opportunity to edit it. After hitting return, the command is executed.

Now that you know how to recall and edit Oracle GoldenGate commands from the GGSCI command prompt, hopefully, this will make your life with Oracle GoldenGate a bit easier.

Enjoy!!

@dbasolved
http://about.me/dbasolved


Filed under: Golden Gate
Categories: DBA Blogs

Why Digital User Experience Matters in Business

Usable Apps - Mon, 2016-08-15 09:24

Ever been asked why a digital user experience matters?

Here's a quick explanation that will answer that question!

It's based on the Oracle Applications User Experience's Oracle PartnerNetwork enablement session "Selling with UX":

 

Unplugged pluggable databases

Yann Neuhaus - Mon, 2016-08-15 08:35

When Oracle Multitenant came out in 12c, with pluggable databases, it was easy to draw them as USB sticks that you can plug and unplug to/from your Container Database (CDB). I don’t like this because it gives the false idea that an unplugged database is detached from the container.

Containers

In the Oracle documentation, the Concepts book, the description of the multitenant architecture starts with an illustration of a CDB with two PDBs, where the text description starts like this:
This graphic depicts a CDB as a cylinder. Inside the CDB is a box labeled “Root (CDB$ROOT).” Plugged into the box is the Seed (CDB$SEED), hrpdb, and salespdb.

Let me list what I don’t like with this description:

  1. There are 5 containers here but 3 ways to draw them. The CDB itself is a container (CON_ID=0) and is a cylinder. The CDB$ROOT (CON_ID=1) is a container and is a box. The PDB$SEED and the user PDBs are cylinders with a USB plug.
  2. The CDB$ROOT does not look like a database (cylinder). However, physically it's the same: SYSTEM, SYSAUX, UNDO, and TEMP tablespaces.
  3. The PDB$SEED (CON_ID=2) looks like it is pluggable (USB stick) but you never unplug the PDB$SEED.
  4. The USB plug is plugged inside the CDB$ROOT. That's wrong. All containers inside a CDB are at the same level and are 'plugged' in the CDB (CON_ID=0) and not the CDB$ROOT (CON_ID=1). They are contained by the CDB and if they are plugged somewhere, it's in the CDB controlfile. The root is a root for metadata and object links, not for the whole PDBs.

If I had to show pluggable databases as USB sticks it would be like that:
CaptureUSBStick

Here CDB$ROOT is a container like the pluggable databases, except that you cannot unplug it. PDB$SEED is a pluggable database, but one that you don't unplug. The CDB is a container but does not look like a database: it's the controlfile and the instance, but there are no datafiles directly attached to the CDB.
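
If you want to see these containers and their CON_IDs for yourself, here's a quick sketch from the root, using the V$CONTAINERS view available in 12c (CON_ID=0 only shows up in views where a row applies to the whole CDB, so don't expect it here):

select con_id, name, open_mode
from   v$containers
order  by con_id;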

Unplugged

However, with this illustration, one might think that an unplugged pluggable database is detached from the CDB, which is wrong.

SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
4 PDB READ WRITE NO
SQL> alter pluggable database PDB close;
Pluggable database altered.
 
SQL> alter pluggable database PDB unplug into '/tmp/PDB.xml';
Pluggable database altered.
 
SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
4 PDB MOUNTED

Here PDB is unplugged, but still pertains to the CDB.

The CDB controlfile still addresses all the PDB datafiles:

RMAN> report schema;
 
using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name CDB
 
List of Permanent Datafiles
===========================
File Size(MB) Tablespace RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1 829 SYSTEM YES /u02/app/oracle/oradata/CDB/system01.dbf
3 1390 SYSAUX NO /u02/app/oracle/oradata/CDB/sysaux01.dbf
4 4771 UNDOTBS1 YES /u02/app/oracle/oradata/CDB/undotbs01.dbf
5 270 PDB$SEED:SYSTEM NO /u02/app/oracle/oradata/CDB/pdbseed/system01.dbf
6 2 USERS NO /u02/app/oracle/oradata/CDB/users01.dbf
7 540 PDB$SEED:SYSAUX NO /u02/app/oracle/oradata/CDB/pdbseed/sysaux01.dbf
12 280 PDB:SYSTEM NO /u02/app/oracle/oradata/CDB/3969397A986337DCE053B6CDC40AC61C/datafile/o1_mf_system_ctcxz29m_.dbf
13 570 PDB:SYSAUX NO /u02/app/oracle/oradata/CDB/3969397A986337DCE053B6CDC40AC61C/datafile/o1_mf_sysaux_ctcxz2bb_.dbf

The datafiles 12 and 13 are the ones from my unplugged PDB, still known and managed by the CDB.

Backup

An unplugged PDB has data, and data should have backups. Who is responsible for the unplugged PDB backups? It’s still the CDB:

RMAN> backup database;
 
...
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00013 name=/u02/app/oracle/oradata/CDB/3969397A986337DCE053B6CDC40AC61C/datafile/o1_mf_sysaux_ctcxz2bb_.dbf
input datafile file number=00012 name=/u02/app/oracle/oradata/CDB/3969397A986337DCE053B6CDC40AC61C/datafile/o1_mf_system_ctcxz29m_.dbf
...

The unplugged PDB is not detached at all; the CDB is still referencing its files and is responsible for them. This is very different from an unplugged USB stick, which has no link anymore with the host it was plugged into before.

Backup optimization

If you wonder whether it's good to back up an unplugged PDB with each CDB backup, don't worry. RMAN knows that it is in a state where it cannot be modified (like read-only tablespaces) and does not back it up each time. Of course, you need to have BACKUP OPTIMIZATION configured to ON:

RMAN> backup database;
 
Starting backup at 15-AUG-16
using channel ORA_DISK_1
skipping datafile 12; already backed up 2 time(s)
skipping datafile 13; already backed up 2 time(s)
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
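
For reference, backup optimization is a standard RMAN configuration setting; checking and enabling it looks like this (nothing multitenant-specific about it):

RMAN> show backup optimization;
RMAN> configure backup optimization on;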

Unplug and DROP

From what we have seen, an unplugged PDB is like a closed PDB. There's a difference though: an unplugged PDB is closed forever. You cannot open it again:

SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
4 PDB MOUNTED
SQL> alter pluggable database PDB open;
alter pluggable database PDB open
*
ERROR at line 1:
ORA-65086: cannot open/close the pluggable database
 
SQL> host oerr ora 65086
65086, 00000, "cannot open/close the pluggable database"
// *Cause: The pluggable database has been unplugged.
// *Action: The pluggable database can only be dropped.

So, if you want to keep the USB stick analogy, unplugged does not mean 'physically unplugged' but something like what you should do before removing a USB stick:
Screenshot 2016-08-15 10.44.46

In summary:

ALTER PLUGGABLE DATABASE … UNPLUG is like the logical ‘eject’ you do to be sure that what you will remove physically was closed forever.
ALTER PLUGGABLE DATABASE … DROP … KEEP DATAFILES is the physical removal from the CDB

Because DROP is the only thing that can be done on an unplugged PDB, SQL Developer does both when you click on 'unplug':
CaptureUNPLUGDROP

The idea of dropping it just after the unplug is probably there to prevent the risk of dropping it 'including datafiles' after it has been plugged into another CDB, because then it would be lost.
However, keep in mind that once unplugged and dropped, nobody will back up the PDB datafiles until it is plugged into a new CDB.

Read-Only

There’s a last one more difference. A USB stick can be read-only. A plugged PDB cannot. You may want to share a database from a read-only filesystem, like you can do with transportable tablespaces. but you can’t:

SQL> drop pluggable database PDB keep datafiles;
Pluggable database dropped.
 
SQL> create pluggable database PDB using '/tmp/PDB.xml';
Pluggable database created.
 
SQL> alter pluggable database PDB open read only;
alter pluggable database PDB open read only
*
ERROR at line 1:
ORA-65085: cannot open pluggable database in read-only mode

The plugged PDB must be opened in read/write mode at least once:
SQL> host oerr ora 65085
65085, 00000, "cannot open pluggable database in read-only mode"
// *Cause: The pluggable database has been created and not opened.
// *Action: The pluggable database needs to be opened in read/write or
// restricted mode first.

And only then, it can be opened read-only:

SQL> alter pluggable database PDB open;
Pluggable database altered.
 
SQL> alter pluggable database PDB close;
Pluggable database altered.
 
SQL> alter pluggable database PDB open read only;
Pluggable database altered.

So what…

Here is the way I visualize pluggable databases:

CaptureCON_ID

Just a bunch of tablespaces, referenced by the CDB controlfile and grouped by self-contained containers. CDB$ROOT cannot be cloned nor unplugged. PDB$SEED can be cloned but not unplugged (but it’s a PDB). Other PDBs can be cloned and unplugged.

I’ll talk about multitenant at Oracle Open World, DOAG Conference and UKOUG TECH16.
There’s also a book coming, probably early 2017 (depends on 12.2 availability)

 

This article, Unplugged pluggable databases, appeared first on Blog dbi services.

Nested Loop Join Physical I/O Optimizations

Randolf Geist - Mon, 2016-08-15 07:44
Having done my mini-series on Nested Loop join logical I/O optimizations a while ago I unfortunately never managed to publish anything regarding the Nested Loop join physical I/O optimizations, which are certainly much more relevant to real-life performance.

Therefore the main purpose of this blog post is to point you to Nikolay Savvinov's (whose blog I can recommend in general) great mini-series covering various aspects of these optimizations:

Part 1
Part 2
Part 3
Summary

One point that - at least to me - isn't entirely clear when reading Nikolay's series is which specific plan shape he refers to, in particular since in 12c even more plan shapes for a Nested Loop join are possible.

Hence here is an attempt to summarize the various two table Nested Loop join plan shapes as of 12c and what kind of physical I/O optimizations one can expect from them:

1. Nested Loop Join Batching, the default in most cases since 11g and also in 12c

-----------------------------------------------------------
| Id  | Operation                     | Name              |
-----------------------------------------------------------
|   0 | SELECT STATEMENT              |                   |
|   1 |  SORT AGGREGATE               |                   |
|   2 |   NESTED LOOPS                |                   |
|   3 |    NESTED LOOPS               |                   |
|   4 |     INDEX FULL SCAN           | SYS_IOT_TOP_97632 |
|*  5 |     INDEX RANGE SCAN          | T_I6_IDX          |
|   6 |    TABLE ACCESS BY INDEX ROWID| T_I6              |
-----------------------------------------------------------

This is the plan shape that Nikolay calls "batch" or "batching" in his posts. It can provide batched I/O (mainly "db file parallel read" issuing multiple I/O requests in a single call, potentially asynchronous I/O) on both the INDEX RANGE SCAN as well as the TABLE ACCESS BY ROWID operation as described in Nikolay's blog post.

Please note that I deliberately write "can provide batched I/O", because, as Nikolay also points out in his posts, the runtime engine monitors the execution and can dynamically adjust the behaviour, which also means that it might decide to use conventional single block reads ("db file sequential read").

Please also note that in addition to the "db file parallel read" operation that submits multiple I/O requests in a single I/O submit call (see Frits Hoogland's blog post from some time ago about Linux internals of this I/O operation) the runtime engine might use "db file scattered read" multi-block I/Os under certain circumstances, in particular when performing "cache warmup prefetching".

Also note that as Nikolay points out when enabling SQL Trace or "rowsource statistics" the "Batched I/O" optimization for some reason gets disabled.

Also, this plan shape, at least in 12c, seems to lead to inconsistencies in Real-Time SQL Monitoring in several ways. First, the number of "Executions" for the INDEX RANGE SCAN and TABLE ACCESS components of the inner row source is not necessarily consistent with the number of rows in the driving row source; second, almost all activity and I/O volume seems to be attributed to the "INDEX RANGE SCAN" plan operation and not the "TABLE ACCESS" operation, even if it's the "TABLE ACCESS" that causes the physical I/O.

2. Nested Loop Join Prefetch plan shape including BATCHED ROWID table access (only from 12c on)

------------------------------------------------------------------
| Id  | Operation                            | Name              |
------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |                   |
|   1 |  SORT AGGREGATE                      |                   |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| T_I5              |
|   3 |    NESTED LOOPS                      |                   |
|   4 |     INDEX FULL SCAN                  | SYS_IOT_TOP_97716 |
|*  5 |     INDEX RANGE SCAN                 | T_I5_IDX          |
------------------------------------------------------------------

This is the Nested Loop prefetching plan shape introduced in Oracle 9i combined with the new TABLE ACCESS BY INDEX ROWID BATCHED introduced in 12c. It's the plan shape Nikolay refers to as "BATCHED ROWID" in his posts. In my (and Nikolay's) tests this provides the most aggressive batching of I/O (highest number of I/O requests submitted per call in "db file parallel read") for the TABLE ACCESS BY INDEX ROWID BATCHED, but it didn't perform the same on the index ("db file sequential read", single block reads on the index segment), which doesn't mean that different test and data setups might not provide that, too.
Note that you should get this plan shape only with explicit hinting or by disabling Nested Loop Join Batching via parameter.

In 12c, given a two table join between alias A and B, the following hints would be required to arrive at this plan shape:

leading(a b)
use_nl(b)
index(b)
no_nlj_batching(b)
--batch_table_access_by_rowid(b)
The "batch_table_access_by_rowid" isn't strictly necessary since it's the default behaviour in 12c.

Other variants are possible, like "use_nl_with_index" instead of "use_nl" and "index" separately or "opt_param('_nlj_batching_enabled', 0)" instead of "no_nlj_batching" to disable the batching (but then batching is disabled for the whole execution plan, not just for a particular join).
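
For illustration, a sketch of how these hints might be embedded in the statement itself (table and column names here are just placeholders):

select  /*+ leading(a b) use_nl(b) index(b) no_nlj_batching(b) */
        count(b.padding)
from    t_a a,
        t_b b
where   b.id = a.id;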

3. Classic Nested Loop Join Prefetch introduced in 9i

----------------------------------------------------------
| Id  | Operation                    | Name              |
----------------------------------------------------------
|   0 | SELECT STATEMENT             |                   |
|   1 |  SORT AGGREGATE              |                   |
|   2 |   TABLE ACCESS BY INDEX ROWID| T_I5              |
|   3 |    NESTED LOOPS              |                   |
|   4 |     INDEX FULL SCAN          | SYS_IOT_TOP_97716 |
|*  5 |     INDEX RANGE SCAN         | T_I5_IDX          |
----------------------------------------------------------

This is what you get in pre-12c when preventing Nested Loop Join Batching, in 12c the BATCHED table access needs to be disabled in addition.

This plan shape provides the less aggressive table prefetching as described in Nikolay's posts (he refers to this plan shape as "Prefetch"); the maximum number of requests submitted per call in a "db file parallel read" operation seems to be 39.

As mentioned above, it didn't provide index prefetching in my tests.

The session statistics don't mention "Batched I/O", so although the "db file parallel read" wait event is the same the internal implementation obviously is a different code path.

In 12c, given a two table join between alias A and B, the following hints would be required to arrive at this plan shape:

leading(a b)
use_nl(b)
index(b)
no_nlj_batching(b)
no_batch_table_access_by_rowid(b)
Other variants as above.

4. Classic Nested Loop Join plan shape with BATCHED ROWID table access (only from 12c on)

-------------------------------------------------------------------
| Id  | Operation                             | Name              |
-------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |                   |
|   1 |  SORT AGGREGATE                       |                   |
|   2 |   NESTED LOOPS                        |                   |
|   3 |    INDEX FULL SCAN                    | SYS_IOT_TOP_97716 |
|   4 |    TABLE ACCESS BY INDEX ROWID BATCHED| T_I5              |
|*  5 |     INDEX RANGE SCAN                  | T_I5_IDX          |
-------------------------------------------------------------------

This is the classic Nested Loop join plan shape (pre-9i) combined with the new TABLE ACCESS BY INDEX ROWID BATCHED introduced in 12c.

In my tests it provides only "intra-loop" table prefetching, so if the INDEX RANGE SCAN of a single loop iteration points to different table blocks the TABLE ACCESS BY INDEX ROWID BATCHED can make use of "db file parallel read" to batch these I/O requests to the table blocks, and the session statistics show "Batched I/O" counters increasing.

However, it doesn't attempt to optimize "inter-loop" / multiple loop iterations, so if each loop iteration via the INDEX RANGE SCAN only points to a single table block, only single block reads ("db file sequential read") on the TABLE ACCESS BY INDEX ROWID BATCHED can be seen.

In my tests this plan shape didn't provide index prefetching / I/O optimizations and performed single block reads ("db file sequential read") on the INDEX RANGE SCAN operation.

For a two table join it needs explicit hinting in 12c to arrive at this plan shape, but it is the default plan shape for the "inner" joins in case of nested Nested Loop joins (multiple, consecutive Nested Loop joins in a row), see below for more information.

In 12c, given a two table join between alias A and B, the following hints would be required to arrive at this plan shape:

leading(a b)
use_nl(b)
index(b)
opt_param('_nlj_batching_enabled', 0)
no_nlj_prefetch(b)
--batch_table_access_by_rowid(b)
See my older posts regarding Nested Loop join logical I/O optimizations for why the combination of OPT_PARAM and NO_NLJ_PREFETCH is required to arrive at this plan shape - in short, specifying the obvious NO_NLJ_PREFETCH plus NO_NLJ_BATCHING doesn't work. Other variants as above.

5. Classic Nested Loop Join plan shape

-----------------------------------------------------------
| Id  | Operation                     | Name              |
-----------------------------------------------------------
|   0 | SELECT STATEMENT              |                   |
|   1 |  SORT AGGREGATE               |                   |
|   2 |   NESTED LOOPS                |                   |
|   3 |    INDEX FULL SCAN            | SYS_IOT_TOP_97716 |
|   4 |    TABLE ACCESS BY INDEX ROWID| T_I5              |
|*  5 |     INDEX RANGE SCAN          | T_I5_IDX          |
-----------------------------------------------------------

This is the classic Nested Loop join plan shape (pre-9i).

Interestingly in my tests at least in 12c it provides also "intra-loop" table prefetching, so if the INDEX RANGE SCAN of a single loop iteration points to different table blocks the TABLE ACCESS BY INDEX ROWID can make use of "db file parallel read" to submit multiple of these I/O requests to the table blocks in a single call, but the session statistics don't show "Batched I/O" counters increasing and the maximum number of requests seem to be 39, so it is less aggressive and looks very similar to the internal implementation used for the classic "table prefetching" plan shape above.

There is no sign of "inter-loop" optimizations and the INDEX RANGE SCAN seems to make use of single block reads only ("db file sequential read").

For pre-12c this is default plan shape used for the "inner" joins in case of nested Nested Loop joins (multiple Nested Loop joins in a row), see below for more information.

In 12c, given a two table join between alias A and B, the following hints would be required to arrive at this plan shape:

leading(a b)
use_nl(b)
index(b)
opt_param('_nlj_batching_enabled', 0)
no_nlj_prefetch(b)
no_batch_table_access_by_rowid(b)
Other variants as above.

Multiple, consecutive Nested Loop Joins
Another point that I think is important to mention when describing these "inter-loop" prefetching and batching Nested Loop join optimization techniques is that they only apply to the "outer-most" Nested Loop join in case of multiple, consecutive Nested Loop joins, which makes their "game changing" character that Nikolay mentions in his posts less strong to me.

For example, this is the 12c default shape of a four table join using three Nested Loop joins:

----------------------------------------------------------------------
| Id  | Operation                                | Name              |
----------------------------------------------------------------------
|   0 | SELECT STATEMENT                         |                   |
|   1 |  SORT AGGREGATE                          |                   |
|   2 |   NESTED LOOPS                           |                   |
|   3 |    NESTED LOOPS                          |                   |
|   4 |     NESTED LOOPS                         |                   |
|   5 |      NESTED LOOPS                        |                   |
|   6 |       INDEX FULL SCAN                    | SYS_IOT_TOP_97632 |
|   7 |       TABLE ACCESS BY INDEX ROWID BATCHED| T_I1              |
|*  8 |        INDEX RANGE SCAN                  | T_I1_IDX          |
|   9 |      TABLE ACCESS BY INDEX ROWID BATCHED | T_I2              |
|* 10 |       INDEX RANGE SCAN                   | T_I2_IDX          |
|* 11 |     INDEX RANGE SCAN                     | T_I3_IDX          |
|  12 |    TABLE ACCESS BY INDEX ROWID           | T_I3              |
----------------------------------------------------------------------

As can be seen, only the outer-most Nested Loop join gets the "batching" plan shape and can benefit from the optimizations described above. The inner Nested Loop joins show the classic plan shape, in case of 12c with the ROWID BATCHED option, so they can only benefit from the "intra-loop" optimizations described above, if a single loop iteration INDEX RANGE SCAN points to several table blocks.

Note that even when using explicit, additional NLJ_BATCHING hints for the inner tables joined I wasn't able to enforce any other plan shape.

If there are multiple blocks of (consecutive) Nested Loop joins (for example a HASH JOIN in between), then in each block the outer-most Nested Loop join gets the optimization, so multiple of those are possible per execution plan:

---------------------------------------------------------------------
| Id  | Operation                               | Name              |
---------------------------------------------------------------------
|   0 | SELECT STATEMENT                        |                   |
|   1 |  SORT AGGREGATE                         |                   |
|   2 |   NESTED LOOPS                          |                   |
|   3 |    NESTED LOOPS                         |                   |
|*  4 |     HASH JOIN                           |                   |
|   5 |      TABLE ACCESS BY INDEX ROWID BATCHED| T_I2              |
|   6 |       INDEX FULL SCAN                   | T_I2_IDX          |
|   7 |      NESTED LOOPS                       |                   |
|   8 |       NESTED LOOPS                      |                   |
|   9 |        INDEX FULL SCAN                  | SYS_IOT_TOP_97632 |
|* 10 |        INDEX RANGE SCAN                 | T_I1_IDX          |
|  11 |       TABLE ACCESS BY INDEX ROWID       | T_I1              |
|* 12 |     INDEX RANGE SCAN                    | T_I3_IDX          |
|  13 |    TABLE ACCESS BY INDEX ROWID          | T_I3              |
---------------------------------------------------------------------

As can be seen, now that T_I2 is joined via a HASH JOIN, both Nested Loop joins to T_I1 and T_I3 get the "batched" optimization plan shape.

I don't know why this limitation is there that only the outer-most in a block of Nested Loop joins gets the optimized plan shape, but it certainly can reduce the possible benefit.

Since I don't think there is a costing difference on CBO level for the different Nested Loop join plan shapes (so the decision which shape to use isn't cost-driven) it might be possible to see a significant performance difference depending on which join gets the optimization - there might be room for improvement by manually influencing the join order in case of multiple, consecutive Nested Loop joins.

Find duration of client times SQL Server

Tom Kyte - Mon, 2016-08-15 06:46
i have data with 3 columns: (column1,col2,col3) as (start date, incident status (Assigned/Inprogress/Resolved), incident_status_reason as (Client Action/ Client Follow) as below values: Start Date incident_status Incident_Status_Reason...
Categories: DBA Blogs
