
Feed aggregator

Documentum: some useful queries (DQL/IAPI)

Yann Neuhaus - Thu, 2014-08-07 02:37

In this blog post I want to share some useful DQL and IAPI queries that I use all the time. They are mostly dedicated to Documentum support/debugging and are grouped by component. In order to use them, I recommend Qdman: it is the best tool I know for running DQL and IAPI scripts.

1. xPlore

This covers both the search engine and index management.

DQL Dsearch
select * from dm_document search document contains 'manual.doc'

 

This query will perform a search just as if you put the 'manual.doc' in the search field of DA. It can be used to check if the dsearch is working fine and if the indexing has been performed correctly. If not, it will return 0 results for a document that you know does exist in the docbase.

Index Waiting
select count(*) as awaiting_15 FROM dmi_queue_item a, dm_sysobject (all) b WHERE b.r_object_id = a.item_id AND a.name='dm_fulltext_index_user' AND date_sent > date(now)-(15*1/24/60) AND (a.task_state is NULL or a.task_state = ' ')

 

This query will return the number of items that have been waiting to be indexed for the last 15 minutes. The parameter can be changed to 60 minutes or whatever you need; you just have to change the '15' in the query above.
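For example, here is the same query adjusted to a 60-minute window; only the alias and the factor change:

select count(*) as awaiting_60 FROM dmi_queue_item a, dm_sysobject (all) b WHERE b.r_object_id = a.item_id AND a.name='dm_fulltext_index_user' AND date_sent > date(now)-(60*1/24/60) AND (a.task_state is NULL or a.task_state = ' ')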

Index In Progress
select count(*) as in_progress_15 FROM dmi_queue_item a, dm_sysobject (all) b WHERE b.r_object_id = a.item_id AND a.name='dm_fulltext_index_user' AND date_sent > date(now)-(15*1/24/60) AND a.task_state = 'acquired'

 

This one is similar to the previous one, but it returns the number of 'in progress' indexing requests. Note that the parameter can still be changed.

Index By State
select task_state,count(*) from dmi_queue_item where name = 'dm_fulltext_index_user' group by task_state

 

This query lists the number of indexing requests by state:

  • blank -> awaiting indexing
  • acquired -> in progress
  • warning
  • error
  • done

Delete Indexing Request
delete dmi_queue_item object where item_id='09xxxxxxxxxxxxxx'

 

Sometimes I noticed there are indexing requests on deleted documents. In fact, it can happen if someone saved a document, then deleted it right after. The indexing request remains in the queue for life. Thus, you may want to delete it. First, check if the file is deleted by running the IPAI: dump,c,09xxxxxxxxxxxxxx. If an error occurs telling the document doesn't exist anymore, you can delete it.
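If you want to list all such orphaned requests first, the following is a hedged sketch of a DQL query that should do it (it assumes the standard dmi_queue_item/dm_sysobject model; test it on a non-production docbase first):

select q.r_object_id, q.item_id, q.item_name, q.date_sent from dmi_queue_item q where q.name = 'dm_fulltext_index_user' and not exists (select s.r_object_id from dm_sysobject (all) s where s.r_object_id = q.item_id)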

Index Agent Info
select fti.index_name,iac.object_name as instance_name from dm_fulltext_index fti, dm_ftindex_agent_config iac where fti.index_name = iac.index_name and fti.is_standby = false and iac.force_inactive = false

 

This query returns your configured index agent information. It is useful for the IAPI command returning the index agent status (see below).

Index Agent Status
apply,c,NULL,FTINDEX_AGENT_ADMIN,NAME,S,DOCBASE_ftindex_01,AGENT_INSTANCE_NAME,S,HOST._9200_IndexAgent,ACTION,S,status
next,c,q0
get,c,q0,status
close,c,q0

 

This script returns the Index Agent status (Running, Stopped, and so on). Note that you have to replace the Index Agent information in the first line with your own Index Agent details. You can get this information with the DQL query above.

Manually Queue Index Request
queue,c,09xxxxxxxxxxxxxx,dm_fulltext_index_user,dm_force_ftindex

 

This one simply puts an indexing request in the queue. You have to replace 09xxxxxxxxxxxxxx with the r_object_id of the document you want to queue.
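If you only know the document's name, a quick way to find the r_object_id to queue is a simple lookup like this sketch (assuming the name is unique enough):

select r_object_id, object_name from dm_document where object_name = 'manual.doc'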

Display Dsearch Port
cat $DOCUMENTUM/dsearch/admin/xplore.properties | grep port

 

For this one you have to go to the xPlore server; it shows the configured dsearch port.

2. Rendering (ADTS)

The following queries concern the rendition component. They cover the rendition queue checks and the way to manually request a rendition through IAPI.

 

Manually Queue Rendering Request
queue,c,09xxxxxxxxxxxxxx,dm_autorender_win31,rendition,0,F,,rendition_req_ps_pdf

 

Like the indexing request, this one puts a PDF rendition request in the queue. It can be useful when scripting or in DFC programs.

Rendering By State
select task_state,count(*) from dmi_queue_item where name = 'dm_autorender_win31' group by task_state

 

This returns the rendition requests by state.

Rendering Queue
select * from dmi_queue_item where name ='dm_autorender_win31' order by date_sent desc

 

This query returns all documents present in the rendering queue, that is, all documents waiting for a rendition.

Rendition Failed
select r_object_id,item_id,name,item_name,date_sent from dmi_queue_item where event ='DTS' order by date_sent desc

 

This query returns the failed renditions. Be aware of the date_sent field, because this queue is not cleared. This means that if a rendition request failed 3 times in a row and succeeded the last time, there will be 3 rows in the failed queue, but the rendition did succeed. So you should verify that the rendition really did succeed and, if so, you can delete the rows from the failed queue.
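As a hedged sketch, the two steps can be combined into a single cleanup statement that removes DTS failure rows only for documents that now have a PDF rendition; it reuses the logic of the 'Check Rendition Successful' query below, and I would test it on a non-production docbase first:

delete dmi_queue_item object where event='DTS' and exists (select r.r_object_id from dmr_content r where any r.parent_id = dmi_queue_item.item_id and r.full_format='pdf')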

Check Rendition Successful
select r_object_id from dm_document where object_name='DOCUMENT' and exists(select * from dmr_content where any parent_id=dm_document.r_object_id and full_format='pdf')

 

This query checks if a rendition is present for the given DOCUMENT name. If the pdf rendition exists, it returns its r_object_id. If no rendition is present for the given document, it returns nothing.

3. Audit Trail

 

Failed Login Since 1h
select user_name,count(*) as logon_failure from dm_audittrail where event_name='dm_logon_failure' and time_stamp > (date(now)-(60*1/24/60)) group by user_name order by 2

 

This query displays the number of failed logons in the docbase per user over the last 60 minutes. The parameter 60 can be changed.
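For example, to look back over the last 24 hours instead, only the date arithmetic changes (a sketch):

select user_name,count(*) as logon_failure from dm_audittrail where event_name='dm_logon_failure' and time_stamp > (date(now)-1) group by user_name order by 2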

Purge Logon Failure
EXECUTE purge_audit WITH delete_mode='PREDICATE', dql_predicate='dm_audittrail where event_name=''dm_logon_failure'''

 

This statement purges the audit trail queue by deleting all logon failure entries. Be aware that it can take a while depending on the number of entries you have.

Number Of Logon Failure
select count(*) as logon_failure from dm_audittrail where event_name='dm_logon_failure'

 

This query simply shows the number of logon failures in the queue.

4. Miscellaneous

 

IAPI Purge Caches
flush,c,ddcache,dm_type
flush,c,ddcache,dmi_type_info
flush,c,ddcache,dm_aggr_domain
flush,c,ddcache,dm_domain
flush,c,ddcache,dm_dd_info
flush,c,ddcache,dm_nls_dd_info
flush,c,ddcache,dm_foreign_key
flush,c,persistentcache
publish_dd,c
reinit,c

 

This script flushes the caches; it can be used when an ADTS DAR installation fails due to a version mismatch.

Check ADTS Installer Version
java -cp adtsWinSuiteSetup.jar DiShowVersion
Multi-installer Suite 6.7.2000.42
Installer-Version: 6.7.2000.42 build 1
Installer-Build-Date: 1/11/13 12:28 AM

 

Go to the ADTS installer directory and issue this command. It shows the version of the installer.

Encrypt dm_bof_registry Password
java com.documentum.fc.tools.RegistryPasswordUtils PASSWORD

 

This one encrypts the dm_bof_registry password so it can be used in dfc.properties. Note that the encryption process is different on xPlore and ADTS, but you can use it on the content server and in all DFC-related programs. Replace PASSWORD in the command with your clear-text password.

Oracle Database 12.1.0.2 Released (22 July 2014)

Oracle in Action - Thu, 2014-08-07 00:54

RSS content

Oracle Database 12.1.0.2 has been released on 22 July 2014.

Following are some of the new features and options:

• In-memory column store
• Attribute clustering on disk
• Oracle Flashback archive capabilities for pluggable databases
• Rapid Oracle home provisioning
• Centralized security key vault capabilities
• Storage and query capabilities for nonrelational data
• Advanced Index Compression
• Oracle Big Data SQL
• Oracle JSON Document Store
• Oracle REST Data Services
• Improvements to Oracle Multitenant
• Zone Maps
• Approximate Count Distinct
• Attribute Clustering
• Full Database Caching
…..

You can download it here.

Enjoy!!!



Copyright © ORACLE IN ACTION [Oracle Database 12.1.0.2 Released (22 July 2014)], All Right Reserved. 2014.

The post Oracle Database 12.1.0.2 Released (22 July 2014) appeared first on ORACLE IN ACTION.

Categories: DBA Blogs

SQL Server 2014: DBCC CHECKDB and ReFS support

Yann Neuhaus - Wed, 2014-08-06 21:50

Have you noticed an interesting behavior with SQL Server 2014 when you are running a dbcc checkdb command? If not, you should have a look at this blog post. I discovered it completely by accident during some tests. If you watch carefully, you will see "weird" files appear while the dbcc checkdb command is running.

Below is a sample of the file you will find next to the corresponding database file:

 

blog_15_-_dbcc_checkdb_refs_-_1

 

The file immediately disappears after completion of the dbcc checkdb command. So the question is, what is this file? The type of this file is MDF_MSSQL_DBCC10 and we can easily guess that it concerns a dbcc checkdb command, but I must admit I have never seen it in the past...

To be sure, I decided to trace the file activity of the SQL Server process while the dbcc checkdb command is running. You can see the part of the result that concerns our investigation below:

 

blog_15_-_dbcc_checkdb_refs_-2_

 

So we can now say that these files concern the dbcc checkdb activity and Kevin Farlee (SQL Program Manager at Microsoft) confirmed that this new behavior is quite normal.

Here is the explanation:

Prior to SQL Server 2014, the files for dbcc checkdb were created as alternate data streams of the regular database files. This is why we don't see them by default (alternate data streams can be shown by using the dir command with the /R parameter). But with the release of Windows Server 2012 and the new ReFS file system, the story has changed, because ReFS no longer provides alternate data stream capabilities, which is why it was not compatible with SQL Server prior to SQL Server 2014 (see KB2681562).

This is good news because, as you certainly know, ReFS provides some enhancements over NTFS. It is a more resilient file system and it has better support for extremely large amounts of data. As a reminder, with large databases you may face operating system error numbers 1450 and 665 when you run the dbcc checkdb command, due to an NTFS limitation. These errors are reported in Microsoft KB2002606 and some fixes or workarounds are fortunately available.
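For reference, a minimal example of the command being discussed; the database name is a placeholder, and the WITH options shown are the usual ones for getting full error output:

DBCC CHECKDB (N'YourDatabase') WITH NO_INFOMSGS, ALL_ERRORMSGS;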

Time to switch to the ReFS file system with SQL Server 2014? Your thoughts will be appreciated!

Update on 2U: First full quarterly earnings and insight into model

Michael Feldstein - Wed, 2014-08-06 19:09

2U, the online service provider that went public in the spring, just released its financial report for the first full quarter of operations as a public company. The company beat estimates on total revenue and also lost less money than expected. Overall, it was a strong performance (see WSJ for basic summary or actual quarterly report for more details). The basics:

  • Revenue of $24.7 million for the quarter and $51.1 m for the past six months, which represents year-over-year increase of 32 and 35%;
  • EBITDA Losses of $7.1 m for the quarter and $10.9 m for the past six months, which represents year-over-year increase of -2% and 12%; and
  • Enrollment growth of 31 – 34% year-over-year.

Per the WSJ coverage of the conference call:

“I’m very pleased with our second quarter results, and that we have both the basis and the visibility to increase all of our guidance measures for 2014,” said Chip Paucek, 2U’s Chief Executive Officer and co-founder. “We’ve reached a turning point where, even with continued high investment for growth, our losses have stopped accelerating. At the midpoint of our new guidance range, we now expect our full year 2014 adjusted EBITDA loss to improve by 17% over 2013. Further, we’ve announced a schedule that meets our stated annual goal for new program launches through 2015.”

The company went public in late March at $14 / share and is still at that range ($14.21 before the quarterly earnings release – it might go up tomorrow). As one of only three ed tech companies to have gone public in the US over the past five years, 2U remains worth watching both for its own news and as a bellwether of the IPO market for ed tech.

Notes

The financials provide more insight into the world of Online Service Providers (OSP, aka Online Program Management, School-as-a-Service, Online Enablers, the market with no name). On the conference call 2U’s CEO Chip Paucek reminded analysts that they typically invest (money spent minus revenue) $4 – $9 million per program in the early years and do not start to break even until years 3 – 4. 2U might be on the high side of these numbers given their focus on small class sizes at big-name schools, but this helps explain why the OSP market typically focuses on long-term contracts of 10+ years. Without such a long-term revenue-sharing contract, it would be difficult for an OSP to ever break even.

As the market matures, with more competitors and with schools developing their own experience in online programs, it will become more and more difficult for companies to maintain these commitments from schools. We have already seen signs over the past year of changes in institutional expectations.

2U, meanwhile, has positioned itself at the high-end of the market, relying on high tuitions and brand-name elite schools with small classes. The company for the most part will not even compete in a Request for Proposal process, avoiding direct competition with Embanet, Deltak, Academic Partnerships and others. Their prospects seem much stronger than the more competitive mainstream of OSP providers.

See the posts here at e-Literate for more background.

2U has changed one aspect of their strategy, as noted by Donna Murdoch on G+. At least through 2012 the company positioned itself as planning to work with one school per discipline (or vertical in their language). Pick one school for Masters of Social Work, one for MBA, etc. As described in Jan 2012:

“As we come into a new vertical, 2tor basically partners with one great school per vertical. We find one partner, one brand that is world-class. We partner with that brand over a long time period to create the market leader in that space for that discipline.”

2U now specifically plans for secondary schools in different verticals as can be seen in their press release put out today:

Programs Aug 2014

Note the duplication of Social Work between USC and Simmons, Nursing between Georgetown and Simmons, and Data Science between Berkeley and SMU. Note the new approach from page 20 of the quarterly report:

As described above, we have added, and we intend to continue to add, degree programs in a number of new academic disciplines each year, as well as to expand the delivery of existing degree programs to new clients.

View Into Model

Along with the first quarter release (which was not based on a full quarter of operations as a public company), 2U released some interesting videos that give a better view into their pedagogical approach and platform. In this video they describe their “Bi-directional Learning Tool (BLT)”:

This image is from a page on the 2U website showing their approach, with a view of the infamous Brady Bunch layout for live classes (synchronous).

Live Courses

We’ll keep watching 2U and share significant developments as we see them.

The post Update on 2U: First full quarterly earnings and insight into model appeared first on e-Literate.

Why Write-Through is still the default Flash Cache Mode on #Exadata X-4

The Oracle Instructor - Wed, 2014-08-06 12:41

The Flash Cache Mode still defaults to Write-Through on Exadata X-4 because most customers are better suited that way – not because Write-Back is buggy or unreliable. Chances are that Write-Back is not required, so we just save Flash capacity that way. So when you see this

CellCLI> list cell attributes flashcachemode
         WriteThrough

it is likely for the best :-)
Let me explain: Write-Through means that write I/O coming from the database layer will first go to the spinning drives, where it is mirrored according to the redundancy of the diskgroup in which the file being written to is placed. Afterwards, the cells may populate the Flash Cache if they think it will benefit subsequent reads, but no mirroring is required there. In case of hardware failure, the mirroring is already sufficiently done on the spinning drives, as the picture shows:

Flash Cache Mode Write-Through

Flash Cache Mode WRITE-THROUGH

That changes with the Flash Cache Mode being Write-Back: now writes go primarily to the flash cards, and popular objects may even never get aged out onto the spinning drives. At the very least, that age-out may happen significantly later, so the writes on flash must be mirrored now. The redundancy of the diskgroup where the object in question is placed again determines the number of mirrored writes. The two pictures assume normal redundancy. In other words: Write-Back reduces the usable capacity of the Flash Cache at least by half.

Flash Cache Mode Write-Back

Flash Cache Mode WRITE-BACK

Only databases with performance issues caused by write I/O will benefit from Write-Back; the most likely symptom would be high numbers of the free buffer waits wait event. And Flash Logging is done with both Write-Through and Write-Back. So there is a good reason behind turning on the Write-Back Flash Cache Mode only on demand. I explained this in a very similar way during my current Oracle University Exadata class in Frankfurt, by the way :-)
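A minimal sketch for checking that symptom from SQL; v$system_event shows cumulative values since instance startup (time_waited is in centiseconds), so an AWR or Statspack report per interval is usually more telling:

select event, total_waits, round(time_waited/100) as seconds_waited
from v$system_event
where event = 'free buffer waits';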


Tagged: exadata
Categories: DBA Blogs

OWB to ODI 12c Migration in action

Antonio Romero - Wed, 2014-08-06 11:00

The OWB to ODI 12c migration utility provides an easy to use on-ramp to Oracle's strategic data integration tool. The utility was designed and built by the same development group that produced OWB and ODI. 

Here's a screenshot from the recording below showing a project in OWB and what it looks like in ODI 12c:


There is a useful webcast that you can play and watch the migration utility in action. It takes an OWB implementation and uses the migration utility to move into ODI 12c.

http://oracleconferencing.webex.com/oracleconferencing/ldr.php?RCID=df8729e0c7628dde638847d9511f6b46

It's worth having a read of the following OTN article from Stewart Bryson which gives an overview of the capabilities and options OWB customers have moving forward.
http://www.oracle.com/technetwork/articles/datawarehouse/bryson-owb-to-odi-2130001.html

Check it out and see what you think!

Alter Session Kill on Steroids

Pythian Group - Wed, 2014-08-06 10:27

Perhaps you have encountered something like this: A session that is consuming too many resources needs to be killed. You locate the session and use ALTER SYSTEM KILL SESSION ‘SID,SERIAL#’ to kill the session. As you continue to monitor the database you find that the status of the session in v$session is ‘KILLED’, but the session does not go away. You also notice that the SERIAL# is continually changing.

Now you find there is no OS process associated with the session, but the session continues as PMON is unable to finish cleanup for the session. Usually when this happens, the session will be holding a lock. When that happens, the only method to release the lock is to bounce the database. There are some bugs that may be responsible for this problem, such as this one described by Oracle Support:

Pmon Spins While Cleaning Dead Process (Doc ID 1130713.1)

This particular bug affects Oracle 10.2.0.1 – 11.1.0.7. I have personally seen this same behavior happen on many versions of the database from 7.0 on. To avoid these hanging sessions, many DBAs have adopted the habit of first killing the OS process with an OS utility, and if the session is still visible in v$session, issuing the ALTER SYSTEM KILL command.

The OS command used on Linux/Unix is usually 'kill -9'. On Windows it is OraKill. This method usually avoids the problems encountered when killing a session that is holding a lock and processing DML.

I don’t know just what circumstances trigger this behavior, as I have never been able to reproduce it at will. When it does happen though, it is more than annoying as the only way to clear locks held by the recalcitrant session is to bounce the database.

Quite some time ago (at least as far back as Oracle 8i) Oracle introduced the new IMMEDIATE keyword to use with ALTER SYSTEM KILL SESSION. Using this keyword removes the need to use an OS command to kill a session – Oracle will do it for you! To test this I am using Oracle 10.2.0.4 on Oracle Linux 5.5. I have previously run these same tests in 11.2.0.3 with the same results. Had I access to an 8i or 9i database I would have run the tests there. To start with let’s see what happens when a session is killed without the immediate keyword.

Login to the session to be killed:

$ sqlplus scott/tiger@10gr2

Login as SYSDBA from another terminal and check for scott’s session:

SQL> l
  1  select
  2     s.username,
  3     s.sid,
  4     s.serial#,
  5     p.spid spid
  6  from v$session s, v$process p
  7  where s.username = 'SCOTT'
  8*    and p.addr = s.paddr
SQL> /

USERNAME                              SID    SERIAL# SPID
------------------------------ ---------- ---------- ------------
SCOTT                                 133         35 22870

1 row selected.

Kill the session from the SYSDBA session with ALTER SYSTEM KILL SESSION '133,35'. All that has happened at this point is that Oracle has made an internal call that has disconnected Scott's session. (Tracing that operation is a different topic.) The process on the server has not been terminated. This can be seen by the following experiment:

Logon again as Scott.

In a SYSDBA session check for Scott’s:

 SQL> @scott

USERNAME                              SID    SERIAL# SPID
------------------------------ ---------- ---------- ------------
SCOTT                                 146         81 23678

Now check for the shadow process associated with scott’s session on the server:


[root@ora10gR2 tmp]# ps -fp 23678
UID PID PPID C STIME TTY TIME CMD
oracle 23678 1 0 16:56 ? 00:00:00 oraclejs01 (LOCAL=NO)

Kill the session and check the status:

SQL> alter system kill session '146,81';

SQL> l
  1  select
  2     s.username,
  3     s.sid,
  4     s.serial#,
  5     p.spid spid
  6  from v$session s, v$process p
  7  where s.username = 'SCOTT'
  8*    and p.addr = s.paddr
SQL>/

no rows selected

Check again on the server for the process:

[root@ora10gR2 tmp]# ps -fp 23678
UID PID PPID C STIME TTY TIME CMD
oracle 23678 1 0 16:56 ? 00:00:00 oraclejs01 (LOCAL=NO)

Interesting, isn’t it? We know the process is still alive on the server, but the session information is no longer associated with the process. This happens because Oracle has disconnected the session, which allows the process to continue until the sqlplus session is terminated. The session information is still available in v$session, but is no longer associated with a server process:

SQL> l
  1  select
  2     s.username,
  3     s.status,
  4     s.sid,
  5     s.serial#
  6  from v$session s
  7* where s.username = 'SCOTT'
SQL>/

USERNAME                       STATUS          SID    SERIAL#
------------------------------ -------- ---------- ----------
SCOTT                          KILLED          146         81

1 row selected.

 1* select pid,spid from v$process where pid = 146
SQL>/

no rows selected

When exiting the Scott session, I can see that the session was killed:

SQL> exit
ERROR:
ORA-00028: your session has been killed

Let’s perform the experiment again, but this time use the IMMEDIATE keyword.

Logon as scott:

> sqlplus scott/tiger@10gr2

SQL*Plus: Release 11.2.0.3.0 Production on Tue Aug 5 17:18:53 2014

Logon as SYSDBA and check for the scott session;

SQL> @scott

USERNAME                              SID    SERIAL# SPID
------------------------------ ---------- ---------- ------------
SCOTT                                 146         83 23939

1 row selected.

Before killing scott’s session:

  • get my OS PID
  • enable 10046 trace

The OS PID will be used for strace on the SYSDBA session shadow process on the server.
The 10046 trace is so we can see what is happening in the strace output.

SQL> l
1 select
2 s.username,
3 s.sid,
4 s.serial#,
5 p.spid spid
6 from v$session s, v$process p
7 where s.username is not null
8 and p.addr = s.paddr
9 and userenv('SESSIONID') = s.audsid
10* order by username, sid
SQL>/

USERNAME                              SID    SERIAL# SPID
------------------------------ ---------- ---------- ------------
SYS                                   145         65 23947

1 row selected.

SQL> alter session set events '10046 trace name context forever, level 12';

Session altered.

Now ssh to the db server, check for Scott's session shadow process and start strace:

[root@ora10gR2 tmp]# strace -o 23947.strace -p 23947
^Z
[1]+ Stopped strace -o 23947.strace -p 23947
[root@ora10gR2 tmp]# bg
[1]+ strace -o 23947.strace -p 23947 &

[root@ora10gR2 tmp]# ps -p 23939
PID TTY TIME CMD
23939 ? 00:00:00 oracle

Now kill Scott’s session and exit the SYSDBA session:

SQL> alter system kill session '146,83' immediate;

System altered.

The strace command will now have exited on the server.

First check again for Scott’s session:

[root@ora10gR2 tmp]# ps -p 23939
PID TTY TIME CMD
[root@ora10gR2 tmp]#

So the Scott shadow process has terminated.

As the 10046 trace was enabled, the output to the oracle trace file will appear in the strace file, which allows searching for ‘alter system kill’ in the strace file.

From the strace file:

write(5, "alter system kill session '146,8"..., 44) = 44

Now searching for the PID of scott’s session 23939:

read(10, "23939 (oracle) S 1 23939 23939 0"..., 999) = 228
close(10) = 0
open("/proc/23939/stat", O_RDONLY) = 10
read(10, "23939 (oracle) S 1 23939 23939 0"..., 999) = 228
close(10) = 0
kill(23939, SIGKILL) = 0
kill(23939, SIGCONT) = 0
open("/proc/23939/stat", O_RDONLY) = 10
read(10, "23939 (oracle) Z 1 23939 23939 0"..., 999) = 178
close(10) = 0

From the previous text I can see that Oracle opened the status file for PID 23939.
Why it did so twice I am not sure.

What happens after that is the interesting part.

kill(23939, SIGKILL) = 0

That line means that the SIGKILL signal was successfully sent to Scott’s shadow process.

What does that mean? Run kill -l to get a list of signals:

kill -l
 1) SIGHUP       2) SIGINT       3) SIGQUIT      4) SIGILL       5) SIGTRAP
 6) SIGABRT      7) SIGBUS       8) SIGFPE       9) SIGKILL     10) SIGUSR1
11) SIGSEGV     12) SIGUSR2     13) SIGPIPE     14) SIGALRM     15) SIGTERM
16) SIGSTKFLT   17) SIGCHLD     18) SIGCONT     19) SIGSTOP     20) SIGTSTP
21) SIGTTIN     22) SIGTTOU     23) SIGURG      24) SIGXCPU     25) SIGXFSZ
26) SIGVTALRM   27) SIGPROF     28) SIGWINCH    29) SIGIO       30) SIGPWR
31) SIGSYS      34) SIGRTMIN    35) SIGRTMIN+1  36) SIGRTMIN+2  37) SIGRTMIN+3
38) SIGRTMIN+4  39) SIGRTMIN+5  40) SIGRTMIN+6  41) SIGRTMIN+7  42) SIGRTMIN+8
43) SIGRTMIN+9  44) SIGRTMIN+10 45) SIGRTMIN+11 46) SIGRTMIN+12 47) SIGRTMIN+13
48) SIGRTMIN+14 49) SIGRTMIN+15 50) SIGRTMAX-14 51) SIGRTMAX-13 52) SIGRTMAX-12
53) SIGRTMAX-11 54) SIGRTMAX-10 55) SIGRTMAX-9  56) SIGRTMAX-8  57) SIGRTMAX-7
58) SIGRTMAX-6  59) SIGRTMAX-5  60) SIGRTMAX-4  61) SIGRTMAX-3  62) SIGRTMAX-2
63) SIGRTMAX-1  64) SIGRTMAX

Notice that signal 9 (kill -9) is SIGKILL.

So when killing a session with ALTER SYSTEM KILL SESSION 'SID,SERIAL#' IMMEDIATE, Oracle is actually doing the kill -9 for you, and has been for many years now.

Though not shown here, this same test was run when the session was killed without using the IMMEDIATE keyword, and there were no attempts to kill the process. This was inferred as well by the fact that the process was still running on the server up until the time the Scott sqlplus session was exited.

Categories: DBA Blogs

Disaster Recovery Simulation test - performed 30 databases failover

Syed Jaffar - Wed, 2014-08-06 05:55
Successfully switched over (failed over) the roles of more than 30 physical standby databases this morning as part of the Disaster Recovery (DR) simulation test. Fortunately, there were no technical glitches or hassles during the course of testing, as anticipated. It was indeed a great test and a very successful one too.

The next big challenge for the team will be rebuilding and re-synchronizing those 30 physical standby databases, which range from 100GB to 5TB in size.

Anyway, my team loves the challenges and is truly enjoying every moment.





SLOB Data Loading Case Studies – Part I. A Simple Concurrent + Parallel Example.

Kevin Closson - Wed, 2014-08-06 02:21
Introduction

This is Part I in a short series of posts dedicated to loading SLOB data.  The SLOB loader is called setup.sh and it is, by default, a concurrent data loader. The SLOB configuration file parameter controlling the number of concurrent data loading threads is called LOAD_PARALLEL_DEGREE. In retrospect I should have named the parameter LOAD_CONCURRENT_DEGREE because unless Oracle Parallel Query is enabled there is no parallelism in the data loading procedure. But if LOAD_PARALLEL_DEGREE is assigned a value greater than 1, there is concurrent data loading.

Occasionally I hear of users having trouble with combining Oracle Parallel Query with the concurrent SLOB loader. It is pretty easy to overburden a system when doing something like concurrent, parallel data loading–in the absence of tools like Database Resource Management I suppose. To that end,  this series will show some examples of what to expect when performing SLOB data loading with various init.ora settings and combinations of parallel and concurrent data loading.

In this first example I’ll show an example of loading with LOAD_PARALLEL_DEGREE set to 8. The scale is 524288 SLOB rows which maps to 524,288 data blocks because SLOB forces a single row per block. Please note, the only slob.conf parameters that affect data loading are LOAD_PARALLEL_DEGREE and SCALE. The following is a screen shot of the slob.conf file for this example:

SLOB-data-load-3

The next screen shot shows the very simple init.ora settings I used during the data loading test. This very basic initialization file results in default Oracle Parallel Query; therefore this example is a concurrent + parallel data load.

SLOB-data-load-6

The next screen shot shows that I directed setup.sh to load 64 SLOB schemas into a tablespace called IOPS. Since SCALE is 524,288 this example loaded roughly 256GB (8192 * 524288 * 64) of data into the IOPS tablespace.

SLOB-data-load-1

As reported by setup.sh the data loading completed in 1,539 seconds or a load rate of roughly 600GB/h. This loading rate by no means shows any intrinsic limit in the loader. In future posts in this series I’ll cover some tuning tips to improve data loading.  The following screen shot shows the storage I/O rates in kilobytes during a portion of the load procedure. Please note, this is a 2s16c32t 115w Sandy Bridge Xeon based server. Any storage capable of I/O bursts of roughly 1.7GB/s (i.e., 2 active 8GFC Fibre Channel paths to any enterprise class array) can demonstrate this sort of SLOB data loading throughput.

SLOB-data-load-2

 

After setup.sh completes it is good to count how many loader threads were able to successfully load the specified number of rows. As the example shows, I simply grep for the value of slob.conf->SCALE from cr_tab_and_load.out. Remember, SLOB in its current form loads a zeroth schema, so the return from such a word count (-l) should be one greater than the number of schemas setup.sh was directed to load.

SLOB-data-load-4

The next screen shot shows the required execution of the procedure.sql script. This procedure must be executed after any execution of setup.sh.

SLOB-data-load-7

Finally, one can use the SLOB/misc/tsf.sql script to report the size of the tablespace used by setup.sh. As the following screenshot shows, the IOPS tablespace ended up with a little over 270GB, which can be accounted for by the size of the tables based on slob.conf, the number of schemas and a little overhead for indexes.

 

SLOB-data-load-5
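If you don't have the SLOB kit at hand, a rough equivalent of what tsf.sql reports can be approximated with a simple data dictionary query; this sketch shows allocated datafile space per tablespace (tsf.sql itself may compute things differently):

select tablespace_name, round(sum(bytes)/1024/1024/1024) as gb
from dba_data_files
group by tablespace_name;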

Summary

This installment in the series has shown expected screen output from a simple example of data loading. This example used default Oracle Parallel Query settings, a very simple init.ora and a concurrent loading degree of 8 (slob.conf->LOAD_PARALLEL_DEGREE) to load data at a rate of roughly 600GB/h.

 

 

 


Filed under: oracle

Passed OCP 12c test

Bobby Durrett's DBA Blog - Tue, 2014-08-05 15:27

Well, I finally passed the Oracle Certified Professional (OCP) 12c upgrade exam, 1Z0-060.  I got 86% right on the new features section and 78% correct on the general DBA section.  So, that means I missed roughly 7 questions on each section which was more than I expected because I felt good about all of my answers.

I’m happy with the three resources I used which are:

  1. Self Test software – $99
  2. Matthew Morris book – $10
  3. Sam Alapati book – $44

I spent a lot of time in the 12c manuals.  Be careful, though, because the 12.1.0.2 manuals just came out and some things are different in 12.1.0.2 from the test, because the test designers based the test on 12.1.0.1.

I built two Linux virtual machines for test preparation.  On one I installed the 12.1.0.1 database software and created a CDB and a non-CDB for testing.  On the other I installed the grid software and database software and created a CDB on ASM disks.  I spent many hours testing 12c features on these two environments and three databases.

I learned a ton both about new 12c features and features that exist in earlier versions, but it was a long and tedious process and I’m glad to finally be done.

- Bobby

Categories: DBA Blogs

Standard ADF BC Passivation/Activation for Transient View Object

Andrejus Baranovski - Tue, 2014-08-05 15:20
If you want to implement a transient View Object in ADF BC, you must make sure it will be passivation/activation ready, otherwise you may lose data. There are several ways to achieve passivation/activation for such View Objects - override the passivation/activation lifecycle and handle transient View Object rows programmatically, or reinitialise transient View Object rows from the Application Module prepareSession method. There is one more solution, and I'm going to describe it in this post. It is based on a dummy SQL-based View Object, designed to store transient attribute values.

Sample application - ADFTransientVOPassivationApp.zip - implements a SQL-based View Object with transient attributes. Instead of creating a completely programmatic View Object, I have created a SQL-based one with a single Id attribute based on a SQL expression, and added transient attributes. This Id attribute is actually never used; it is there for one reason only - to simulate a SQL-based View Object. One of the transient attributes is selected as the key attribute:


The SQL query doesn't fetch any rows; I'm using it for one reason only - to simulate a SQL-based View Object. ADF BC knows how to passivate/activate SQL-based View Objects automatically, so I'm going to use this feature and force passivation/activation for all transient attributes created for the same View Object:
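For illustration, such a dummy query can be as simple as the sketch below: valid SQL that never returns a row (the actual query in the sample application may differ):

SELECT 1 AS ID FROM DUAL WHERE 1 = 0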


Just make sure to select the Passivate Including All Transient Values option; this will ensure all transient attributes are passivated/activated automatically, without any coding intervention:


I'm going to test the sample application with AM pooling off; this simulates passivation/activation behaviour for each request:


On the ADF UI side, I'm going to implement a table. To be able to enter rows successfully for such a SQL-based View Object with transient attributes, I must enable the ChangeEventPolicy = ppr option in the Page Definition for the iterator:


Input components must be set with AutoSubmit = true option:


At runtime, when I enter rows, all data gets passivated - you can see this in the log. Three rows are entered, and data for all attributes gets passivated/activated automatically:

Some Observations on Puppetrun with Foreman

Pythian Group - Tue, 2014-08-05 11:50

After joining Pythian I was introduced to several configuration management systems, and Puppet was one of them. Foreman is a system management tool which can be integrated with Puppet to manage puppet modules and to initiate puppet runs on hosts from a web interface. This is very useful if you want to configure a large number of systems.

Puppet kick, which was previously used to initiate puppet runs from Foreman, is now deprecated.

For initiating puppet runs from the Foreman interface, I used mcollective. Mcollective can be used to execute parallel jobs on remote systems. There are 3 main components:

    Client – Connects to the mcollective Server and send commands.
    Server – Runs on all managed systems and execute commands.
    Middleware – A message broker like activemq.

I used mcollective puppet module from Puppetlabs for my setup.

# puppet module install puppetlabs-mcollective

My setup includes middleware(activemq) and mcollective client in the puppet server and mcollective servers in all managed systems.

After the implementation, I found that the puppet run from the Foreman web interface was failing for some servers.

I found the following in /var/log/foreman-proxy/proxy.log:

D, [2014-04-18T07:20:54.392392 #4256] DEBUG -- : about to execute: /usr/bin/sudo /usr/bin/mco puppet runonce -I server.pythian.com
W, [2014-04-18T07:20:56.167627 #4256] WARN -- : Non-null exit code when executing '/usr/bin/sudo /usr/bin/mco puppet runonce -I reg-app-02.prod.tprweb.net'
E, [2014-04-18T07:20:56.175034 #4256] ERROR -- : Failed puppet run: Check Log files

You can see that mco command is trying to execute a puppet run in server.pythian.com and failing. mco command uses several sub commands called ‘applications’ to interact with all systems and ‘puppet’ is one of them.

While running the command on the command line, I received the following:

# mco puppet runonce -I server.pythian.com
| [ > ] 0 / 1
warn 2014/04/11 08:05:34: client.rb:218:in `start_receiver' Could not receive all responses. Expected : 1. Received : 0

Finished processing 0 / 1 hosts in 22012.79 ms

No response from:

server.pythian.com

I am able to ping the server.

When I ran 'mco ping' I found that the server with the issue is identified by a short hostname and the others by their FQDNs.

$ mco ping
server time=89.95 ms
server3.pythian.com time=95.26 ms
server2.pythian.com time=96.16 ms

So mcollective is exporting a short hostname when foreman is expecting an FQDN (Fully Qualified Domain Name) from this server.

Foreman takes node name information from puppet certificate name and that is used for filtering while sending mco commands.

Mcollective exports identity differently. From http://docs.puppetlabs.com/mcollective/configure/server.html#facts-identity-and-classes,

identity
The node’s name or identity. This should be unique for each node, but does not need to be.
Default: The value of Ruby’s Socket.gethostname method, which is usually the server’s FQDN.
Sample value: web01.example.com
Allowed values: Any string containing only alphanumeric characters, hyphens, and dots — i.e. matching the regular expression /\A[\w\.\-]+\Z/

I passed the FQDN as the identity on the servers using the mcollective module, which resulted in the following setting:

# cat /etc/mcollective/server.cfg |grep identity
identity = server.pythian.com

This allowed the command to run successfully and got 'Puppet Run' from Foreman working.

# mco puppet runonce -I server.pythian.com
* [ ============================================================> ] 1 / 1

Now ‘mco ping’ looks good as well.

$ mco ping
server.pythian.com time=91.34 ms
server3.pythian.com time=91.23 ms
server2.pythian.com time=82.16 ms

Now let us check why this was happening.

mcollective identity is exported from ruby function Socket.gethostname.

From ruby source code you can see that Socket.gethostname is getting the value from gethostname().

./ext/socket/socket.c

#ifdef HAVE_GETHOSTNAME
/*
* call-seq:
* Socket.gethostname => hostname
*
* Returns the hostname.
*
* p Socket.gethostname #=> "hal"
*
* Note that it is not guaranteed to be able to convert to IP address using gethostbyname, getaddrinfo, etc.
* If you need local IP address, use Socket.ip_address_list.
*/
static VALUE
sock_gethostname(VALUE obj)
{
#if defined(NI_MAXHOST)
# define RUBY_MAX_HOST_NAME_LEN NI_MAXHOST
#elif defined(HOST_NAME_MAX)
# define RUBY_MAX_HOST_NAME_LEN HOST_NAME_MAX
#else
# define RUBY_MAX_HOST_NAME_LEN 1024
#endif

char buf[RUBY_MAX_HOST_NAME_LEN+1];

rb_secure(3);
if (gethostname(buf, (int)sizeof buf - 1) < 0)
rb_sys_fail("gethostname(3)");

buf[sizeof buf - 1] = '\0';
return rb_str_new2(buf);
}

gethostname is a glibc function which calls the uname system call and copies the value from the returned nodename.

So when foreman uses the FQDN value which it collects from puppet certificate name, mcollective exports the hostname returned by gethostname().

Now let us see how gethostname() gives different values in different systems.

When passing the complete FQDN in the HOSTNAME parameter in /etc/sysconfig/network, we can see that Socket.gethostname returns the FQDN.

[root@centos ~]# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=centos.pythian.com

[root@centos ~]# hostname -v
gethostname()=`centos.pythian.com’
centos.pythian.com

[root@centos ~]# irb
1.9.3-p484 :001 > require 'socket'
=> true
1.9.3-p484 :002 > Socket.gethostname
=> "centos.pythian.com"
1.9.3-p484 :003 >

The system which was having the problem had the following configuration.

[root@centos ~]# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=server

[root@centos ~]# hostname -v
gethostname()=`server’
server

[root@centos ~]# irb
1.9.3-p484 :001 > require 'socket'
=> true
1.9.3-p484 :002 > Socket.gethostname
=> "server"
1.9.3-p484 :003 >

Here ruby is only returning the short hostname for Socket.gethostname. But the system had the following entry in /etc/hosts.

192.168.122.249 server.pythain.com server

This allowed the system to resolve the FQDN.

[root@centos ~]# hostname -f -v
gethostname()=`server’
Resolving `server’ …
Result: h_name=`server.pythain.com’
Result: h_aliases=`server’
Result: h_addr_list=`192.168.122.249′
server.pythain.com

From ‘man hostname’.

The FQDN of the system is the name that the resolver(3) returns for the host name.
Technically: The FQDN is the name gethostbyname(2) returns for the host name returned by gethostname(2). The DNS domain name is the part after the first dot.

As the resolver is able to resolve the hostname from /etc/hosts, puppet is able to pick up the FQDN value for the certificate, which is later used by Foreman.
But mcollective exports the short hostname returned by gethostname().

To fix the issue on Red Hat based Linux distributions, we can try any of the following:

* Pass an FQDN in /etc/sysconfig/network like below.

# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=server.pythian.com

OR

* Use a short hostname as HOSTNAME but make sure that it would not resolve to an FQDN in /etc/hosts or DNS (not really suggested).

OR

* Pass a short hostname or FQDN as HOSTNAME but make sure that there is an entry like the one below in /etc/hosts and that mcollective is exporting the FQDN as its identity.

192.168.122.249 server.pythian.com server
Categories: DBA Blogs

Set Focus on Added Row Tabular

Denes Kubicek - Tue, 2014-08-05 09:19
This sounds quite simple:

"I would like to set focus on the new added row in my tabular form."

In the following example, you can see how to accomplish that.

Categories: Development

Oracle WebCenter Imaging: Avoid the Accounts Payable Zombie Apocalypse

WebCenter Team - Tue, 2014-08-05 06:43

Author: Jane Shirley, Senior Business Analyst for Aurionpro

A World without Oracle WebCenter Imaging
The Client’s Situation: Living an AP Nightmare

“I’m going to have nightmares about this for the rest of my life.” That’s what our client said as she described the company’s paper and spreadsheet-based Accounts Payable (AP) process. Her department processed about 10,000 invoices a month and employees were beginning to resemble the cast from a horror movie – complete with zombie-like AP processors shuffling through cubicle aisles in search of active invoices.

The Issues:

1) Zombification does not support ROI goals or process improvement…
Thankfully, the Aurionpro WebCenter team got to her in time!  We met our client during her company’s financial transformation planning process and spent time with her and the team analyzing requirements and prioritizing processing needs, providing insight on additional ROI realization, and identifying areas for process improvement.  While we found some opportunities for modifications, we kept the client’s Oracle WebCenter Imaging implementation timeline to a bare minimum in order to accelerate the transformation process and reduce development time.

“All implementations require at least some level of configuration and modification,” we explained. “The trick is to identify the areas where customization is truly required, supports a faster time to ROI and makes the most sense for the client’s business. As for other aspects, while still important, we recommend putting them into a ‘Phase II Brain Fungus Antidote’ that inhibits the zombification of the project and driving processes. This proven approach helps organizations get the biggest bang for their buck.”

2) Zombies have no requirements (and are usually missing a limb or two…)
We’d advised our client that all of us needed to understand the company’s existing AP System before getting started.  The Aurionpro team’s first step is always requirements gathering. This part of the process has three key objectives:
  • Train the Financial Transformation team in the details of WebCenter Imaging both through system demonstration and generic workflow presentations.
  • Understand the client’s business requirements and document those verbally and visually with reconfirmation
  • Confirm our understanding of the client’s goals.  We’ve learned the most important part of requirements gathering is ensuring that we heard our client’s concerns and expressed them accurately.
3) People who live in zombie societies hide at night and barricade themselves in…
Aurionpro understood that team leads like to do things their own way. For our client, she wanted to be left alone to contemplate things and assess whether the plan was going to get the job done on time and on budget.

“I just like taking the documentation home and studying it,” our client said. “The matrix format that Aurionpro provides allowed me to sort and re-sort the individual requirements so I could understand how each one fits with our goals and also our budget. I could even send the functional flows included in the requirements to my change management team so they could start their part of the process.”

4) Zombies can only be killed with proven methods…
All of our clients have given high praise to the matrix format that Aurionpro services teams use to ensure that requirements are clear and that focus is maintained consistently throughout the lifecycle of the project.  Our deployment methodology allows our clients to literally check-off each item on their original requirements list so that they have a visualization of how they’ve come full circle from idea to reality!  Our post-UAT development then covers the team by ensuring that any post-development adjustments are made possible. Lastly, the Aurionpro implementation process for WebCenter Imaging supports companies through the “go-live” period and works to ensure that any zombification of their process are completely resolved.
5) Shufflers have no clear roadmap…
“You made it so much easier to support my progress reports to my senior management,” the client told us after the project was complete.  “The weekly status reports that Aurionpro provided gave a crystal-clear snapshot of our project status against the overall timeline.  I always knew where I stood and what was going to happen next. What was a very scary process for me became quite manageable, or at least less terrifying,” she concluded.

The really good news was that once she had her Oracle WebCenter Imaging solution in place, our client’s nightmare world receded and the zombies morphed back into accessible, productive, and engaged colleagues.

About the Author:
Jane Shirley is a Senior Business Analyst for Aurionpro. She has worked in the Oracle WebCenter space on both the customer and the consulting side for large corporate enterprise-wide implementations. After serving time as a Marketing Manager, she spent several years managing invoice processing for a Fortune 50 company. She can be reached at jane.shirley@aurionpro.com.

How to Configure an Azure Point-to-Site VPN – Part 1

Pythian Group - Tue, 2014-08-05 06:24

This blog post is the first in a series of three which will demonstrate how to configure a Point-to-Site VPN step-by-step. Today’s post will teach you how to configure a virtual network and a dynamic routing gateway, and the following blog posts will demonstrate how to create the certificates, and how to configure the VPN client.

Nowadays we are opting to move parts of, or even entire systems to the cloud. In order to build a hybrid environment, we need to find a way to connect our enterprise/local network, also known as on-premises, and the cloud.

Currently, we have two options to connect Azure and On-Premises:

  1. Using a Point-to-Site VPN
  2. Using a Site-to-Site VPN

The first option, using a Point-to-Site VPN is the option I’ll be demonstrating. It is recommended when you need to connect only some servers of your network to Azure. On the other hand, the Site-to-Site VPN connects your entire on-premises network to Azure.

CONFIGURE A VIRTUAL NETWORK AND A DYNAMIC ROUTING GATEWAY

To start, connect to your Azure account (https://manage.windowsazure.com/) and click the "add" button in the bottom left corner.

    1. Now follow the options that you can see in the image, and create a custom virtual network: Screen Shot 2014-07-29 at 23.41.53
    2. Fill in the Virtual Network name and the location where you want to create it. Screen Shot 2014-07-29 at 23.44.36
    3. Check "Configure a Point-to-Site VPN" (DNS server is an optional setting, used for name resolution between this virtual network and your on-premises network): Screen Shot 2014-07-29 at 23.45.59
    4. Set the IP range accordingly, after verifying that this range does not overlap with your on-premises network. Screen Shot 2014-07-29 at 23.54.26
    5. Click on the "add gateway subnet" button and then on the finish button (check mark). Screen Shot 2014-07-29 at 23.57.52
    6. Now you need to wait a few minutes while the virtual network is being created. Screen Shot 2014-07-29 at 23.58.11
    7. You will see a message like this when the process is done: Screen Shot 2014-07-30 at 00.00.24
    8. At this stage, you will be able to see the network created under the network section. Screen Shot 2014-07-30 at 00.22.20
    9. Now we need to create a "Dynamic Routing Gateway". To complete this, click on the network you just created and go to the Dashboard. Screen Shot 2014-07-30 at 00.31.00
    10. Click on the "CREATE GATEWAY" button at the bottom of the page and confirm your intention by selecting "Yes". Screen Shot 2014-07-30 at 00.58.58
    11. It may take a few minutes. You will see the message "CREATING GATEWAY", as shown in the image below: Screen Shot 2014-07-30 at 00.59.47
    12. After a successful creation, you will see the following: Screen Shot 2014-07-30 at 01.22.39

At this point, we are done with the Virtual Network creation. Now we can proceed to the certificate creation steps… Stay tuned for my next two posts.

Categories: DBA Blogs

Cascading Shuttle - Keep Selected Values

Denes Kubicek - Tue, 2014-08-05 06:14
There are probably several good ways to solve the following problem:

    1. you have a simple select list which cascades a shuttle element
    2. you select something and the shuttle is populated
    3. after that, you pick a value in the shuttle and move it to the right side
    4. then you change the select list again
    5. the values you selected in your shuttle are now lost


This example shows how this can be done. This question appears in the APEX Forum from time to time and I decided to provide one simple solution for it.

Categories: Development

What Is Oracle DB Time, DB CPU, Wall Time and Non-Idle Wait Time

If you are into tuning Oracle Database systems, you care about time. And if you care about time, then you need to understand the most important time parameters: what they are, their differences, how they relate to each other and how to use them in your performance tuning work.

The key Oracle Database time parameters are elapsed time, database time (DB Time), non-idle wait time and server process CPU consumption (DB CPU) time.

This first post is pretty basic, yet core fundamental stuff. So in the following two posts I'll introduce elapsed time, add parallelism into the mix and revisit wall time. What initially seems simple can take some very interesting twists!

You probably know that I am all about quantitative Oracle performance analysis. I research, write, teach, and speak about it. I even have an OraPub Online Institute seminar about how to tune your Oracle Database systems from a standard AWR or Statspack report using an Oracle Time Based Analysis (OTBA) framework.

So let's get started!

Wall Time & Run Time
I'll start with Wall Time because that is close (hopefully) to what a user experiences. In fact, if there is no time gap between the Oracle Database and the user, then we can do a little math and figure out what the users are, on average, experiencing. I'll get back to wall time in the next post, where I include elapsed time and parallelism into the equation.

DB CPU
DB CPU is Oracle server/foreground/shadow process CPU consumption. Each Oracle server process gathers its own CPU consumption using the time and/or getrusage C function system call. So unless there is a major screw-up by either the operating system or the Oracle kernel developers, the time will be good... very good. The name DB CPU is taken from the actual statistic name, which is found in both v$sess_time_model and v$sys_time_model.

If you look at any AWR or Statspack report in the "Time Model" section, you will see DB CPU. The value shown will be all server process CPU consumption within the reporting snapshot interval, converted to seconds. (The raw statistic is stored in microseconds.)

Below is an example Time Model Statistics screen shot from a standard AWR report. I've highlighted DB CPU.



If you run one of my OraPub System Monitor (OSM) time related tools like ttpctx.sql or rtpctx.sql you see a CPU time statistic. That contains both the DB CPU (i.e., server process) and "background process cpu" statistics. Here's an example.

SQL> @ttpctx.sql
Remember: This report must be run twice so both the initial and
final values are available. If no output, press ENTER about 11 times.

Database: prod35 31-JUL-14 12:09pm
Report: ttpctx.sql OSM by OraPub, Inc. Page 1
Total Time Activity (142 sec interval)

Avg Time Time Wait
Time Component % TT % WT Waited (ms) (sec) Count(k)
------------------------------------- ------- ------- ----------- ----------- --------
CPU consumption: Oracle SP + BG procs 95.95 0.00 0.000 347.212 0
PX Deq: Slave Session Stats 1.45 35.74 0.113 5.240 47
library cache: mutex X 0.58 14.26 0.136 2.090 15
PX Deq: Slave Join Frag 0.43 10.57 0.067 1.550 23
PX Deq: Signal ACK EXT 0.29 7.16 0.045 1.050 23
control file parallel write 0.28 7.03 20.600 1.030 0
PX qref latch 0.27 6.75 0.012 0.990 85
latch free 0.20 4.91 0.090 0.720 8
log file parallel write 0.16 4.02 12.826 0.590 0

Non-Idle Wait Time
When an Oracle process can not consume CPU, it will pause. As an Oracle DBA, we know this as wait time. Sometimes a process waits and it's not a performance problem, so we call this Idle Wait Time. Oracle background processes typically have lots of idle wait time. However, when a user is waiting for something to complete and way down deep their Oracle server process is waiting to get perhaps a lock or latch, this is Non-Idle Wait Time. Obviously, when tuning Oracle we care a lot about non-idle wait time.

Below is a simple query showing wait event classifications. In this system there are 119 Idle wait events, so all the rest would be classified as non-idle wait events.

Oracle uses a variety of methods to determine wait time. I have a number of postings and educational content available about this. You'll see them if you do an OraPub or blog search for "time".

When working with non-idle wait time, remember the 80/20 rule. Most of the wait time we care about will be contained with in the largest ("top") two to four wait events. Don't waste YOUR time focusing on the 20%.

Here's an example. In the screen shot below, while not shown, the total wait time is 1966 seconds.
If you add up the displayed "top" four wait events, their combined wait time is 1857. This is about 95% of all the non-idle wait time. This is a good example demonstrating that most of the wait time is found in the top two to four events.

My OSM toolkit has many wait time related tools. Most start with "sw" for "session wait", but both ttpctx.sql and rtpctx.sql will contain the non-idle wait time and also CPU consumption. This is a good time to transition into DB Time.

DB Time
DB Time is a time model statistic that is the sum of Oracle process CPU consumption and non-idle wait time. When optimizing Oracle systems we typically focus on reducing "time", though many times database work is also part of the equation. This "time" is essentially DB Time, though sometimes I take control over what I consider idle wait time.

The name DB Time comes from the actual statistic name in both v$sess_time_model and v$sys_time_model.

If you look at any AWR or Statspack report in the "Time Model" section, you will see DB Time.
The DB Time value is technically all server process CPU consumption plus the non-idle wait time within the reporting snapshot interval, converted to seconds. (The raw statistic is stored in microseconds.) Surprisingly, Oracle does not include "background cpu time" in the DB Time statistic. There are both good and not so good reasons the background CPU time is not included, but that's a topic for another posting.

A Little Math
We have enough detail to relate DB Time, DB CPU and non-idle wait time together... using a little math.

DB Time = DB CPU + non_idle_wait_time

And of course,

non_idle_wait_time = DB Time - DB CPU

This is important, because there is no single statistic that shows all the non-idle wait time. This must be derived. Shown above is one way to derive the non-idle wait time. Take a look at the AWR report snippet below.

In the Non-Idle Wait Time section above, I stated that the total non-idle wait time was 1966 seconds. I derived this from the Time Model screen shown above. I simply did:

non_idle_wait_time = DB Time - DB CPU
1966.16 = 4032.03 - 2065.87
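If you want to pull the two inputs yourself rather than read them off an AWR report, here is a minimal sketch against the time model view mentioned above (cumulative values since instance startup, converted to seconds):

select stat_name, round(value/1000000,2) as seconds
from v$sys_time_model
where stat_name in ('DB time','DB CPU');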

Coming Up Next
I wanted to keep this post short, which means I left out the more interesting topics. So in the next post I'll merge into the picture elapsed time along with parallelism and revisit wall time. Then in the third post (that's my guess at this point), I'll actually demonstrate this in two different systems.

Thanks for reading,

Craig.

https://resources.orapub.com/OraPub_Online_Training_About_Oracle_Database_Tuning_s/100.htm
You can watch the seminar introductions for free on YouTube! If you enjoy my blog, subscribing will ensure you get a short, concise email about each new posting. Look for the form on this page.

P.S. If you want me to respond to a comment or you have a question, please feel free to email me directly at craig@orapub .com.






Categories: DBA Blogs

Advantages & Benefits of BI in your business: Additional Services Series pt. 2 [VIDEO]

Chris Foot - Tue, 2014-08-05 05:13

Transcript

To help our customers make the best possible business decisions, we offer a complete set of Business Intelligence support services for Microsoft’s Integration Services (SSIS), Analysis Services (SSAS), and Reporting Services (SSRS) product sets.

Why Business Intelligence you might ask? It helps you quickly identify business trends, empower your staff to use relevant information, gain insight into customer behavior for upsell opportunities – and more. Plus, the initial low-cost entry point of SSIS, SSAS, and SSRS requires no third party software and comes with everything needed to create a robust BI environment.

Microsoft’s BI product stack’s compatibility with other applications combined with having all your support provided by a single vendor helps to simplify your processes and take maximum advantage of your BI applications.

For more details on the importance of BI and how our product set can enhance your business, click on the link below to download our BI whitepaper.

In our next video, we’ll dive deeper into our specific BI products. See you there!
 

The post Advantages & Benefits of BI in your business: Additional Services Series pt. 2 [VIDEO] appeared first on Remote DBA Experts.

Get Up Offa That Thing

Doug Burns - Tue, 2014-08-05 04:00
No, no, no ... not *that* JB!
As regular readers will know, the JB who tends to get mentioned most often around these parts is John Beresniewicz who, up until recently, worked at Oracle on all the cool OEM Performance Pages and related instrumentation (alongside others, of course, such as Graham Wood, Uri Shaft, Kyle Hailey and probably a cast of thousands for all I know). Over recent years, JB has become a friend and has always posted deeply insightful comments whenever I’ve blogged about DB Time, Top Activity, Load Maps, ASH Analytics or Adaptive Thresholds. There can be few people who understand those subjects as well and he also has a great way of communicating how such powerful tools can be used to actually make things a lot simpler. (Click On The Big Stuff!, to pick one example) There can be a lot of unexpected complexity behind simplicity, believe me ;-)
So when the opportunity comes to learn from the best, I feel it’s only right I share it. The ASH Architecture and Advanced Usage presentation is a collective effort between Graham Wood, Uri Shaft and JB that has been refined over a number of years. This is the version that JB and Graham delivered at the RMOUG Training Days 2014. It’s excellent stuff from some true Oracle Performance experts.
JB might try to play the grumpiest man in California at times but, after all the work he has contributed to improving the performance analysis tools available to jobbing DBAs, I for one hope he sticks around on this Oracle Performance stuff and isn't distracted by Big Data or Anything-as-a-Service because he’d be too much of a loss (although I wouldn't have to listen to his whining so much, so maybe every cloud etc .... ;-))
Anyway – check out the presentation. It’s well worth your time. Better still, give that man a job so he doesn't have too much time on his hands and be reduced to starting to use Social Media!