Feed aggregator

Basicfile LOBs

Jonathan Lewis - Fri, 2016-08-05 10:12

I got a call to look at a performance problem involving LOBs a little while ago. The problem was with an overnight batch that had about 40 sessions inserting small LOBs (12KB to 22KB) concurrently, for a total of anything between 100,000 and 1,000,000 LOBs per night. You can appreciate that this would eventually become a very large LOB segment – so before the batch started all LOBs older than one month were deleted.

The LOB column had the following (camouflaged) declaration:

 LOB (little_lob) STORE AS BASICFILE (
        TABLESPACE lob_ts 

The database was 11gR2, the tablespace was defined with ASSM, uniform 1MB extents and a blocksize of 8KB (so the LOBs were all 2 or 3 chunks), and the undo retention time was 900 seconds. The effect of “enable storage in row” is that the LOBINDEX didn’t have to hold any details of current LOB chunks (for in-row LOBs, the first 12 chunks are listed in the LOB locator in the base table).
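The “2 or 3 chunks” claim is just rounding: each LOB is stored in whole 8KB chunks, so a minimal sketch of the arithmetic (assuming the chunk size equals the 8KB blocksize, as in the tablespace described above) is:

```shell
# Chunks per LOB = size in bytes rounded up to whole 8KB chunks
# (assumes CHUNK = blocksize = 8KB)
chunks() { echo $(( ($1 * 1024 + 8191) / 8192 )); }
chunks 12   # 12KB LOB -> prints 2
chunks 22   # 22KB LOB -> prints 3
```

So even the smallest LOBs in the batch need two out-of-row chunks, and the largest need three.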

So, examining an AWR report covering the critical interval, reviewing the captured ASH data, and checking the database, a few questions came to mind:

  • With 200GB of current LOB data in the segment, why was the segment roughly 800GB?
  • With no need for current LOBs to be indexed, how had the LOB index reached 500,000 blocks in size?
  • There had been 500,000 inserts that night – so why had Oracle done 6 billion (cached) buffer gets on the (nocache) LOB segment?
  • Given that the LOB segment had not changed size during the night, why had there been millions of HW enqueue waits on the inserts?

Knowing the stuff that I did know about basicfile LOBs it seemed likely that the most significant problem was that the segment hadn’t been created with multiple freepools which, according to the very sparse (and not entirely self-consistent) documentation, exist to allow improved concurrency. So I thought I’d search the Internet for any useful information about freepools, how they worked, what impact they might have on this problem, why their absence might produce the symptoms I’d seen, and what the best course of action would be to address the problem.

Of course the “correct” solution according to MoS would be to convert from basicfile to securefile – with a strange insistence on using online redefinition, but no explanation of why a simple CTAS or alter table move is undesirable or dangerous. Unfortunately there are a couple of notes on MoS describing performance issues with “high” levels of concurrent inserts that need to be addressed by setting hidden parameters, so I’m not (yet) keen on rebuilding 700GB of a production system to produce a change that might still not work quickly enough, especially since I couldn’t find anything on MoS that could quantify the time needed to do the conversion.

To my surprise I couldn’t find a single useful piece of information about the problem. The only articles I could find seemed to be little bits of cut-n-paste from the Oracle manual pages about using multiple freepools, and the best of those actually demonstrated rebuilding or changing the freepools settings on a LOB of a few megabytes. The most significant MoS note did say that the process “could be slow” and would lock the table. But surely someone, somewhere, must have tried it on a big system and had some idea of “how slow”.

In the end I had to start building some simple models and doing a few experiments to find out what happens and where the time goes and what causes the strange results and – most importantly – how freepools might help. Fortunately, following a call to the Oak Table for any ideas or links to useful documents, Kurt van Meerbeeck directed me to the original Oracle patents which were enormously helpful in showing why freepools could help and why, in the wrong circumstances, you could still end up with a (slightly smaller) disaster on your hands.


To be continued …


Until I find time to write up the mechanisms I’ve discovered, I’m happy to respond to any quick questions about this problem, and I’m more than happy to hear about experiences others have had either rebuilding freepools, changing the number of freepools, or migrating from basicfile to securefile LOBs.


List usernames instead of uids with the ps command for long usernames

Yann Neuhaus - Fri, 2016-08-05 08:27

Have you ever faced this situation? You have usernames in your /etc/passwd file with more than 8 characters. This is no problem for Linux at all (usernames may be up to 32 characters long), but your ps output might look a little scrambled.

It shows you the uid instead of the username like in the following example:

$ id

uid=20001(longuser01) gid=10002(oinstall) groups=10002(oinstall)

$ sleep 1000000 &

$ ps -ef | grep sleep | grep -v grep

20001    14069 11739  0 14:11 pts/0    00:00:00 sleep 1000000


But you want to see the username instead of the uid. The workaround is:

  • Don’t use more than eight characters for your usernames  :-)
  • Or …. format your ps output the right way

You could use the following alias to get the job done.

$ alias psx='export PS_FORMAT="user:12,pid,%cpu,%mem,vsz,rss,tty,stat,start,time,command"; ps ax'

$ psx | grep sleep | grep -v grep

longuser01 14069  0.0  58940 520 pts/0 S 14:11:50 sleep 1000000


Now it looks better.
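If you don’t want a permanent alias, the same formatting can be requested per invocation with `ps -o` (a sketch; the column list below is just an example, pick whichever columns you need):

```shell
# Widen the user column to 12 characters for a single ps call,
# here shown against the current shell's own pid
ps -o user:12,pid,tty,stat,time,command -p $$
```

The `user:12` syntax sets the column width, which is what makes long usernames display instead of falling back to the uid.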

Cheers, William


The post List usernames instead of uids with the ps command for long usernames appeared first on Blog dbi services.

The new Identity clause doesn't increment the id

Tom Kyte - Fri, 2016-08-05 08:26
Categories: DBA Blogs

Use select query to concatenate, and remove spaces from two columns

Tom Kyte - Fri, 2016-08-05 08:26
Compare table with spaces in between data, with table without spaces Example:- Table A has three fields: ID, Cd_1, and Cd_2 Fields: Cd_1 or Cd_2 can have spaces between the codes. Below example has space between X and Y in Cd_1 ID Cd...
Categories: DBA Blogs

I've done the test and got the wrong answer but I don't know why

Tom Kyte - Fri, 2016-08-05 08:26
It's about the code, I don't know why it displays "before 5 after 5". CREATE OR REPLACE PROCEDURE plch_test AS BEGIN NULL; -- 5 lines of code END; / CREATE OR REPLACE TRIGGER plch_bca BEFORE CREATE OR ALTER ON SCHEMA DECLARE l_li...
Categories: DBA Blogs

Data Redaction and Data Pump in #Oracle 12c

The Oracle Instructor - Fri, 2016-08-05 08:07

What happens upon Data Pump Export if tables are being exported that have a Data Redaction Policy? I got that question several times in class, which is why I put the answer here, so I can refer to it subsequently. Might also be of interest to the Oracle Community :-)

SYS@orcl > BEGIN
DBMS_REDACT.ADD_POLICY(
object_schema => 'SCOTT',
object_name => 'EMP',
policy_name => 'EMPSAL_POLICY',
column_name => 'SAL',
function_type => DBMS_REDACT.FULL,
expression => '1=1');
END;
/

PL/SQL procedure successfully completed.

SYS@orcl > connect scott/tiger
SCOTT@orcl > select ename,sal from emp;

ENAME             SAL
---------- ----------
SMITH               0
ALLEN               0
WARD                0
JONES               0
MARTIN              0
BLAKE               0
CLARK               0
SCOTT               0
KING                0
TURNER              0
ADAMS               0
JAMES               0
FORD                0
MILLER              0

14 rows selected.

Scott doesn’t see the values of the SAL column because of the Data Redaction Policy. SYS is not subject to that policy, because SYS has the privilege EXEMPT REDACTION POLICY:

SYS@orcl > select ename,sal from scott.emp;

ENAME             SAL
---------- ----------
SMITH             800
ALLEN            1600
WARD             1250
JONES            2975
MARTIN           1250
BLAKE            2850
CLARK            2450
SCOTT            9000
KING             5000
TURNER           1500
ADAMS            1100
JAMES             950
FORD             9000
MILLER           1300

14 rows selected.

If Data Pump Export is done as a user who owns that privilege, the table is just exported with all its content, regardless of the policy:

SYS@orcl >  create directory dpdir as '/home/oracle/';
[oracle@uhesse ~]$ expdp tables=scott.emp directory=DPDIR

Export: Release - Production on Fri Aug 5 08:56:51 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Username: / as sysdba

Connected to: Oracle Database 12c Enterprise Edition Release - 64bit Production
With the Partitioning, OLAP, Advanced Analytics, Real Application Testing
and Unified Auditing options
Starting "SYS"."SYS_EXPORT_TABLE_01":  /******** AS SYSDBA tables=scott.emp directory=DPDIR
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "SCOTT"."EMP"                               8.781 KB      14 rows
Master table "SYS"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
Dump file set for SYS.SYS_EXPORT_TABLE_01 is:
Job "SYS"."SYS_EXPORT_TABLE_01" successfully completed at Fri Aug 5 08:57:10 2016 elapsed 0 00:00:15

If Scott tries to export the table himself, an error is raised:

SYS@orcl > grant read,write on directory dpdir to scott;

Grant succeeded.

[oracle@uhesse ~]$ expdp scott/tiger tables=scott.emp directory=DPDIR

Export: Release - Production on Fri Aug 5 08:55:10 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release - 64bit Production
With the Partitioning, OLAP, Advanced Analytics, Real Application Testing
and Unified Auditing options
Starting "SCOTT"."SYS_EXPORT_TABLE_01":  scott/******** tables=scott.emp directory=DPDIR
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-31693: Table data object "SCOTT"."EMP" failed to load/unload and is being skipped due to error:
ORA-28081: Insufficient privileges - the command references a redacted object.
Master table "SCOTT"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
Dump file set for SCOTT.SYS_EXPORT_TABLE_01 is:
Job "SCOTT"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at Fri Aug 5 08:55:28 2016 elapsed 0 00:00:16

Taken from the 12c New Features class that I delivered this week in Hinckley. As always: Don’t believe it, test it :-)

Tagged: 12c New Features
Categories: DBA Blogs

How to do a Filesystem Resize (ext3/ext4) on Redhat running on VMware

Yann Neuhaus - Fri, 2016-08-05 07:43

A filesystem resize can be done in several ways: online or offline, with or without LVM2. This blog describes how to do an online resize of ext3/ext4 filesystems when a virtual disk (vmdk) is added online to a VMware Redhat guest OS.

So let’s start with the online filesystem resize of ext3/4 filesystems on the Redhat guest OS. A new virtual disk (preferably eager zeroed thick on a VM running Oracle) was added as a prerequisite. Adding a new virtual disk is an online operation and no downtime is required to do it.

The whole procedure in this document is described using the command line only. There is also a graphical user interface, `system-config-lvm`, that can perform the job, but that tool is out of scope in this document.

Online resize of an ext3/4 filesystem

There are several steps that have to be done. These are in general:

  1. Scan for new LUNs
  2. Partition the new LUNs and run partprobe
  3. Create the physical volume
  4. Extend the volume group and the logical volume
  5. Extend the filesystem online
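Put together, steps 2 to 5 can be sketched as a dry run that only prints the commands (the real ones need root, and the device, volume group and logical volume names below are placeholders for your environment):

```shell
# Dry-run sketch of steps 2-5: print the command sequence instead of
# executing it. DEV/VG/LV/SIZE are hypothetical names; substitute your own.
DEV=/dev/sdx1; VG=VGOracle; LV=LVOracleu01; SIZE=72G
plan="partprobe $DEV
pvcreate $DEV
vgextend $VG $DEV
lvextend -L $SIZE /dev/$VG/$LV
resize2fs /dev/$VG/$LV"
echo "$plan"
```

Each command is explained step by step in the sections that follow.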

Rescan for new LUNs

Depending on the number of virtual controllers, you have to scan for your new LUNs on each of them. If you know which controller the disk was added to, then of course you need to scan only that one.

Rescan for new LUNs on the first SCSI controller (LSI Logic Parallel)
# echo "- - -"  > /sys/class/scsi_host/host0/scan
Rescan for new LUNs on the second SCSI controller (Paravirtualized)
# echo "- - -"  > /sys/class/scsi_host/host1/scan
Create a Primary Partition on the new devices
# fdisk /dev/sdx??

# fdisk /dev/sdy??
Partprobe the new devices

Partprobe is a program that informs the operating system kernel of partition table changes, by requesting that the operating system re-read the partition table.

# partprobe /dev/sdx??

# partprobe /dev/sdy??
Create the Physical Volumes
# pvcreate /dev/sdx??

Physical volume "/dev/sdx??" successfully created
# pvcreate /dev/sdy??

Physical volume "/dev/sdy??" successfully created
Extend the Volume Group
# vgextend VGOracle /dev/sdx??

Volume group "VGOracle" successfully extended
# vgextend VGOracle /dev/sdy??

Volume group "VGOracle" successfully extended
Extend the Logical Volume
# lvextend -L 72G /dev/VGOracle/LVOracleu??

Extending logical volume LVOracleu?? to 72.00 GB

Logical volume LVOracleu01 successfully resized
Online Resize the ext3/ext4 Filesystem

After the logical volume has been resized successfully, you can resize any filesystem that is online-resizable. The following are examples for the ext3/ext4 filesystems. The syntax for ext3 and ext4 differs only slightly: for ext3 you use `resize2fs` (even though it is ext3 and not ext2), and for ext4 you use `resize4fs`, where the command name is more logical.

# resize2fs /dev/VGOracle/LVOracleu??
# resize4fs /dev/VGOracle/LVOracleu??


That’s it. Now have fun with the bigger filesystem.





The post How to do a Filesystem Resize (ext3/ext4) on Redhat running on VMware appeared first on Blog dbi services.

Orchestration vs CloudFormation

Pat Shuff - Fri, 2016-08-05 02:07
Today we are going to compare and contrast Oracle Orchestration and Amazon CloudFormation. The two have the same objectives and can perform the same operations when provisioning an instance. The key difference is the way that they operate and define the elements needed to create an instance. In the past few days we have gone through and looked at the three files needed to provision a WordPress blog. Information on Oracle Orchestration can be found in the documentation section and tutorial section. Information on Amazon CloudFormation can be found at the home page and tutorial section. We will dive into the WordPress example and look at the json file that is used to provision the service.

The key components of the json file are

  "AWSTemplateFormatVersion" : "2010-09-09",

  "Description" : " ... ",

  "Parameters" : { ... },

  "Mappings" : { ...  },

  "Resources" : { ...  },

  "Outputs" : { ...   }

We can create a simple storage element in S3 with the following file

    "Resources" : {
        "HelloBucket" : {
            "Type" : "AWS::S3::Bucket"
        }
    }

Note that the only thing that we truly need is the definition of a resource. The resource has a label of "HelloBucket" and consists of an element defined as "AWS::S3::Bucket". Note that the Type is very specific to AWS; we could not take this generic definition and port it to any other platform. We don't know how much storage to allocate because S3 is typically an open-ended definition. This is radically different from our storage creation from a few days ago, where we had to define the storage_pool, the size of the disk, and properties of the instance such as whether it is bootable, what image to boot from, and what account it is associated with. The CloudFormation interface assumes account information because it is run from a web-based or command-line interface that has your account information embedded into the user interface.

We could get a little more complex and define an instance. With this instance we reference an AMI that predefines the content and operating system. We also define the security ports and connection keys for this instance in the definition.

  "Resources" : {
    "Ec2Instance" : {
      "Type" : "AWS::EC2::Instance",
      "Properties" : {
        "SecurityGroups" : [ { "Ref" : "InstanceSecurityGroup" }, "MyExistingSecurityGroup" ],
        "KeyName" : "mykey",
        "ImageId" : "ami-7a11e213"
      }
    },
    "InstanceSecurityGroup" : {
      "Type" : "AWS::EC2::SecurityGroup",
      "Properties" : {
        "GroupDescription" : "Enable SSH access via port 22",
        "SecurityGroupIngress" : [ {
          "IpProtocol" : "tcp",
          "FromPort" : "22",
          "ToPort" : "22",
          "CidrIp" : ""
        } ]
      }
    }
  }

In this example we are going to provision an EC2 instance from ami-7a11e213. We will be using the security group labeled MyExistingSecurityGroup and open up port 22 for ssh access. We don't know what version of operating system this is unless we look up the characteristics of the ami. This is different from Oracle Orchestration, where we define the storage element and which operating system to boot from. Both define security groups, although a little differently, with the same effect.
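One cheap sanity check before handing a hand-written template to AWS is to confirm it is well-formed JSON. The sketch below does only that, using the stock json.tool module; it does not validate CloudFormation semantics (for that, the `aws cloudformation validate-template` CLI command exists):

```shell
# Write a minimal template and confirm it parses as JSON.
# This catches missing braces/commas, not bad resource Types.
cat > /tmp/minimal-template.json <<'EOF'
{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Resources" : {
    "HelloBucket" : { "Type" : "AWS::S3::Bucket" }
  }
}
EOF
python3 -m json.tool /tmp/minimal-template.json > /dev/null && echo "well-formed JSON"
```

Given how easy it is to drop a closing brace in a large hand-edited template, a syntax check like this is worth running before every deployment.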

We can also define some of the characteristics into the application. For CloudFormation we can configure WordPress with the following parameters

 "Parameters": {
    "KeyName": {
      "Description" : "Name of an existing EC2 KeyPair to enable SSH access into the WordPress web server",
      "Type": "AWS::EC2::KeyPair::KeyName"
    },
    "WordPressUser": {
      "Default": "admin",
      "NoEcho": "true",
      "Description" : "The WordPress database admin account user name",
      "Type": "String",
      "MinLength": "1",
      "MaxLength": "16",
      "AllowedPattern" : "[a-zA-Z][a-zA-Z0-9]*"
    },
    "WebServerPort": {
      "Default": "8888",
      "Description" : "TCP/IP port for the WordPress web server",
      "Type": "Number",
      "MinValue": "1",
      "MaxValue": "65535"
    }
  }

Note that we define these parameters based on the application and pass them into the operating system as it boots. Oracle Orchestration takes a different tactic when it comes to adding parameters to a configuration: rather than having parameters defined for each application, customizations like this are done with a post-install script that is executed at boot time. These configurations can be done from a snapshot or from a post-install script, depending on how you like to initialize systems. This functionality started with Enterprise Manager, and the scripts that you use for in-house systems can be ported to the cloud without change.

In summary, Amazon CloudFormation and Oracle Orchestration are very similar. The components that you use to define a system are declared similarly. Amazon assumes that you are running on AWS and gives you shortcuts and shorthand that allow you to create predefined components quickly and easily. Unfortunately this configuration does not translate to any other cloud provider or an in-house solution. Oracle Orchestration is a little more nuts and bolts but is designed to help you create everything from scratch and build upon the foundation for system definitions. CloudFormation has a graphical user interface that generates json files for you based on dragging and dropping components onto a design palette. Oracle takes a slightly different approach and uses the Oracle Marketplace to automatically generate the json files. There is not a graphical design tool that allows you to drag and drop components, but there are tools to take a configuration that is in your data center and generate the parameter list that can be used to generate the json files for Orchestration. We are not saying that one is better than the other in this blog; we are mainly pointing out that the two tools have different target audiences and functionality. Unfortunately, you can't take one configuration and easily map it into the other. Hopefully someone at some point will take these files and create a translator.

Oracle 12c: Indexing JSON in the Database Part II (Find A Little Wood)

Richard Foote - Fri, 2016-08-05 02:00
In Part I, we looked at how you can now store JSON documents within the Oracle 12c Database. For efficient accesses to JSON documents stored in the Oracle database, we can either create a function-based index based on the JSON_VALUE function or on JSON .dot notation. These indexes are useful for indexing specific JSON attributes, […]
Categories: DBA Blogs

Links for 2016-08-04 [del.icio.us]

Categories: DBA Blogs

No Fun with EM 12c

Michael Dinh - Thu, 2016-08-04 17:21

Confession: I have not been using OEM for decades since there was never a real need for it and it’s starting to take its toll on me.

Throughout the day, I keep getting paged from OEM – Message=Number of failed login attempts exceeds threshold value.

The information provided is utterly useless, e.g. what is the threshold value and what’s the error code.

What would be useful is to provide the SQL used for the check for ease of troubleshooting.

Then I found Finding the source of failed login attempts (Doc ID 352389.1).

SQL> @pr "select username,os_username,userhost,client_id,trunc(timestamp),count(*) failed_logins from dba_audit_trail where returncode=1017 and timestamp>trunc(sysdate) group by username,os_username,userhost, client_id,trunc(timestamp) order by 5";
USERNAME              : JANE
OS_USERNAME           : oracle
FAILED_LOGINS         : 1

That wasn’t it.

SQL> @pr "select username,os_username,RETURNCODE,userhost,trunc(timestamp),count(*) failed_logins from dba_audit_trail where returncode<>0 and timestamp>trunc(sysdate) group by username,os_username,RETURNCODE,userhost,trunc(timestamp) order by 5";
USERNAME              : 
OS_USERNAME           : tomcat
RETURNCODE            : 28000
FAILED_LOGINS         : 1065
USERNAME              : JANE
OS_USERNAME           : oracle
RETURNCODE            : 1017
FAILED_LOGINS         : 1

$ oerr ora 28000
28000, 00000, "the account is locked"
// *Cause:   The user has entered wrong password consequently for maximum
//           number of times specified by the user's profile parameter
//           FAILED_LOGIN_ATTEMPTS, or the DBA has locked the account
// *Action:  Wait for PASSWORD_LOCK_TIME or contact DBA

$ oerr ora 1017
01017, 00000, "invalid username/password; logon denied"
// *Cause:
// *Action:

1065 failed logins and no one even knows about this?

Lesson learned: there are many types of failed logins.


Installing SQLcl on OEL/RHEL

Dimitri Gielis - Thu, 2016-08-04 16:55
In my previous post I talked about how SQLcl came in handy to work with JavaScript against the database.

The installation of SQLcl is easy... you just download the zip, unpack and run the executable.

But to be fair, before I got SQLcl running (especially the script part) I encountered a number of issues, so hopefully this post helps you run SQLcl with all its features in minutes, as it's meant to be :)

Those were the error messages I received when running sql (script):

javax.script.ScriptException: sun.org.mozilla.javascript.EvaluatorException: Java class "java.util.ArrayList" has no public instance field or method named "0".

javax.script.ScriptException: sun.org.mozilla.javascript.EcmaError: ReferenceError: "Java" is not defined. (#1) in at line number 1

The solution for me was to upgrade my Java version to Java 8.

Here're the steps on my OEL/RHEL system to upgrade Java:

$ cd /opt

$ wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u102-b14/jdk-8u102-linux-x64.tar.gz"

$ tar xzf jdk-8u102-linux-x64.tar.gz 

$ cd jdk1.8.0_102/

$ alternatives --install /usr/bin/java java /opt/jdk1.8.0_102/bin/java 2
$ alternatives --config java

There are 5 programs which provide 'java'.

  Selection    Command
   1           /usr/lib/jvm/jre-1.7.0-openjdk.x86_64/bin/java
   2           /usr/lib/jvm/jre-1.6.0-openjdk.x86_64/bin/java
   3           /usr/lib/jvm/jre-1.5.0-gcj/bin/java
*+ 4           /usr/java/jre1.8.0_101/bin/java
   5           /opt/jdk1.8.0_102/bin/java

Enter to keep the current selection[+], or type selection number: 5

$ alternatives --install /usr/bin/jar jar /opt/jdk1.8.0_102/bin/jar 2
$ alternatives --install /usr/bin/javac javac /opt/jdk1.8.0_102/bin/javac 2
$ alternatives --set jar /opt/jdk1.8.0_102/bin/jar
$ alternatives --set javac /opt/jdk1.8.0_102/bin/javac
$ java -version
java version "1.8.0_102"
Java(TM) SE Runtime Environment (build 1.8.0_102-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.102-b14, mixed mode)

$ export JAVA_HOME=/opt/jdk1.8.0_102
$ export JRE_HOME=/opt/jdk1.8.0_102/jre
$ export PATH=$PATH:/opt/jdk1.8.0_102/bin:/opt/jdk1.8.0_102/jre/bin
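To confirm the active `java` really is version 8 after switching alternatives, you can parse the `java -version` banner. A sketch (the sample string below mimics the banner shown above, since in real use you would capture it with `java -version 2>&1 | head -1`):

```shell
# Extract the "minor" number from a 1.x-style java version banner
# and check it is at least 8. Banner formats vary by vendor, so this
# only handles the classic 1.x numbering scheme.
banner='java version "1.8.0_102"'
minor=$(echo "$banner" | sed -E 's/.*"1\.([0-9]+).*/\1/')
if [ "$minor" -ge 8 ]; then echo "Java 8 or later"; else echo "upgrade needed"; fi
# prints "Java 8 or later"
```

If this reports an older version, re-run `alternatives --config java` and pick the /opt/jdk1.8.0_102 entry.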

Now when running SQLcl everything worked like a charm. Hurray :)

Categories: Development


How to select a last value of a column which is not in group by

Tom Kyte - Thu, 2016-08-04 14:06
For eg Emp id salary j_dt 1. 100. 1-Jan 1. 200. 1-Feb 1. 130. 1-Mar select last value of amount, sum of amount, avg of amount ...
Categories: DBA Blogs

Identify overlapped intervals

Tom Kyte - Thu, 2016-08-04 14:06
Hi team, I have a table with intervals of numbers. However, my table have intervals that overlap and a number can be in more than one interval. For example, start_interval | end_interval 1 | 2 2 | 4 3 ...
Categories: DBA Blogs

Nullable Property When Copying a Table Using CTAS

Tom Kyte - Thu, 2016-08-04 14:06
Hello, In the example below, I'm using a CTAS statement with where 1=2 to copy the structure of a table. Four of the columns that are defined as NOT NULL on the source table retain that property in the new table. However, one column (patient_id) b...
Categories: DBA Blogs

Change the creator of a dbms_scheduler job

Tom Kyte - Thu, 2016-08-04 14:06
Hello, is it possible to change the creator of a dbms_scheduler job, so it is executed with the rights of another user? Regards Wolfgang
Categories: DBA Blogs

Find rows inserted yesterday

Tom Kyte - Thu, 2016-08-04 14:06
Hi, I have a table with 3 columns emp_id,last_name and date_of_birth. And each day data loading starts after 12 am . I have to generate the report on data loaded before today's load (exclude loaded today after 12 am). I was asked this questio...
Categories: DBA Blogs


Tom Kyte - Thu, 2016-08-04 14:06
HI TOM, PLEASE EXPLAIN WITH AN EXAMPLE ...BREIFLY I KNOW ONLY LITTLE BIT 1)How to debugg your code? 2)How to trace error handling? 3)Can you alter procedure with in package? 4)Is it possible to open cursor which is in package in another procr...
Categories: DBA Blogs

ORA-01000 - How to close open CURSORs?

Tom Kyte - Thu, 2016-08-04 14:06
Hi, I have a cursor leak at the application level (Java), how can I close open cursors? If I Kill the session, does this close the related cursors? Also, if I use DBMS_SHARED_POOL.PURGE to flush the query behind the issue, does this close the ...
Categories: DBA Blogs

Sessions disconnected automatically.

Tom Kyte - Thu, 2016-08-04 14:06
Hi Tom. I can't find how my database is disconnecting the inactive sessions because when I ran the following statements: select * from user_resource_limits a where a.resource_name in ('IDLE_TIME','CONNECT_TIME'); IDLE_TIME UNLIMITED CONNE...
Categories: DBA Blogs


Subscribe to Oracle FAQ aggregator