Feed aggregator

Upgrading to 7.2 created a new ACS. How to remove it and test it?

Yann Neuhaus - Fri, 2016-06-24 03:46

I ran into a strange behavior: after upgrading from 6.7 to 7.2, a new ACS was created. I think it's because the existing ACS name didn't fit the new ACS name pattern. Having 2 ACS objects configured is not a big issue in itself, but in my case both pointed to the same port and servlet, so I had to remove one.

Hence, how can we know which one is used?

That’s easy, just find the acs.properties file located in:



In this file you should find the line:



In fact, my previous ACS was named YOUR_ACS_NAME.cACS1, which is why I think a new one was created. This property tells you which ACS is in use, so you just have to remove the other one:

delete dm_acs_config objects where object_name = 'YOUR_OLD_ACS_NAME';
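If you are not sure of the exact name to remove, you can first list all ACS configuration objects with a DQL query like the following (a sketch, to be run from idql or DA; adapt the names to your environment):

```sql
-- List all ACS config objects so you can identify the old one before deleting it
select r_object_id, object_name from dm_acs_config;
```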

Fine, now how can we check that the ACS is working properly?

First, you can paste the ACS URL into your browser to check whether it's running; it typically has a form like http://&lt;server&gt;:9080/ACS/servlet/ACS:



If your method server is installed on a port other than 9080, use that port instead.

You should see the following result (maybe with a different version):

ACS Server Is Running - Version :


If you can't find the ACS URL, log in to Documentum Administrator and navigate to:
Administration -> Distributed Content Configuration -> ACS Server
Right-click the ACS server entry and you will see the URL at the bottom of the page.

At this point the ACS is running, but is Documentum actually using it?

A bit of configuration is needed to verify this. Log in to the server on which DA is installed, search for the log4j.properties file in the DA application, and add the following lines:

log4j.logger.com.documentum.acs=DEBUG, ACS_LOG
log4j.logger.com.documentum.fc.client.impl.acs=DEBUG, ACS_LOG
log4j.appender.ACS_LOG.layout.ConversionPattern=%d{ABSOLUTE} %5p [%t] %c - %m%n

You may have to update the line log4j.appender.ACS_LOG.File.
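For reference, a complete ACS_LOG appender definition might look like the following. This is a sketch: the file path is a placeholder, and the appender class and sizes in your log4j.properties may differ.

```properties
# Hypothetical ACS_LOG appender definition -- adjust the File path to your environment
log4j.appender.ACS_LOG=org.apache.log4j.RollingFileAppender
log4j.appender.ACS_LOG.File=/path/to/logs/AcsServer.log
log4j.appender.ACS_LOG.MaxFileSize=10MB
log4j.appender.ACS_LOG.layout=org.apache.log4j.PatternLayout
log4j.appender.ACS_LOG.layout.ConversionPattern=%d{ABSOLUTE} %5p [%t] %c - %m%n
```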

Restart Tomcat (or whichever application server you use). To generate logs you'll have to open a document from DA; let's say we have a document called TESTDOC.doc.
Once you open it, you'll see around 3 to 4 lines in AcsServer.log. To verify that everything went fine, you should NOT see the following line:
INFO [Timer-161] com.documentum.acs.dfc - [DFC_ACS_LOG_UNAVAILABLE] "userName="test", objectId="0903d0908010000", objectName="TESTDOC.doc"", skip unavailable "ACS" serverName="YOUR_ACS_NAME_HERE" protocol "http"

Instead, you should see a kind of ticket/key made up of a long string of letters and numbers. This confirms that the content was served by the ACS.


The post Upgrading to 7.2 created a new ACS. How to remove it and test it? appeared first on Blog dbi services.

Training On-demand: Fusion Middleware for Implementation Specialists

In conjunction with our colleagues from Global Enablement, we are pleased to offer Training On-demand Boot Camps for Oracle Fusion Middleware partners, covering three market-leading products: Oracle SOA...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Exadata and Big Data Implementation Specialists Bootcamps

In conjunction with our colleagues from Global Enablement, we are pleased to offer Training On-demand Boot Camps for Exadata and Big Data to key Oracle partners in EMEA. These cutting-edge products ...

We share our skills to maximize your revenue!
Categories: DBA Blogs

EU Referendum : Oh Hell No!

Tim Hall - Fri, 2016-06-24 02:45

If you care, you’ve probably heard the UK voted to leave the European Union (EU) yesterday. Suffice to say I’m gutted!

I've just deleted most of the content of this post because it contained a lot of inflammatory and negative comments. I could question the motives of the leavers, but what good would that do now? IMHO this is a dark day for the UK.

For all my friends around the world, I would just like you to know I wanted to remain part of something bigger than this little island…



PS. If anyone has got an EU passport going spare it could come in really handy!

PPS. One of my colleagues just described what I’m going through as the 5 stages of grief. I think he is correct.


EU Referendum : Oh Hell No! was first posted on June 24, 2016 at 8:45 am.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

database option - Data Guard part 2

Pat Shuff - Fri, 2016-06-24 02:07
Normally, part two of an option post has been a hands-on tutorial, with examples liberally lifted from Oracle by Example, OpenWorld hands-on labs, or the GSE demo environment. This time, even on an internal repository site and the database product manager site, all of the tutorials were for an existing database or were labs on a virtual machine. None of these were easy to replicate, all of them assumed a virtual machine with a pre-installed instance, and none of the examples were from 2015 or 2016. If anyone knows of a hands-on tutorial that uses the public cloud as at least one half of the example, let me know. There is a really good DR to the Cloud whitepaper that talks about how to set up Data Guard to the cloud, but it is more of a discussion than a hands-on tutorial. I typically borrow screen shots and scripts from demo.oracle.com, but the examples that exist in the GSE demo pool use Enterprise Manager 10g, servers inside of Oracle running an early version of 11g, or require a very large virtual image download. The closest thing I could find to a hands-on tutorial is Oracle by Example - Creating a Physical Standby. For this blog we will go through that tutorial and follow along with the Oracle Public Cloud as much as possible.

Step One is to create an 11g database. We could do this on 12c, but the tutorial uses 11g. If anyone wants to create a 12c tutorial, contact me and we can work on a workshop together. We might even be able to get it into the hands-on labs at OpenWorld or Collaborate next year. Rather than going through all of the steps to create an 11g instance, I suggest that you look at the May 4th blog entry - Database as a Service. Select 11g and High Performance Edition. We will call this database instance PRIM rather than ORCL. Your final creation screen should look like this:

We want to create a second database instance. We will call this one ORCL and select High Performance Edition and 11g. The name does not matter as long as it is different from the first one. I am actually cheating on the second one and using a database instance that I created weeks ago.

While we are waiting for the database creation to finish, it is important to note that we could repeat this in Amazon (using EC2 and S3) or in Azure (using Azure Compute). In either case we would need to provide a perpetual license, along with Advanced Security and potentially Compression if we want to compress the change logs when we transmit them across the internet. Remember also that there is an outbound data charge when going from one EC2 or Azure Compute instance to the other. If we assume a 1 TB database that changes 10% per day, we will ship 100 GB daily; being conservative and assuming we only get updates during the week and not the weekend, we would expect about 2 TB of outbound transfer a month. Our cost for this EC2 service comes in at $320/month. Using the calculations from our Database Options blog post, the perpetual license amortized over 4 years is $2,620/month, which brings the cost of the database plus Advanced Security to $2,940/month. If we amortize over 3 years instead, the price jumps to $3,813/month. Compared to the Oracle High Performance Edition at $4K/month this is similar, but with High Performance Edition we also get eight other features like partitioning, compression, diagnostics, tuning, and others. Note in the calculator that the bulk of the processor cost is outbound data transfer. It would be cheaper to run this with un-metered compute services in the Oracle cloud at $75/month.

If we follow the instructions in DR to Oracle Cloud whitepaper we see that the steps are

  1. Subscribe to Oracle Database Cloud Service
  2. Create an Oracle instance
  3. Configure Network
  4. Encrypt Primary Database (Optional)
  5. Instantiate Data Guard Standby
  6. Perform Data Guard health check
  7. Enable Runtime Monitoring
  8. Enable Redo Transport Compression (Optional)
So far we have done steps one and two. When the database creation has finished, we perform step 3 by going into the compute console and opening up port 1521 for the dblistener service. To do this, go to the compute service and look for the database instance name; in our example we hover over the service and find the dblistener service for prs11gPRIM, then select update and enable the port. Given that these are demo accounts, we can't really whitelist IP addresses and can only open the port to the public internet or not at all. We do this for both the primary and the standby database.

Once this is configured, we need to look at the IP addresses for prs11gPRIM and prs11gHP. With these IP addresses we can ssh into the compute instances and create a directory for the standby log files. We could create these directories on the /u02 partition (with the data) or the /u03 partition (with the backups), but I suggest putting them in the /u04 partition with the archive and redo logs. Once the directories are created, we can follow along with the Oracle By Example Physical Data Guard example starting at step 3. The network configuration is shown on page 12 of the DR to Oracle Cloud whitepaper. We can follow along using prs11gPRIM as the primary and prs11gHP as the standby. Unfortunately, after step 5 the instructions stop showing commands and screen shots, so to finish the configuration we are forced to go back to the OBE tutorial, modify the scripts that it gives, and execute the configuration with the new names.
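To give an idea of what instantiating the standby (step 5) involves, the redo transport parameters set on the primary typically look something like the following. This is a sketch using the instance names from this post as placeholders, not the exact commands from the whitepaper or tutorial.

```sql
-- Register both databases in the Data Guard configuration (placeholder names PRIM and ORCL)
alter system set log_archive_config='DG_CONFIG=(PRIM,ORCL)' scope=both;

-- Ship redo from the primary to the standby service
alter system set log_archive_dest_2=
  'SERVICE=ORCL ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=ORCL'
  scope=both;

-- Let the standby fetch archive log gaps and manage its datafiles automatically
alter system set fal_server=ORCL scope=both;
alter system set standby_file_management=AUTO scope=both;
```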

Again, I am going to ask if anyone has a full tutorial on this using cloud services. It seems like every example goes halfway, and I am not going to finish it in this blog. It would be nice to see a 12c example and see how a pluggable database automatically replicates to the standby when it is plugged in. This would make a good exercise and workshop; my guess is it would be a half-day workshop and could all be done in the cloud.

Links for 2016-06-23 [del.icio.us]

Categories: DBA Blogs

SQL Server 2016: Always Encrypted

Yann Neuhaus - Fri, 2016-06-24 01:58

One of the top new features of SQL Server 2016 is Always Encrypted. Always Encrypted ensures that data stored in a database remains encrypted at all times while it is in the database. There is a complete separation between the people who own the data and the people who manage it. Only the data owners can see plain-text data; DBAs, sysadmins, and other privileged logins cannot access it.

Data is encrypted and decrypted in flight between the database and the client application, inside a client driver on the client side.
The client manages the encryption keys, which are stored outside of SQL Server.
Let's start with a concrete example.

I have a table in my database with sensitive data, and I want to encrypt that data so that Database Administrators cannot see plain-text values for Credit Card Number, Account Number, and Account Balance:


To enable column encryption, right-click our Customer table and select "Encrypt Columns…":


An introduction screen appears explaining how Always Encrypted works; click Next:


The next screen shows all the columns of our table and we have to select which ones we want to encrypt. Here the Credit card number, the account balance and the account number:


We need to choose the Encryption Type between two options which are described if we click in the “Encryption Type” text:


I will choose Randomized for Credit Card Number and Account Number, as I don't want to query on those columns and it is more secure. But I will choose Deterministic for Account Balance, as I want to be able to filter by equality on this field.
Please note that Deterministic encryption uses a column collation with a binary2 sort order for character columns, so the collation of our char columns will be changed from French_CI_AS to French_BIN2 in my example.
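Behind the scenes, the wizard generates column definitions along these lines. This is a sketch with assumed column names; CEK_Auto1 is the wizard's default column encryption key name.

```sql
-- Sketch of the encrypted column definitions the wizard produces (assumed names)
CREATE TABLE dbo.Customer (
    CustomerId       INT IDENTITY(1,1) PRIMARY KEY,
    CreditCardNumber CHAR(16) COLLATE French_BIN2
        ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = CEK_Auto1,
                        ENCRYPTION_TYPE = RANDOMIZED,
                        ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'),
    AccountBalance   CHAR(10) COLLATE French_BIN2
        ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = CEK_Auto1,
                        ENCRYPTION_TYPE = DETERMINISTIC,
                        ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256')
);
```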

For the Column Encryption Key, which is the key that encrypts the data in each column, I will let the wizard generate one for me. I will also use the same column encryption key for all my encrypted columns:


The next screen is used for the master key configuration. The column encryption key encrypts the data in the column, and the column master key protects (encrypts) the column encryption key. Here too, I will use an auto-generated column master key, which will be a self-signed certificate saved in the Windows Certificate Store:


In the Run Settings screen, a warning first points out that if insert statements run while the encryption/decryption is executing, there is a risk of data loss.
Normally you can choose whether to run the encryption immediately or to generate a PowerShell script to run it later. For the time being the PowerShell generation cannot be chosen… so I will run it now:


A summary explains the operations that will be performed: a column master key will be generated and saved in the Windows Certificate Store, and a column encryption key will be generated and used to encrypt my three columns:


My columns have been encrypted:


Now I go back to my query, refresh it, and I see that I can no longer read plain text in my three columns; instead I have varbinary encrypted blobs:


There is just one problem in this demo… I created my column master key certificate as a self-signed certificate in the context of the current user.
So this user has access to the certificate and can decrypt the encrypted columns simply by adding "Column Encryption Setting=Enabled" to the connection string, which is the only change required to use Always Encrypted.
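For example, a client connection string enabling Always Encrypted looks like this (server and database names are placeholders):

```
Server=MYSERVER;Database=MyDb;Trusted_Connection=True;Column Encryption Setting=Enabled
```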


Now, as the certificate used to create the Column Master Key is available, the encrypted columns appear in plain text…


We will have to physically separate the certificate used to create the Column Master Key from the SQL Server machine on which the Column Master Key and the Column Encryption Key were created.
I will show you how to do that in a future blog post.


The post SQL Server 2016: Always Encrypted appeared first on Blog dbi services.

Oracle – Pinning table data in the Buffer Cache

The Anti-Kyte - Thu, 2016-06-23 15:13

As I write, Euro 2016 is in full swing.
England have managed to get out of the Group Stage this time, finishing second to the mighty…er…Wales.
Fortunately Deb hasn’t mentioned this…much.

In order to escape the Welsh nationalism that is currently rampant in our house, let’s try something completely unrelated – a tale of Gothic Horror set in an Oracle Database…

It was a dark stormy night. Well, it was dark and there was a persistent drizzle. It was Britain in summertime.
Sitting at his desk, listening to The Sisters of Mercy ( required to compensate for the lack of a thunderstorm and to maintain the Gothic quotient) Frank N Stein was struck by a sudden inspiration.
“I know”, he thought, “I’ll write some code to cache my Reference Data Tables in a PL/SQL array. I’ll declare the array as a package header variable so that the data is available for the entire session. That should cut down on the amount of Physical IO my application needs to do !”

Quite a lot of code later, Frank’s creation lurched off toward Production.
The outcome wasn’t quite what Frank had anticipated. The code that he had created was quite complex and hard to maintain. It was also not particularly quick.
In short, Frank’s caching framework was a bit of a monster.

In case you’re wondering, no, this is not in any way autobiographical. I am not Frank (although I will confess to owning a Sisters of Mercy album).
I am, in fact, one of the unfortunates who had to support this application several years later.

OK, it’s almost certain that none of the developers who spawned this particular delight were named after a fictional mad doctor…although maybe they should have been.

In order to prevent others from suffering from a similar misapplication of creative genius, what I’m going to look at here is :

  • How Oracle caches table data in Memory
  • How to work out what tables are in the cache
  • Ways in which you can “pin” tables in the cache (if you really need to)

Fortunately, Oracle memory management is fairly robust, so there will be no mention of leeks.

Data Caching in Action

Let’s start with a simple illustration of data caching in Oracle.

To begin with, I’m going to make sure that there’s nothing in the cache by running …

alter system flush buffer_cache

…which, provided you have DBA privileges should come back with :

System FLUSH altered.

Now, with the aid of autotrace, we can have a look at the difference between retrieving cached and uncached data.
To start with, in SQL*Plus :

set autotrace on
set timing on

…and then run our query :

select *
from hr.departments

The first time we execute this query, the timing and statistics output will be something like :

27 rows selected.

Elapsed: 00:00:00.08

	106  recursive calls
	  0  db block gets
	104  consistent gets
	 29  physical reads
	  0  redo size
       1670  bytes sent via SQL*Net to client
	530  bytes received via SQL*Net from client
	  3  SQL*Net roundtrips to/from client
	  7  sorts (memory)
	  0  sorts (disk)
	 27  rows processed

If we now run the same query again, we can see that things have changed a bit…

27 rows selected.

Elapsed: 00:00:00.01

	  0  recursive calls
	  0  db block gets
	  8  consistent gets
	  0  physical reads
	  0  redo size
       1670  bytes sent via SQL*Net to client
	530  bytes received via SQL*Net from client
	  3  SQL*Net roundtrips to/from client
	  0  sorts (memory)
	  0  sorts (disk)
	 27  rows processed

The second run was a fair bit faster. This is mainly because the data required to resolve the query was cached after the first run.
Therefore, the second execution required no Physical I/O to retrieve the result set.

So, exactly how does this caching malarkey work in Oracle ?

The Buffer Cache and the LRU Algorithm

The Buffer Cache is part of the System Global Area (SGA) – an area of RAM used by Oracle to cache various things that are generally available to any sessions running on the Instance.
The allocation of blocks into and out of the Buffer Cache is achieved by means of a Least Recently Used (LRU) algorithm.

You can see details of this in the Oracle documentation but, in very simple terms, we can visualise the workings of the Buffer Cache like this :


When a data block is first read from disk, it’s loaded into the middle of the Buffer Cache.
If it’s then “touched” frequently, it will work its way towards the hot end of the cache.
Otherwise it will move to the cold end and ultimately be discarded to make room for other data blocks that are being read.
Sort of…

The Small Table Threshold

In fact, blocks that are retrieved as the result of a Full Table Scan will only be loaded into the mid-point of the cache if the size of the table in question does not exceed the Small Table Threshold.
The usual definition of this ( unless you’ve been playing around with the hidden initialization parameter _small_table_threshold) is a table that is no bigger than 2% of the buffer cache.
As we’re using the default Automated Memory Management here, it can be a little difficult to pin down exactly what this is.
Fortunately, we can find out (provided we have SYS access to the database) by running the following query :

select cv.ksppstvl value,
    pi.ksppdesc description
from x$ksppi pi
inner join x$ksppcv cv
on cv.indx = pi.indx
and cv.inst_id = pi.inst_id
where pi.inst_id = userenv('Instance')
and pi.ksppinm = '_small_table_threshold'

VALUE      DESCRIPTION
---------- ------------------------------------------------------------
589        lower threshold level of table size for direct reads

The current size of the Buffer Cache can be found by running :

select component, current_size
from v$memory_dynamic_components
where component = 'DEFAULT buffer cache'

COMPONENT                                                        CURRENT_SIZE
---------------------------------------------------------------- ------------
DEFAULT buffer cache                                                251658240

Now I’m not entirely sure about this but I believe that the Small Table Threshold is reported in database blocks.
The Buffer Cache size from the query above is definitely in bytes.
The database we’re running on has a uniform block size of 8k.
Therefore, the Buffer Cache is around 30,720 blocks.
This would make 2% of it around 614 blocks, which is slightly more than the 589 being reported as the Small Table Threshold.
If you want to explore further down this particular rabbit hole, have a look at this article by Jonathan Lewis.
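For what it's worth, the arithmetic can be checked directly, using the cache size returned above and the 8k block size:

```sql
-- 251658240 bytes / 8192 bytes per block = 30720 blocks; 2% of that is ~614
select 251658240/8192 as cache_blocks,
       round(251658240/8192 * 0.02) as two_percent_blocks
from dual;
```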

This all sounds pretty good in theory, but how do we know for definite that our table is in the Buffer Cache ?

What’s in the Buffer Cache ?

In order to answer this question, we need to have a look at the V$BH view. The following query should prove adequate for now :

select obj.owner, obj.object_name, obj.object_type,
    count(buf.block#) as cached_blocks
from v$bh buf
inner join dba_objects obj
    on buf.objd = obj.data_object_id
where buf.class# = 1 -- data blocks
and buf.status != 'free'
and obj.owner = 'HR'
and obj.object_name = 'DEPARTMENTS'
and obj.object_type = 'TABLE'
group by obj.owner, obj.object_name, obj.object_type

OWNER                OBJECT_NAME          OBJECT_TYPE          CACHED_BLOCKS
-------------------- -------------------- -------------------- -------------
HR                   DEPARTMENTS          TABLE                            5

Some things to note about this query :

  • the OBJD column in v$bh joins to data_object_id in DBA_OBJECTS and not object_id
  • we’re excluding any blocks with a status of free because they are, in effect, empty and available for re-use
  • the class# value needs to be set to 1 – data blocks

So far we know that there are data blocks from our table in the cache. But we need to know whether all of the table is in the cache.

Time for another example…

We need to know how many data blocks the table actually has. Provided the statistics on the table are up to date we can get this from the DBA_TABLES view.

First of all then, let’s gather stats on the table…

exec dbms_stats.gather_table_stats('HR', 'DEPARTMENTS')

… and then check in DBA_TABLES…

select blocks
from dba_tables
where owner = 'HR'
and table_name = 'DEPARTMENTS'

    BLOCKS
----------
         5


Now, let’s flush the cache….

alter system flush buffer_cache

…and try a slightly different query…

select *
from hr.departments
where department_id = 60
DEPARTMENT_ID DEPARTMENT_NAME                MANAGER_ID LOCATION_ID
------------- ------------------------------ ---------- -----------
           60 IT                                    103        1400

We can now use the block total in DBA_TABLES to tell how much of the HR.DEPARTMENTS table is in the cache …

select obj.owner, obj.object_name, obj.object_type,
    count(buf.block#) as cached_blocks,
    tab.blocks as total_blocks
from v$bh buf
inner join dba_objects obj
    on buf.objd = obj.data_object_id
inner join dba_tables tab
    on tab.owner = obj.owner
    and tab.table_name = obj.object_name
    and obj.object_type = 'TABLE'
where buf.class# = 1
and buf.status != 'free'
and obj.owner = 'HR'
and obj.object_name = 'DEPARTMENTS'
and obj.object_type = 'TABLE'
group by obj.owner, obj.object_name, obj.object_type, tab.blocks

OWNER      OBJECT_NAME     OBJECT_TYPE CACHED_BLOCKS TOTAL_BLOCKS
---------- --------------- ----------- ------------- ------------
HR         DEPARTMENTS     TABLE                   1            5

As you’d expect the data blocks for the table will only be cached as they are required.
With a small, frequently used reference data table, you can probably expect it to be fully cached fairly soon after the application is started.
Once it is cached, the way the LRU algorithm works should ensure that the data blocks are constantly in the hot end of the cache.

In the vast majority of applications, this will be the case. So, do you really need to do anything ?

If your application is not currently conforming to this sweeping generalisation, then you probably want to ask a number of questions before taking any precipitous action.
For a start, is the small, frequently accessed table you expect to see in the cache really frequently accessed ? Is your application really doing what you think it does ?
While we're on the subject, are there any rogue queries running more regularly than you might expect, causing blocks to be aged out of the cache prematurely ?

Once you're satisfied that the problem does not lie with your application, or your understanding of how it operates, the next question will probably be: has sufficient memory been allocated to the SGA ?
There are many ways you can look into this. If you're fortunate enough to have the Tuning and Diagnostics Packs licensed, there are various advisors that can help.
Even if you don't, you can always take a look at V$SGA_TARGET_ADVICE.

If, after all of that, you’re stuck with the same problem, there are a few options available to you, starting with…

The Table CACHE option

This table property can be set so that a table’s data blocks are loaded into the hot end of the LRU as soon as they are read into the Buffer Cache, rather than the mid-point, which is the default behaviour.

Once again, using HR.DEPARTMENTS as our example, we can check the current setting on this table simply by running …

select cache
from dba_tables
where owner = 'HR'
and table_name = 'DEPARTMENTS'

CACHE
-----
    N


At the moment then, this table is set to be cached in the usual way.

To change this….

alter table hr.departments cache

Table HR.DEPARTMENTS altered.

When we check again, we can see that the CACHE property has been set on the table…

select cache
from dba_tables
where owner = 'HR'
and table_name = 'DEPARTMENTS'

CACHE
-----
    Y


This change does have one other side effect that is worth bearing in mind.
It causes the LRU algorithm to ignore the Small Table Threshold and load all of the selected blocks into the hot end of the cache.
Therefore, if you set this on a larger table, you run the risk of flushing other frequently accessed blocks from the cache, causing performance degradation elsewhere in your application.
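If you later decide the CACHE property was a mistake, it can be reverted just as easily:

```sql
-- Revert to the default mid-point insertion behaviour
alter table hr.departments nocache;
```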

The KEEP Cache

Normally you'll have a single Buffer Cache per instance (if you have multiple block sizes defined in your database, there will be a Buffer Cache for each block size). However, you can define additional buffer caches and assign segments to them.

The idea behind the KEEP Cache is that it will hold frequently accessed blocks without ageing them out.
It's important to note that the KEEP Cache is populated using the identical algorithm to that of the Buffer Cache. The difference here is that you select which tables use this cache…

In order to take advantage of this, we first need to create a KEEP Cache :

alter system set db_keep_cache_size = 8m scope=both

System altered.

Note that, on my XE 11gR2 instance at least, the minimum size for the Keep Cache appears to be 8 MB ( or 1024 8k blocks).
We can now see that we do indeed have a Keep Cache…

select component, current_size
from v$memory_dynamic_components
where component = 'KEEP buffer cache'

COMPONENT               CURRENT_SIZE
----------------------  ------------
KEEP buffer cache       8388608

Now we can assign our table to this cache….

alter table hr.departments
    storage( buffer_pool keep)

Table altered.

We can see that this change has had an immediate effect :

select buffer_pool
from dba_tables
where owner = 'HR'
and table_name = 'DEPARTMENTS'

BUFFER_POOL
-----------
KEEP


If we run the following…

alter system flush buffer_cache

select * from hr.departments

select * from hr.employees

…we can see which cache is being used for each table, by amending our Buffer Cache query…

select obj.owner, obj.object_name, obj.object_type,
    count(buf.block#) as cached_blocks,
    tab.blocks as total_blocks,
    tab.buffer_pool as Cache
from v$bh buf
inner join dba_objects obj
    on buf.objd = obj.data_object_id
inner join dba_tables tab
    on tab.owner = obj.owner
    and tab.table_name = obj.object_name
    and obj.object_type = 'TABLE'
where buf.class# = 1
and buf.status != 'free'
and obj.owner = 'HR'
and obj.object_type = 'TABLE'
group by obj.owner, obj.object_name, obj.object_type,
    tab.blocks, tab.buffer_pool

OWNER      OBJECT_NAME          OBJECT_TYPE     CACHED_BLOCKS TOTAL_BLOCKS CACHE
---------- -------------------- --------------- ------------- ------------ -------
HR         EMPLOYEES            TABLE                       5            5 DEFAULT
HR         DEPARTMENTS          TABLE                       5            5 KEEP

Once again, this approach seems rather straightforward. You have total control over what goes in the KEEP Cache, so why not use it ?
On closer inspection, it becomes apparent that there may be some drawbacks.

For a start, the KEEP and RECYCLE caches are not automatically managed by Oracle. So, unlike the Default Buffer Cache, if the KEEP Cache finds it needs a bit more space then it's stuck; it can't "borrow" some from other caches in the SGA. The reverse is also true: Oracle won't reallocate spare memory from the KEEP Cache to other SGA components.
You also need to keep track of which tables you have assigned to the KEEP Cache. If the number of blocks in those tables is greater than the size of the cache, then you run the risk of blocks being aged out, with the potential performance degradation that that entails.
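Should you need to take a table back out of the KEEP Cache, you can reassign it to the default pool:

```sql
-- Move the table back to the default buffer pool
alter table hr.departments storage (buffer_pool default);
```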


Oracle is pretty good at caching frequently used data blocks and thus minimizing the amount of physical I/O required to retrieve data from small, frequently used, reference tables.
If you find yourself in a position where you just have to persuade Oracle to keep data in the cache then the table CACHE property is probably your least worst option.
Creating a KEEP Cache does have the advantage of affording greater manual control over what is cached. The downside here is that it also requires some maintenance effort to ensure that you don’t assign too much data to it.
The other downside is that you are ring-fencing RAM that could otherwise be used for other SGA memory components.
Having said that, the options I’ve outlined here are all better than sticking a bolt through the neck of your application and writing your own database caching in PL/SQL.

Filed under: Oracle, SQL Tagged: alter system flush buffer_cache, autotrace, buffer cache, dba_objects, dbms_stats.gather_table_stats, Default Buffer cache, how to find the current small table threshold, Keep Cache, lru algorithm, small table threshold, Table cache property, v$bh, v$memory_dynamic_components, what tables are in the buffer cache, x$ksppcv, x$ksppi

Unable to Retrieve sys_refcursor values from remote function

Tom Kyte - Thu, 2016-06-23 15:09
Hi, i have created a function in DB1 that returns a sys_refcursor as output which is giving the result as desired in DB1. But when other database DB2 is trying to execute the function using dblink, that cursor is not returning any values. It is no...
Categories: DBA Blogs

Want to skip record if it's length not matching with required length while loading data in oracle external table

Tom Kyte - Thu, 2016-06-23 15:09
Hi Tom, I want to load data from fixed length file to oracle external table. I have specified length for each column while creating external table so data for most records getting loaded correctly. But if record length dosent match then data gets...
Categories: DBA Blogs

how to use Connection String in VB.NET using Oracle Wallet ?

Tom Kyte - Thu, 2016-06-23 15:09
In vb.net we could use following connection string but i recently do practical on oracle wallet done successfully in SQL PLUS Tools but main question is i want to use this connection string (username and password and tnsping) using oracle wallet sto...
Categories: DBA Blogs

Tree and "Youngest Common Ancestor"

Tom Kyte - Thu, 2016-06-23 15:09
Hello Tom, I could finally ask you a question... I have a table like this: create table tree(name varchar2(30), id number, pid number, primary key(id), foreign key(pid) references tree(id)); with sample data: insert into tree va...
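On that table definition, the "youngest" (deepest) common ancestor of two nodes can be found by walking up from each node to the root and keeping the shared ancestor closest to the leaves; a sketch, with bind variables :a and :b for the two node ids:

```sql
WITH anc_a AS (
  -- Ancestors of :a, nearest first (LEVEL 1 is :a itself).
  SELECT id, LEVEL AS dist
  FROM   tree
  START WITH id = :a
  CONNECT BY PRIOR pid = id
),
anc_b AS (
  SELECT id
  FROM   tree
  START WITH id = :b
  CONNECT BY PRIOR pid = id
)
-- The shared ancestor nearest to :a is the youngest common ancestor.
SELECT id
FROM  (SELECT a.id
       FROM   anc_a a
       JOIN   anc_b b ON b.id = a.id
       ORDER BY a.dist)
WHERE ROWNUM = 1;
```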
Categories: DBA Blogs

How to split a comma-separated CLOB column and insert distinct rows into another table?

Tom Kyte - Thu, 2016-06-23 15:09
Hi, I need to split the comma-separated values of a CLOB column in one table and insert only the distinct rows into another table. The details of the tables are given below. The toaddress column in Table A is of datatype CLOB. Table B has ...
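One approach is sketched below. Table B's column name is assumed, and each token is assumed to fit in 4000 characters, because DISTINCT cannot be applied to a CLOB directly:

```sql
INSERT INTO b (addr)
SELECT DISTINCT
       CAST(TRIM(REGEXP_SUBSTR(a.toaddress, '[^,]+', 1, lvl.n))
            AS VARCHAR2(4000))           -- CLOB token -> VARCHAR2 so DISTINCT works
FROM   a
CROSS  JOIN LATERAL (                    -- one row per token (12c LATERAL syntax)
         SELECT LEVEL AS n
         FROM   dual
         CONNECT BY LEVEL <= REGEXP_COUNT(a.toaddress, ',') + 1
       ) lvl
WHERE  TRIM(REGEXP_SUBSTR(a.toaddress, '[^,]+', 1, lvl.n)) IS NOT NULL;
```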
Categories: DBA Blogs

Maintaining Partitioned Tables

Tom Kyte - Thu, 2016-06-23 15:09
Hi Tom, I need to build a table that will hold read-only data for up to 2 months. The table will have a load (via a perl script run half hourly) of 3 million new rows a day. Queries will be using the date col in the table for data eliminati...
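For that kind of rolling two-month window, daily interval partitions plus a drop-partition housekeeping job are a common pattern; a sketch with illustrative names and dates:

```sql
CREATE TABLE readings (
  read_date DATE NOT NULL,
  payload   VARCHAR2(200)
)
PARTITION BY RANGE (read_date)
INTERVAL (NUMTODSINTERVAL(1, 'DAY'))     -- Oracle creates a partition per day
(PARTITION p0 VALUES LESS THAN (DATE '2016-07-01'));

-- Housekeeping (run daily): drop partitions older than ~2 months.
-- With interval partitioning, partition names are system-generated,
-- so "FOR (<date>)" addresses the partition holding that date.
ALTER TABLE readings DROP PARTITION FOR (DATE '2016-05-01')
  UPDATE GLOBAL INDEXES;
```

Dropping a day's partition is a fast metadata operation, unlike deleting 3 million rows.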
Categories: DBA Blogs

Handling ORA-12170: TNS:Connect timeout occurred & ORA-03114: not connected to ORACLE failures

Tom Kyte - Thu, 2016-06-23 15:09
Hi Tom, We had a scenario where the SQL*Plus connection failed with "ORA-12170: TNS:Connect timeout occurred" in one instance and "ORA-03114: not connected to ORACLE" in another while executing from a shell script, but in both cases retu...
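To make such failures detectable in a shell script, SQL*Plus can be told to return a non-zero exit status; a sketch (credentials and the query are placeholders):

```sh
sqlplus -S -L user/pass@db <<'EOF'
WHENEVER SQLERROR EXIT SQL.SQLCODE
WHENEVER OSERROR  EXIT FAILURE
SELECT 1 FROM dual;
EOF
if [ $? -ne 0 ]; then
    # -L stops sqlplus from re-prompting on a failed logon, so connect
    # errors such as ORA-12170 / ORA-03114 surface as a non-zero status.
    echo "sqlplus failed; aborting" >&2
    exit 1
fi
```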
Categories: DBA Blogs

Single Sign on ( SSO ) in Oracle Apex

Tom Kyte - Thu, 2016-06-23 15:09
Hi Tom, I am struggling to implement Single Sign-On (SSO) in Oracle APEX using a custom authentication scheme. I have two applications: App id / Application Name 101 - ...
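For SSO between two APEX applications with custom authentication, the usual ingredients are one shared authentication function plus an identical session cookie name in both apps; a sketch (the `app_users` table and `my_hash` function are hypothetical):

```sql
CREATE OR REPLACE FUNCTION my_sso_auth (
  p_username IN VARCHAR2,
  p_password IN VARCHAR2
) RETURN BOOLEAN IS
  l_count PLS_INTEGER;
BEGIN
  SELECT COUNT(*)
  INTO   l_count
  FROM   app_users                       -- hypothetical credentials table
  WHERE  username = UPPER(p_username)
  AND    pwd_hash = my_hash(p_username, p_password);  -- hypothetical hash helper
  RETURN l_count = 1;
END;
/
```

Both applications would then use a custom authentication scheme that calls `my_sso_auth`, and set the same Cookie Name in the scheme's session cookie attributes so a login in one application is recognised by the other.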
Categories: DBA Blogs

Version Control for PLSQL

Tom Kyte - Thu, 2016-06-23 15:09
Hi Tom, A couple of years ago I saw a demo of how to version your PL/SQL code. I have been searching for the code all morning and I cannot find it anywhere. Can you point me to where/how I can version my PL/SQL package? Thanks, Greg
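One low-tech approach is to extract the source with DBMS_METADATA and commit the spooled files to an ordinary version control system; a sketch for a package called MY_PACKAGE:

```sql
SET LONG 1000000 PAGESIZE 0 LINESIZE 200 TRIMSPOOL ON

SPOOL my_package.pks
SELECT DBMS_METADATA.GET_DDL('PACKAGE_SPEC', 'MY_PACKAGE', USER) FROM dual;
SPOOL OFF

SPOOL my_package.pkb
SELECT DBMS_METADATA.GET_DDL('PACKAGE_BODY', 'MY_PACKAGE', USER) FROM dual;
SPOOL OFF
```

The spooled .pks/.pkb files can then be committed to Git or Subversion like any other source file.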
Categories: DBA Blogs

Create second Ora 12c environment on same AWS server, now CDB/PDB type

Tom Kyte - Thu, 2016-06-23 15:09
Hi, I have an Oracle 12c running on AWS on CentOS Linux rel 7.1. The database has been installed as a standalone non-CDB database. This DEV database is used for development. I have to install a second environment, for QA, on the same server. This ...
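Creating the second environment as a container database alongside the existing non-CDB can be done with DBCA in silent mode; a sketch (names, memory figure and flags are illustrative and should be checked against your DBCA version):

```sh
dbca -silent -createDatabase \
  -templateName General_Purpose.dbc \
  -gdbName QACDB -sid QACDB \
  -createAsContainerDatabase true \
  -numberOfPDBs 1 -pdbName QAPDB \
  -totalMemory 2048
```

Both instances can share the same ORACLE_HOME; the new CDB gets its own SID and memory footprint, so check the server has headroom for a second SGA.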
Categories: DBA Blogs

Using Elasticsearch with PeopleSoft

PeopleSoft Technology Blog - Thu, 2016-06-23 13:00

We reported in April that PeopleSoft is planning to offer Elasticsearch as an option.  Our original plan was to make Elasticsearch available with the first generally available release of PeopleTools 8.56.  We have since revised that plan.  We now plan to make Elasticsearch available with PeopleTools 8.55/Patch 10.  This will enable us to offer Elastic as an option a bit sooner.

Oracle-PeopleSoft will continue to support Oracle's Secure Enterprise Search (SES) with PeopleSoft at least through the support life of PeopleTools 8.54.  We are evaluating whether to extend that support, and we'll announce further support plans in the near future.  It's important for customers to know that if they have deployed SES, they will be supported for some time while they make the transition to Elastic.  Elasticsearch will be the only option offered in PeopleTools 8.56.

As described in the original announcement, we plan to provide guidance on migrating from SES to Elastic, as well as deployment guidance to help with performance tuning, load balancing and failover.  We are also planning to produce training for Elastic with PeopleSoft, and we are presenting a session at Oracle OpenWorld on Elasticsearch with PeopleSoft.  We want to make the move to Elasticsearch as quick and easy as possible for our customers.  Based on customer feedback, we believe Elastic will be embraced by PeopleSoft customers and will provide significant benefits.

Remote DBA Benefits Comparison Series- Cost Reduction

Chris Foot - Thu, 2016-06-23 10:30


I've been working in the IT profession for close to 30 years now, and over the last couple of decades I've had the good fortune of performing a varied set of database administration tasks for my employers.


Subscribe to Oracle FAQ aggregator