Feed aggregator

LEFT JOIN on nested table and TABLE function: unexpected results

Tom Kyte - Fri, 2016-07-08 03:26
Hi! This is my first question here, so don't hesitate to tell me if I am unclear or if key information is missing. I have been getting unexpected results while trying a LEFT JOIN on a nested table. Here are my tables: CREATE OR REPLACE ...
Categories: DBA Blogs

utl_http results in ORA-12541: TNS:no listener

Tom Kyte - Fri, 2016-07-08 03:26
Hi, I have an 11g DB and the URL below, which I can open in the browser to display a PDF. This is a PDF report generated from a Jasper server. http://serv-ora11g:8081/jasperserver/flow.html?_flowId=viewReportFlow&j_username=jasperadmin&j_password=jaspe...
Categories: DBA Blogs

utl_http begin_request is resulting in ORA-29263: HTTP protocol error

Tom Kyte - Fri, 2016-07-08 03:26
Hi, I have a URL that I can access with a browser, and it displays the PDF: http://serv-ora11g:8081/jasperserver/flow.html?_flowId=viewReportFlow&j_username=jasperadmin&j_password=jasperadmin&reportUnit=/KVS/Collection/col1190&output=pdf I c...
Categories: DBA Blogs

Cross Platform "Endian" Conversion from Solaris to Linux Intel-Based Exadata -- Mounting NFS to Exadata Machine

Tom Kyte - Fri, 2016-07-08 03:26
Hi, Is it possible for a Solaris NFS share to be mounted on a Linux Intel-based Exadata machine? This approach seems (at least to me) to be alluded to in this section of the Oracle Database Backup and Recovery User's Guide: https://docs.oracle.com/cd/E11882_01/back...
Categories: DBA Blogs

Creating tables (maybe)

Tom Kyte - Fri, 2016-07-08 03:26
How do I create a table for this one? AAA BBB CCC RED 1 2 3 BLUE 4 5 6 GREEN 7 8 9 I tried it in SQL Developer. It gave me an error and converted it into an output like... RED BLUE GREEN AAA 1 4 ...
Categories: DBA Blogs

cursor for loop

Tom Kyte - Fri, 2016-07-08 03:26
Hi, I have a cursor FOR loop which does some calculation in the cursor query. Within the loop I have an INSERT with a COMMIT for every iteration. There are no statements apart from INSERT and COMMIT inside the loop. Example: DECLARE cursor...
Categories: DBA Blogs

Dba vs Apps DBA

Tom Kyte - Fri, 2016-07-08 03:26
What is the difference between an Oracle DBA and an Oracle Apps DBA? I see so many websites where some use the term Oracle DBA and some Oracle Apps DBA.
Categories: DBA Blogs

Undo Tablespace

Tom Kyte - Fri, 2016-07-08 03:26
Hi Tom, Following are the scenarios; please let me know what will happen in each: 1. I have an undo tablespace of minimal size, and I delete records from a table which is huge in size (a size which the undo tablespace ca...
Categories: DBA Blogs

Node-oracledb 1.10 has Enhanced Metadata

Christopher Jones - Fri, 2016-07-08 03:06

Top feature: Enhanced Metadata

The changes in node-oracledb 1.10 are:
  • Enhanced query metadata thanks to a Pull Request from Leonardo. He kindly allowed us to take over and fine-tune the implementation.

    Additional metadata for query and REF CURSOR columns is available in the metaData object when the new boolean oracledb.extendedMetaData attribute or corresponding execute() option attribute extendedMetaData are true.

    For example, if the DEPARTMENTS table is like:

    SQL> desc departments
     Name                                      Null?    Type
     ----------------------------------------- -------- ----------------------------
     DEPARTMENT_ID                             NOT NULL NUMBER(4)
     DEPARTMENT_NAME                           NOT NULL VARCHAR2(30)
     MANAGER_ID                                         NUMBER(6)
     LOCATION_ID                                        NUMBER(4)
    

    Then a query in node-oracledb would give extended metadata:

    [ { name: 'DEPARTMENT_ID',
           fetchType: 2002,
           dbType: 2,
           precision: 4,
           scale: 0,
           nullable: false },
         { name: 'DEPARTMENT_NAME',
           fetchType: 2001,
           dbType: 1,
           byteSize: 30,
           nullable: false },
         { name: 'MANAGER_ID',
           fetchType: 2002,
           dbType: 2,
           precision: 6,
           scale: 0,
           nullable: true },
         { name: 'LOCATION_ID',
           fetchType: 2002,
           dbType: 2,
           precision: 4,
           scale: 0,
           nullable: true } ]

    You can see that the available attributes vary with the database type. The attributes are described in the metaData documentation.

    The commonly used column name is always available in metaData regardless of the value of extendedMetaData. This is consistent with previous versions.

    The numbers in the metadata dbType and fetchType attributes are described by the new DB_TYPE_* constants and the existing node-oracledb type constants, respectively. Your code should use these constants, rather than the raw numbers, when checking metadata types. (A short usage sketch follows this change list.)

    Why did we make the extra metadata optional and off by default? Why do the types use numbers instead of strings? We had a lot of debate about common use cases, out-of-box experience, performance etc. and this is the way the cookie crumbled.

    I know this enhancement will make your applications easier to maintain and more powerful.

  • Fixed an issue preventing the garbage collector from cleaning up when a query with LOBs is executed but the LOB data isn't actually streamed.

  • Report an error earlier when a named bind object is used in a bind-by-position context. A new error NJS-044 is returned. Previously errors like ORA-06502 were given since the expected attributes were not found and bind settings ended up as defaults. You can still use unnamed objects for bind-by-position binds like:

    var sql = "begin myproc(:1, :2, :3); end;";
    var binds = [ id, name, { type: oracledb.STRING, dir: oracledb.BIND_OUT } ];
    

    Here the third array element is an unnamed object.

  • Fixed a bug where an error event could have been emitted on a QueryStream instance prior to the underlying ResultSet having been closed. This would cause problems if the user tried to close the connection in the error event handler as the ResultSet could have prevented it.

  • Fixed a bug where the public close method was invoked on the ResultSet instance that underlies the QueryStream instance if an error occurred during a call to getRows. The public method would have thrown an error had the QueryStream instance been created from a ResultSet instance via the toQueryStream method. Now the call to the C layer close method is invoked directly.

  • Updated Pool._logStats to throw an error instead of printing to the console if the pool is not valid.

  • Added GitHub Issue and Pull Request templates.

  • Updated installation instructions for OS X using the new Instant Client 12.1 release.

  • Added installation instructions for AIX and Solaris x64.

  • Some enhancements were made to the underlying DPI data access layer. These were developed in conjunction with a non-node-oracledb consumer of DPI, but a couple of changes lay groundwork for potential, user-visible, node-oracledb enhancements:

    • Allow SYSDBA connections

    • Allow session tagging

    • Allow the character set and national character set to be specified via parameters to the DPI layer

    • Support heterogeneous pools (in addition to existing homogeneous pools)

    To reiterate, these are not exposed to node-oracledb.
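As a concrete illustration of the extended metadata feature described in the first item above, here is a minimal sketch of how the execute() option might be used. It is not from the original post: the credentials and connect string are hypothetical, and the query assumes the DEPARTMENTS table from the example.

    var oracledb = require('oracledb');

    // Hypothetical credentials and connect string -- adjust for your system.
    oracledb.getConnection(
      { user: 'hr', password: 'welcome', connectString: 'localhost/orcl' },
      function (err, connection) {
        if (err) { console.error(err.message); return; }
        connection.execute(
          "SELECT * FROM departments",
          [],                          // no bind variables
          { extendedMetaData: true },  // per-call alternative to setting oracledb.extendedMetaData
          function (err, result) {
            if (err) { console.error(err.message); return; }
            result.metaData.forEach(function (col) {
              // Compare fetchType against the type constants, not the raw numbers
              if (col.fetchType === oracledb.NUMBER) {
                console.log(col.name + ': NUMBER(' + col.precision + ',' + col.scale + ')');
              } else {
                console.log(col.name + ': byteSize ' + col.byteSize);
              }
            });
            connection.release(function (err) { if (err) { console.error(err.message); } });
          });
      });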

Resources

Issues and questions about node-oracledb can be posted on GitHub. Your input helps us schedule work on the add-on. Drop us a line!

node-oracledb installation instructions are here.

Node-oracledb documentation is here.

Enable auditing in oracle database

Learn DB Concepts with me... - Thu, 2016-07-07 13:54

ENABLE AUDITING IN ORACLE DATABASE
SERVER SETUP FOR DB AUDITING


Auditing is a default feature available in Oracle server. The initialization parameters that influence its behaviour can be displayed using the SHOW PARAMETER SQL*Plus command.

SQL> SHOW PARAMETER AUDIT

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
audit_file_dest                      string      C:\ORACLE\PRODUCT\10.2.0\ADMIN
                                                 \DB10G\ADUMP
audit_sys_operations                 boolean     FALSE
audit_trail                          string      NONE

SQL>

Auditing is disabled by default, but can be enabled by setting the AUDIT_TRAIL static parameter, which accepts the following values.

AUDIT_TRAIL = { none | os | db | db,extended | xml | xml,extended }

The following list provides a description of each setting (choose one based on your requirement):


none or false - Auditing is disabled.
db or true - Auditing is enabled, with all audit records stored in the database audit trail (SYS.AUD$).
db,extended - As db, but the SQL_BIND and SQL_TEXT columns are also populated.
xml - Auditing is enabled, with all audit records stored as XML-format OS files.
xml,extended - As xml, but the SQL_BIND and SQL_TEXT columns are also populated.
os - Auditing is enabled, with all audit records directed to the operating system's audit trail.


To enable auditing and direct audit records to the database audit trail, we would do the following.
SQL> ALTER SYSTEM SET audit_trail=db SCOPE=SPFILE;
System altered.

SQL> SHUTDOWN
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> STARTUP
ORACLE instance started.

Total System Global Area  289406976 bytes
Fixed Size                  1248600 bytes
Variable Size              71303848 bytes
Database Buffers          213909504 bytes
Redo Buffers                2945024 bytes
Database mounted.
Database opened.

SQL>

Note: Enabling or disabling auditing in the database only takes effect after a database restart, since AUDIT_TRAIL is a static parameter.
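
As a worked example of using the audit trail once it is enabled (the scott.emp table and the SELECT audit option below are illustrative choices, not part of the setup above):

SQL> AUDIT SELECT ON scott.emp BY ACCESS;

-- After some queries have been run against the audited table,
-- read the records back through the data dictionary view over SYS.AUD$:
SQL> SELECT username, obj_name, action_name, timestamp
  2  FROM   dba_audit_trail
  3  WHERE  obj_name = 'EMP';

-- Switch the object audit off again when it is no longer needed:
SQL> NOAUDIT SELECT ON scott.emp;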
Categories: DBA Blogs

Invisible Bug

Jonathan Lewis - Thu, 2016-07-07 11:27

At this Wednesday’s Oracle Midlands event someone asked me if Oracle would use the statistics on invisible indexes for the index sanity check. I answered that there had been a bug in the very early days of invisible indexes when the distinct_key statistic on the index could be used even though the index itself would not be considered as a candidate in the plan. (The invisible index is still used to avoid foreign key locking – even in 12c – as it is only supposed to be invisible to the optimizer.)

The bug was fixed quite a long time ago – but a comment on the “Index Sanity” article has introduced me to a related bug that is still present in 11.2.0.4 where the presence of an invisible index can affect an execution plan. Here’s a little model (run under 11.2.0.4) to demonstrate:

rem
rem     Script:         invisible_index_bug.sql
rem     Author:         Jonathan Lewis
rem

execute dbms_random.seed(0)

drop table t2;
drop table t1;

create table t1
nologging
as
with generator as (
        select  --+ materialize
                rownum id
        from dual
        connect by
                level <= 1e4
)
select
        cast(rownum as number(8,0))                     id,
        cast(mod(rownum,1000) as number(8,0))           n1,
        cast(lpad(rownum,10,'0') as varchar2(10))       v1,
        cast(lpad('x',100,'x') as varchar2(100))        padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e6
;

create table t2
as
select
        rownum id,
        trunc(dbms_random.value(0,10000)) n1
from
        dual
connect by
        level <= 100
;
begin 
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T1',
                method_opt       => 'for all columns size 1'
        );
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T2',
                method_opt       => 'for all columns size 1'
        );
end;
/

column n1 new_value m_n1
select n1 from t2 where id = 50;
clear columns

set autotrace traceonly explain

select
        t1.*
from
        t1, t2
where
        t2.n1 = &m_n1
;

create unique index t2_i1 on t2(n1)
-- invisible
;

select
        t1.*
from
        t1, t2
where
        t2.n1 = &m_n1
;

set autotrace off

All I’ve done is create a couple of tables and then do a join that we might expect to see executed as a cartesian merge join. At one point I was going to make the data more complicated and include a join condition, but I decided to keep things small and simple, so it’s a silly example, but it is sufficient to make the point. The funny little bit about selecting an n1 value from t2 was also in anticipation of a more complex example, but it does, at least, ensure I query for a value that is in range.

Here are the two execution plans from 11.2.0.4 – the key feature is that the plan changes after the invisible index is created:


-----------------------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |  1000K|   119M|  2263   (3)| 00:00:12 |
|   1 |  MERGE JOIN CARTESIAN|      |  1000K|   119M|  2263   (3)| 00:00:12 |
|*  2 |   TABLE ACCESS FULL  | T2   |     1 |     4 |     2   (0)| 00:00:01 |
|   3 |   BUFFER SORT        |      |  1000K|   115M|  2261   (3)| 00:00:12 |
|   4 |    TABLE ACCESS FULL | T1   |  1000K|   115M|  2261   (3)| 00:00:12 |
-----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("T2"."N1"=5308)


---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |  1000K|   119M|  2263   (3)| 00:00:12 |
|   1 |  NESTED LOOPS      |      |  1000K|   119M|  2263   (3)| 00:00:12 |
|*  2 |   TABLE ACCESS FULL| T2   |     1 |     4 |     2   (0)| 00:00:01 |
|   3 |   TABLE ACCESS FULL| T1   |  1000K|   115M|  2261   (3)| 00:00:12 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("T2"."N1"=5308)

Notice how the plan without the invisible index does a “sort” (actually a “buffer sort” so it’s just copying the data into local memory), while the plan with the not quite invisible enough index in place gets away with just a full tablescan. This is bug 16564891, associated with base bug 16544878.

The bug notes say "fixed in 12.2", but in Oracle 12.1.0.2 the first plan appears in both cases, and we have to make the index visible to get the second plan. (Take note of the need for the "negative" test to prove the point: the fact that the same plan appears in both cases doesn't, by itself, prove that the bug was fixed; we have to show that the plan would have changed if the bug had still been present.)

I believe the problem isn’t Oracle using the statistics when it shouldn’t; the change appears because in 11g Oracle incorrectly allows itself to see the uniqueness of the index and infer that table t2 is a "single row" table. In 12c the optimizer calculates that there will probably be only one row, but that doesn’t stop it choosing the merge join cartesian as the "insurance bet" against having to do more than one tablescan of the t1 table. We can see this difference in the 10053 trace files; the 11g file has an entry for the "Single Table Access Path" for t2 that reads:

1-ROW TABLES:  T2[T2]#0

If you read the bug note for bug 16564891 you’ll see that it has a more realistic example of the problem – and it may give you some idea of where you might run into the bug. In general I don’t think many people are likely to come across the problem since it revolves around uniqueness, which is rather an important property, and there can’t be many occasions when someone decides to add (or test dropping) a unique index. Given that the example in the bug looks like “add a unique index to a dimension table that’s joining to a fact table” that may be a good pointer to where you’re most likely to run into the problem — when you’re trying to enforce data correctness in a data warehouse.
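
As a footnote, the visibility switch itself is easy to experiment with. This is a minimal sketch using the t2_i1 index from the script above; the syntax and the user_indexes.visibility column are standard from 11g onwards:

-- check whether the optimizer is allowed to see the index
select index_name, visibility from user_indexes where index_name = 'T2_I1';

-- toggle the index
alter index t2_i1 visible;
alter index t2_i1 invisible;

-- or, for the current session only, let the optimizer see all invisible indexes
alter session set optimizer_use_invisible_indexes = true;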

 


Setting Environment Variables in Application Server/Process Scheduler Tuxedo Domains

David Kurtz - Thu, 2016-07-07 10:54
The topic of how to manage environment variables was mentioned recently on the PeopleSoft Administrator Podcast.
Recently I built a pair of PeopleSoft environments for a proof-of-concept and ran into exactly this problem. I have two PS_APP_HOMEs (application homes) sharing the same PS_HOME (PeopleTools home). The environment variables need to be correct before I open psadmin to administer the application server and process scheduler domains.
On Unix, I would usually recommend running domains for different PS_APP_HOMEs under different Unix user accounts, so that environment variables can be set up in the .profile or .bashrc scripts. The processes then run as different Unix users, which makes them easier to monitor. The Unix users should be in the same Unix group and have group permissions on the PS_HOME directory. This approach is not possible on Windows.
The alternative is to have a script or a menu that sets up those variables before you enter psadmin.
I noticed that there is a new menu in the domain configuration menu in psadmin which permits you to set environment variables that are then built into the Tuxedo domain configuration. In fact this has always been possible, by editing the psappsrv.ubx and psprcs.ubx files directly to cause variables to be created in the psappsrv.env and psprcs.env files, but now you can simply enter the variables through the menu in psadmin.
When we start psadmin we can see the relevant environment parameters. PS_APP_HOME points to my HR installation.
PSADMIN -- PeopleTools Release: 8.54.16
Copyright (c) 1996, 2014, Oracle. All rights reserved.

--------------------------------
PeopleSoft Server Administration
--------------------------------

PS_CFG_HOME /home/psadm1/psft/pt/8.54
PS_HOME /opt/oracle/psft/pt/tools
PS_APP_HOME /opt/oracle/psft/pt/hr91
PS_CUST_HOME /opt/oracle/psft/pt/hr91/pscust

1) Application Server
...
But what if I have a Financials installation under the same PS_HOME? Option 16 of the configuration menu lets me define environment settings. Something very similar happens in the Process Scheduler configuration.
----------------------------------------------
Quick-configure menu -- domain: RASC3K
----------------------------------------------
Features Settings
========== ==========
1) Pub/Sub Servers : Yes 17) DBNAME :[XXXXXX]
...
Actions
=========
14) Load config as shown
15) Custom configuration
16) Edit environment settings
h) Help for this menu
q) Return to previous menu

HINT: Enter 17 to edit DBNAME, then 14 to load
So, I have added PS_APP_HOME, PS_CUST_HOME and INTERFACE_HOME and they have become a part of the configuration of the Tuxedo domain.
--------------------------------------
PeopleSoft Domain Environment Settings
--------------------------------------
Domain Name: RASC3K

TEMP :[{LOGDIR}{FS}tmp]
TMP :[{LOGDIR}{FS}tmp]
TM_BOOTTIMEOUT :[120]
TM_RESTARTSRVTIMEOUT :[120]
TM_BOOTPRESUMEDFAIL :[Y]
FLDTBLDIR32 :[{$TUXDIR}{FS}udataobj]
FIELDTBLS32 :[jrep.f32,tpadm]
ALOGPFX :[{LOGDIR}{FS}TUXACCESSLOG]
INFORMIXSERVER :[{$Startup\ServerName}]
COBPATH :[{$PS_APP_HOME}/cblbin:{$PS_HOME}/cblbin]
PATH :[{$PATH}:{$Domain Settings\Add to PATH}]
PS_APP_HOME :[/opt/oracle/psft/pt/fin91]
PS_CUST_HOME :[/opt/oracle/psft/pt/fin91/pscust]
INTERFACE_HOME :[/opt/oracle/psft/pt/fin91/pscust/interfaces]

1) Edit environment variable
2) Add environment variable
3) Remove environment variable
4) Comment / uncomment environment variable
5) Show resolved environment variables
6) Save
h) Help for this menu
q) Return to previous menu

Command to execute (1-6, h or q) :
What is going on here?  These variables have been added to the PS_ENVFILE section of psappsrv.ubx.
# ----------------------------------------------------------------------
*PS_ENVFILE
TEMP={LOGDIR}{FS}tmp
TMP={LOGDIR}{FS}tmp
TM_BOOTTIMEOUT=120
TM_RESTARTSRVTIMEOUT=120
TM_BOOTPRESUMEDFAIL=Y
FLDTBLDIR32={$TUXDIR}{FS}udataobj
FIELDTBLS32=jrep.f32,tpadm
ALOGPFX={LOGDIR}{FS}TUXACCESSLOG
{WINDOWS}
COBPATH={$PS_HOME}\CBLBIN%PS_COBOLTYPE%
INFORMIXSERVER={$Startup\ServerName}
# Set IPC_EXIT_PROCESS=1 to use ExitProcess to terminate server process.
# Set IPC_TERMINATE_PROCESS=1 to use TerminateProcess to terminate server process.
# If both are set, TerminateProcess will be used to terminate server process.
#IPC_EXIT_PROCESS=1
IPC_TERMINATE_PROCESS=1
PATH={$PS_HOME}\verity\{VERITY_OS}\{VERITY_PLATFORM}\bin;{$PATH};{$Domain Settings\Add to PATH}
{WINDOWS}
{UNIX}
INFORMIXSERVER={$Startup\ServerName}
COBPATH={$PS_APP_HOME}/cblbin:{$PS_HOME}/cblbin
PATH={$PATH}:{$Domain Settings\Add to PATH}
{UNIX}
PS_APP_HOME=/opt/oracle/psft/pt/fin91
PS_CUST_HOME=/opt/oracle/psft/pt/fin91/pscust
INTERFACE_HOME=/opt/oracle/psft/pt/fin91/pscust/interfaces
They then appear in the psappsrv.env file that is generated when the domain is configured. This file contains the fully resolved values of the environment variables, which are set by every Tuxedo application server process when it starts.
TEMP=/home/psadm1/psft/pt/8.54/appserv/XXXXXX/LOGS/tmp
TMP=/home/psadm1/psft/pt/8.54/appserv/XXXXXX/LOGS/tmp
TM_BOOTTIMEOUT=120
TM_RESTARTSRVTIMEOUT=120
TM_BOOTPRESUMEDFAIL=Y
FLDTBLDIR32=/opt/oracle/psft/pt/bea/tuxedo/udataobj
FIELDTBLS32=jrep.f32,tpadm
ALOGPFX=/home/psadm1/psft/pt/8.54/appserv/XXXXXX/LOGS/TUXACCESSLOG
# Set IPC_EXIT_PROCESS=1 to use ExitProcess to terminate server process.
# Set IPC_TERMINATE_PROCESS=1 to use TerminateProcess to terminate server process.
# If both are set, TerminateProcess will be used to terminate server process.
#IPC_EXIT_PROCESS=1
IPC_TERMINATE_PROCESS=1
INFORMIXSERVER=
COBPATH=/opt/oracle/psft/pt/fin91/cblbin:/opt/oracle/psft/pt/tools/cblbin
PATH=/opt/oracle/psft/pt/fin91/bin:/opt/oracle/psft/pt/fin91/bin/interfacedrivers:/opt/oracle/psft/pt/tools/jre/bin:/opt/oracle/psft/pt/tools/appserv:/opt/oracle/psft/pt/tools/setup:/opt/oracle/psft/pt/bea/tuxedo/bin:.:/opt/oracle/psft/pt/oracle-client/12.1.0.1/bin:/opt/oracle/psft/pt/oracle-client/12.1.0.1/OPatch:/opt/oracle/psft/pt/oracle-client/12.1.0.1/perl/bin:/opt/mf/bin:/usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin:/opt/oracle/psft/pt/tools/bin:/opt/oracle/psft/pt/tools/bin/sqr/ORA/bin:/opt/oracle/psft/pt/tools/verity/linux/_ilnx21/bin:/home/psadm1/bin::.
PS_APP_HOME=/opt/oracle/psft/pt/fin91
PS_CUST_HOME=/opt/oracle/psft/pt/fin91/pscust
INTERFACE_HOME=/opt/oracle/psft/pt/fin91/pscust/interfaces
Warning
I found that I had to specify fully resolved paths for the variables I defined. I did try setting variables in terms of other variables,

PS_APP_HOME :[/opt/oracle/psft/pt/fin91]
PS_CUST_HOME :[{$PS_APP_HOME}{FS}pscust]
INTERFACE_HOME :[{$PS_CUST_HOME}{FS}interfaces]

but I started to get errors.
==============ERROR!================
Value for PS_CUST_HOME: {$PS_APP_HOME}{FS}pscust, is invalid. Your environment
may not work as expected.
==============ERROR!================
And some variables were not fully resolved in the psappsrv.env file.
PS_APP_HOME=/opt/oracle/psft/pt/fin91
PS_CUST_HOME=/opt/oracle/psft/pt/fin91/pscust
INTERFACE_HOME={$PS_APP_HOME}{FS}pscust/interfaces
Configuration Settings in the .ubx –v- .cfg
My only reservation is that there is now environment-specific configuration in the psappsrv.ubx file, rather than the psappsrv.cfg file. When I have done this in the past I would create additional variables in psappsrv.cfg that were referenced from the psappsrv.ubx file. Thus the psappsrv.ubx was consistent across environments, and all the configuration was in the main configuration file, psappsrv.cfg.
However, you can add additional variables in psappsrv.cfg, thus:
[Domain Settings]
;=========================================================================
; General settings for this Application Server.
;=========================================================================

Application Home=/opt/oracle/psft/pt/fin91
Custom Directory=pscust
Interface Directory=pscust/interfaces

and then reference them in the variables, and they will resolve correctly in the psappsrv.env.
PS_APP_HOME={$Domain Settings\Application Home}
PS_CUST_HOME={$Domain Settings\PS_APP_HOME}{FS}{$Domain Settings\Custom Directory}
INTERFACE_HOME={$Domain Settings\PS_APP_HOME}{FS}{$Domain Settings\Interface Directory}
You may experience errors in psadmin:
==============ERROR!================
Value for PS_APP_HOME: {$Domain Settings\Application Home}, is invalid. Your
environment may not work as expected.
==============ERROR!================

==============ERROR!================
Value for PS_CUST_HOME: {$Domain Settings\Application Home}{FS}{$Domain
Settings\Custom Directory}, is invalid. Your environment may not work as
expected.
==============ERROR!================
Conclusion
Using this technique, it does not matter how environment variables are set when you go into psadmin to start the application server; the correct setting is defined in the Tuxedo domain and overrides it.
You have always been able to do this in Tuxedo, but you would have had to edit the psappsrv.ubx file yourself, now the menu allows you to administer this.
There is no way to view the psappsrv.ubx and psappsrv.env files from within psadmin; only psappsrv.cfg can be opened. If you want to check that your settings have reached the psappsrv.env file, you will need to leave psadmin and look in the file yourself.

IOUG WebCenter Special Interest Group Survey

WebCenter Team - Thu, 2016-07-07 10:07

The WebCenter Special Interest Group (SIG) is interested in hearing from you! We are conducting a brief four-question survey to get a better understanding of where Oracle WebCenter customers are today with their current usage of WebCenter solutions, and to gauge your interest in learning more about WebCenter topics.

Please take a moment to answer this short survey. Your input is important to us! Understanding where WebCenter customers are now is critical to helping you move forward in the future.

Regards,
The WebCenter SIG

Website validation using regexp_like.

Tom Kyte - Thu, 2016-07-07 09:06
Hi, Could you please help with a website validation query using regexp_like? e.g. www.google.com and https//:www.google.com should both be validated. Thanks in advance.
Categories: DBA Blogs

Datatype issue.

Tom Kyte - Thu, 2016-07-07 09:06
Hi all, I have an issue with a column datatype. Can you please look at this issue? I have two tables, emp1 and emp2, with an empno column in both tables. In table emp1 the empno column is nvarchar2, and in table emp2 the empno column is number. I am tryin...
Categories: DBA Blogs

how to debug the SQL logic execution ?

Tom Kyte - Thu, 2016-07-07 09:06
AskTom, I have a question about SQL logic execution: can Oracle Database log the logic of the SQL execution? (Which condition of the WHERE clause failed?) Example: select 1 from Dual WHERE 1 = 1 AND 1 = 1 + 1 --line 5 ; C...
Categories: DBA Blogs

Fuzzy Name Match Stored Procedure Optimization

Tom Kyte - Thu, 2016-07-07 09:06
Hello, I have written a PL/SQL stored proc 'FuzzyNameMatch' that interrogates first, middle, and last names from a single column in two distinct tables, i.e. source and compare columns. The algo parses shorter strings through longer ones and increments counter v...
Categories: DBA Blogs

Need to generate numbers between a given range for each record.

Tom Kyte - Thu, 2016-07-07 09:06
Hi Team, I have the below sample table as input: ID MIN MAX 1 5 10 2 3 5 And I want output as follows: ID value 1 5 1 6 1 7 1 8 1 9 2 3 2 4 Id column is a p...
Categories: DBA Blogs

Performance & features of OCCI vs OCI

Tom Kyte - Thu, 2016-07-07 09:06
I'm going to develop an application that needs maximum performance. I will need: - batch inserts of BLOBs - batch updates of BLOBs - batch selects of BLOBs I am considering using OCCI, but I'm not sure if it supports all the optimizations that are done in OCI. ...
Categories: DBA Blogs
