Feed aggregator

12cR2 has new SQL*Plus features

Yann Neuhaus - Fri, 2016-11-11 15:30

12cR2 is there. What’s new in SQL*Plus? For sure, you can’t expect a lot from it. The new command line tool is SQLcl, the SQL Developer command line, which aims to be 100% compatible with SQL*Plus but with a lot more features. However, a few little things did arrive here: a default editor, command line history, and easy row/LOB prefetch and statement caching.

_EDITOR

Yes, it seems that the default editor is finally ‘vi’ instead of ‘ed’. This is a great improvement. Of course, you can set the VISUAL environment variable on your system. But when you come to another environment (which consultants do), this default will save a lot of “define _editor=vi” keystrokes.

The environment variables EDITOR and VISUAL are not set:

SQL> host set | grep -E "(^EDITOR|^VISUAL)"
 
SQL>

but the _EDITOR substitution variable in sqlplus is set to ‘vi’:

SQL> define
DEFINE _DATE = "11-NOV-16" (CHAR)
DEFINE _CONNECT_IDENTIFIER = "CDB1" (CHAR)
DEFINE _USER = "SYS" (CHAR)
DEFINE _PRIVILEGE = "AS SYSDBA" (CHAR)
DEFINE _SQLPLUS_RELEASE = "1202000100" (CHAR)
DEFINE _EDITOR = "vi" (CHAR)
DEFINE _O_VERSION = "Oracle Database 12c EE Extreme Perf Release 12.2.0.1.0 - 64bit Production" (CHAR)
DEFINE _O_RELEASE = "1202000100" (CHAR)
DEFINE 1 = "sqlplus" (CHAR)
DEFINE _RC = "1" (CHAR)

Here is the default. For sure, vi is better than ‘ed’. ‘ed’ was the line editor from the era of 2400-baud networks.
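
If you are not yet on 12.2, the classic workaround still applies: put the define in your site or user profile so you never have to type it again. A minimal sketch (the glogin.sql path is the standard site profile location):

REM in $ORACLE_HOME/sqlplus/admin/glogin.sql (or a login.sql found on your SQLPATH)
define _editor=vi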

Command history

Yes. In 2016, SQL*Plus gets a command line history. Do you need it? Probably not. If you are on Windows, you can navigate with arrow-up and arrow-down in any command line program. If you are on Linux, you have probably installed rlwrap. And finally, if you want something friendly on the command line, you probably use SQLcl.
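
For example, a common way to get history and line editing on Linux with any SQL*Plus version (assuming the rlwrap package is installed):

rlwrap sqlplus / as sysdba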

However, in 12cR2 a very basic history has been introduced to SQL*Plus.
You have to enable it:

SQL> show HISTORY
history is OFF
SQL> set HISTORY on
SQL> show history
history is ON and set to "100"

so the default is 100 entries, but you can increase it:

SQL> set HISTORY 999999
SP2-0267: history option 999999 out of range (0 through 100000)
SQL> set HISTORY 100000

What can you do with it?

SQL> help HISTORY
 
HISTORY
-------
 
Stores, lists, executes, edits of the commands
entered during the current SQL*Plus session.
 
HIST[ORY] [N {RUN | EDIT | DEL[ETE]}] | [CLEAR]  
N is the entry number listed in the history list.
Use this number to recall, edit or delete the command.
 
Example:
HIST 3 RUN - will run the 3rd entry from the list.
 
HIST[ORY] without any option will list all entries in the list.

Here are some examples:

SQL> show history
history is OFF
SQL> set history on
SQL> show history
history is ON and set to "100"
SQL> prompt 1
1
SQL> prompt 2
2
SQL> history
1 show history
2 prompt 1
3 prompt 2
 
SQL> history list
1 show history
2 prompt 1
3 prompt 2
 
SQL> history 2 run
1
SQL> history 2 edit
 
SQL> history 2 delete
SQL> history
1 show history
2 prompt 2
3 prompt 1
 
SQL> history clear
SQL> history
SP2-1651: History list is empty.

As you see, it’s not the most user friendly. But for the basic DBA tasks that you do on a server, you may find it safer than the up-arrow. Imagine that a ‘shutdown immediate’ is in the history. Do you want to take the risk of running it because some network latency made you pick the line above the one you wanted? Or do you prefer to be sure to have read the command before running it?

SET LOBPREF[ETCH], SET ROWPREF[ETCH], and SET STATEMENTC[ACHE].

Here are important performance improvements:

SQL> show lobprefetch
lobprefetch 0
SQL> show rowprefetch
rowprefetch 1
SQL> show statementcache
statementcache is 0

Those are things that you can do in OCI or JDBC and that you can now easily do in SQL*Plus: prefetch rows and LOBs to avoid fetch roundtrips, and use statement caching to avoid parse calls.
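
For example, a session that fetches a lot of rows and some LOBs could bump all of them at once (the values here are just plausible picks for illustration, not recommendations):

SQL> set arraysize 100
SQL> set rowprefetch 2
SQL> set lobprefetch 16384
SQL> set statementcache 20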

I’ll probably write about prefetch in a future post, so for the moment here is a quick demo of statement caching.

By default, statement caching is off. I run the same query 3 times:

select current_timestamp,name,value from v$mystat join v$statname using(statistic#) where name like 'parse %' or name like '%cursor cache%';
 
CURRENT_TIMESTAMP NAME VALUE
----------------------------------- ----------------------------------- ----------
11-NOV-16 05.00.41.370333 PM +00:00 session cursor cache hits 15
11-NOV-16 05.00.41.370333 PM +00:00 session cursor cache count 4
11-NOV-16 05.00.41.370333 PM +00:00 parse time cpu 0
11-NOV-16 05.00.41.370333 PM +00:00 parse time elapsed 0
11-NOV-16 05.00.41.370333 PM +00:00 parse count (total) 6
11-NOV-16 05.00.41.370333 PM +00:00 parse count (hard) 0
11-NOV-16 05.00.41.370333 PM +00:00 parse count (failures) 0
11-NOV-16 05.00.41.370333 PM +00:00 parse count (describe) 0
 
8 rows selected.
 
SQL> select current_timestamp,name,value from v$mystat join v$statname using(statistic#) where name like 'parse %' or name like '%cursor cache%';
 
CURRENT_TIMESTAMP NAME VALUE
----------------------------------- ----------------------------------- ----------
11-NOV-16 05.00.41.373429 PM +00:00 session cursor cache hits 22
11-NOV-16 05.00.41.373429 PM +00:00 session cursor cache count 4
11-NOV-16 05.00.41.373429 PM +00:00 parse time cpu 0
11-NOV-16 05.00.41.373429 PM +00:00 parse time elapsed 0
11-NOV-16 05.00.41.373429 PM +00:00 parse count (total) 7
11-NOV-16 05.00.41.373429 PM +00:00 parse count (hard) 0
11-NOV-16 05.00.41.373429 PM +00:00 parse count (failures) 0
11-NOV-16 05.00.41.373429 PM +00:00 parse count (describe) 0
 
8 rows selected.
 
SQL> select current_timestamp,name,value from v$mystat join v$statname using(statistic#) where name like 'parse %' or name like '%cursor cache%';
 
CURRENT_TIMESTAMP NAME VALUE
----------------------------------- ----------------------------------- ----------
11-NOV-16 05.00.41.375993 PM +00:00 session cursor cache hits 29
11-NOV-16 05.00.41.375993 PM +00:00 session cursor cache count 4
11-NOV-16 05.00.41.375993 PM +00:00 parse time cpu 0
11-NOV-16 05.00.41.375993 PM +00:00 parse time elapsed 0
11-NOV-16 05.00.41.375993 PM +00:00 parse count (total) 8
11-NOV-16 05.00.41.375993 PM +00:00 parse count (hard) 0
11-NOV-16 05.00.41.375993 PM +00:00 parse count (failures) 0
11-NOV-16 05.00.41.375993 PM +00:00 parse count (describe) 0
 
8 rows selected.

You can see that each one had its parse call. Of course, it’s not a hard parse because the cursor is shared. It’s not even a soft parse, thanks to the session cursor cache. But it’s still a parse call.

Let’s set statement caching to one and run the query again 3 times:

set statementcache 1
 
SQL> select current_timestamp,name,value from v$mystat join v$statname using(statistic#) where name like 'parse %' or name like '%cursor cache%';
 
CURRENT_TIMESTAMP NAME VALUE
----------------------------------- ----------------------------------- ----------
11-NOV-16 05.00.41.378937 PM +00:00 session cursor cache hits 36
11-NOV-16 05.00.41.378937 PM +00:00 session cursor cache count 4
11-NOV-16 05.00.41.378937 PM +00:00 parse time cpu 0
11-NOV-16 05.00.41.378937 PM +00:00 parse time elapsed 0
11-NOV-16 05.00.41.378937 PM +00:00 parse count (total) 9
11-NOV-16 05.00.41.378937 PM +00:00 parse count (hard) 0
11-NOV-16 05.00.41.378937 PM +00:00 parse count (failures) 0
11-NOV-16 05.00.41.378937 PM +00:00 parse count (describe) 0
 
8 rows selected.
 
SQL> select current_timestamp,name,value from v$mystat join v$statname using(statistic#) where name like 'parse %' or name like '%cursor cache%';
 
CURRENT_TIMESTAMP NAME VALUE
----------------------------------- ----------------------------------- ----------
11-NOV-16 05.00.41.381403 PM +00:00 session cursor cache hits 42
11-NOV-16 05.00.41.381403 PM +00:00 session cursor cache count 4
11-NOV-16 05.00.41.381403 PM +00:00 parse time cpu 0
11-NOV-16 05.00.41.381403 PM +00:00 parse time elapsed 0
11-NOV-16 05.00.41.381403 PM +00:00 parse count (total) 9
11-NOV-16 05.00.41.381403 PM +00:00 parse count (hard) 0
11-NOV-16 05.00.41.381403 PM +00:00 parse count (failures) 0
11-NOV-16 05.00.41.381403 PM +00:00 parse count (describe) 0
 
8 rows selected.
 
SQL> select current_timestamp,name,value from v$mystat join v$statname using(statistic#) where name like 'parse %' or name like '%cursor cache%';
 
CURRENT_TIMESTAMP NAME VALUE
----------------------------------- ----------------------------------- ----------
11-NOV-16 05.00.41.383844 PM +00:00 session cursor cache hits 48
11-NOV-16 05.00.41.383844 PM +00:00 session cursor cache count 4
11-NOV-16 05.00.41.383844 PM +00:00 parse time cpu 0
11-NOV-16 05.00.41.383844 PM +00:00 parse time elapsed 0
11-NOV-16 05.00.41.383844 PM +00:00 parse count (total) 9
11-NOV-16 05.00.41.383844 PM +00:00 parse count (hard) 0
11-NOV-16 05.00.41.383844 PM +00:00 parse count (failures) 0
11-NOV-16 05.00.41.383844 PM +00:00 parse count (describe) 0
 
8 rows selected.

Only one more parse call. The cursor was cached on the client side.

How many statements can you cache?

SQL> set statementcache 999999
SP2-0267: statementcache option 999999 out of range (0 through 32767)

from 1 to 32767. The value 0 disables statement caching.

set statementcache 32767

Not yet in 12.2?

If you have not yet upgraded to 12.2, you still have a way to use statement caching: you can set it in oraaccess.xml, which enables those optimizations for all OCI clients.
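
As a sketch, an oraaccess.xml placed in the client’s $TNS_ADMIN could look like this (the element names follow the documented OCI client configuration schema; check the documentation of your client version for the exact syntax):

<?xml version="1.0" encoding="ASCII" ?>
<oraaccess xmlns="http://xmlns.oracle.com/oci/oraaccess"
           xmlns:oci="http://xmlns.oracle.com/oci/oraaccess"
           schemaLocation="http://xmlns.oracle.com/oci/oraaccess
                           http://xmlns.oracle.com/oci/oraaccess.xsd">
  <default_parameters>
    <!-- client-side statement cache of 20 statements -->
    <statement_cache>
      <size>20</size>
    </statement_cache>
    <!-- prefetch rows on each fetch call -->
    <prefetch>
      <rows>100</rows>
    </prefetch>
  </default_parameters>
</oraaccess>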

sqlplus -F

Those performance settings can be set to default values with the ‘-F’ argument.
Let’s see which settings are different:

[oracle@OPC122 ~]$ sqlplus -s / as sysdba <<< "store set a.txt replace"
Wrote file a.txt
[oracle@OPC122 ~]$ sqlplus -s -F / as sysdba <<< "store set b.txt replace"
Wrote file b.txt
[oracle@OPC122 ~]$ diff a.txt b.txt
3c3
< set arraysize 15
---
> set arraysize 100
31c31
< set lobprefetch 0
---
> set lobprefetch 16384
46c46
< set rowprefetch 1
---
> set rowprefetch 2
59c59
< set statementcache 0
---
> set statementcache 20

Those settings avoid roundtrips and unnecessary work. The documentation says that PAGESIZE is also set to a higher value, but I don’t see it here, and anyway that’s about output formatting, not performance.

VARIABLE

You may use SQL*Plus to test queries with bind variables. Here is what you had to do before 12.2:

SQL> variable text char
SQL> exec :text:='X'
 
PL/SQL procedure successfully completed.
 
SQL> select * from DUAL where DUMMY=:text;
 
D
-
X

You can now do it in one step:

SQL> variable text char='X'
SQL> select * from DUAL where DUMMY=:text;
 
D
-
X

SQLcl, the SQL Developer command line

Since 11g, SQL Developer is shipped in ORACLE_HOME, and in 12.2 it includes SQLcl, the SQL Developer command line that is fully compatible with SQL*Plus scripts.
The version we have on the DBCS lacks the executable flag and the right JAVA_HOME:

[oracle@SE222 ~]$ /u01/app/oracle/product/12.2.0/dbhome_1/sqldeveloper/sqlcl/bin/sql / as sysdba
-bash: /u01/app/oracle/product/12.2.0/dbhome_1/sqldeveloper/sqlcl/bin/sql: Permission denied
[oracle@SE222 ~]$
[oracle@SE222 ~]$ bash /u01/app/oracle/product/12.2.0/dbhome_1/sqldeveloper/sqlcl/bin/sql / as sysdba
 
SQLcl: Release 12.2.0.1.0 RC on Fri Nov 11 21:16:48 2016
 
Copyright (c) 1982, 2016, Oracle. All rights reserved.
 
USER =
URL = jdbc:oracle:oci8:@
Error Message = No suitable driver found for jdbc:oracle:oci8:@
USER =
URL = jdbc:oracle:thin:@127.0.0.1:1521:CDB2
Error Message = No suitable driver found for jdbc:oracle:thin:@127.0.0.1:1521:CDB2
USER =
URL = jdbc:oracle:thin:@localhost:1521/orcl
Error Message = No suitable driver found for jdbc:oracle:thin:@localhost:1521/orcl
Username? (RETRYING) ('/ as sysdba'?)

I’ve defined the following alias:

alias sql='JAVA_HOME=$ORACLE_HOME/jdk bash $ORACLE_HOME/sqldeveloper/sqlcl/bin/sql'

and I’m ready to run it:

[oracle@SE222 ~]$ sql / as sysdba
 
SQLcl: Release 12.2.0.1.0 RC on Fri Nov 11 21:20:15 2016
 
Copyright (c) 1982, 2016, Oracle. All rights reserved.
 
Connected to:
Oracle Database 12c Standard Edition Release 12.2.0.1.0 - 64bit Production
 
SQL>

I like SQLcl except for one thing: it’s written in Java and takes long to start:

[oracle@SE222 ~]$ time sql /nolog
real 0m2.184s
user 0m3.054s
sys 0m0.149s

Two seconds is long when you run it frequently. Compare with sqlplus:

[oracle@SE222 ~]$ time sqlplus /nolog
real 0m0.015s
user 0m0.008s
sys 0m0.006s

 

The article 12cR2 has new SQL*Plus features first appeared on Blog dbi services.

12cR2 versus 12cR1 scan time using Java inside the RDBMS - a DBAAS Cloud test

Marcelo Ochoa - Fri, 2016-11-11 08:47
As usual, my first test with every Oracle RDBMS release is Java inside the database; for this test I am using a heavy/intensive Java application inside the RDBMS named Scotas OLS.
Basically, it provides full-text search functionality by using an embedded Solr version running inside the DB.
By now, Oracle 12cR2 is available at the Oracle Cloud DBaaS; the screenshot below shows the services started to do the test.
As I mentioned above, Scotas OLS provides searching functionality through a new domain index. During a creation or rebuild of the index, a full table scan is done in batches of 5000 rows; this full scan is executed using a query like:
SELECT /*+ DYNAMIC_SAMPLING(L$MT 0) */ L$MT.rowid,..others_columns..
     FROM TABLE_NAME L$MT where rowid in (select /*+ cardinality(L$PT, 10) */ * from table(?) L$PT) for update nowait
The table pipeline function receives as argument a batch of 5000 rowids wrapped in the ODCIRidList PL/SQL type.
The Java code running inside the DB looks like:
                while (rs.next()) {
                    // reset the Solr add command and build a fresh input document
                    addCmd.clear();
                    addCmd.solrDoc = new SolrInputDocument();
                    // first column of the scan query is the rowid, used as the Solr unique key
                    String rowid = rs.getString(1);
                    addCmd.solrDoc.addField("rowid", rowid);
                    if (includeMasterColumn) {
                        // second column is the indexed (master) column
                        Object o = rs.getObject(2);
                        readColumn(addCmd.solrDoc, this.colName, o);
                    }
                    // remaining columns are the extra columns stored in the index
                    for (int i = 0; i < numExtraCols; i++) {
                        Object o = rs.getObject(i + offset);
                        readColumn(addCmd.solrDoc, extraCols[i], o);
                    }
                    if (log.isTraceEnabled())
                      log.trace("Adding: " + addCmd.solrDoc.toString());
                    try {
                        // submit the document to the Solr update processor chain
                        processor.processAdd(addCmd);
                    } catch (Exception e) {
                        ....
                    }
                }
The scan time observed between the two DBaaS instances is summarized in this graph.
The average scan time is 890 ms per batch in 12cR2 versus 1648 ms in 12cR1. GOOD JOB, Oracle OJVM dev team, for this improvement!!!!



Documentum – Cannot delete document due to VD/assembly references

Yann Neuhaus - Fri, 2016-11-11 02:46

I ran into an error that you might have already encountered, so here is a quick guide on how to fix it.

When deleting a document from D2, I had a popup with an error like: “Cannot delete file because it is still referenced in a virtual document or an assembly”. Indeed, the document that I wanted to delete was previously in a VD, but for some reason I couldn’t find the VD through the search engine: it had been deleted, but not its old versions.

To summarize: my document was still referenced by the old versions of a deleted VD. So here’s the way I fixed it.

First, I needed to know which VDs referenced it; for this, simply query the table dmr_containment:

select * from dmr_containment where component_id='<DOCUMENT_ID>';

You should see a list of ids like:

r_object_id: ID of the current row (we will need it)
parent_id: ID of the VD that references our document
component_id: ID of our document
order_no: Index order of all documents in this VD

The problem with old VD versions is that they are flagged immutable, so you can’t remove the document from the VD through DQL and API. You first have to make the VD mutable.

So here is the condensed API version to remove a component from an immutable VD:

fetch,c,<PARENT_ID>
set,c,l,r_immutable_flag
F
save,c,l
removepart,c,l,<R_OBJECT_ID>
save,c,l
fetch,c,l
set,c,l,r_immutable_flag
T
save,c,l

If you have a lot of entries to remove, you may want a scripted version in DQMan:

First select all entries:

select * from dmr_containment where component_id='<DOCUMENT_ID>';

Select all rows, right click and generate the script:

fetch,c,{parent_id}
set,c,l,r_immutable_flag
F
save,c,l
removepart,c,l,{r_object_id}
save,c,l
set,c,l,r_immutable_flag
T
save,c,l

Verify that each entry has been deleted, and now you should be able to remove your document without errors.
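
A quick way to verify is to re-run the initial count against dmr_containment and check that no rows remain (same <DOCUMENT_ID> placeholder as above):

select count(*) from dmr_containment where component_id='<DOCUMENT_ID>';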

 

The article Documentum – Cannot delete document due to VD/assembly references first appeared on Blog dbi services.

Links for 2016-11-10 [del.icio.us]

Categories: DBA Blogs

Oracle 12cR2: MAX_PDBS

Yann Neuhaus - Fri, 2016-11-11 00:43

Oracle Database 12.2 is there on the Database Cloud Service, in multitenant. In EE High Performance or Extreme Performance, you have the multitenant option: you can create 4096 pluggable databases (instead of 252 in 12.1). In the lower services, you can create only one user PDB (not counting application root and proxy PDBs). In Standard Edition, it’s simple: it is a hard limit. In plain Enterprise Edition without the option, you have a way to be sure you stay under the limit: the MAX_PDBS parameter.

Containers and Pluggable Databases

A CDB is a container (CON_ID=0) that contains containers:

  • CDB$ROOT (CON_ID=1)
  • PDB$SEED (CON_ID=2)
  • User created PDBs (CON_ID between 3 and 4098)

Here is a quick way to show it:
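
The following query is a minimal sketch along those lines, run from the root as a privileged user:

SQL> select con_id, name from v$containers order by con_id;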

MAX_PDBS

In 12.1 you have no supported way to prevent creating more than one PDB. In 12.2 you have a parameter, MAX_PDBS, which is documented as the maximum number of user-created pluggable databases. You could expect its maximum to be 4096, but it’s actually 4098, and this is the default value:

SQL> show parameter max_pdbs
 
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
max_pdbs integer 4098

So to be sure, let’s create many pluggable databases.

I have one pluggable database, PDB1, opened in read-only:

SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ ONLY NO

And use the following script to clone them:
for i in {1..5000} ; do echo "connect / as sysdba"; echo "create pluggable database pdb$i$RANDOM from pdb1 snapshot copy;" ; echo 'select max(con_id),count(*) from dba_pdbs;' ; echo "host df -h /u01 /u02" ; done | sqlplus / as sysdba
until it fails with:

SQL> create pluggable database pdb49613971 from pdb1 snapshot copy
*
ERROR at line 1:
ORA-65010: maximum number of pluggable databases created

Note that I use clonedb=true snapshot copy because I don’t want to fill up my filesystem:

SQL> show parameter clonedb
 
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
clonedb boolean TRUE
clonedb_dir string /u02/oradata/CDB1
 
SQL> host du -h /u02/oradata/CDB1/CDB1_bitmap.dbf
31M /u02/oradata/CDB1/CDB1_bitmap.dbf

As you see I’ve put the bitmap file outside of $ORACLE_HOME/dbs because in 12.2 we have a parameter for that. So many new features…
In addition to that I had to increase sga, processes and db_files.

Here I have my 4097 PDBs:

SQL> select max(con_id),count(*) from dba_pdbs;
 
MAX(CON_ID) COUNT(*)
----------- ----------
4098 4097

which includes PDB$SEED. This means 4098 containers inside of my CDB:

SQL> select max(con_id),count(*) from v$containers;
 
MAX(CON_ID) COUNT(*)
----------- ----------
4098 4098

SQL> set pagesize 1000 linesize 1000
SQL> select min(con_id),max(con_id),count(*),substr(listagg(name,',' on overflow truncate)within group(order by con_id),1,30) from v$containers;
 
MIN(CON_ID) MAX(CON_ID) COUNT(*) SUBSTR(LISTAGG(NAME,','ONOVERFLOWTRUNCATE)WITHINGROUP(ORDERBYCON_ID),1,30)
----------- ----------- ---------- ------------------------------------------------------------------------------------------------------------------------
1 4098 4098 CDB$ROOT,PDB$SEED,PDB1,PDB2105

So basically you can’t reach the MAX_PDBS default with user created PDBs.

But…

What is really cool with ‘cloud first’ is that we can all test it on the same platform and probably hit bugs that will be fixed before the on-premises version. This is a great way to ensure that the version is stable by the time we put production on it.

I have one PDB:

SQL*Plus: Release 12.2.0.1.0 Production on Thu Nov 10 12:12:25 2016
 
Copyright (c) 1982, 2016, Oracle. All rights reserved.
 
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
 
12:12:25 SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 MOUNTED

I drop it:

12:12:26 SQL> drop pluggable database pdb1 including datafiles;
Pluggable database dropped.

I set MAX_PDBS to one:

 
12:12:44 SQL> show parameter max_pdbs
 
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
max_pdbs integer 4098
 
12:13:24 SQL> alter system set max_pdbs=1 scope=memory;
System altered.
 
12:13:45 SQL> show parameter max_pdbs
 
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
max_pdbs integer 1

And then try to re-create my PDB:

12:13:54 SQL> create pluggable database PDB1 admin user pdbadmin identified by oracle;
create pluggable database PDB1 admin user pdbadmin identified by oracle
*
ERROR at line 1:
ORA-65010: maximum number of pluggable databases created

This is not what I expected. Let’s try to increase MAX_PDBS to two, even though I’m sure to have only one user PDB:


12:14:07 SQL> alter system set max_pdbs=2 scope=memory;
System altered.
 
12:14:18 SQL> create pluggable database PDB1 admin user pdbadmin identified by oracle;
Pluggable database created.

Ok. Let’s drop it and re-create it again:

12:15:20 SQL> drop pluggable database PDB1 including datafiles;
Pluggable database dropped.
 
12:16:02 SQL> create pluggable database PDB1 admin user pdbadmin identified by oracle;
create pluggable database PDB1 admin user pdbadmin identified by oracle
*
ERROR at line 1:
ORA-65010: maximum number of pluggable databases created

That’s bad. It seems that the previously dropped PDBs are still counted:

12:16:07 SQL> alter system set max_pdbs=3 scope=memory;
System altered.
 
12:16:17 SQL> create pluggable database PDB1 admin user pdbadmin identified by oracle;
Pluggable database created.
 
12:17:10 SQL> show pdbs;
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 MOUNTED
12:18:14 SQL> drop pluggable database PDB1 including datafiles;
Pluggable database dropped.
 
12:18:28 SQL> create pluggable database PDB1 admin user pdbadmin identified by oracle;
create pluggable database PDB1 admin user pdbadmin identified by oracle
*
ERROR at line 1:
ORA-65010: maximum number of pluggable databases created

Probably a small bug there. Some counters not reset maybe.

I’ve dropped one PDB from the CDB where I reached the limit of 4096:


SQL> select count(*) from dba_pdbs where con_id>2;
 
COUNT(*)
----------
4095

I can set MAX_PDBS to 4095 if I want to prevent creating a new one:

SQL> alter system set max_pdbs=4095;
System altered.

What if I want to set it lower than the number of PDBs I have? An error message would be nice, but probably not this one:

SQL> alter system set max_pdbs=4094;
alter system set max_pdbs=4094
*
ERROR at line 1:
ORA-32017: failure in updating SPFILE
ORA-65331: DDL on a data link table is outside an application action.

Anyway, now that MAX_PDBS is set to 4095 I can’t create another one:

SQL> create pluggable database PDB2 from PDB1;
create pluggable database PDB2 from PDB1
*
ERROR at line 1:
ORA-65010: maximum number of pluggable databases created

which is the goal of this parameter and confirms that it counts the user-created PDBs and not all the containers.

Here it seems that I can re-create my last PDB when I increase MAX_PDBS:

SQL> alter system set max_pdbs=4096;
System altered.
SQL> create pluggable database PDB2 from PDB1;
Pluggable database created.

By the way, here is how the multitenant feature usage is detected:

SQL> select name feature_name,version,detected_usages,aux_count
from dba_feature_usage_statistics
where name like '%Pluggable%' or name like '%Multitenant%';
 
FEATURE_NAME
--------------------------------------------------------------------------------
VERSION DETECTED_USAGES AUX_COUNT
----------------- --------------- ----------
Oracle Multitenant
12.2.0.1.0 3 4096

The detected usage just means that I’m in a CDB. The AUX_COUNT tells me if I require the multitenant option. But that’s for a future blog post.

 

The article Oracle 12cR2: MAX_PDBS first appeared on Blog dbi services.

Use of views to provide consistent extraction of data

Tom Kyte - Fri, 2016-11-11 00:26
Over the years, I have debated the proper use of views, and, almost without exception, developers are loath to use them - especially Java developers. The number one reason for their argument is performance. Two part question (Please forget about...
Categories: DBA Blogs

Database keys

Tom Kyte - Fri, 2016-11-11 00:26
Hi Tom, I am Oracle Beginner,I know integral constraints.So Please Explain me below concepts 1.What are candidate key,super key,Prime Attributes,non-prime attributes? 2.What is the difference between candidate and unique key? 3.how we can use ...
Categories: DBA Blogs

Auditing logons with V$SESSION.AUDSID in AWR

Tom Kyte - Fri, 2016-11-11 00:26
Hi Tom, I have got request from an audit company to monitor logon/off events. Of course in this case it would be really silly question and maybe it really is. The trick is that they want to also track program and module (v$session). This information...
Categories: DBA Blogs

Materialized view

Tom Kyte - Fri, 2016-11-11 00:26
Hi I am creating the Materialized view using COMPLETE REFRESH. how do I see the progress of the operation ? %age complete etc session longops is not showing as it depends on the operation type regards
Categories: DBA Blogs

Flashback Data Archive Limitation -- Please clarify

Tom Kyte - Fri, 2016-11-11 00:26
Hi Tom, I have observed a limitation with Flashback Data Archive. Once Flashback Data Archive is disabled for a table, all the information in History table until that point is lost. Once Flashback Data Archive is enabled again for the same table, ...
Categories: DBA Blogs

Oracle 12cR2: Pluggable database relocation

Yann Neuhaus - Thu, 2016-11-10 23:19

Here is, in my opinion, the most beautiful feature of the multitenant architecture. You know how I love Transportable Tablespaces. But here:

  • No need to put the source in read/only
  • No need to export/import the metadata logically
  • No need for any option: available even in Standard Edition

Standard Edition

I am in Standard Edition here in both source and target, no option required for this:

SQL*Plus: Release 12.2.0.1.0 Production on Thu Nov 10 13:40:05 2016
Copyright (c) 1982, 2016, Oracle. All rights reserved.
 
Connected to:
Oracle Database 12c Standard Edition Release 12.2.0.1.0 - 64bit Production

Source: PDB1 on CDB1

On server opc1 I have a container database CDB1 with one pluggable database PDB1 where I create a new table:

23:40:20 (opc1)CDB1 SQL>alter session set container=PDB1;
Session altered.
23:40:20 (opc1)CDB1 SQL>create table DEMO as select current_timestamp insert_timestamp,instance_name from v$instance;
Table created.
23:40:21 (opc1)CDB1 SQL>insert into DEMO select current_timestamp,instance_name from v$instance;
1 row created.
23:40:21 (opc1)CDB1 SQL>select * from DEMO;
 
INSERT_TIMESTAMP INSTANCE_NAME
----------------------------------- ----------------
10-NOV-16 11.40.20.902761 PM +00:00 CDB1
10-NOV-16 11.40.21.966815 PM +00:00 CDB1

Export encryption key

I’m in the Oracle Public Cloud, where tablespaces are encrypted. To ship a pluggable database, I must export the keys. Here is the query to get them:

23:40:23 (opc1)CDB1 SQL>select key_id from v$encryption_keys where creator_pdbname='PDB1';
 
KEY_ID
------------------------------------------------------------------------------
AWlnBaUXG0/gv4evS9Ywu8EAAAAAAAAAAAAAAAAAAAAAAAAAAAAA

And I can filter with this query to export it:

23:40:23 (opc1)CDB1 SQL>administer key management export encryption keys with secret "oracle" to '/tmp/cdb2pdb1.p12' identified by "Ach1z0#d" with identifier in (select key_id from v$encryption_keys where creator_pdbname='PDB1');
administer key management export encryption keys with secret "oracle" to '/tmp/cdb2pdb1.p12' identified by "Ach1z0#d" with identifier in (select key_id from v$encryption_keys where creator_pdbname='PDB1')
*
ERROR at line 1:
ORA-28417: password-based keystore is not open

I can’t do that with an auto-login wallet.

23:40:23 (opc1)CDB1 SQL>select wrl_type,wrl_parameter,wallet_type from v$encryption_wallet;
 
WRL_TYPE WRL_PARAMETER WALLET_TY
-------- -------------------------------------- ---------
FILE /u01/app/oracle/admin/CDB1/tde_wallet/ AUTOLOGIN

Let’s open the wallet with password:

23:40:23 (opc1)CDB1 SQL>administer key management set keystore close;
keystore altered.
23:40:23 (opc1)CDB1 SQL>administer key management set keystore open identified by "Ach1z0#d";
keystore altered.
23:40:23 (opc1)CDB1 SQL>select wrl_type,wrl_parameter,wallet_type from v$encryption_wallet;
 
WRL_TYPE WRL_PARAMETER WALLET_TY
-------- -------------------------------------- ---------
FILE /u01/app/oracle/admin/CDB1/tde_wallet/ PASSWORD

and re-try my export:

23:40:23 (opc1)CDB1 SQL>administer key management export encryption keys with secret "oracle" to '/tmp/cdb2pdb1.p12' identified by "Ach1z0#d" with identifier in (select key_id from v$encryption_keys where creator_pdbname='PDB1');
keystore altered.

This file must be copied to the destination server. I did it with scp. You could also use dbms_file_transfer, as you will need a database link anyway for the remote clone.
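
A sketch with dbms_file_transfer, run from the destination once the database link created below exists (it assumes a TMP_DIR directory object pointing to /tmp on both sides; also note that DBMS_FILE_TRANSFER requires the file size to be a multiple of 512 bytes, which may not be the case for a keystore export):

SQL> create directory TMP_DIR as '/tmp';
SQL> exec dbms_file_transfer.get_file('TMP_DIR','cdb2pdb1.p12','CDB1','TMP_DIR','cdb2pdb1.p12')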

Import encryption key

On the destination server, where I have no user PDB yet (I’m limited to one PDB here without the multitenant option):

23:40:31 (opc2)CDB2 SQL>show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO

I have to import the encryption key:

23:40:31 (opc2)CDB2 SQL>administer key management set keystore open identified by "Ach1z0#d";
keystore altered.
 
23:40:31 (opc2)CDB2 SQL>administer key management import encryption keys with secret "oracle" from '/tmp/cdb2pdb1.p12' identified by "Ach1z0#d";
keystore altered.

I’m now ready to relocate my PDB, as I’m sure I’ll be able to open it.

Database link

The remote clone is done through a DB link. I have a TNS entry named CDB1:

23:40:31 (opc2)CDB2 SQL>select dbms_tns.resolve_tnsname('CDB1') from dual;
 
DBMS_TNS.RESOLVE_TNSNAME('CDB1')
--------------------------------------------------------------------------------
(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=opc1)(PORT=1521))(CONNECT_DAT
A=(SERVER=DEDICATED)(SERVICE_NAME=CDB1.opcoct.oraclecloud.internal)(CID=(PROGRAM
=oracle)(HOST=SE222.compute-opcoct.oraclecloud.internal)(USER=oracle))))
 
23:40:31 (opc2)CDB2 SQL>create database link CDB1 connect to C##DBA identified by oracle using 'CDB1';
Database link created.

DML on source

In order to show that the source doesn’t have to be read-only as in the previous release, I’m running the following inserts every 5 seconds:

23:40:44 (opc1)CDB1 SQL>commit;
Commit complete.
23:40:44 (opc1)CDB1 SQL>insert into DEMO select current_timestamp,instance_name from v$instance;
1 row created.
23:40:44 (opc1)CDB1 SQL>select * from DEMO;
 
INSERT_TIMESTAMP INSTANCE_NAME
----------------------------------- ----------------
10-NOV-16 11.40.20.902761 PM +00:00 CDB1
10-NOV-16 11.40.21.966815 PM +00:00 CDB1
10-NOV-16 11.40.29.136529 PM +00:00 CDB1
10-NOV-16 11.40.34.214467 PM +00:00 CDB1
10-NOV-16 11.40.39.304515 PM +00:00 CDB1
10-NOV-16 11.40.44.376796 PM +00:00 CDB1
6 rows selected.

PDB remote clone

Here is the syntax.
I need to provide the master key of the source wallet.
RELOCATE is the new feature: the source PDB will be relocated to the destination when the clone is opened.

23:40:48 (opc2)CDB2 SQL>create pluggable database PDB1 from PDB1@CDB1 keystore identified by "Ach1z0#d" relocate;
Pluggable database created.
23:41:08 (opc2)CDB2 SQL>

It took some time, shipping the datafiles through the DB link, but this is online.
I was still inserting during this time:

23:41:04 (opc1)CDB1 SQL>select * from DEMO;
 
INSERT_TIMESTAMP INSTANCE_NAME
----------------------------------- ----------------
10-NOV-16 11.40.20.902761 PM +00:00 CDB1
10-NOV-16 11.40.21.966815 PM +00:00 CDB1
10-NOV-16 11.40.29.136529 PM +00:00 CDB1
10-NOV-16 11.40.34.214467 PM +00:00 CDB1
10-NOV-16 11.40.39.304515 PM +00:00 CDB1
10-NOV-16 11.40.44.376796 PM +00:00 CDB1
10-NOV-16 11.40.49.454661 PM +00:00 CDB1
10-NOV-16 11.40.54.532699 PM +00:00 CDB1
10-NOV-16 11.40.59.614745 PM +00:00 CDB1
10-NOV-16 11.41.04.692784 PM +00:00 CDB1
 
10 rows selected.

Note that you need to be in ARCHIVELOG mode and LOCAL UNDO to be able to do this, because synchronization is done by media recovery when we open the clone.
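
You can quickly check both prerequisites on the source:

SQL> select log_mode from v$database;
SQL> select property_value from database_properties where property_name='LOCAL_UNDO_ENABLED';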

Open the clone

Now, the theory is that when we open the clone, DML is quiesced on the source during the recovery of the target, and sessions can continue on the target once it is opened.

23:41:09 (opc2)CDB2 SQL>alter pluggable database PDB1 open;
alter pluggable database PDB1 open
*
ERROR at line 1:
ORA-00060: deadlock detected while waiting for resource
23:41:26 (opc2)CDB2 SQL>

Bad luck. Every time I tested this scenario, the first open after the relocate fails with a deadlock and the session on the source crashes:

23:41:09 (opc1)CDB1 SQL>select * from DEMO;
 
INSERT_TIMESTAMP INSTANCE_NAME
----------------------------------- ----------------
10-NOV-16 11.40.20.902761 PM +00:00 CDB1
10-NOV-16 11.40.21.966815 PM +00:00 CDB1
10-NOV-16 11.40.29.136529 PM +00:00 CDB1
10-NOV-16 11.40.34.214467 PM +00:00 CDB1
10-NOV-16 11.40.39.304515 PM +00:00 CDB1
10-NOV-16 11.40.44.376796 PM +00:00 CDB1
10-NOV-16 11.40.49.454661 PM +00:00 CDB1
10-NOV-16 11.40.54.532699 PM +00:00 CDB1
10-NOV-16 11.40.59.614745 PM +00:00 CDB1
10-NOV-16 11.41.04.692784 PM +00:00 CDB1
10-NOV-16 11.41.09.773300 PM +00:00 CDB1
 
11 rows selected.
 
23:41:14 (opc1)CDB1 SQL> commit;
ERROR:
ORA-03114: not connected to ORACLE

It’s a good occasion to look at the traces.
We can see some messages about the recovery:

*** 2016-11-10T23:41:12.660402+00:00 (PDB1(3))
Media Recovery Log /u03/app/oracle/fast_recovery_area/CDB1/foreign_archivelog/PDB1/2016_11_10/o1_mf_1_24_2025109931_.arc
Log read is SYNCHRONOUS though disk_asynch_io is enabled!

These FOREIGN ARCHIVED LOGs are a new type of file that you will see in the FRA in 12.2.

So I lost my session on the source, and now if I try again it works:

23:42:20 (opc2)CDB2 SQL>alter pluggable database PDB1 open;
Pluggable database altered.
23:42:24 (opc2)CDB2 SQL>select * from DEMO;
 
INSERT_TIMESTAMP INSTANCE_NAME
----------------------------------- ----------------
10-NOV-16 11.40.20.902761 PM +00:00 CDB1
10-NOV-16 11.40.21.966815 PM +00:00 CDB1
10-NOV-16 11.40.29.136529 PM +00:00 CDB1
10-NOV-16 11.40.34.214467 PM +00:00 CDB1
10-NOV-16 11.40.39.304515 PM +00:00 CDB1
10-NOV-16 11.40.44.376796 PM +00:00 CDB1
10-NOV-16 11.40.49.454661 PM +00:00 CDB1
10-NOV-16 11.40.54.532699 PM +00:00 CDB1
10-NOV-16 11.40.59.614745 PM +00:00 CDB1
10-NOV-16 11.41.04.692784 PM +00:00 CDB1
 
10 rows selected.

All the inserts that were committed on the source are there.
Even with this deadlock bug (SR 3-13618219421), it’s the easiest and fastest way to migrate a database, with minimal downtime. Especially in Standard Edition, where transportable tablespace import is not enabled.
Without the deadlock bug, the sessions on the source are supposed to keep running, only paused during the recovery, and then continue on the destination.

 

The article Oracle 12cR2: Pluggable database relocation first appeared on Blog dbi services.

Oracle CHARTOROWID Function with Examples

Complete IT Professional - Thu, 2016-11-10 12:11
In this article, I’ll explain what the Oracle CHARTOROWID function is and show you an example. Purpose of the Oracle CHARTOROWID Function The purpose of the CHARTOROWID function is to convert a value from several character data types into a ROWID data type. The supported data types that this function can convert are: CHAR […]
Categories: Development

What a Difference a Year Makes

Linda Fishman Hoyle - Thu, 2016-11-10 11:05

A Guest Post by Oracle's CSO and GVP Jon Chorley, SCM Product Strategy (pictured left)

A lot can happen in a 12-month span. And I’m not talking about the US presidential election, BREXIT, or even the “battle of the long-time losers” which was the 2016 Baseball World Series. I’m talking about Oracle Supply Chain Cloud.

We delivered SCM Cloud as promised

When we delivered Release 11 in early 2016, we crossed a real watershed. I’m sure many of the attendees at OpenWorld 2015 were skeptical about our plan to roll out the most comprehensive and innovative built-for-the-cloud SCM suite.

But we did it. With applications as diverse as Order Management, Product Development, Manufacturing, and Planning and Logistics, our Supply Chain Cloud was a real option for customers who wanted to modernize and digitize aspects of their supply chain. It also was very appealing to those who wanted a comprehensive cloud solution, including supply chain across their entire business.

The result? We now have more than 1,000 customers who have purchased and are implementing our Supply Chain Cloud. What’s more, it’s part of almost every customer’s current IT planning process.

So how did this change the vibe among SCM customers at OpenWorld this year?

Every conversation that we in Development had with customers, partners and analysts was not an “if” conversation, but a “when” or “how” conversation. The fear, confusion, and uncertainty from 2015 were replaced by a desire to learn, plan, and act in 2016. Even concerns over Apps Unlimited versus cloud had dissipated.

Customers recognized that Oracle had made an enormous investment in building an all-new SCM cloud solution that is ready when they are.

We are solidly in execution mode. We’re building a skilled and experienced ecosystem. We’re producing assets to accelerate and de-risk deployments. And as a result, we have assembled a community of happy and referenceable customers.

Customers got a preview of more innovation to come

Having crossed one watershed, we felt it was important to show that we’re on to the next with a steady stream of innovation. After all, that’s why people move to the cloud—they want an SCM application platform that innovates at the speed of their business.

We were pleased to demonstrate several planned new products targeted for Release 13, as well as aspects of the Adaptive Intelligent Supply Chain and Internet of Things.

I suppose that, having put our reputation on the line at OpenWorld in 2015, we did the same thing this year. But that’s just how we roll.

Why a Complete Cloud Suite Matters to Your Business

Linda Fishman Hoyle - Thu, 2016-11-10 10:37

A Guest Post by Oracle’s Lisa Schwartz, Cloud Business Group (pictured left)

We tell customers all the time that Oracle has the most complete, unified cloud suite in the industry. But do you know why having a complete suite is so important?

The simple answer is because your customers expect your business to act as one business, not separate departments with discrete jobs. Employees and partners expect a unified business too, and in some cases, regulatory environments require it as well.

Customer expectations are very high

In a 2016 Harvard Business Review article entitled “Focus on Keeping Up with Your Customers, Not Your Competitors,” the author says the greatest challenge businesses have today is keeping up with customer expectations, not with competitors. Think about it: we don’t compare the customer service of one company to that of its competitors, but to the best service we’ve received anywhere.

Silos can ruin the customer experience

If you’ve ever had an unsatisfactory customer experience, it may have been due to departmental silos and disconnected business processes. Here are a few examples of how that can play out:

Your Preferences: Let’s say you receive a marketing promotion from your communications provider addressed to MS. Jane Doe, but the monthly bill lists you as MR. Jane Doe. Not a big deal, you say? Studies show that churn rates increase when customers have the impression a company doesn’t know them even after many years of doing business together.

Your Time: If you’ve ever had a family member with a serious illness, you know how frustrating it can be when the insurance company delays reimbursement. It might take you multiple calls to get through to the right person, and you have to repeat the same information every time. During the next benefits open enrollment period, you might decide to look at other health insurance options.

Your Information: Or maybe your credit score goes down because a payment processor acquires your auto loan, but your account information isn’t transferred properly and your payments aren’t processed. Next time you’re in the market for a car, you might need to broaden your search based on financing terms.

Having multiple clouds from different providers is costly, whether in the form of integration or disgruntled customers. These are just a few examples as to why a complete and unified cloud suite matters. I’m sure you can think of many more.

ADF LOV Auto Suggest Functionality

Andrejus Baranovski - Thu, 2016-11-10 10:34
ADF Faces LOV components can easily be enabled with auto-suggest functionality. This is a really powerful feature and should save the user time when searching for required list values.

Auto suggest can be configured to be invoked only when a certain number of characters is entered. I have configured the Job ID list to trigger auto suggest when two or more characters are entered. No auto suggest with a single character:


With two or more characters entered, auto suggest displays the filtered LOV values. What is nice about it: it automatically shows all the attributes included in the LOV list:


In the background, ADF BC executes SQL with a bind variable coming from auto suggest:


The LOV is defined on the attribute in the standard way; nothing special here:


To enable auto-suggest behaviour, it is enough to drag and drop the ADF af:autoSuggestBehavior tag into the ADF Faces LOV component. Point the suggestItems property to the LOV binding’s suggestItems expression. The number of characters at which the auto-suggest search starts is configured through the minChars property, as in the sketch below:
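
For reference, a minimal sketch of the resulting page source (component id and binding names follow this example’s Job ID attribute and may differ in your page definition):

<af:inputComboboxListOfValues id="jobIdId" label="#{bindings.JobId.hints.label}"
                              value="#{bindings.JobId.inputValue}"
                              model="#{bindings.JobId.listOfValuesModel}">
  <af:autoSuggestBehavior suggestItems="#{bindings.JobId.suggestItems}"
                          minChars="2"/>
</af:inputComboboxListOfValues>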


Download sample application - ADFLovAutoSuggestApp.zip

Employee Enablement with Oracle PaaS for SaaS – Human Resources

WebCenter Team - Thu, 2016-11-10 09:05

Authored by: Mitchell Palski, Fusion Middleware Specialist

The employee experience of your organization is more transparent than ever. If Public Sector agencies are expected to compete with commercial employers, they have to make strong first impressions on their new hires. By improving the employee’s initial experience, your Human Resources department can have an immediate influence on employee productivity and retention.

What do employees care about?

I wrote a blog post back in April 2016 called “Recruit Millennials with Innovative Oracle Cloud Projects” highlighting the fact that the younger generations of the American workforce are attracted to innovative projects that give them the chance to get visibility for their achievements. In a 2015 NY Times article, Swarthmore Professor Barry Schwartz writes:

“But most important, we need to emphasize the ways in which an employee’s work makes other people’s lives at least a little bit better (and, of course, to make sure that it actually does make people’s lives a little bit better).”

More so than in commercial industries, the social impact of a Government job has to be evident and visible in the outcomes of everyday responsibility. The perennial value of our work is that it is supposed to be a dedication to the citizen. The more often your employees are reminded of the imperishable effects they are having on your organization’s beneficiaries, the more motivated they will be to succeed.

There is an important accompaniment to “visibility”, which is of course recognition and reward. Tony Schwartz, Chief Executive of The Energy Project, worked with The Harvard Business Review to conduct a study of more than 20,000 employees across the globe to inspect which components contribute to a retentive workplace (among other things):

“Feeling treated with respect made employees feel 55 percent more engaged and 110 percent more likely to stay at the company. Employees who felt the most recognized and appreciated were 100 percent more likely to stay with their organizations.”

Think about how you manage your employees today – as “Human Capital”. Sounds more like an allocation of your investment portfolio than an actual person, doesn’t it? Well, it goes two ways. You are invested in your employees and they are invested in you. In this article, I hope to expose some technology-driven strategies to strengthen and lengthen these integral connections.

What can we do?

Natively delivered out-of-the-box, Oracle Human Capital Management (HCM) Cloud Service delivers HR functionality such as:
  • Prospective talent management
  • Workforce management
  • Workforce compensation and payroll
  • Time and Labor, absence management
  • Employee benefits and wellness
  • Social media monitoring
  • Real-time and predictive analytics
All of these capabilities are critical to your business, but software has an inborn influence on how employees are viewed. To effectively manage our staff, we use Employee ID numbers; we refer to employees as “resources”; we quantify, track and analyze performance. My intent isn’t to discount the importance of these practices, but rather to supplement them with intangibles:
  • Formalize innovation project review processes
  • Provide customized training paths
  • Establish leadership development plans
  • Start an Employee of the Month program
  • Encourage social collaboration

Oracle HCM is a wonderful foundation for managing the business of managing your people and has a wealth of data that can be extremely helpful in structuring innovative programs for employees. 

How do we do it?

Oracle’s Platform as a Service (PaaS) is a comprehensive, integrated portfolio of platform services that allow your organization to innovate new services faster and more productively at a lower cost. Using the Oracle PaaS services, technical users can create extensions of Oracle HCM.

HCM1 exposes business objects and processes to other applications through the use of open, standards-based technologies, including:
  • Extensible Markup Language (XML)
  • Simple Object Access Protocol (SOAP)
  • Business Process Execution Language (BPEL)
  • Web Services Description Language (WSDL)
  • XML schema definitions (XSD)

Take a look through the documentation and you will see how Oracle’s Fusion Applications are built with the purpose of being extended into your greater Enterprise. Oracle PaaS is making the development of application extensions increasingly easier, so it’s actually more important to understand the functional components of HCM than to have a highly technical staff. Platform tools allow your organization to focus on the processes, rules, and data model that you’ve constructed in your HCM system, rather than the technical underpinnings that will ultimately be relied on by your new services.

When it’s time to get technical and start building add-ons to your HCM, you can start by referencing the REST API that is available in the Fusion Application Documentation. The documentation even includes sample payloads to help speed up your development. From this point, you have the flexibility to either:
A) Leverage that REST API or
B) Transform that REST API into a SOAP service to be consumed by other applications
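
For example, a first exploratory call to the employees REST resource could look like the sketch below (hypothetical: host, credentials, and the resource name and version should be checked against your instance’s REST API documentation):

curl -u "username:password" \
  "https://your-hcm-host.oraclecloud.com/hcmRestApi/resources/latest/emps?onlyData=true&limit=10"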

In cases like these, you can use the REST API to start developing right away:

  • Use Oracle Java Cloud Service (JCS) to build office-specific “microsites” that help deliver your users’ need-to-know updates and information
  • Connect to Mobile Cloud Service (MCS) to build a mobile app for traveling employees to submit Travel requests and Expense Reports
  • Leverage Process Cloud Service (PCS) for a custom application that handles HR policy management and approvals
  • Oracle Document Cloud Service (ODCS) includes Sites Cloud Service (SCS) for document collaboration workspaces
I don’t have the word count to go into all of the possibilities that Oracle PaaS can afford your organization, but I hope you walk away from this article with a kernel of knowledge. You are not limited to what comes out-of-the-box with any Oracle Fusion Application, including Oracle’s HCM Cloud. Your organization can enable seamless content delivery, social collaboration, and personalized LOB applications, without ever customizing your Oracle SaaS application.
Remember, there is a purpose to all of this. HCM tools are perfect for managing your personnel and resources but you manage people. Inculcate your people with the virtues of your public service and build an interface that highlights their alignment to those virtues. Oracle Cloud Services will allow you to easily and inexpensively build the interfaces to do exactly that.

1Access Oracle Enterprise Repository for Oracle Fusion Applications to find detailed information about integration assets, such as web services.

Shawnee State University Chooses Complete Oracle Cloud Suite to Improve Recruitment and Operations

Oracle Press Releases - Thu, 2016-11-10 07:00
Press Release
Shawnee State University Chooses Complete Oracle Cloud Suite to Improve Recruitment and Operations First state university in Ohio selects Oracle Cloud solutions to improve retention and streamline business practices

Redwood Shores, Calif.—Nov 10, 2016

Oracle announced today that Shawnee State University is implementing the full Oracle Cloud Suite, including Oracle’s Enterprise Resource Planning (ERP), Enterprise Performance Management (EPM), and Human Capital Management (HCM) Cloud, in addition to Student Cloud CX. The university also anticipates moving to the Student Management Cloud as part of its efforts to prepare students for future success using the latest technology. This adoption follows a recent mandate from the Ohio Governor to lower the cost of education, which prompted the university to search for the best technology to modernize and optimize internal operations.

With 3,800 students and 700 employees, Shawnee State seeks to be a leader in paving the way for universities of all sizes to transition to cloud services. Its previous IT department was too small to handle the level of its constituents’ needs. However, through the implementation of Oracle’s Cloud solution, the university will be utilizing a single cloud solution that helps manage recruitment processes to seek out best-fit students; this also enables it to focus on serving its constituents in a timelier manner.

While previously using an on-premise service, ensuring that recruiters reached the right students, and that those prospects ultimately chose to attend the university, was a big problem that needed to be solved. Shawnee State was seeking a service that focused on the entire student lifecycle from student success to finances – all while providing a single cloud experience from a global perspective.

“We believe that implementing Oracle’s Cloud solutions will help us meet our priority of being a student-focused university, by leveraging the latest technology all under a single unified cloud experience,” said Elinda Boyles, vice president for finance and administration at Shawnee State University. “We are fortunate that Shawnee State University President, Rick Kurtz, has forward-thinking vision and commitment to the university’s long-term success. This effort will improve operational efficiencies and better align our IT resources,” she said.

Because Oracle seeks to ensure its customers have a seamless transition to the cloud, Shawnee State was able to easily integrate the modern technology with legacy systems. Shawnee State is live with Oracle’s Student Cloud CX and is implementing the remainder of the suite now, with an expected go-live of June 30, 2017.

“Oracle’s commitment to Higher Education and proven expertise in the Cloud provide Shawnee State University with a modern, secure and scalable platform for growth,” said Rondy Ng, Oracle’s Senior Vice President, Applications Development. “Our next-generation cloud platform for Higher Education incorporates embedded analytics, an intuitive user experience, and built-in collaboration functionality to deliver comprehensive support for the business and student lifecycle with a focus on putting the student first.”

Oracle delivers the industry’s broadest suite of enterprise-grade Cloud services, including Software as a Service (SaaS), Platform as a Service (PaaS), Infrastructure as a Service (IaaS), and Data as a Service (DaaS).

Contact Info
Jennifer Yamamoto
Oracle
+1.916.761.9555
jennifer.yamamoto@oracle.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

About Shawnee State University

Shawnee State is a student-focused public university offering a highly personalized, affordable, and accessible education dedicated to the exploration of emerging technologies and emerging ideas. Founded in 1986, and located along the Ohio River in historic Portsmouth, Ohio, the university offers a range of signature programs, including Game Design, which has been nationally ranked for six years in a row.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Jennifer Yamamoto

  • +1.916.761.9555

Trigger not Dropped from Recyclebin

Tom Kyte - Thu, 2016-11-10 06:06
Hi, On one of our 11g databases we are not able to purge an object from the recyclebin: sys@DEVDB11:SQL> select count(*) from dba_recyclebin; COUNT(*) -------------- 1320 sys@DEVDB11:SQL> purge dba_recyclebin; DBA Rec...
Categories: DBA Blogs

Using with clause and double function in where clause causes non joined rows to be processed

Tom Kyte - Thu, 2016-11-10 06:06
Hi, I seem to have stumbled upon Oracle behaviour I can not explain, so I've come to the magical place called Ask Tom for guidance. I have created the following example to make it as short as possible while still reproducing my problem: 2 tabl...
Categories: DBA Blogs

Partition an Exisiting Table

Tom Kyte - Thu, 2016-11-10 06:06
I am trying to partition an existing table (Range partition) by a date column (BEGTIME). Currently the table is 7.5 GB in size. Our business moves monthly and wanted to partition into months. Also table holds data pertaining to forecasts up to Dec 21...
Categories: DBA Blogs
