Feed aggregator

Oracle Code Call For Papers is Now Open!

OTN TechBlog - Tue, 2016-12-06 16:20

Coming to 20 cities globally, Oracle Code is an event for developers building modern Web, mobile, enterprise and cloud-native applications.  The Oracle Code Call for Papers (CFP) is now open for the cities and dates listed on the CFP site.

Oracle Code will focus on the latest software developer technologies, practices and trends, including:

  • Containers, Microservices/APIs, & DevOps
  • Databases
  • Open Source Technologies
  • Development Tools & Low Code Platforms
  • Machine Learning, Chatbots & AI

Submit your session idea today! 

UKOUG 2016 – Third day

Yann Neuhaus - Tue, 2016-12-06 11:41

Birmingham

Today, it’s the third day in Birmingham for the UKOUG Tech16 event. We had a good time yesterday evening in English pubs.

Today I attended several sessions. The ones I was most interested in were the "Application Express 5.1 New Features" part 1 and part 2.

The 1st session was presented by David Peake from Oracle. He walked through the main new features coming with APEX 5.1 and demonstrated the productivity improvements for developers. In APEX 5.1, we will be able to arrange the Page Designer as we want, customising the tab order and displaying the Page Designer in different pane layouts and page renderings. He also presented the Interactive Grid and quickly created a master-detail-detail-detail view. The number of detail levels is unlimited, but he strongly advised keeping it to a minimum.

The 2nd session, APEX 5.1 part 2, was presented by Patrick Wolf. He concentrated on the improvements made to the Universal Theme, which was already rolled out with APEX 5.0 and continues to be enhanced in APEX 5.1. In my opinion, the important information is the upgrade of the Universal Theme for any existing 5.0 application: you will have to refresh the theme in order to use the improvements made in APEX 5.1. This is done by going to the theme section of the Shared Components and clicking "Refresh Theme", which upgrades the existing UT with the 5.1 capabilities. There are lots of new capabilities, and I will wait until the final release date to run some tests on my side.

Another session I followed was "APEX, Meet the Rest of the Web – Modern Web Technologies in Your APEX Application". It was good to learn how to create a Google-like search with auto-complete based on the value entered in the search field. The presenter also showed us how to quickly integrate Google Charts using the APEX_JSON package, how to integrate Google Maps, and how to call the Facebook and Twitter APIs from our applications in order to follow tweets and so on. It is quite easy to integrate modern web technologies into any APEX application.

See you tomorrow for the last day in Birmingham.

 

This article UKOUG 2016 – Third day appeared first on Blog dbi services.

UKOUG 2016 DAY 3

Yann Neuhaus - Tue, 2016-12-06 11:25


Today at UKOUG 2016, the Cloud has won against the sun :=)

The first session I attended this morning was given by Kamil Stawiarski from the ORA 600 company: "Securing the Database Against Unauthorized Attacks" – but the real title was "Oracle Hacking Session".

The session was amazing, as usual with Kamil: no slides, only technical demos :=))

He first showed us that after creating a standard user in an Oracle database with the classic privileges (CONNECT, RESOURCE and CREATE ANY INDEX), and using a simple function he created, that standard user could end up with the DBA role.
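For context only, here is a minimal sketch (with purely hypothetical names) of the kind of low-privileged account that served as the starting point of the demo – the escalation trick itself is deliberately not reproduced here:

CREATE USER demo_user IDENTIFIED BY demo_pwd;
GRANT CONNECT, RESOURCE, CREATE ANY INDEX TO demo_user;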

The second demonstration was about Dirty COW (a Linux kernel vulnerability that allows unprivileged local code to escalate to root). He showed us how easy it is to end up connected as root under Linux.

In the last demo he showed us how it is possible to read the data of a particular table directly from the data file, using only one of his C programs and the data_object_id of the table.

He finished his session by asking why so much money is wasted on protecting data, and whether it would not be more intelligent to spend less money and simply write correct applications with correct privileges.

The second session was more corporate: "Oracle Database 12cR2, the Overview" by Dominic Giles from Oracle. He told us about Oracle 12cR2 in the cloud – what is available now (Exadata Express Cloud Service and Database Cloud Service) and what is coming soon (Exadata Cloud Machine).

Then he talked about the new features of Oracle database 12cR2:

Performance: the main idea for 12cR2 is "go faster". He gave us some examples, such as a higher compression rate for indexes (subject to a licensing option, of course), which can result in I/O improvements and significant space savings.
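As a hedged illustration of the syntax involved (hypothetical index name, and assuming the Advanced Compression option is licensed), the new "high" level can be requested like this:

CREATE INDEX emp_name_ix ON emp (last_name) COMPRESS ADVANCED HIGH;
-- or for an existing index
ALTER INDEX emp_name_ix REBUILD COMPRESS ADVANCED HIGH;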

Security: Oracle 12cR2 introduces online encryption of existing data files. There is also the possibility of fully encrypting internal database structures such as SYSTEM, SYSAUX or UNDO, and a Database Vault simulation mode that lets you define and test security protection profiles throughout the application lifecycle.
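A hedged sketch of the online encryption syntax (tablespace and file names are hypothetical, and a configured TDE keystore is assumed):

ALTER TABLESPACE users ENCRYPTION ONLINE USING 'AES256' ENCRYPT
  FILE_NAME_CONVERT = ('users01.dbf', 'users01_enc.dbf');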

Developers: AL32UTF8 is now the default character set for new databases, and object names for tables or columns can now be up to 128 bytes long.

Manageability: the maximum number of PDBs per container increases from 252 to 4096, and PDBs are optimized for RAC. Interestingly, it will be possible to perform PDB hot clones, PDB refreshes and PDB relocations without downtime.
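To give an idea of what this looks like, here is a hedged sketch (hypothetical PDB names and database link, with the source PDB assumed to use local undo and ARCHIVELOG mode):

CREATE PLUGGABLE DATABASE pdb1_clone FROM pdb1@cdb1_link;                                 -- hot clone
CREATE PLUGGABLE DATABASE pdb1_report FROM pdb1@cdb1_link REFRESH MODE EVERY 60 MINUTES;  -- refreshable clone
CREATE PLUGGABLE DATABASE pdb1 FROM pdb1@cdb1_link RELOCATE;                              -- online relocation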

Availability: a lot of improvements for RAC – RAC reader nodes, ASM flex disk groups, and the Autonomous Health Framework (which identifies issues and notifies with corrective actions). For Active Data Guard, diagnostic tuning and the SQL plan advisor will be available on the standby side, there is no user disconnection on failover, and high-speed block comparison between the primary and standby database is possible. Finally, there will be the possibility to use SSL redo transport to be more secure.

Finally, I attended the last session of the day, and one of the liveliest, thanks to the speaker's talent and of course the subject: "Upgrade to the Next Generation of Oracle Database: Live and Uncensored!"

He told us about the different ways to upgrade to 12.1.0.2 or 12.2, covering subjects like extended support, direct upgrade and the DBUA.

A new upgrade script is available: preupgrade.jar runs checks in the source environment, generates detailed recommendations, generates fixup scripts and, last but not least, is rerunnable :=))

He showed us that the upgrade process is faster and incurs less downtime, and that the database upgrade can be run in parallel (by using catctl.pl with the -n 8 option, for example). It handles both non-CDBs and CDBs. During his upgrade from 11.2.0.4 to 12.1.0.2 he interrupted the process by typing CTRL-C … and then proved that the upgrade is rerunnable by running catctl.pl with the -R option :=)
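As a rough sketch of the commands he described (the 12.2 home path below is hypothetical):

NEW_HOME=/u01/app/oracle/product/12.2.0.1/dbhome_1
$NEW_HOME/jdk/bin/java -jar $NEW_HOME/rdbms/admin/preupgrade.jar TERMINAL TEXT   # checks, recommendations, fixup scripts
cd $NEW_HOME/rdbms/admin
$NEW_HOME/perl/bin/perl catctl.pl -n 8 catupgrd.sql                              # parallel upgrade, 8 processes
$NEW_HOME/perl/bin/perl catctl.pl -R catupgrd.sql                                # resume an interrupted upgrade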

He is not a great fan of the DBUA, for multiple reasons: for him it is hard to debug, the parallel degree defaults to cpu_count, the progress bar is unpredictable and you sometimes have to wait a long time without knowing what is happening in the source database, and you have to be careful with datapatch in the 12.1 version. For me the only advantage is the automatic time zone upgrade when using the DBUA.

Well, this was another exciting day at UKOUG 2016. Tomorrow is the last day, with other interesting sessions and an OEM round table :=)

 

This article UKOUG 2016 DAY 3 appeared first on Blog dbi services.

From MySQL (Oracle) to Postgres using the EDB Migration Toolkit

Yann Neuhaus - Tue, 2016-12-06 11:24

Why should you migrate?
If your current MySQL database does not offer some functionalities your business needs, such as:
– more security
– more high availability options (hot standby)
– strong data warehouse capabilities
If you want to consolidate the number of different instances (Postgres, MySQL, MS-SQL,…)
If you want to reduce administrative costs by using fewer database platforms
Which tool should you use?
The Migration Toolkit command-line tool from EnterpriseDB, which can be found below:
http://www.enterprisedb.com/products-services-training/products-overview/postgres-plus-solution-pack/migration-toolkit
Why? It is really easy to use.
Which MySQL Objects are supported for the migration?
– Schemas
– Tables
– Constraints
– Indexes
– Table Data
What about partitioned tables?
You have to remove the partitioning before the migration:
mysql> ALTER TABLE Table_name REMOVE PARTITIONING;
My environment:
MySQL: 5.7.14 on Oracle Linux Server 7.1
PostgreSQL: 9.6.1.4 on Oracle Linux Server 7.1
What are the prerequisites?
– download the migration toolkit from EnterpriseDB
Note that it can be only installed by registered users but the registration is free and can be done directly on the EnterpriseDB website.
– Install it and follow the instructions
./edb-migrationtoolkit-50.0.1-2-linux-x64.run
– download the MySQL JDBC driver: mysql-connector-java-5.1.40-bin.jar
http://www.enterprisedb.com/downloads/third-party-jdbc-drivers
– Install the driver by moving it to the right directory:
/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.111-2.b15.el7_3.x86_64/jre/lib/ext
– To facilitate the migration, you have to prepare the configuration file toolkit.properties, located in your installation directory.
The most important thing is to assign the right JDBC URL to the SRC_DB_URL parameter:
SRC_DB_URL=jdbc:mysql://hostname[:port]/database
Here is the content of the config file:
SRC_DB_URL=jdbc:mysql://192.168.56.200:33001/employees
SRC_DB_USER=root
SRC_DB_PASSWORD=manager
TARGET_DB_URL=jdbc:edb://192.168.56.200:5433/employees # the database must be created in Postgres before
TARGET_DB_USER=postgres
TARGET_DB_PASSWORD=manager

In case you get MySQL connection problems (SSL), modify the parameter SRC_DB_URL
SRC_DB_URL=jdbc:mysql://192.168.56.200:33001/employees?autoReconnect=true&useSSL=false
This will disable SSL and also suppress SSL errors.
Before starting the Migration, it is mandatory to create a blank target database in the Postgres instance
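A quick sketch of that step, using psql and the same connection details as in the toolkit.properties above:

psql -U postgres -p 5433 -d postgres -c "CREATE DATABASE employees;"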
Which options for the migration?
-sourcedbtype is mysql
-targetdbtype is enterprisedb
-fetchSize is 1, to avoid an ‘out of heap space’ error and to force the toolkit to load data one row at a time
How to start the migration?
[root@pg_essentials_p1 mtk]# bin/runMTK.sh -sourcedbtype mysql -targetdbtype enterprisedb -fetchSize 1 employees
Running EnterpriseDB Migration Toolkit (Build 50.0.1) ...
Source database connectivity info...
conn =jdbc:mysql://192.168.56.200:33001/employees?autoReconnect=true&useSSL=false
user =root
password=******
Target database connectivity info...
conn =jdbc:edb://192.168.56.200:5433/employees
user =postgres
password=******
Connecting with source MySQL database server...
Connected to MySQL, version '5.7.14-enterprise-commercial-advanced-log'
Connecting with target EDB Postgres database server...
Connected to EnterpriseDB, version '9.6.1.4'
Importing mysql schema employees...
Creating Schema...employees
Creating Tables...
Creating Table: departments
..........................
Created 6 tables.
Loading Table Data in 8 MB batches...
Loading Table: departments ...
[departments] Migrated 9 rows.
..............................
Loading Table: salaries ...
[salaries] Migrated 246480 rows.
................................
[salaries] Migrated 2844047 rows.
[salaries] Table Data Load Summary: Total Time(s): 20.143 Total Rows: 2844047 Total Size(MB): 94.1943359375
Loading Table: titles ...
[titles] Migrated 211577 rows.
[titles] Migrated 419928 rows.
[titles] Migrated 443308 rows.
[titles] Table Data Load Summary: Total Time(s): 3.898 Total Rows: 443308 Total Size(MB): 16.8955078125
Data Load Summary: Total Time (sec): 33.393 Total Rows: 3919015 Total Size(MB): 138.165
Creating Constraint: PRIMARY
Creating Constraint: dept_name
................................
Creating Index: dept_no1
Schema employees imported successfully.
Migration process completed successfully.
Migration logs have been saved to /root/.enterprisedb/migration-toolkit/logs
******************** Migration Summary ********************
Tables: 6 out of 6
Constraints: 11 out of 11
Indexes: 2 out of 2
Total objects: 19
Successful count: 19
Failed count: 0
Invalid count: 0
************************************************************

So as you can see, this migration process is really easy, and you can immediately take advantage of all the standard Postgres features.

 

This article From MySQL (Oracle) to Postgres using the EDB Migration Toolkit appeared first on Blog dbi services.

Enabling the Mobile Workforce with Cloud Content and Experience - Part 4

WebCenter Team - Tue, 2016-12-06 10:21
Author: Mark Paterson, Director, Oracle Documents Cloud Service Product Management

Continuing my series of quick tips on how to use key features of the Oracle Documents Cloud Service mobile app to drive effective mobile collaboration (part 1, part 2 and part 3 available), I’d like to introduce you to a brand new feature that helps bridge the gap between Oracle Documents Cloud Service (our content and social collaboration platform) and Oracle Sites Cloud Service (our visual site and community builder solution).

Having trouble remembering where the latest product microsites the marketing team deployed are? Let me guess, you forgot to bookmark them. Well, now there is no need to remember. The Oracle Documents Cloud Service mobile app has been enhanced to make it easy to access sites deployed using Oracle Sites Cloud Service. Now you can access all your documents, conversations and sites from the same mobile app.

  1. Install the Oracle Documents app on your iPhone, iPad, or Android device, log on to Oracle Documents Cloud Service, and you’re ready to go. Use it anywhere. It’s designed to be familiar, swiping to navigate and tapping to open folders and files. The app guides you through what to do.

  2. Sites are now only a couple of taps away on your mobile device. Open the navigation bar and tap on the new “My Sites” link at the bottom.

  3. All your sites can be accessed from this handy list.


It isn’t just about ease of access though. We’ve also optimized the browsing experience so that you can take advantage of features in the Oracle Documents mobile app directly from your sites. For example, if you tap on the link to a file, you can preview it directly within the mobile app, which makes it easy not only to share the link to the file with others but also to open it and make edits. Direct download links like the “File Download” icon will give you the option to save the file directly into one of your own Oracle Documents folders for easy anytime, anywhere, any-device access.

Check out our latest how-to video to see it in action:

You can always find the Oracle Documents Cloud Service mobile app in both the App Store and the Google Play store.


Short and sweet this time round! Next time we’ll explore changes we’ve made that make it super simple to message your colleagues using Oracle Documents.

Don’t have Oracle Documents Cloud Service yet? You can start a free trial immediately. Visit cloud.oracle.com/documents to get started on your free trial today.

Previous posts in this series:

Enabling the Mobile Workforce with Cloud Content and Experience - Part 1 (Mobile editing files)

Enabling the Mobile Workforce with Cloud Content and Experience - Part 2 (Simplifying mobile file uploads)

Enabling the Mobile Workforce with Cloud Content and Experience - Part 3 (Reviewing and approving files on-the-go)



12.2 New Features -- 5 : Memory Parameters for Pluggable Database

Hemant K Chitale - Tue, 2016-12-06 08:07
12.2 allows Instance Memory parameters to be configured at the PDB level.

[oracle@HKCORCL ~]$ sqlplus '/ as sysdba'

SQL*Plus: Release 12.2.0.1.0 Production on Tue Dec 6 13:56:28 2016

Copyright (c) 1982, 2016, Oracle. All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> show parameter sga

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
allow_group_access_to_sga boolean FALSE
lock_sga boolean FALSE
pre_page_sga boolean TRUE
sga_max_size big integer 2544M
sga_min_size big integer 0
sga_target big integer 2544M
unified_audit_sga_queue_size integer 1048576
SQL> show parameter db_cach

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_cache_advice string ON
db_cache_size big integer 0
SQL>


Those are parameters set at the CDB level. Let's see the PDB.

SQL> alter session set container = PDB1;

Session altered.

SQL> show parameter sga

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
allow_group_access_to_sga boolean FALSE
lock_sga boolean FALSE
pre_page_sga boolean TRUE
sga_max_size big integer 2544M
sga_min_size big integer 0
sga_target big integer 0
unified_audit_sga_queue_size integer 1048576
SQL> show parameter db_cache

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_cache_advice string ON
db_cache_size big integer 0
SQL> alter system set db_cache_size=400M;

System altered.

SQL>
SQL> alter system set sga_target=512M;
alter system set sga_target=512M
*
ERROR at line 1:
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-56750: invalid value 536870912 for parameter sga_target; must be larger
than 200% of parameter db_cache_size


SQL> alter system set sga_target=810M;

System altered.

SQL> alter system set shared_pool_size=256M;

System altered.

SQL> show parameter db_cache

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_cache_advice string ON
db_cache_size big integer 400M
SQL> show parameter sga_target

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
sga_target big integer 810M
SQL> show parameter shared_pool

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
shared_pool_reserved_size big integer 26004684
shared_pool_size big integer 256M
SQL>
SQL> alter system set pga_aggregate_target=128M;

System altered.

SQL>


Returning to the CDB ...

SQL> connect / as sysdba
Connected.
SQL> show parameter db_cache

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_cache_advice string ON
db_cache_size big integer 0
SQL> show parameter sga_target

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
sga_target big integer 2544M
SQL> show parameter shared_pool

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
shared_pool_reserved_size big integer 26004684
shared_pool_size big integer 0
SQL> show parameter pga_aggergate_target
SQL> show parameter pga_aggregate_target

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
pga_aggregate_target big integer 1775294400
SQL>


Thus, multiple PDBs can have their own private targets and limits (even an SGA_MIN_SIZE), all within the single instance in which they co-exist.
Note : The requirement is that MEMORY_TARGET is not set.
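For instance, a minimal sketch of guaranteeing memory to one PDB (the value is purely illustrative and subject to the usual limits against the CDB-level SGA_TARGET):

SQL> alter session set container = PDB1;
SQL> alter system set sga_min_size = 256M;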
.
.
.

Categories: DBA Blogs

Gluent Podcast with Mark Rittman

Tanel Poder - Tue, 2016-12-06 07:11

Mark Rittman has been publishing his podcast series (Drill to Detail) for a while now, and I sat down with him at the UKOUG Tech 2016 conference to discuss Gluent and its place in the new world.

This podcast episode is about 49 minutes and it explains the reasons why I decided to go on to build Gluent a couple of years ago and where I see the enterprise data world going in the future.

It’s worth listening to if you are interested in what we are up to at Gluent and want to hear Mark’s excellent comments about what he sees going on in the modern enterprise world too!

 

NB! If you want to move to the "New World" - offload your data and workloads to Hadoop, without having to re-write your existing applications - check out Gluent. We are making history! ;-)

Flashback Archive Internal History table is not compressed -- Kindly suggest

Tom Kyte - Tue, 2016-12-06 05:06
Hi Tom, Once flashback archive is enabled for a table, a corresponding history table will be created by oracle internally. It is automatically partitioned and compressed as well. But when I have enabled FBA for a table, the history table is partit...
Categories: DBA Blogs

Column with default value

Tom Kyte - Tue, 2016-12-06 05:06
I have a table in which one of the column is as below create_date TIMESTAMP(6) default SYSDATE, When the record is inserted from the GUI(URL) , it doesn't insert any value for create_Date .. But when inserted from other than URL i...
Categories: DBA Blogs

Redo Log 4K Blocksize

Tom Kyte - Tue, 2016-12-06 05:06
Good Evening, In 11g, I've read about the possibility of setting redo logs to have a blocksize of 4k. Supposedly, the blocksize is automatically set based on the block sector of the disk. Supposedly, high capacity disks have block sectors of 4k....
Categories: DBA Blogs

SQL profile is usable?

Tom Kyte - Tue, 2016-12-06 05:06
Hi,guy! I have some question about the SQL PROFILE, when use SQL PROFILE to bind the SQL,it performanced good,but as time goes by,the data in the table will grow rapidly and the SQL which use SQL PROFILE performanced bad. So what's the meaning of th...
Categories: DBA Blogs

db2 Query in Unix: load from /dev/null of del replace into Schema.Tablename nonrecoverable;

Tom Kyte - Tue, 2016-12-06 05:06
Hi Tom, In Unix db2 I am using the below query to clear the table. db2 Query in Unix: load from /dev/null of del replace into Schema.Tablename nonrecoverable; What would be the best approach to do the same thing in Oracle. I don't thin...
Categories: DBA Blogs

How mandatory is to use DBMS_AUDIT_MGMT

Tom Kyte - Tue, 2016-12-06 05:06
Hello everyone, One question please, how mandatory is to use the package DBMS_AUDIT_MGMT for Oracle. a)There is a NOT ADVISABLE suggestion from Oracle to work in aud$ table directly, to force to use ALWAYS the package. b) is simple we ADVISE ...
Categories: DBA Blogs

Latch Free

Tom Kyte - Tue, 2016-12-06 05:06
Hi Connor, I have no intention of complaining. But all over the web I find lot of discussion about latch , latch spin, and latch sleep. And the description goes like below. 1] Try to acquire a latch 2] Failed ! Try again after sometime 3] Ret...
Categories: DBA Blogs

How to reduce Buffer Busy Waits with Hash Partitioned Tables in #Oracle

The Oracle Instructor - Tue, 2016-12-06 04:57


Large OLTP sites may suffer from Buffer Busy Waits. Hash Partitioning is one way to reduce them, on both indexes and tables. My last post demonstrated that for indexes; now let’s see how it looks with tables. Initially there is a normal table that is not yet hash partitioned. If many sessions insert into it simultaneously, the problem shows up:

Contention with a heap table

The last extent becomes a hot spot; all inserts go there and only a limited number of blocks is available. Therefore we will see Buffer Busy Waits. The playground:

SQL> create table t (id number, sometext varchar2(50));

Table created.

create sequence id_seq;

Sequence created.

create or replace procedure manyinserts as
begin
 for i in 1..10000 loop
  insert into t values (id_seq.nextval, 'DOES THIS CAUSE BUFFER BUSY WAITS?');
 end loop;
 commit;
end;
/

Procedure created.

create or replace procedure manysessions as
v_jobno number:=0;
begin
FOR i in 1..100 LOOP
 dbms_job.submit(v_jobno,'manyinserts;', sysdate);
END LOOP;
commit;
end;
/

Procedure created.

The procedure manysessions is how I simulate OLTP end-user activity on my demo system. Calling it leads to 100 job sessions, each doing 10,000 inserts:

SQL> exec manysessions

PL/SQL procedure successfully completed.

SQL> select count(*) from t;

  COUNT(*)
----------
   1000000

SQL> select object_name,subobject_name,value from v$segment_statistics 
     where owner='ADAM' 
     and statistic_name='buffer busy waits'
     and object_name = 'T';

OBJECT_NAM SUBOBJECT_	   VALUE
---------- ---------- ----------
T			    2985

So we got thousands of Buffer Busy Waits that way. Now the remedy:

SQL> drop table t purge;

Table dropped.

SQL> create table t (id number, sometext varchar2(50))
     partition by hash (id) partitions 32;

Table created.

 
SQL> alter procedure manyinserts compile;

Procedure altered.

SQL> alter procedure manysessions compile;

Procedure altered.

SQL> exec manysessions 

PL/SQL procedure successfully completed.

SQL> select count(*) from t;

  COUNT(*)
----------
   1000000

SQL> select object_name,subobject_name,value from v$segment_statistics 
     where owner='ADAM' 
     and statistic_name='buffer busy waits'
     and object_name = 'T';  

OBJECT_NAM SUBOBJECT_	   VALUE
---------- ---------- ----------
T	   SYS_P249	       0
T	   SYS_P250	       1
T	   SYS_P251	       0
T	   SYS_P252	       0
T	   SYS_P253	       0
T	   SYS_P254	       0
T	   SYS_P255	       0
T	   SYS_P256	       1
T	   SYS_P257	       0
T	   SYS_P258	       0
T	   SYS_P259	       1
T	   SYS_P260	       0
T	   SYS_P261	       0
T	   SYS_P262	       0
T	   SYS_P263	       0
T	   SYS_P264	       1
T	   SYS_P265	       1
T	   SYS_P266	       0
T	   SYS_P267	       0
T	   SYS_P268	       0
T	   SYS_P269	       0
T	   SYS_P270	       0
T	   SYS_P271	       1
T	   SYS_P272	       0
T	   SYS_P273	       0
T	   SYS_P274	       0
T	   SYS_P275	       1
T	   SYS_P276	       0
T	   SYS_P277	       0
T	   SYS_P278	       0
T	   SYS_P279	       2
T	   SYS_P280	       0

32 rows selected.

SQL> select sum(value) from v$segment_statistics 
     where owner='ADAM' 
     and statistic_name='buffer busy waits'
     and object_name = 'T';

SUM(VALUE)
----------
	 9

SQL> select 2985-9 as waits_gone from dual;

WAITS_GONE
----------
      2976

The hot spot is gone:


This emphasizes again that Partitioning is not only for the Data Warehouse. Hash Partitioning in particular can be used to fight contention in OLTP environments.
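For completeness, a hedged sketch of the index-side counterpart mentioned at the beginning (using the same table and column names as above):

create index t_id_ix on t (id) global partition by hash (id) partitions 32;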


Tagged: partitioning, Performance Tuning
Categories: DBA Blogs

Troubleshooting RAC Cloning Issues in EBS 12.1

Steven Chan - Tue, 2016-12-06 02:09

We have a white paper that outlines a framework for troubleshooting Rapid Clone issues:

Cloning EBS 12.1.3

Things can get more involved if your E-Business Suite environment uses Real Application Clusters (RAC). There is another white paper with more details about what to do if you encounter problems when cloning RAC-based EBS environments:

This white paper has additional RAC-specific information, such as:

  • Pairsfile.txt
  • RMAN restore phases
  • Secondary node context file creation
  • Known issues (e.g. RAC to RAC cloning causing inventory registration issues)

Related Articles

Categories: APPS Blogs

12.2 Index Advanced Compression “High” – Part I (High Hopes)

Richard Foote - Mon, 2016-12-05 23:52
Oracle first introduced Advanced Compression for Indexes in 12.1 as I’ve discussed here a number of times. With Oracle Database 12c Release 2, you can now use Index Advanced Compression “High” to further (and potentially dramatically) improve the index compression ratio.  Instead of simply de-duplicating the index entries within an index leaf block, High Index […]
Categories: DBA Blogs

Overload Protection Support

Anthony Shorten - Mon, 2016-12-05 17:19

One of the features we support in Oracle Utilities Application Framework V4.3.x and above is the Oracle WebLogic Overload Protection feature. By default, Oracle WebLogic is set up with a global Work Manager which gives you unlimited connections to the server. Whilst this is reasonable for non-production systems, in production Oracle generally encourages limiting connections to avoid overloading the server.

In production, it is generally accepted that the Oracle WebLogic servers will either be clustered or run as a set of managed servers, as this is the typical setup for the high availability requirements of that environment. Using these configurations, it is recommended to set limits on individual servers to enforce capacity requirements across your cluster/managed servers.

There are a number of recommendations when using Overload Protection:

  • The Oracle Utilities Application Framework automatically sets the panic action to system-exit. This is the recommended setting so that the server will stop and restart if it is overloaded. In a clustered or managed server environment, end users are routed to other servers in the configuration while the server is restarted by Node Manager. This is set at the ENVIRON.INI level as part of the install in the WLS_OVERRIDE_PROTECT variable, via the WebLogic Overload Protection setting in the configureEnv utility.
  • Ensure you have setup a high availability environment either using Clustering or multiple managed servers with a proxy (like Oracle HTTP Server or Oracle Traffic Director). Oracle has Maximum Availability Guidelines that can help you plan your HA solution.
  • By default, the product ships with a single global Work Manager within the domain (this is the Oracle WebLogic default). It is possible to create custom Work Manager definitions with a Capacity Constraint and/or Maximum Threads Constraint, which are allocated to product servers to provide additional capacity controls.
For more information about Overload Protection and Work Managers, refer to Avoiding and Managing Overload and Using Work Managers to Optimize Scheduled Work.

ILM Planning - The First Steps

Anthony Shorten - Mon, 2016-12-05 16:22

The first part of implementing an Information Lifecycle Management (ILM) solution for your Oracle Utilities products using the ILM functionality provided is to decide the business retention periods for your data.

Before discussing the first steps, a couple of concepts need to be understood:

  • Active Period - This is the period/data group where the business needs fast update access to the data. This is the period the data is actively used in the product by the business.
  • Data Groups - These are the various stages in which the data is managed after the Active period and before archival. In these groups the ILM solution will use a combination of tiered storage, partitioning and/or compression to realize cost savings.
  • Archival - This is typically the final state of the data where it is either placed on non-disk related archival media (such as tape) or simply removed.

The goal of the first steps is to decide two major requirements for each ILM enabled object:

  • How long should the active period be? In other words, how long does the business need update access to the data?
  • How long does the data need to remain accessible to the business? In other words, how long should the data be kept in the database overall? Remember that the data is still accessible by the business while it is in the database.

The decisions here are affected by a number of key considerations:

  • How long the data needs to be available for update by business processes - This can be how long the business needs to be able to rebill, or how long update activity is allowed on a historical record. Remember this is the requirement for the BUSINESS to get update access.
  • How long you legally need to be able to access the records - Each jurisdiction will have legal and government requirements on how long data must remain available and updatable. For example, there may be a government regulation around rebilling, or around how long a meter read can remain available for change.
  • The overall data retention periods are dictated by how long the business and legal requirements demand access to the data. This can be tricky as tax requirements vary from country to country. For example, in most countries the data needs to be available to tax authorities for, say, 7 years, in machine-readable format. This does not mean it needs to be in the system for 7 years; it just needs to be available when requested. I have seen customers use tape storage, off-site storage or even the old microfiche storage (that is showing my age!).
  • Retention means that the data is available in the system even after update access is no longer required. This means read-only access is needed, and the data can even be compressed to save storage and money. This is where the crossover to the technical aspects of the solution starts to happen. Oracle calls these Data Groups, where each group of data, usually based on a date range, has different storage/compression/access characteristics. This can be expressed as a partition per data group to allow for physical separation of the data (see the sketch after this list). You should remember that the data is still accessible, but it is not on the same physical storage and location as the more active data.
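Purely as an illustration (hypothetical table, columns and tablespaces), a data-group layout could be expressed as a range-partitioned table where the older groups live on cheaper storage and are compressed:

CREATE TABLE bill_history (
  bill_id  NUMBER,
  ilm_dt   DATE,
  details  VARCHAR2(200)
)
PARTITION BY RANGE (ilm_dt) (
  PARTITION p_archive_ready VALUES LESS THAN (DATE '2015-01-01') TABLESPACE tier3_data COMPRESS,
  PARTITION p_retained      VALUES LESS THAN (DATE '2016-01-01') TABLESPACE tier2_data COMPRESS,
  PARTITION p_active        VALUES LESS THAN (MAXVALUE)          TABLESPACE tier1_data
);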

Now the best way of starting this process is working with the business to decide the retention and active periods for the data. It is not as simple as a single conversation and may require some flexibility in designing the business part of the solution.

Once agreement has been reached, the first part of the ILM configuration is to update the ILM Master Configuration with the retention periods agreed for the active period. This enables the business part of the process to be initiated. The ILM configuration is set on each object, and in some cases on subsets of objects, to define the retention period in days. This is used by the ILM batch jobs to decide when to assess the records for the next data groups.

There will be additional articles in this series which walk you through the ILM process.

UKOUG 2016 – Second day

Yann Neuhaus - Mon, 2016-12-05 13:51


This second day at UKOUG was quite good. I slept well at the Jurys Inn hotel, and this morning I once more enjoyed a real English breakfast with beans, bacon, eggs and sausages. I like that to keep me going for the whole day ;)

Today, I attended the general keynote and several sessions around integration, APEX & Database Development and Database. My colleague, Franck Pachot, also presented today and I attended his session “12c Multitenant: Not a Revolution, Just an Evolution”. His session reminded me of the article I wrote some years ago about the Oracle Multitenant architecture and APEX.

Early in the morning, I followed “Application Container Cloud Service: Backend Integration Using Node.js”. The presenter described what Node.js is and covered JavaScript frameworks that can easily be integrated with Node.js, such as Express.js, to create an HTTP server and serve data from Node.js. He also presented an architecture where Node.js is hosted in Docker on the cloud.

After that, I attended the session “APEX Version Control & Team Working”. During that session, I learned more about APEX version control best practices and which useful commands are available through the SQL command line, the APEX Java utility and so on. I was quite happy to learn that for internal development we were not so bad: we already version-control properly and back up APEX workspaces, applications and themes. I now have information to improve our internal work around APEX development activities, such as APEX ATAF (APEX Test Automation Framework).

The next session was “Interactive Grids in Application Express 5.1”. This session was demonstration-oriented: the presenter showed us the amazing new features that will be incorporated in APEX 5.1. Most of the demonstration was based on the sample package application.

The next session was “Real Time Vehicle Tracking with APEX5”. For me it was great to see the power of APEX and the Oracle Database to store and display data in real time through the APEX5 MapViewer. The application uses Oracle Spatial, getting data from each vehicle’s GPS, with PL/SQL converting the data into geospatial information.

During the last session, “A RESTful MicroService for JSON Processing in the Database”, I learned how to execute JavaScript directly in the database. In fact, with Java 8 and the Nashorn project it is now possible to execute JavaScript code from the JVM, and therefore directly in the database, avoiding data shipping.

That is all for today; see you tomorrow. We will now take some time with my blog reviewer to drink a few pints in an English pub.

 

This article UKOUG 2016 – Second day appeared first on Blog dbi services.
