Feed aggregator

Bash: The most useless command

Dietrich Schroff - Sun, 2017-06-04 14:08
After many years of working with the Bourne-again shell I still come across commands which I have never used.
I am completely convinced that I have discovered the most useless command:
rev. The manpage says:
rev - reverse lines characterwise
Is there anything for which I can use this command?
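
A quick demonstration of what rev does (a minimal sketch; any string works):

$ echo hello | rev
olleh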

OK, I can check whether a word is a palindrome (like 123321 or otto):

#!/bin/bash
# Shell script to test if a string is a palindrome
# This simply uses the 'rev' command found in the util-linux-ng package
# and checks if the reverse of the string is the same as the original
echo "Enter a String : "
read string
if [ "$(echo "$string" | rev)" = "$string" ]
then
 echo "\"$string\" IS a Palindrome"
else
 echo "\"$string\" IS NOT a Palindrome"
fi

I found this script in this article.
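
For illustration, a sample run might look like this (assuming the script is saved as palindrome.sh and made executable):

$ ./palindrome.sh
Enter a String :
otto
"otto" IS a Palindrome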

Has anyone used the rev command for something reasonable?
Please write a comment!

12cR2 auditing all users with a role granted

Yann Neuhaus - Sun, 2017-06-04 11:03

12.1 introduced Unified Auditing where you define policies and then enable them. As with the traditional audit, you enable them for all users or for specific users. The unified auditing adds a syntax to audit all users except some listed ones. 12.2 adds a syntax to audit a group of users, defined by the role granted. This is the best way to enable a policy for a group of users, including those created later.

I create a simple policy, to audit logon and DBA role usage:

SQL> create audit policy DEMO_POLICY actions logon, roles DBA;
Audit POLICY created.

I create a new DBA user, USER1

SQL> create user USER1 identified by covfefe quota unlimited on USERS;
User USER1 created.
SQL> grant DBA to USER1;
Grant succeeded.

I want to enable the policy for this user because I want to audit all DBAs:

SQL> audit policy DEMO_POLICY by USER1;
Audit succeeded.

I remove the audit records for this demo:

SQL> exec dbms_audit_mgmt.clean_audit_trail(audit_trail_type=>dbms_audit_mgmt.audit_trail_unified,use_last_arch_timestamp=>false);
PL/SQL procedure successfully completed.

Let’s connect with this user and see what is audited:

SQL> connect USER1/covfefe@//localhost/PDB1
Connected.
 
SQL> select audit_type,os_username,userhost,terminal,dbusername,action_name,unified_audit_policies,system_privilege_used,event_timestamp
2 from unified_audit_trail where unified_audit_policies='DEMO_POLICY' order by event_timestamp;
 
AUDIT_TYPE OS_USERNAME USERHOST TERMINAL DBUSERNAME ACTION_NAME UNIFIED_AUDIT_POLICIES SYSTEM_PRIVILEGE_USED EVENT_TIMESTAMP
---------- ----------- -------- -------- ---------- ----------- ---------------------- --------------------- ---------------
Standard oracle VM104 pts/0 USER1 LOGON DEMO_POLICY CREATE SESSION 04-JUN-17 04.22.51.865094000 PM
Standard oracle VM104 pts/0 USER1 SELECT DEMO_POLICY SELECT ANY DICTIONARY 04-JUN-17 04.22.51.948187000 PM

The logon and the select on a dictionary table (possible here thanks to the DBA role) have been audited because the policy is enabled for this user.

We have a new DBA and we create a new user for him:

SQL> create user USER2 identified by covfefe quota unlimited on USERS;
User USER2 created.
SQL> grant DBA to USER2;
Grant succeeded.

He connects and checks what is audited:

SQL> connect USER2/covfefe@//localhost/PDB1
Connected.
SQL> select audit_type,os_username,userhost,terminal,dbusername,action_name,unified_audit_policies,system_privilege_used,event_timestamp
2 from unified_audit_trail where unified_audit_policies='DEMO_POLICY' order by event_timestamp;
 
AUDIT_TYPE OS_USERNAME USERHOST TERMINAL DBUSERNAME ACTION_NAME UNIFIED_AUDIT_POLICIES SYSTEM_PRIVILEGE_USED EVENT_TIMESTAMP
---------- ----------- -------- -------- ---------- ----------- ---------------------- --------------------- ---------------
Standard oracle VM104 pts/0 USER1 LOGON DEMO_POLICY CREATE SESSION 04-JUN-17 04.22.51.865094000 PM
Standard oracle VM104 pts/0 USER1 SELECT DEMO_POLICY SELECT ANY DICTIONARY 04-JUN-17 04.22.51.948187000 PM
Standard oracle VM104 pts/0 USER1 SELECT DEMO_POLICY SELECT ANY DICTIONARY 04-JUN-17 04.22.52.132814000 PM

Nothing is audited for this user. The DBA role usage is audited, but only for USER1.

Of course, we can add an audit statement for each user created for a DBA:

SQL> audit policy DEMO_POLICY by USER2;
Audit succeeded.

Then his new activity is audited:

SQL> connect USER2/covfefe@//localhost/PDB1
Connected.
SQL> select audit_type,os_username,userhost,terminal,dbusername,action_name,unified_audit_policies,system_privilege_used,event_timestamp
2 from unified_audit_trail where unified_audit_policies='DEMO_POLICY' order by event_timestamp;
 
AUDIT_TYPE OS_USERNAME USERHOST TERMINAL DBUSERNAME ACTION_NAME UNIFIED_AUDIT_POLICIES SYSTEM_PRIVILEGE_USED EVENT_TIMESTAMP
---------- ----------- -------- -------- ---------- ----------- ---------------------- --------------------- ---------------
Standard oracle VM104 pts/0 USER1 LOGON DEMO_POLICY CREATE SESSION 04-JUN-17 04.22.51.865094000 PM
Standard oracle VM104 pts/0 USER1 SELECT DEMO_POLICY SELECT ANY DICTIONARY 04-JUN-17 04.22.51.948187000 PM
Standard oracle VM104 pts/0 USER1 SELECT DEMO_POLICY SELECT ANY DICTIONARY 04-JUN-17 04.22.52.132814000 PM
Standard oracle VM104 pts/0 USER2 LOGON DEMO_POLICY CREATE SESSION 04-JUN-17 04.22.52.338928000 PM

But for security reasons, we would like to be sure that any new user who has the DBA role granted is audited.
Let’s try something else:

SQL> noaudit policy DEMO_POLICY by USER1,USER2;
Noaudit succeeded.

We can simply audit all users:

SQL> audit policy DEMO_POLICY;
Audit succeeded.

But this is too much. Some applications constantly logon and logoff and we don’t want to have that in the audit trail.

SQL> noaudit policy DEMO_POLICY;
Noaudit succeeded.

We can still enable the policy for all users, and exempt those users we don’t want:

SQL> audit policy DEMO_POLICY except DEMO;
Audit succeeded.

Here is what is enabled, and this will audit all new users:

SQL> select * from audit_unified_enabled_policies;
 
USER_NAME POLICY_NAME ENABLED_OPT ENABLED_OPTION ENTITY_NAME ENTITY_TYPE SUCCESS FAILURE
--------- ----------- ----------- -------------- ----------- ----------- ------- -------
DEMO DEMO_POLICY EXCEPT EXCEPT USER DEMO USER YES YES
ALL USERS ORA_SECURECONFIG BY BY USER ALL USERS USER YES YES
ALL USERS ORA_LOGON_FAILURES BY BY USER ALL USERS USER NO YES

But once again, this is not what we want.

SQL> noaudit policy DEMO_POLICY by DEMO;
Noaudit succeeded.
 
SQL> select * from audit_unified_enabled_policies;
 
USER_NAME POLICY_NAME ENABLED_OPT ENABLED_OPTION ENTITY_NAME ENTITY_TYPE SUCCESS FAILURE
--------- ----------- ----------- -------------- ----------- ----------- ------- -------
ALL USERS ORA_SECURECONFIG BY BY USER ALL USERS USER YES YES
ALL USERS ORA_LOGON_FAILURES BY BY USER ALL USERS USER NO YES

Audit all users to whom roles are granted directly

In 12cR2 we have the possibility to do exactly what we want: audit all users having the DBA role granted:

SQL> audit policy DEMO_POLICY by users with granted roles DBA;
Audit succeeded.

This enables the audit for all users for whom the DBA role has been directly granted:

SQL> select * from audit_unified_enabled_policies;
 
USER_NAME POLICY_NAME ENABLED_OPT ENABLED_OPTION ENTITY_NAME ENTITY_TYPE SUCCESS FAILURE
--------- ----------- ----------- -------------- ----------- ----------- ------- -------
DEMO_POLICY INVALID BY GRANTED ROLE DBA ROLE YES YES
ALL USERS ORA_SECURECONFIG BY BY USER ALL USERS USER YES YES
ALL USERS ORA_LOGON_FAILURES BY BY USER ALL USERS USER NO YES

The important thing is that a newly created user will be audited as long as he has the DBA role directly granted:

SQL> create user USER3 identified by covfefe quota unlimited on USERS;
User USER3 created.
SQL> grant DBA to USER3;
Grant succeeded.
 
SQL> connect USER3/covfefe@//localhost/PDB1
Connected.
SQL> select audit_type,os_username,userhost,terminal,dbusername,action_name,unified_audit_policies,system_privilege_used,event_timestamp
2 from unified_audit_trail where unified_audit_policies='DEMO_POLICY' order by event_timestamp;
 
AUDIT_TYPE OS_USERNAME USERHOST TERMINAL DBUSERNAME ACTION_NAME UNIFIED_AUDIT_POLICIES SYSTEM_PRIVILEGE_USED EVENT_TIMESTAMP
---------- ----------- -------- -------- ---------- ----------- ---------------------- --------------------- ---------------
Standard oracle VM104 pts/0 USER1 LOGON DEMO_POLICY CREATE SESSION 04-JUN-17 04.29.17.915217000 PM
Standard oracle VM104 pts/0 USER1 SELECT DEMO_POLICY SELECT ANY DICTIONARY 04-JUN-17 04.29.17.988151000 PM
Standard oracle VM104 pts/0 USER1 SELECT DEMO_POLICY SELECT ANY DICTIONARY 04-JUN-17 04.29.18.117258000 PM
Standard oracle VM104 pts/0 USER2 LOGON DEMO_POLICY CREATE SESSION 04-JUN-17 04.29.18.322716000 PM
Standard oracle VM104 pts/0 USER2 SELECT DEMO_POLICY SELECT ANY DICTIONARY 04-JUN-17 04.29.18.345351000 PM
Standard oracle VM104 pts/0 USER2 SELECT DEMO_POLICY SELECT ANY DICTIONARY 04-JUN-17 04.29.18.415117000 PM
Standard oracle VM104 pts/0 USER2 SELECT DEMO_POLICY SELECT ANY DICTIONARY 04-JUN-17 04.29.18.439656000 PM
Standard oracle VM104 pts/0 USER2 SELECT DEMO_POLICY SELECT ANY DICTIONARY 04-JUN-17 04.29.18.455274000 PM
Standard oracle VM104 pts/0 USER3 LOGON DEMO_POLICY CREATE SESSION 04-JUN-17 04.29.18.507496000 PM

This policy applies to all users having the DBA role, and it gives the possibility to audit more than their DBA role usage: here I audit all logons from users having the DBA role.

So what?

We don’t use roles only to group privileges to grant. A role is usually granted to define a group of users: DBAs, application users, read-only application users, etc. Unified Auditing can define complex policies, combining the audit of actions, privileges, and roles. The 12.2 syntax allows enabling a policy for a specific group of users.

 

The article 12cR2 auditing all users with a role granted appeared first on Blog dbi services.

Ubuntu: Eclipse does not find openjdk (JRE)

Dietrich Schroff - Sat, 2017-06-03 14:06
On my Ubuntu Linux machine I decided to do some Java programming.
So the first step is to install Java:
apt-get install openjdk-9-jdk openjdk-9-demo openjdk-9-doc openjdk-9-jre-headless openjdk-9-source
This just works as expected. After that I installed Eclipse.
The download via www.eclipse.org was easy and the installation ran without any error.
But Eclipse was not able to find the JRE for compiling, autocorrection, etc.

For all developers who are not used to Linux:
On Ubuntu the JRE is located here:
/usr/lib/jvm/java-9-openjdk-amd64/jre
And it is added, like on any other platform, via the preferences window.
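
If you are unsure where the JDK ended up on your system, a quick check on the command line helps (a minimal sketch using standard Ubuntu tools):

# list all java installations registered with the alternatives system
update-alternatives --list java
# or resolve the path of the default java binary
readlink -f $(which java)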
After that Eclipse runs fine and you can start building your Java applications...

Ubuntu: visudo problem / NOPASSWD option does not work for commands

Dietrich Schroff - Sat, 2017-06-03 13:56
It is always the same:
After installing a new Linux, commands like
/usr/sbin/s2disk
/sbin/reboot
only work after entering your password every time (or more precisely: after every reboot).
This can be changed with visudo:
export EDITOR=vi; visudo
Inside /etc/sudoers you will see:
# User privilege specification
root    ALL=(ALL:ALL) ALL

# Members of the admin group may gain root privileges
%admin ALL=(ALL) ALL
And this is the problem:
The first idea is to duplicate the line
root    ALL=(ALL:ALL) ALL
as
schroff    ALL=(ALL) NOPASSWD: /usr/sbin/s2disk
and make the desired changes.

BUT: This will not work!

You have to add your line at the end of the file, and after you exit vi (or whatever editor you have configured) it will work...
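
So the end of /etc/sudoers should look something like this (a sketch reusing the username from above; adjust the user and command paths to your system):

# User privilege specification
root    ALL=(ALL:ALL) ALL

# Members of the admin group may gain root privileges
%admin  ALL=(ALL) ALL

# NOPASSWD entries have to come after the group rules,
# because the last matching entry wins
schroff ALL=(ALL) NOPASSWD: /usr/sbin/s2disk, /sbin/reboot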

UEFI and Linux: Installation of Ubuntu on a new laptop / How to add grub on UEFI

Dietrich Schroff - Sat, 2017-06-03 09:28
On my new laptop I decided to install a Linux distribution in addition to Windows 10.
My first idea was to install Debian Linux, but the Debian installer does not provide any functionality to resize partitions. And without this, the Windows partition would be erased during the installation.
I knew that the Ubuntu installer had this functionality, so I created a USB stick and tried installing Ubuntu.

The installation went through like a charm, but the next reboot did not show the grub boot menu. The laptop simply started without showing any boot menu.

?

I found a solution here.

You have to enter the BIOS/UEFI and set a password.
After that you are allowed to select a new trusted UEFI file.
Then navigate to grubx64.efi via HDD0 -> EFI -> ubuntu:
 
After that you have to reboot once and enter the BIOS/UEFI again.
Then you can change the boot order and move grub to the first entry.
Now every reboot enters the grub menu and you are able to choose Ubuntu or Windows...
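
As an alternative to the BIOS/UEFI menus, the boot order can also be inspected and changed from within Linux using efibootmgr (a hedged sketch; the entry numbers are examples and will differ on your machine):

# list the current UEFI boot entries and boot order
sudo efibootmgr
# put entry 0003 (ubuntu) first, then 0000 (Windows Boot Manager)
sudo efibootmgr -o 0003,0000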

Running ADF BC REST Service Together with ADF Faces Application

Andrejus Baranovski - Sat, 2017-06-03 05:44
ADF 12c provides out-of-the-box support for the ADF BC REST API. It is straightforward to create an ADF BC REST service and run it. But what if you would like to expose ADF BC REST for an existing ADF application with an ADF Faces UI? This could be useful if there is a requirement to handle ADF Bindings access to ADF BC along with a lightweight ADF BC REST service API implementation for Oracle JET or other JavaScript clients. The same ADF application could handle two types of client access - ADF Faces and REST.

When you create an ADF BC REST application, JDeveloper creates REST Model and WebService projects. Technically speaking, if you have an existing ADF application with Model and ViewController projects, you could add a third project for the REST Web Service by yourself. Two configuration files are required - web.xml and weblogic.xml (if ADF Security is enabled):


ADF BC REST is exposed to the client through the ADF servlet. It doesn't work to enable the ADF BC REST servlet in the ADF Faces project's web.xml; this requires a separate WAR deployment for ADF BC REST with a different context root.
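
The separate context root for the REST WAR is set in its weblogic.xml; a minimal sketch (the restapp context root matches the one used later in this post):

<?xml version="1.0" encoding="UTF-8"?>
<weblogic-web-app xmlns="http://xmlns.oracle.com/weblogic/weblogic-web-app">
  <!-- context root under which the ADF BC REST WAR is accessed -->
  <context-root>restapp</context-root>
</weblogic-web-app>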

ADF Faces and ADF BC REST are sharing the same ADF Security context and this means we are using the same set of Application Roles for both.

In my example, ADF Bindings are mapped with Employees VO instance - this brings data to be displayed in ADF Faces:


The same VO instance is exposed through ADF BC REST:


The ADF BC REST servlet is defined in the REST project's web.xml. It doesn't work to define it in the same web.xml where ADF Faces is configured - a context conflict error comes up at runtime. That's the reason why I have a separate WAR for ADF BC REST:


There is one WAR for the ADF Faces app and one for ADF BC REST. Both WARs are packaged into a single EAR. This means there is one deployment, but two context roots, one for each WAR:


If you download my sample app and want to run it, make sure to build REST project first:


Then click Run for the ADF Faces app - the ADF BC REST WAR will be packaged automatically (if you deploy to a standalone server, simply build the EAR - it will include both WARs):


We can see it in the deployment log - both WARs are packaged into a single EAR:


This is the result - the ADF Faces UI is accessed through the adfapp context root:


ADF BC REST from the same application is accessed through the restapp context root, authenticated with the same ADF Security context as the ADF Faces app:


Download sample application - ADFFacesRESTApp.zip.

Migration of HTF packages from Oracle 8i to Oracle 12c

Tom Kyte - Fri, 2017-06-02 20:26
Hi All, Currently the application exposes HTML pages to users. This is done using HTF packages in Oracle 8.1. The setup is in such a way that the application web server (Apache HTTP) and DB all reside on the same server host. The mod_plsql is configur...
Categories: DBA Blogs

High SQL ordered by Parse Calls

Tom Kyte - Fri, 2017-06-02 20:26
The database has 22 schemas which are similar. Some of the queries in PL/SQL code remain active, and in AWR we observe that there is a very, very high parse-to-execution ratio for these queries (example: parse calls 92,899,371, executions 3,875 in a p...
Categories: DBA Blogs

Join tables on LIKE condition very slow

Tom Kyte - Fri, 2017-06-02 20:26
Hi Oracle Gurus! I have two tables: 1) BIG: accounts, 200+ mln rows, contains account numbers like these: 123467896546 698746516874 685497964116 748794251631 ... 2) small: patterns, 2000 rows, contains account patterns like these: 12__46...
Categories: DBA Blogs

Pushed out new version of PythonDBAGraphs

Bobby Durrett's DBA Blog - Fri, 2017-06-02 17:03

I pushed out a new version of PythonDBAGraphs. I got a new laptop at work so I am setting it up with the software that I need to do my job. I decided that instead of reinstalling Enthought Canopy I would go with the current version of Python 3 and install the packages that I need one at a time. This gives me a cleaner install and I’m on the most current version of Python and the packages. I had installed Canopy for the two edX classes that I took so I wasn’t motivated to uninstall it on my old laptop. I use PythonDBAGraphs just about every day for my work and I didn’t want to whack it by uninstalling Canopy on my old laptop and trying to install something new. But, now I have my old laptop as a backup so I was able to take the time to install Python 3 and the current packages. I have updated the README with details of my new configuration.

I had to make a number of changes in the code. I had to change <> to !=. I had to change print xyz to print(xyz). Also, I think I ran across a bug in the new version of cx_Oracle. Here is an example. Assume you have an open connection and cursor. The cursor is cur.

cur.execute('select to_char(blocks/7) from user_tables')
row = cur.fetchone()
print(row)

cur.execute('select blocks/7 from user_tables')
row = cur.fetchone()
print(row)

Here is the output:

('2.28571428571428571428571428571428571429',)
Traceback (most recent call last):
  File "bug.py", line 12, in <module>
    row = cur.fetchone()
ValueError: invalid literal for int() with base 10: '2.28571428571428571428571428571428571429'

Strange. The workaround is to take any numbers with fractional values and run to_char() on them. I made a number of changes in the code to work around this bug/feature.
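
In practice the workaround looks like this (a minimal sketch based on the queries above; cur is an open cx_Oracle cursor):

# wrap the fractional expression in to_char() on the database side,
# then convert the returned string back to a number in Python
cur.execute('select to_char(blocks/7) from user_tables')
row = cur.fetchone()
value = float(row[0])
print(value)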

Anyway, I’m glad that I moved to Python 3 and that it enables me to use pip to install just the packages I want. Hopefully this update will make PythonDBAGraphs more usable for others since not everyone has Canopy.

Bobby

Categories: DBA Blogs

How to Secure our Oracle Databases

Learn DB Concepts with me... - Fri, 2017-06-02 13:27

How secure can we make our Oracle databases?

This is a routine question that runs through the minds of most database administrators:
HOW SECURE ARE OUR DATABASES? CAN WE MAKE THEM ANY MORE SECURE?

I am writing this post to share my experience and knowledge on securing databases. I personally follow the tips below to secure my databases:


1. Make sure we only grant access to those users that really need to access the database.
2. Remove all unnecessary grants/privileges from users/roles.
3. Frequently audit database users' failed logins in order to verify who is trying to log in and what they are doing.
4. If a user is requesting elevated privileges, make sure you talk to them and understand their requirements.
5. Grant no more access than what is needed.
6. At times users might need access temporarily. Make sure these temporary grants are revoked after the tasks are completed.
7. Define a fine boundary on who can access what.
8. Use user profiles and auditing to ensure all activities are tracked.
9. Enforce complex passwords. Here is the link on how to do it.
10. Use triggers to track user activity.
11. Make sure passwords are encrypted in applications; plain-text passwords are a potential threat if your application code is compromised.
12. Add a password to your listener.
13. Allow access only from needed/known servers/clients. Use valid node checking (see the sketch below); here is the link on how to restrict access to servers/clients.
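
For point 13, a minimal sqlnet.ora sketch (the host names and addresses are placeholders; adjust them to your environment):

# sqlnet.ora on the database server
TCP.VALIDNODE_CHECKING = YES
# only these clients may connect
TCP.INVITED_NODES = (appserver1.example.com, 192.168.1.10)
# alternatively, block specific hosts instead:
# TCP.EXCLUDED_NODES = (badhost.example.com)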


Categories: DBA Blogs

Video: Distributed Stream Processing Using the Java 8 Streams API

OTN TechBlog - Fri, 2017-06-02 12:43

What is distributed stream processing and why does it matter?

"Java 8 came out with these really amazing language enhancements for Java developers," including the addition of streams, explains Oracle Cloud Engineering Group architect Brian Oliver. "Basically it lets you very elegantly describe declaratively what you want and how you want to process some data structures. So we can take a set or a map or a list that contains application data and you can say, well, I want to filter it on a map, I want to transform it. And you do that in a what we call functional programming. It's a natural way of writing quite complex algorithms in a very short period of time that you can basically read."

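To make that concrete, here is a minimal Java 8 sketch of the kind of declarative pipeline Brian describes (the sample data is made up):

import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class StreamDemo {
    public static void main(String[] args) {
        // hypothetical application data
        List<String> owners = Arrays.asList("alice", "bob", "carol");

        // declaratively filter and transform the collection
        List<String> shortNames = owners.stream()
                .filter(name -> name.length() <= 3)  // keep short names
                .map(String::toUpperCase)            // transform each element
                .collect(Collectors.toList());

        System.out.println(shortNames); // prints [BOB]
    }
}
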
Now imagine if you could use that in a distributed environment. "When you're using streams in a Java process, all your data is basically in the Java process," says Brian. "So if I need to process a lot of data, I either have to have a very big Java process, or I have to keep the data outside. And so now I'm no longer using Java. And because I have to use some database technology or some other APIs, I switch from being a Java developer to being an integrator with some other technology."

But there is a remedy. "We have these amazing capabilities in Java, and we're teaching everyone, and the community is excited about them," Brian explains, "so why don't we try to apply those concepts to distributed computing?"

The end result is that Java developers can use what they already know, "but on a much larger, grander scale," says Brian.

Watch the interview for more detail, and also check out Cloud Ready: Introduction to Distributed Streams, the complete video recording of Brian's Oracle Code session.


Oracle SOA Suite: Want performance? Don’t log so much and clean up your database!

Amis Blog - Fri, 2017-06-02 11:12

The Oracle SOA Suite infrastructure, especially composites, uses the database intensively. Not only are the process definitions stored in the database, a lot of audit information gets written there as well. The SOA infrastructure database, if not well managed, will grow and will eventually have detrimental effects on performance. In this blog post I will give some quick suggestions that will help you increase the performance of your SOA Suite infrastructure on the database side by executing some simple scripts. These are suggestions I have seen work at different customers. Not only do they help with managing the SOA Suite data in the database, they also lead to better SOA Suite performance.

Do not log too much!

Less data is faster. If you can limit database growth, management becomes easier.

  • Make sure the auditlevel of your processes is set to production level in production environments.
  • Think about the BPEL setting inMemoryOptimization. This can only be set for processes that do not contain any dehydration points such as receive, wait, onMessage and onAlarm activities. If set to true, the completionPersistPolicy can be used to tweak what to do after completion of the process, for example to only save information about faulted instances in the dehydration store (see the sketch below). In 12c this setting is part of the ‘Oracle Integration Continuous Availability’ feature and uses Coherence.
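
A minimal sketch of how these BPEL properties can be set on a component in composite.xml (the component and file names are placeholders; 'faulted' only persists faulted instances):

<component name="MyBPELProcess">
  <implementation.bpel src="MyBPELProcess.bpel"/>
  <!-- keep completed instances out of the dehydration store -->
  <property name="bpel.config.inMemoryOptimization">true</property>
  <property name="bpel.config.completionPersistPolicy">faulted</property>
</component>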

Start with a clean slate regularly

Especially for development environments it is healthy to regularly truncate all the major SOAINFRA tables. The script to do this is supplied by Oracle: MW_HOME/SOA_ORACLE_HOME/rcu/integration/soainfra/sql/truncate/truncate_soa_oracle.sql

The effect of executing this script is that all instance data is gone. This includes all tasks, long-running BPM processes, long-running BPEL processes, and recoverable errors. In short: everything except the definitions. The performance gain from executing the script can be significant. You should consider, for example, running the script at the end of every sprint to start with a clean slate.

Delete instances

Oracle has provided scripts to remove old instances. These are scheduled by default in a clean installation of 12c. If you upgraded from 11g to 12c, this scheduling is not enabled by default. The auto-purge feature of 12c is described here.

What this feature does is execute the standard supplied purge scripts: MW_HOME/SOA_ORACLE_HOME/rcu/integration/soainfra/sql/soa_purge/soa_purge_scripts.sql

In a normal SOA Suite 12c installation you can also find the scripts in MW_HOME/SOA_ORACLE_HOME/common/sql/soainfra/sql/oracle

In 12c installations, the patched purge scripts for older versions are also supplied. I would use the newest version of the scripts, since the patches sometimes fix logic that can cause data inconsistencies with consequences later, for example during migrations.

What the scripts do is nicely described here. These scripts only remove instances you should not miss. Running instances and instances which can be recovered are not deleted. In the script you can specify for how long data should be retained.

You should schedule this and run it daily. The shorter the period you keep information, the more you reduce your SOAINFRA space usage and the better the performance of the database will be.

An example of how to execute the script:

DECLARE
  MAX_CREATION_DATE TIMESTAMP;
  MIN_CREATION_DATE TIMESTAMP;
  BATCH_SIZE        INTEGER;
  MAX_RUNTIME       INTEGER;
  RETENTION_PERIOD  TIMESTAMP;
BEGIN
  -- purge instances created between 2000 and 30 days ago
  MIN_CREATION_DATE := TO_TIMESTAMP(TO_CHAR(sysdate-2000, 'YYYY-MM-DD'),'YYYY-MM-DD');
  MAX_CREATION_DATE := TO_TIMESTAMP(TO_CHAR(sysdate-30, 'YYYY-MM-DD'),'YYYY-MM-DD');
  RETENTION_PERIOD  := TO_TIMESTAMP(TO_CHAR(sysdate-29, 'YYYY-MM-DD'),'YYYY-MM-DD');
  MAX_RUNTIME       := 180;    -- stop after 180 minutes
  BATCH_SIZE        := 250000; -- process in batches of 250000

  SOA.DELETE_INSTANCES(
    MIN_CREATION_DATE => MIN_CREATION_DATE,
    MAX_CREATION_DATE => MAX_CREATION_DATE,
    BATCH_SIZE        => BATCH_SIZE,
    MAX_RUNTIME       => MAX_RUNTIME,
    RETENTION_PERIOD  => RETENTION_PERIOD,
    PURGE_PARTITIONED_COMPONENT => FALSE);
END;
/

The script also has a variant which can be executed in parallel (which is faster) but that requires extra grants for the SOAINFRA database user.

Shrink space

Tables

Deleting instances will not free up space on the filesystem of the server. Nor does it make sure that the data is not fragmented over many tablespace segments. Oracle does not provide standard scripts for this but does tell you this is a good idea and explains why here (9.5.2). In addition you can rebuild indexes. You should also of course run a daily gather statistics on the schema.

For 11g you can use this script to shrink space for tables and rebuild indexes. You should execute it under XX_SOAINFRA, where XX is your schema prefix.
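
A hedged sketch of what such maintenance could look like for a single table (the table and index names are placeholders; the linked script automates this for the whole schema):

-- compact the table and release unused space
ALTER TABLE XX_SOAINFRA.CUBE_INSTANCE ENABLE ROW MOVEMENT;
ALTER TABLE XX_SOAINFRA.CUBE_INSTANCE SHRINK SPACE;
-- rebuild a fragmented index
ALTER INDEX XX_SOAINFRA.CUBE_INSTANCE_PK REBUILD;
-- refresh optimizer statistics for the schema
EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'XX_SOAINFRA');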

LOBs

LOB columns are stored outside of the tables and can be shrunk separately. In the script below you should replace XX_SOAINFRA with your SOAINFRA schema name. The script explicitly drops BRDECISIONINSTANCE_INDX5, since that table can become quite large in development environments and you cannot shrink it with the index still on it. This script might overlap with the script above for tables with LOB columns. It only shrinks LOBs for large tables where the LOB columns take more than 100 MB of space.
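
For a single LOB column the statement looks like this (a sketch; the table and column names are placeholders):

ALTER TABLE XX_SOAINFRA.AUDIT_DETAILS MODIFY LOB (BIN) (SHRINK SPACE);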

Other database suggestions

Redo log size

Not directly related to cleaning, but related to SOAINFRA space management. The Oracle database uses so-called redo log files to store all changes to the database. In case of a database instance failure, the database can use these redo log files to recover. Usually there are two or more redo log files. These files are rotated: if one is full, the database moves on to the next. When the last one is filled, it goes back to the first one, overwriting old data. Read more about redo logs here. Rotating a redo log file takes some time. When the redo log files are small, they are rotated a lot. The following provides some suggestions for analyzing whether increasing the size will help you. I’ve seen default values of 3 redo log files of 100 MB. Oracle recommends having 3 groups of 2 GB each here.
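
Resizing redo logs is done by adding new, larger groups and dropping the old ones (a sketch; group numbers and sizes are examples):

-- add three new 2 GB redo log groups
ALTER DATABASE ADD LOGFILE GROUP 4 SIZE 2G;
ALTER DATABASE ADD LOGFILE GROUP 5 SIZE 2G;
ALTER DATABASE ADD LOGFILE GROUP 6 SIZE 2G;
-- switch until the old groups become INACTIVE, then drop them
ALTER SYSTEM SWITCH LOGFILE;
ALTER DATABASE DROP LOGFILE GROUP 1;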

Clean up long running and faulted instances!

The regular cleaning scripts which you might run on production do not clean instances whose ECID is the same as that of an instance which cannot be cleaned, because it is for example still running or recoverable. If you have many processes running, you might gain a lot by, for example, restarting the running processes with a new ECID. You do have to build that functionality yourself, though. Also, you should think about keeping track of time for tasks. If a certain task is supposed to be open for only a month, let it expire after a month. If you do not check this, you might encounter large numbers of tasks which remain open. This means the instance which created the task will remain open, and that means you cannot undeploy the version of the process which has this task running. Life-cycle management is a thing!

Finally

SOAINFRA is part of the infrastructure

Oracle SOA Suite logs a lot of audit information in the SOAINFRA database. You might be tempted to join that information to other business data directly on database level. This is not a smart thing to do.

If the information in the SOAINFRA database is used to, for example, query BPM processes or tasks, especially when this information is joined over a database link to another database with additional business data, you have introduced a timebomb. The performance will be directly linked to the amount of data in the SOAINFRA database, especially with long-running processes and tasks. You have now not only introduced a potential performance bottleneck for all your SOA composites, but also for other parts of your application.

It is not a system of records

Secondly, the business might demand that you keep the information for a certain period. Eventually they might even want to keep the data forever and use it for audits of historic records. This greatly interferes with purging strategies, which are required if you want your environment to keep performing well. If the business considers certain information important to keep, create a table and store the relevant information there.

The post Oracle SOA Suite: Want performance? Don’t log so much and clean up your database! appeared first on AMIS Oracle and Java Blog.

Overview of the new Cloudera Data Science Workbench

Rittman Mead Consulting - Fri, 2017-06-02 09:07

Recently Cloudera released a new product called the Cloudera Data Science Workbench (CDSW).

Being a Cloudera Partner, we at Rittman Mead are always excited when something new comes along.

The CDSW is positioned as a collaborative platform for data scientists/engineers and analysts, enabling larger teams to work in a self-service manner through a web browser. This browser application is effectively an IDE for R, Python and Scala - all your favorite toys!

The CDSW is deployed onto edge nodes of your CDH cluster, providing easy access to your HDFS data and the Spark2 and Impala engines. This means that team members can immediately start working on their projects, accessing full datasets and sharing analysis and results. A CDSW project can include reusable code, snippets, libraries etc., helping your teams to collaborate. Oh, and these projects can be linked with GitHub repos to help keep version history.

The workbench is used to fire up user sessions with R, Python or Scala inside dedicated Docker engines. These engines can be customised, or extended, like any other Docker images to include all your favourite R packages and Python libraries. Using HDFS, Hive, Spark2 or Impala, the workload can then be distributed over to the CDH cluster, by use of your preferred methods, without having to configure anything. This engine (virtual machine, really) runs for as long as the analysis does. Any logs or output files need to be saved in the project folder, which is mounted inside the engine and saved on the CDSW master node. The master node is a gateway node to the CDH cluster and can scale out to many worker nodes to distribute the Docker engines.

(C) Cloudera.com

And under the hood we also have Kubernetes to schedule user workload across the worker nodes and provide CPU and memory isolation.

So far I find the IDE to be a bit too simple and lacking features compared to, e.g., RStudio Server. But the ease of use and the fact that everything is automatically configured make the CDSW an absolute must for any Cloudera customer with data science teams. Also, I'm convinced that future releases will add loads of cool functionality.

I spent about two days building a new cluster on AWS and installing the Cloudera Data Science Workbench - just an indication of how easy it is to get up and running. Btw, it also runs in the cloud (IaaS) ;)

Want to know more or see a live demo? Contact us at info@rittmanmead.com

Categories: BI & Warehousing

Limit to size of csv file download from Interactive Report

Tom Kyte - Fri, 2017-06-02 02:06
Hello Tom, Firstly, I tried googling a lot to check if there is a limit to the number of rows that can be exported to a csv file from an APEX Interactive Report, but couldn't find one. So here is my issue. I have an interactive report with around ...
Categories: DBA Blogs

Select row count with multiple conditions and having 10 million rows

Tom Kyte - Fri, 2017-06-02 02:06
Hi Tom, I have a table with 10+ million rows. I am getting the count of the table with specific conditions and execution takes 10 secs. I have used a "SELECT COUNT(1)" statement and a "MAX(ROWNUM)" statement; both statements are taking time. Could ...
Categories: DBA Blogs

alert log

Tom Kyte - Fri, 2017-06-02 02:06
Every day in the database alert.log I see entries like this: online index (re)build cleanup: objn=128483 maxretry=2000 forever=0 Can you please let me know why this is shown? Thanks
Categories: DBA Blogs

update query in oracle when sga size is small and table data is huge

Tom Kyte - Fri, 2017-06-02 02:06
Hi, my SGA size is 5 GB and my table data is 20 GB. When I run an update statement on the table, how will it fit in the SGA and how does Oracle handle it? Please explain. Thanks, RP.
Categories: DBA Blogs

Read CSV from APEX Server and insert into Staging table

Tom Kyte - Fri, 2017-06-02 02:06
Hi Tom, I have a csv file on the APEX server. I need to access the file from the server, read the records one by one, and load them into a staging table through PL/SQL code. Could you please provide a solution? Thanks.
Categories: DBA Blogs

Restore and Recover with no backup noarchivelog mode

Tom Kyte - Fri, 2017-06-02 02:06
Hi team, I have a database which is in noarchivelog mode and no backups (RMAN or cold backup) are available. Only a logical backup (schema export) is available. Now suppose one datafile is corrupted or dropped (e.g. removed with rm) - how can I restore and...
Categories: DBA Blogs
