Feed aggregator

Changes and Book

Fuad Arshad - Mon, 2014-01-27 16:35
I just realized that I have not blogged in a very long time. This has been partly because I switched jobs and started working for Oracle. It has been an interesting six months, and I have been enjoying the challenge of working with various customers and helping them solve problems.
The other thing that has been an important milestone in my career is the publishing of a book I collaborated on with a very fine team of individuals. The book is a collection of our experiences with and passion for the Oracle Database, and is called Practical Oracle Database Appliance. You can pre-order the book at Amazon with the link available below. I will be trying to blog more about various aspects of my new job and interesting stuff about Exadata as I learn it.


CRS-1615:No I/O has completed after 50% of the maximum interval

Syed Jaffar - Mon, 2014-01-27 03:01
A few days back, on one of the production cluster nodes, the clusterware became unhealthy and caused instance termination on the node. If it had been a node eviction, everything would have come back automatically after the node reboot; however, in this scenario, only the cluster components were in an unhealthy state, hence no instance started on the node.

Upon referring to the ocssd.log file, the following error was found:

    cssd(21532)]CRS-1615:No I/O has completed after 50% of the maximum interval. Voting file /dev/rdisk/oracle/ocr/ora_ocr_01 will be considered not functional in 13169 milliseconds

The node was unable to communicate with the rest of the cluster nodes. If so, why wasn't the node crashed or evicted? Isn't that the question that comes to your mind?

Well, from the error it is clear that the node suffered I/O issues: it was unable to access the voting disk and started complaining (in the ocssd.log) that it couldn't see the other nodes in the cluster. When we contacted the storage and OS teams, the OS team was quick to identify an issue with a PCI card. For some reason, the I/O channel from the node to the storage was suspended for 10 seconds, after which the connection was re-established. In the meantime, the cluster on the node became unhealthy and couldn't proceed with any further action.

The only workaround we had to perform was restarting the CSS component using the following command:

    crsctl start res ora.cssd -init

Once the CSS was started, everything returned to a normal state. Luckily, we didn't have to do a lot of research into why the cluster became unhealthy or why the instances crashed.

It's a drag...

Tony Andrews - Sun, 2014-01-26 06:54
It really used to be a "drag" putting together a set list for my band using my Oracle database of songs we play: the easiest way was to download all the songs to an Excel spreadsheet, manipulate them there, then re-import back into the database.  I tried various techniques within APEX but none was easier than that - until now.  I have just discovered jQuery UI's Sortable interaction.

AUTO Vs Manual PSU patch - when and why?

Syed Jaffar - Sun, 2014-01-26 03:46
The purpose of this blog entry is to share my thoughts on AUTO vs Manual PSU patch deployment, covering when and why to use each, along with a success story of reducing overall patching time by almost 50% in recent times. I thought this would help you as well.

From my own perspective, I think we are one of the organizations in the Middle East with a large and complex Oracle cluster setup in place. With six (06) cluster environments, which include production, non-production and DR, maintaining them is always going to be an uphill and challenging task. One of the tasks that requires the most attention and effort is none other than PSU patch deployment across all six environments at our premises.

We are currently in the process of applying the 11.2.0.2.11 PSU patch in all our environments, and the challenge in front of us is to bring down the patching time on each server. Our past patching experience says that AUTO patching on each node needs a minimum of 2 hours, and if you are going to patch a 10-node cluster, you need at least a 22-hour time frame to complete the deployment.

AUTO Patching
No doubt AUTO patching is the coolest enhancement from Oracle, automating the entire patching procedure smoothly and, more importantly, without much human intervention. The downsides of AUTO patching are the following situations, where AUTO patching for the GI and RDBMS homes together fails, or where you simply can't patch both homes in a single AUTO run:

  1. When you have multiple Oracle homes with different software owners; for example, we have an Oracle E-Business Suite home and typical RDBMS homes (10g and 11g) under different ownership.
  2. When you have multiple versions of Oracle databases running; for example, we have Oracle 10g and Oracle 11g databases.
  3. During the course of patching, if one of the files fails to copy or roll back because the file is busy with another existing OS process, the patch will be rolled back, and the cluster and the other services on the node will subsequently be restarted automatically.
Perhaps in the above circumstances one may choose to run the AUTO patch separately for the GI home and then for the RDBMS homes. However, when you hit point 3 above, the same thing is going to happen, which is of course time consuming. While patching one particular node, AUTO patching on the GI home failed 3 times due to an unsuccessful cluster shutdown, and we ended up rebooting the node 3 times.

Manual Patching
In contrast, manual patching requires heavy human intervention during the course of patch deployment. A set of steps needs to be followed carefully.

Since we were challenged to look at all the possibilities for reducing the overall patching time, we started off analyzing the various options between AUTO and manual patch deployment and where the time was being consumed or wasted. We figured out that after each successful or unsuccessful AUTO patching attempt, the cluster and the services on the node have to restart, and this was the time-consuming factor. In a complex or large cluster environment with many instances, ASM disks and diskgroups, it is certainly going to take a good amount of time to start everything up. This caught our attention, and we thought of giving manual patching a try.

When we tried the manual patching method, we managed to patch the GI and RDBMS homes in about 1 hour, almost 50% less than the AUTO patching time frame. Imagine: we finished 6 nodes in about 7 hours, in contrast to a 12-13 hour time frame.

In a nutshell, if you have a small cluster environment, say a 2-node cluster, you may feel you don't gain much with respect to saving time. However, if you are going to patch a large or complex cluster environment, consider the manual method, which could save a pretty huge amount of patching downtime. At the same time, keep in mind that this method requires more DBA intervention than the AUTO patching method.


IOUG’s 2014 Exadata Virtual Conference

Asif Momen - Sat, 2014-01-25 15:35
IOUG is organizing a two-day virtual Exadata Conference on 29-30 Jan, 2014. More importantly, the virtual conference is FREE. Following is the agenda as per the IOUG's website:

29-Jan-2014 (Wednesday)

10-11am CST
Minimizing Risks with Database Consolidation on Exadata using I/O Resource Manager (IORM)
Speaker: Sameer Malik, Exadata Technical Architect, Accenture
11am-12pm CST           
Introduction to the New Oracle Database In-Memory Option
Speaker: Kevin Jernigan, Senior Director, Oracle
12-1pm CST
Understand the Flash storage and Dynamic Tiering Solution for Traditional Database in Comparison of Exadata Flash Solution.
Speaker: Amit Das, Database Engineering Architect, Paypal

30-Jan-2014 (Thursday)

10-11am CST
Oracle Exadata Technical Deep Dive Session
Speaker: Yenugula Venkata RaviKumar, Senior Sales Consultant, Oracle India Pvt. Limited
11am-12pm CST        
Exadata X4: What’s New
Speaker: Mahesh Subramaniam, Director of Product Management, Oracle
12-1pm CST
Exadata for Oracle DBAs
Speaker: Arup Nanda, Director, Starwood Hotels & Resorts

For more information and registration, click on the link below:

http://www.ioug.org/exavirtual2014


I have registered myself for the two-day conference and hope to see you all there!


WebCenter LDAP Filters Explained!

Bex Huff - Fri, 2014-01-24 18:42

We recently had a client with some LDAP performance issues, and needed to tune how WebLogic was querying their LDAP repository. In WebLogic, the simplest way to do this is with its LDAP filters. While trying to explain how to do this, I was struck by the lack of clear documentation on what exactly these filters are and why on earth you would need them... The best documentation was in the WebCenter guide, but it was still a bit light on the details.

Firstly, all these filters use LDAP query syntax. For those familiar with SQL, LDAP query syntax looks pretty dang weird... mainly because it uses prefix (Polish) notation to construct the queries. So if you wanted all Contact objects in the LDAP repository with a Common Name that began with "Joe", your query would look like this:

    (&(objectClass=contact)(cn=Joe*))

Notice how the ampersand AND operator is in the front, and each conditional is in its own set of parentheses. Also note the * wildcard. If you wanted to grab all Group objects that had either Marketing or Sales in the name, the query would look like this:

    (&(objectClass=group)(|(cn=*Marketing*)(cn=*Sales*)))

Notice that the pipe OR operator prefixes the conditionals checking for Marketing or Sales in the group name. Of course, this would not be a great query to run frequently... substring searches are slow, and multiple substring searches are even worse!
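To make the prefix notation concrete, here is a small Python sketch (the helper functions are my own illustration, not part of WebLogic or any LDAP library) that builds the two filters above from composable pieces:

```python
def ldap_and(*terms):
    """AND in prefix notation: the & operator comes before all its operands."""
    return "(&" + "".join(terms) + ")"

def ldap_or(*terms):
    """OR in prefix notation."""
    return "(|" + "".join(terms) + ")"

def eq(attr, value):
    """A single (attribute=value) conditional; value may contain * wildcards."""
    return "(%s=%s)" % (attr, value)

# Rebuild the two example queries from above
contacts = ldap_and(eq("objectClass", "contact"), eq("cn", "Joe*"))
groups = ldap_and(eq("objectClass", "group"),
                  ldap_or(eq("cn", "*Marketing*"), eq("cn", "*Sales*")))

print(contacts)  # (&(objectClass=contact)(cn=Joe*))
print(groups)    # (&(objectClass=group)(|(cn=*Marketing*)(cn=*Sales*)))
```

Nesting works the same way at any depth, since each operator simply wraps its operands in a new pair of parentheses.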

Below are what these filters do, and why I think you'd need to change them...

All Users Filter: This is basically the initial filter to grab all "user" objects in the entire repository. LDAP stores all kinds of objects (groups, contacts, computers, domains), and this is a simple query to narrow the collection of all objects down to just the user objects. Common settings are:

    (objectclass=user)
    (&(objectCategory=person)(objectClass=user))
    (sAMAccountType=805306368)

Users From Name Filter: This is a query to find a user object based on the name of the user. This is a sub-filter based on the previous All Users Filter to grab one specific person based on the user name. You would sometimes change this based on which single sign-on system you are using: some use the common name as the official user ID, whereas other systems use the sAMAccountName. The %u token is the name being looked up. One of these two usually works:

    (&(cn=%u)(objectclass=user))
    (&(sAMAccountName=%u)(objectclass=user))
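If you ever substitute a user-supplied name into templates like these yourself, escape the LDAP filter metacharacters first (per RFC 4515) so a name like "j*e" can't change the meaning of the query. A minimal Python sketch (the helper name is my own, not a WebLogic API):

```python
def fill_user_filter(template, username):
    # Escape LDAP filter metacharacters (RFC 4515) before substituting for %u
    specials = {"\\": r"\5c", "*": r"\2a", "(": r"\28", ")": r"\29", "\0": r"\00"}
    escaped = "".join(specials.get(c, c) for c in username)
    return template.replace("%u", escaped)

print(fill_user_filter("(&(sAMAccountName=%u)(objectclass=user))", "jdoe"))
# (&(sAMAccountName=jdoe)(objectclass=user))
```

WebLogic performs the %u substitution itself; the sketch just shows what a safe substitution has to do.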

All Groups Filter: Similar to the All Users Filter, this filter narrows the list of all objects in the LDAP repository down to just the list of groups. By default, most applications just grab all group objects with this filter:

    (objectCategory=group)

However, if you have a particularly large LDAP repository, this can be a performance problem. We usually don't need all the groups defined in the repository; we just need the ones with a specific name:

    (&(objectCategory=group)(|(cn=Sales)(cn=Marketing)))

Or the ones under a specific organizational unit:

    (&(objectCategory=group)(|(ou:dn:=Sales)(ou:dn:=Marketing)))

Then the list of group objects to query based on name is much smaller and faster.

Group From Name Filter: Similar to the Users From Name Filter, this filter looks up a specific group by name (the %g token). Again, the value here usually depends on which single sign-on solution you are using, but one of these two usually works:

    (&(cn=%g)(objectclass=group))
    (&(sAMAccountName=%g)(objectclass=group))

Hopefully that clears things up a bit! If you have performance problems, your best bet is to modify the All Groups Filter and the All Users Filter to only grab the groups and users relevant to your specific app.


Categories: Fusion Middleware

How can companies prevent cyber attacks like the Dec 2013 Target credit card data theft?

Arvind Jain - Fri, 2014-01-24 18:07
1/24/2014 By: Arvind Jain

By now we all know that passionate hackers are very smart and will always have an edge over whatever known systems we can create (firewalls, IPS, etc.). Even the best SIO (Security Intelligence Operations) team cannot possibly know of each and every malware in advance, so a traditional approach of IPS or malware detection based on signatures is a Stone Age approach now.

So what could have been done at Target? I am sure many experts are pondering it, but here is my simple thinking: a combination of proactive people, processes and tools would have prevented it.


We need people for behavior analysis and analytics. The BlackPOS creators, and hackers in general, know what a firewall can do. So they timed the data transfer to normal business hours, merged it with FTP traffic and used internal dump servers in Target's own network. This is what I gathered from iSight's comment in the WSJ article today.

iSight, hired by the Secret Service and Department of Homeland Security to help with the investigation, said the bug had a "zero percent antivirus detection rate," meaning even updated security software couldn't tell it was harmful. So an endpoint security system or antivirus software would also have been ineffective at detecting the malware.

This is where you need a joint effort of systems, people and processes to detect anomalies. Something like a Cyber Threat Defense solution (like the one offered by Cisco) is a good way to detect patterns and flag them.

The hack involved several tools: a Trojan horse scanned the point-of-sale system's memory for card data, which was stored unencrypted in memory. Another tool logged when the stolen data was stashed inside Target's network. Yet another sent the stolen data to a computer outside the company. The coordination of those functions was complex and sophisticated, but could easily have been seen as an anomalous pattern.



If traffic is jammed up on a freeway, you know something is wrong ahead. For that matter, if all traffic goes to a different side than is normal for that route, you also know something is not right. To detect anomalous activity, you have to look at traffic timing, volume, direction and so on.

These are good indicators that something has happened and potentially requires immediate attention from people and processes. You could then take the traffic flow (using a tool like NetFlow) and look for anomalous traffic patterns. You would have encountered something never seen before, and that would have triggered deep packet inspection of the dump files.
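As a toy illustration of the volume side of this (my own sketch, not how NetFlow or any Cisco product actually scores traffic), even a simple z-score over per-hour byte counts flags the kind of burst described above:

```python
from statistics import mean, stdev

def flag_anomalies(hourly_bytes, threshold=2.0):
    """Return indices of hours whose volume deviates strongly from the baseline."""
    mu, sigma = mean(hourly_bytes), stdev(hourly_bytes)
    if sigma == 0:
        return []
    return [i for i, v in enumerate(hourly_bytes)
            if abs(v - mu) / sigma > threshold]

# Seven quiet hours, then a sudden large outbound transfer
traffic = [100, 110, 95, 105, 98, 102, 97, 5000]
print(flag_anomalies(traffic))  # [7]
```

Real detection systems use far richer baselines (per host, per direction, per time of day), but the principle is the same: model normal, then flag deviation.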

Typically, the malware siphoned data and stored it on the local intranet (to disguise it as internal traffic, over a temporary NetBIOS share to an internal host inside the compromised network) and then attempted to send the data to the attacker over a legitimate channel such as FTP or HTTP. Compromised data was collected in .DLL files (in this case, track data, which includes all of the information within the magnetic strip) and periodically relayed to an affected "dump" server over a temporary NetBIOS share drive. In this particular case the DLLs weren't malicious (they just contained normal data, so no system could have tracked them without insight from people or Target IT staff).

Tools like Lancope StealthWatch help you detect such anomalies. The dump server was not a host that the POS systems were required to communicate with, so when POS systems attempt to communicate with one another or with an unidentified server, a Host Lock Violation alarm is generated. Similarly, once the data started to be sent to the dump server, it could have triggered a Relationship High Traffic or potentially a Relationship New Flows alarm.

Internet Control Message Protocol (ICMP) is one of the main protocols of the Internet Protocol Suite, used by network devices, like routers, to send error messages indicating, for example, that a requested service is not available or that a host or router could not be reached. ICMP anomalies can be detected using network-monitoring tools provided by companies like Cisco or its recent acquisitions.

So you do have all the tools at your disposal; all that was needed was a good brain with common sense to correlate the series of anomalous activities, which could have been detected by monitoring tools.



 

Of course, if you do not have time for all this, or lack the tools or the in-house security expertise, Cisco Advanced Services for Managed Cyber Security is at your service. Feel free to reach out to me for recommendations.


Arvind

Rocky Mountain Oracle Users group event

Kuassi Mensah - Fri, 2014-01-24 17:16
 
Join me at #RMOUG Feb 6th 8:30 Rm 504; I'll be presenting #JavaPerf., Scalability, Availability & Security w Oracle #db12c --#OracleRac -- http://bit.ly/1cgzgy1

OTN Virtual Developers Days

Kuassi Mensah - Fri, 2014-01-24 16:51
Join me on Feb 4th @ http://t.co/nheuk9MBZd

at 10:15am: "In-Database MapReduce w SQL & Hadoop"  then 12:15pm "Best Practices 4 App Performance, Scalability and High-Availability with Oracle Database 12c"

My Latin American Tour in March

Oracle NZ - Thu, 2014-01-23 19:00

In March I will be presenting my very successful seminar "Mastering Backup and Recovery" in some countries of Latin America for the very first time. Thank you Ecuador, Panama, Chile and Brazil OUGs for inviting me to your amazing countries!


Please, use the following links for registration and also to find more information about the seminar:

  • Quito, Ecuador – March 13-15 (2014) Click Here
  • Panama City, Panama – March 17-18 (2014) Click Here
  • Porto Alegre, Brazil – March 20-21 (2014) Click Here
  • Sao Paulo, Brazil – March 24-25 (2014) Click Here
  • Santiago, Chile – March 27-28 (2014) Click Here

 

 

Note: the next 10 people in each country to register before February 15 will receive a free signed copy of my book "Oracle Database 12c Backup and Recovery Survival Guide".

 

Regards,

 

Francisco Munoz Alvarez




Copyright © OracleNZ by Francisco Munoz Alvarez [My Latin American Tour in March], All Right Reserved. 2016.
Categories: DBA Blogs

Join me at the next OTN Virtual Developer Day

Oracle NZ - Thu, 2014-01-23 18:18


 

Hi All,

On February 4, 2014 at 9:30 am PT, I will be talking at the next OTN Virtual Developer Day about Oracle VM and Oracle Database. Come and discover the answers to the following questions:

  • Does an Oracle Database perform well on a virtualized environment?
  • Which virtualization technology is more stable and allows an Oracle database to perform faster?
  • What is the performance difference between bare metal and a virtualized guest?
  • Is it safe to run a production database in a virtualized environment?

Come and join me at this fantastic event. Registration and more information here.

Regards,

 

Francisco Munoz Alvarez




Copyright © OracleNZ by Francisco Munoz Alvarez [Join me at the next OTN Virtual Developer Day], All Right Reserved. 2016.
Categories: DBA Blogs

Problems when doing a V2V or P2V of a windows server to Oracle VM

Oracle NZ - Thu, 2014-01-23 18:06

Some time ago, I received a request to migrate some Windows 2008 Servers to my Oracle VM farm, and after completing the P2V migration the newly created VM would not start. It would crash on boot with a blue screen and the error:
STOP: 0x0000007B (0xXXXXXXXX,0xXXXXXXXX,0x00000000,0x00000000)

This issue is related to the storage drivers needed by the guest VM. To solve it, you should do the following on your Windows Server before starting the P2V or V2V migration:

  1. Extract the Atapi.sys, Intelide.sys, Pciide.sys, and Pciidex.sys files from the %SystemRoot%\Driver Cache\I386\Driver.cab file, and copy them to the %SystemRoot%\System32\Drivers folder.
  2. Merge this registry file (NOTE: you will need access to My Oracle Support to be able to download it).
  3. Turn off the virtual machine, and then try to import it into Oracle VM again.

You can also find more information regarding this problem on the following Microsoft link:

http://support.microsoft.com/kb/314082

Hope this helps if you are having the same issue I had.

 

Regards,

 

Francisco Munoz Alvarez




Copyright © OracleNZ by Francisco Munoz Alvarez [Problems when doing a V2V or P2V of a windows server to Oracle VM], All Right Reserved. 2016.
Categories: DBA Blogs

Happy Birthday, Oracle Application Express!

Joel Kallman - Thu, 2014-01-23 13:40


Happy birthday, Oracle Application Express!

In January 2004, Oracle Database 10g was released.  Bundled with the Oracle Database 10g distribution was an additional disc called the Companion CD, which included Oracle HTTP Server, mod_plsql and this new tool called Oracle HTML DB 1.5.  This was the date of the first officially distributed and supported release of the software that has grown into today's Oracle Application Express.

To recap the years:

  • 2004  HTML DB 1.5 - Initial release
  • 2004  HTML DB 1.6 - User interface Themes
  • 2005  HTML DB 2.0 - SQL Workshop
  • 2006  Application Express 2.1 - Embedded with Oracle 10gR2 Express Edition (XE)
  • 2006  Application Express 2.2 - Packaged Applications
  • 2007  Application Express 3.0 - Flash Charts, PDF printing, Microsoft Access migration
  • 2008  Application Express 3.1 - Interactive Reports, runtime-only installation
  • 2009  Application Express 3.2 - Oracle Forms to Oracle Application Express conversion
  • 2010  Application Express 4.0 - Plug-ins, Dynamic Actions, Team Development
  • 2011  Application Express 4.1 - Data Upload, improved Tabular Forms, Error Handling
  • 2012  Application Express 4.2 - Mobile support, mobile and responsive themes, RESTful Web Services

Oracle Application Express has been delivered with every version of the Oracle Database since 2004.  Beginning with Oracle Database 11gR1, Oracle Application Express moved to the database distribution and was treated as a "standard" database component.  Beginning with Oracle Database 12c, Oracle Application Express is installed by default in every Oracle database.  Oracle Application Express is the development framework for both Oracle Audit Vault and Database Firewall and the 12c Multitenant Self-Service Provisioning applications.  apex.oracle.com, the customer evaluation instance of Oracle Application Express, keeps chugging along - with an average of 1,000 new workspace requests per week.

In 2004 / 2005, customers would "dip their toe in the water" with HTML DB.  Today, Oracle Application Express is an approved development framework in countless large enterprises, managing literally hundreds of applications on a single Oracle Database instance.

To say that Oracle Application Express has matured over the past 10 years can be a bit misleading.  Some perceive maturing as "getting old".  I would much rather characterize this as adapting and evolving...and growing.  The story with Oracle Application Express is far from over.  Not only are there vast improvements which need to be made in the framework itself (especially with respect to developer productivity and enterprise deployment), but we need to improve in many other ways - documentation, examples, videos, communication, usability, and a plethora of excellent customer-provided enhancement requests.  The industry is constantly changing and evolving as well, and Oracle Application Express must adapt and evolve and, in some respects, lead.

For now, though, let me just offer a simple "Happy birthday!"

OpenSCAP distributed with Oracle VM Server for x86

Wim Coekaerts - Sun, 2014-01-19 06:09
We recently released Oracle VM Server for x86 3.2.7. For more information you can go here. In addition, we also recently released Oracle Linux 6.5. Find the press release here and the link to the release notes here.

You will notice that for Oracle Linux we have updated the version of OpenSCAP to use the NIST SCAP 1.2 specification.

We have also decided to distribute OpenSCAP with Oracle VM Server for x86, so you will be able to use the same utility for security compliance checks that you may use with Oracle Linux and Oracle Solaris. Initially, the OpenSCAP package we are distributing with Oracle VM Server for x86 is available on the Oracle Public Yum Server, so you may start by using oscap(8), the OpenSCAP command line tool, after you've installed the openscap-utils RPM in your Dom0 test environment. If you are working on the technical security controls required by your organization for the approval to operate Oracle VM Server for x86, then you should understand that OpenSCAP is an effective tool for demonstrating security compliance to your authorizing official. However, you should carefully examine your organization's SCAP content and implementation details such as the use of OVAL for compliance checks.

We typically recommend that you do not directly execute additional utilities within the Oracle VM Server management domain (i.e. the Dom0 domain), but checking security compliance requires careful limited access by your authorized administrators to produce the reports. The Oracle VM Security Guide for Release 3 explains the philosophy of protection for the installation of the Oracle VM Server using a small footprint:

"Oracle VM Server runs a lightweight, optimized version of Oracle Linux. It is based upon an updated version of the Xen hypervisor technology and includes Oracle VM Agent. The installation of Oracle VM Server in itself is secure: it has no unused packages or applications and no services listening on any ports except for those required for the operation of the Oracle VM environment."

Please note that you should report any potential security vulnerabilities in Oracle products following the instructions found here.

We posted some helpful details about Oracle Linux errata and CVE information this time last year, and you may also review the notifications of Oracle VM errata here. For the examples we are reviewing now, the use of OVAL checks is part of the traditional way you would show that your servers are all compliant (locked-down or hardened) with the relevant security settings in your checklists that reference the product security guides.

The Oracle Software Security Assurance Secure Configuration Initiative has established Oracle product security goals for both Secure Configuration and Security Guides. We have built in the security features with Oracle VM Server for x86 and you should expect that the default installation follows the software security assurance guidelines. Using OpenSCAP for security compliance checks may help you to show that the Oracle VM Server for x86 configuration is up to date with the latest details documented in the security guides for operating systems and server virtualization.

A standardized approach to security compliance is a goal that many organizations are working toward; it includes a broad set of security controls, typically found within a complete Risk Management Framework such as the NIST RMF and other standards from the international IT security community. When you begin to use OpenSCAP you will find that the standard SCAP content contains product-specific technical security controls that are expected to be unique and to have version dependencies as well. You will notice that the standard SCAP content used with OpenSCAP on Oracle VM Server for x86 can produce valid security compliance reports, but you must still understand the technical nuances of measuring compliance; each test shows one of the following results:

    True
    False  
    Error  
    Unknown  
    Not Applicable  
    Not Evaluated

Advantages of using a standardized approach for security compliance include considerations of "what is measured" and "how it is measured" to improve the precision, accuracy and ultimate effectiveness required to mitigate risks. The initial results produced using OpenSCAP for security compliance checks must be further examined to truly understand the meaning of 'true' or 'false', so that you can demonstrate the rationale for applying any fixes to remediate a verifiable problem. The effectiveness of OpenSCAP depends on a thorough understanding of all the technical details at the early stages of your testing, so you will benefit from complete coverage that may be repeated for all of your production Oracle VM Servers.

Automating system administration activities is a fundamental objective for on-premise and cloud computing architectures, and we are working to standardize as many of the enterprise infrastructure components as possible to produce the most cost-effective solutions using Oracle VM Server. The security compliance requirements of many organizations have increased reporting cycles that must be continuously monitored. With careful planning, OpenSCAP may be an effective tool for reporting your organization's IT security controls, but we want to review some of the basic concepts that you should be aware of.

We noted earlier that Dom0 is a special-purpose management domain based on Xen and built with Oracle Linux. The Oracle Linux and Oracle Solaris configurations share a common set of technical security controls that are useful to measure consistently with Oracle VM Server. However, the results you analyse require historic perspective and current insight to determine the relevance and criticality that are important to convey to the decision makers or authorizing officials in your organization.

One random example of a security compliance check that illustrates a number of considerations is related to CWE-264: Permissions, Privileges, and Access Controls. More specifically, as an exercise, we want to drill down to both CWE-275: Permission Issues and CWE-426: Untrusted Search Path potential problems.

To demonstrate how OpenSCAP can be used to report the results of a check related to CWE-275 and CWE-426 we can start by viewing the Red Hat 5 STIG Benchmark, Version 1, Release 4 from DISA:

[root@ovm327 ~]# wget http://iase.disa.mil/stigs/os/unix/u_redhat_5_v1r4_stig_benchmark.zip

For brevity, we have extracted the OVAL compliance item for 'STIG ID: GEN000960', which we show using the DISA STIG Viewer:

If you also want to test this, here is the raw XML.

This looks simple enough, so let's see the result using OpenSCAP on Oracle VM Server for x86:

[root@ovm327 ~]# oscap oval eval GEN000960.xml
Definition oval:mil.disa.fso.rhel:def:77: true
Evaluation done.
[root@ovm327 ~]#

We think we understand the result but let's view this differently just to be sure:

[root@ovm327 ~]# ls -ldL `echo $PATH | tr ':' '\n'`
ls: /root/bin: No such file or directory
drwxr-xr-x 2 root root  4096 Jan  2 12:45 /bin
drwxr-xr-x 2 root root  4096 Jan  2 12:45 /sbin
drwxr-xr-x 3 root root 16384 Jan  2 12:45 /usr/bin
drwxr-xr-x 2 root root  4096 Feb 16  2010 /usr/local/bin
drwxr-xr-x 2 root root  4096 Feb 16  2010 /usr/local/sbin
drwxr-xr-x 2 root root 12288 Jan  2 12:45 /usr/sbin
[root@ovm327 ~]#

This looks good to us, but let's create a '/root/bin' directory that intentionally violates the compliance check to see what happens:

[root@ovm327 ~]# mkdir -m 0777 /root/bin
[root@ovm327 ~]# ls -ldL `echo $PATH | tr ':' '\n'`
drwxr-xr-x 2 root root  4096 Jan  2 12:45 /bin
drwxrwxrwx 2 root root  4096 Jan  2 13:55 /root/bin
drwxr-xr-x 2 root root  4096 Jan  2 12:45 /sbin
drwxr-xr-x 3 root root 16384 Jan  2 12:45 /usr/bin
drwxr-xr-x 2 root root  4096 Feb 16  2010 /usr/local/bin
drwxr-xr-x 2 root root  4096 Feb 16  2010 /usr/local/sbin
drwxr-xr-x 2 root root 12288 Jan  2 12:45 /usr/sbin
[root@ovm327 ~]# oscap oval eval GEN000960.xml
Definition oval:mil.disa.fso.rhel:def:77: false
Evaluation done.
[root@ovm327 ~]#

We have reasonably good confirmation that the OVAL compliance check works the way we expect. However, if we look at the entire set of permissions that enforce the discretionary access control policy, we then realize that there are also permissions on the '/root' directory that prevent the write operations by 'others' in the '/root/bin' directory from succeeding:

[root@ovm327 ~]# ls -ldL /root /root/bin
drwxr-x--- 4 root root 4096 Jan  2 13:55 /root
drwxrwxrwx 2 root root 4096 Jan  2 13:55 /root/bin
[root@ovm327 ~]#

We are not suggesting that the mode '0777' permissions on '/root/bin' are acceptable just because we have safer permissions on the '/root' directory; the example shows that the OVAL check does not completely test the security controls exactly how the kernel enforces the permissions. We can justifiably state that the result of the OVAL security compliance check against the '0777' permissions on the '/root/bin' directory is a 'condition negative' with a 'test outcome negative' (i.e. a true negative), but we should also continue to note our other observations related to the access control enforcement.
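
As a side note, the reason the exposure is partly contained is that the kernel requires search (execute) permission on every ancestor directory before the permissions on the target itself even come into play. The following is a minimal sketch of our own (not part of OpenSCAP or the STIG content) that walks the ancestors of a directory and reports whether the 'other' class could traverse to it at all:

```shell
#!/bin/sh
# other_can_traverse DIR
# Walks every ancestor of DIR and reports whether the "other" class has
# the execute (search) bit needed to reach DIR at all.  Sketch only: it
# ignores ACLs, group membership, and any mandatory access controls.
other_can_traverse() {
    d=$(cd "$1" && pwd) || return 2
    while [ "$d" != "/" ]; do
        d=$(dirname "$d")
        perms=$(ls -ldL "$d" | cut -c1-10)
        case "$perms" in
            *[xt]) : ;;   # other-execute (or sticky+execute) set; keep walking
            *) echo "blocked at $d ($perms)"; return 1 ;;
        esac
    done
    echo "reachable by others"
}
# Example: other_can_traverse /root/bin
```

Run against '/root/bin' in the state above, it should report being blocked at '/root', which is exactly the containment the OVAL check does not model.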

Before proceeding, we will clean up the problem we just temporarily created on our test server:

[root@ovm327 ~]# chmod 0700 /root/bin
[root@ovm327 ~]# ls -ldL /root /root/bin
drwxr-x--- 4 root root 4096 Jan  2 13:55 /root
drwx------ 2 root root 4096 Jan  2 13:55 /root/bin
[root@ovm327 ~]# oscap oval eval GEN000960.xml
Definition oval:mil.disa.fso.rhel:def:77: true
Evaluation done.
[root@ovm327 ~]#

Hopefully you find this random security compliance check interesting, and somewhat enlightening about what OpenSCAP can help you with. To continue, we decided to demonstrate the same security control with a slightly different check:

[root@ovm327 ~]# wget
 https://git.fedorahosted.org/cgit/openscap.git/plain/dist/fedora/scap-fedora14-oval.xml

To simplify viewing the relevant portion of the OVAL compliance entry, we extracted it as we did with the DISA STIG item. If you also want to test this, here is the raw XML

Now we can show similar results using a slightly different implementation of the compliance check:

[root@ovm327 ~]# oscap oval eval fedora-accounts_root_path_dirs_no_write.xml
Definition oval:org.open-scap.f14:def:200855: true
Evaluation done.
[root@ovm327 ~]# chmod 0770 /root/bin
[root@ovm327 ~]# oscap oval eval fedora-accounts_root_path_dirs_no_write.xml
Definition oval:org.open-scap.f14:def:200855: false
Evaluation done.
[root@ovm327 ~]#

But we can also see that it is indeed a different check, because it includes a test for group write permissions while 'STIG ID: GEN000960' does not:

[root@ovm327 ~]# chmod 0770 /root/bin
[root@ovm327 ~]# oscap oval eval GEN000960.xml
Definition oval:mil.disa.fso.rhel:def:77: true
Evaluation done.
[root@ovm327 ~]#

Again, let's fix the problem we temporarily created on our test server:

[root@ovm327 ~]# chmod 0700 /root/bin
[root@ovm327 ~]#

You should also review the CIS Oracle Solaris 11.1 Benchmark v1.0.0 and the CIS Red Hat Enterprise Linux 6 Benchmark v1.2.0 to see that they both have the same entry to 'Ensure root PATH Integrity (Scored)' that has an audit section showing script commands that step through multiple potential security compliance issues to check. It is a common practice to combine similar checks in a group, but you may need to parse out the results to obtain a discrete value for a singular check.
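
As a rough illustration of what such an audit section does, here is a simplified sketch of our own (not the actual CIS script) that steps through a PATH-style list and prints one warning per discrete issue, making the results easy to parse out:

```shell
#!/bin/sh
# check_path_dirs PATHLIST
# Steps through a colon-separated PATH list and prints one WARN line per
# issue, in the spirit of the CIS "Ensure root PATH Integrity" audit.
# Sketch only: directory names containing spaces are not handled.
check_path_dirs() {
    echo "$1" | tr ':' '\n' | while read -r dir; do
        [ -z "$dir" ] && { echo "WARN: empty PATH entry"; continue; }
        [ "$dir" = "." ] && { echo "WARN: PATH contains ."; continue; }
        if [ -d "$dir" ]; then
            perms=$(ls -ldL "$dir" | cut -c1-10)
            case "$perms" in ?????w*) echo "WARN: group-writable $dir" ;; esac
            case "$perms" in ????????w?) echo "WARN: other-writable $dir" ;; esac
        else
            echo "WARN: missing directory $dir"
        fi
    done
}
# Example: check_path_dirs "$PATH"
```

Each WARN line corresponds to one singular check, which is the kind of parsing the benchmark's grouped audit output requires.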

As an additional consideration, let's shift our focus away from the differences within OVAL compliance definitions to the different operating systems that the SCAP content was originally written for. For this part of our testing we start up an Oracle Solaris 11.1 x86 instance running on a VM to demonstrate the OpenSCAP tests with the same OVAL compliance checks:

root@sol11:/root# pkg install security/compliance/openscap

root@sol11:/root# ls -ldL `echo $PATH | tr ':' '\n'`
drwxr-xr-x   4 root     bin         1126 Jan  2 14:05 /usr/bin
drwxr-xr-x   4 root     bin          445 Jan  2 13:54 /usr/sbin
root@sol11:/root# oscap oval eval GEN000960.xml
Definition oval:mil.disa.fso.rhel:def:77: true
Evaluation done.
root@sol11:/root# oscap oval eval fedora-accounts_root_path_dirs_no_write.xml
Definition oval:org.open-scap.f14:def:200855: true
Evaluation done.
root@sol11:/root# export PATH=$PATH:/tmp
root@sol11:/root# ls -ldL `echo $PATH | tr ':' '\n'`
drwxrwxrwt   5 root     sys          432 Jan  2 14:09 /tmp
drwxr-xr-x   4 root     bin         1126 Jan  2 14:05 /usr/bin
drwxr-xr-x   4 root     bin          445 Jan  2 13:54 /usr/sbin
root@sol11:/root# oscap oval eval GEN000960.xml
Definition oval:mil.disa.fso.rhel:def:77: false
Evaluation done.
root@sol11:/root# oscap oval eval fedora-accounts_root_path_dirs_no_write.xml
Definition oval:org.open-scap.f14:def:200855: false
Evaluation done.
root@sol11:/root#

Now let's repeat the same OpenSCAP checks with a non-root user account:

admin@sol11:~$ ls -ldL `echo $PATH | tr ':' '\n'`
drwxr-xr-x   4 root     bin         1126 Jan  2 14:05 /usr/bin
drwxr-xr-x   4 root     bin          445 Jan  2 13:54 /usr/sbin
admin@sol11:~$ oscap oval eval GEN000960.xml
Definition oval:mil.disa.fso.rhel:def:77: true
Evaluation done.
admin@sol11:~$ oscap oval eval fedora-accounts_root_path_dirs_no_write.xml
Definition oval:org.open-scap.f14:def:200855: true
Evaluation done.
admin@sol11:~$ export PATH=$PATH:/tmp
admin@sol11:~$ ls -ldL `echo $PATH | tr ':' '\n'`
drwxrwxrwt   5 root     sys          432 Jan  2 14:09 /tmp
drwxr-xr-x   4 root     bin         1126 Jan  2 14:05 /usr/bin
drwxr-xr-x   4 root     bin          445 Jan  2 13:54 /usr/sbin
admin@sol11:~$ oscap oval eval GEN000960.xml
Definition oval:mil.disa.fso.rhel:def:77: false
Evaluation done.
admin@sol11:~$ oscap oval eval fedora-accounts_root_path_dirs_no_write.xml
Definition oval:org.open-scap.f14:def:200855: false
Evaluation done.
admin@sol11:~$

We have discovered some additional interesting considerations when reviewing the OpenSCAP results executed on Oracle Solaris:

  • The OVAL content appears to also work on Oracle Solaris 11.1
  • The OVAL check is on the current PATH environment variable
  • The OVAL check is for the current user shell or cron(1M) process running oscap(8)
  • The OVAL check does not look for scripts that set the PATH for application run time environments
  • The OVAL check does not account for more sophisticated access control technology
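
The second and third points are worth emphasizing: the check only sees the PATH of the process that runs oscap(8), so a PATH set in a system crontab or an application start script is only tested if you evaluate under that environment. As a hypothetical sketch (assuming a Linux-style /etc/crontab with a PATH= line), you could pull out the PATH that cron jobs inherit and scan against it:

```shell
#!/bin/sh
# extract_cron_path CRONTAB_FILE
# Returns the PATH= value from a crontab-style file, so a compliance scan
# can be evaluated against the environment that cron jobs actually inherit.
# Sketch only: assumes a Linux-style system crontab with a PATH= line.
extract_cron_path() {
    grep '^PATH=' "$1" | head -1 | cut -d= -f2-
}
# Hypothetical usage:
#   env PATH="$(extract_cron_path /etc/crontab)" oscap oval eval GEN000960.xml
```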

To further our understanding of the OVAL content, we decided to run the jOVAL tool, which is not included with Oracle Solaris:

admin@sol11:~$ echo $PATH
/usr/bin:/usr/sbin:/tmp
admin@sol11:~$ /usr/share/jOVAL/jovaldi -l 1 -m -o GEN000960.xml

----------------------------------------------------
jOVAL Definition Interpreter
Version: 5.10.1.2
Build date: Thursday, January  2, 2014 04:46:39 PM PST
Copyright (c) 2011-2013 - jOVAL.org

Plugin: Default Plugin
Version: 5.10.1.2
Copyright (C) 2011-2013 - jOVAL.org
----------------------------------------------------

Start Time: Fri Jan 02 16:50:05 2014

 ** parsing /home/admin/GEN000960.xml
     - validating xml schema.
 ** checking schema version
     - Schema version - 5.4
 ** skipping Schematron validation
 ** creating a new OVAL System Characteristics file.
 ** gathering data for the OVAL definitions.
      Collecting object:  FINISHED                      
 ** saving data model to system-characteristics.xml.
 ** skipping Schematron validation
 ** running the OVAL Definition analysis.
      Analyzing definition:  FINISHED                    
 ** OVAL definition results.

    OVAL Id                                 Result
    -------------------------------------------------------
    oval:mil.disa.fso.rhel:def:77           true
    -------------------------------------------------------


 ** finished evaluating OVAL definitions.

 ** saving OVAL results to results.xml.
 ** skipping Schematron validation
 ** running OVAL Results xsl: /usr/share/jOVAL/xml/results_to_html.xsl.

----------------------------------------------------
admin@sol11:~$ echo $PATH
/usr/bin:/usr/sbin:/tmp
admin@sol11:~$ /usr/share/jOVAL/jovaldi -l 1 -m
  -o fedora-accounts_root_path_dirs_no_write.xml

----------------------------------------------------
jOVAL Definition Interpreter
Version: 5.10.1.2
Build date: Thursday, January  2, 2014 04:46:39 PM PST
Copyright (c) 2011-2013 - jOVAL.org

Plugin: Default Plugin
Version: 5.10.1.2
Copyright (C) 2011-2013 - jOVAL.org
----------------------------------------------------

Start Time: Fri Jan 02 16:50:30 2014

 ** parsing /home/admin/fedora-accounts_root_path_dirs_no_write.xml
     - validating xml schema.
 ** checking schema version
     - Schema version - 5.5
 ** skipping Schematron validation
 ** creating a new OVAL System Characteristics file.
 ** gathering data for the OVAL definitions.
      Collecting object:  FINISHED                         
 ** saving data model to system-characteristics.xml.
 ** skipping Schematron validation
 ** running the OVAL Definition analysis.
      Analyzing definition:  FINISHED                        
 ** OVAL definition results.

    OVAL Id                                 Result
    -------------------------------------------------------
    oval:org.open-scap.f14:def:200855       false
    -------------------------------------------------------


 ** finished evaluating OVAL definitions.

 ** saving OVAL results to results.xml.
 ** skipping Schematron validation
 ** running OVAL Results xsl: /usr/share/jOVAL/xml/results_to_html.xsl.

----------------------------------------------------
admin@sol11:~$

For now, this concludes our initial investigation of OpenSCAP, showing its potential effectiveness on Oracle VM Server for x86 with careful consideration of the results you may observe with your own SCAP content. You will also want to understand the XCCDF security checklists, which are most often used to perform more complete security compliance checks with OpenSCAP, in the same way you can check for STIG compliance:

# oscap xccdf eval --profile stig-rhel6-server --report report.html 
   --results results.xml --cpe ssg-rhel6-cpe-dictionary.xml ssg-rhel6-xccdf.xml
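
If you schedule checks like this, it helps to act on the oscap exit status rather than parse its output. To the best of our understanding, 'oscap xccdf eval' exits 0 when all rules pass, 2 when at least one rule fails, and another non-zero code on processing errors; a small wrapper sketch (the profile and file names are placeholders) might look like:

```shell
#!/bin/sh
# run_scan PROFILE XCCDF_FILE
# Wraps an oscap XCCDF evaluation and maps its exit status to a log line
# a scheduler can act on.  Exit-code meanings per the oscap manual page:
# 0 = all rules passed, 2 = at least one rule failed, otherwise an error.
run_scan() {
    oscap xccdf eval --profile "$1" --results results.xml "$2"
    case $? in
        0) echo "scan clean" ;;
        2) echo "scan found failing rules" ;;
        *) echo "scan error" ;;
    esac
}
# Example: run_scan stig-rhel6-server ssg-rhel6-xccdf.xml
```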

We hope that the random security compliance example we chose helps to illustrate that using OpenSCAP is not a substitute for proficient expertise in analyzing IT security controls, but it does allow for repeatable checks on your production Oracle VM Servers after you have completed sufficient testing. Please contact your Oracle representatives if you have any questions, or place service requests with Oracle Support when you encounter problems.

Finally, please remember that you should report any potential security vulnerabilities in Oracle products following the instructions found here.

Clear Day for Cloud Adapters

Antony Reynolds - Fri, 2014-01-17 16:07
salesforce.com Adapter Released

Yesterday Oracle released their cloud adapter for salesforce.com (SFDC) so I thought I would talk a little about why you might want it.  I had previously integrated with SFDC using BPEL and the SFDC web interface, so in this post I will explore why the adapter might be a better approach.

Why?

So if I can interface to SFDC without the adapter why would I spend money on the adapter?  There are a number of reasons and in this post I will just explain the following 3 benefits:

  • Auto-Login
  • Non-Polymorphic Operations
  • Operation Wizards

Let's take each one in turn.

Auto-Login

The first obvious benefit is how you connect and make calls to SFDC.  To perform an operation such as query an account or update an address the SFDC interface requires you to do the following:

  • Invoke a login method, which returns a session ID to be placed in the header on all future calls, and the actual endpoint to call.
  • Invoke the actual operation using the provided endpoint and passing the session ID provided.
  • When finished with calls invoke the logout operation.

Now these are not unreasonable demands.  The problem comes when you try to implement this interface.

Before calling the login method you need the credentials.  These need to be obtained from somewhere, I set them as BPEL preferences but there are other ways to store them.  Calling the login method is not a problem but you need to be careful in how you make subsequent calls.

First all subsequent calls must override the endpoint address with the one returned from the login operation.  Secondly the provided session ID must be placed into a custom SOAP header.  So you have to copy the session ID into a custom SOAP Header and provide that header to the invoke operation.  You also have to override the endpointURI property in the invoke with provided endpoint.

Finally when you have finished performing operations you have to logout.

In addition to the number of steps you have to code, there is the problem of knowing when to log out.  The simplest approach is, for each operation you wish to perform, to execute the login operation, followed by the actual working operation, and then a logout operation.  The trouble with this is that you are now making three calls every time you want to perform an operation against SFDC, which adds latency to processing the request.

The adapter hides all this from you, hiding the login/logout operations and allowing connections to be re-used, reducing the number of logins required.  The adapter makes the SFDC call look like a call to any other web service, while using a session cache to avoid repeated logins.

Non-Polymorphic Operations

The standard operations in the SFDC interface provide a base object return type, the sObject.  This could be an Account or a Campaign, for example, but the operations always return the base sObject type, leaving it to the client to make sure they process the correct return type.  Similarly, requests also use polymorphic data types.  In BPEL this often requires that the returned sObject is copied to a variable of a more specific type to simplify processing of the data.  If you don’t do this then you can still query fields within the specific object, but the SOA tooling cannot check it for you.

The adapter identifies the type of request and response and also helps build the request for you with bind parameters.  This means that you are able to build your process to actually work with the real data structures, not the abstract ones.  This is another big benefit in my view!

Operation Wizards

The SFDC API is very powerful.  Translation: the SFDC API is very complex.  With great power comes complexity, to paraphrase Uncle Ben (amongst others).  The adapter groups the operations into logical collections and then provides additional help in selecting from within those collections and supplying the correct parameters for them.

Installing

Installation takes place in two parts.  The design time is deployed into JDeveloper and the run time is deployed into the SOA Suite Oracle Home.  The adapter is available for download here and the installation instructions and documentation are here.  Note that you will need OPatch to install the adapter.  OPatch can be downloaded from Oracle Support as Patch 6880880.  Don’t use the OPatch that ships with JDeveloper and SOA Suite; if you do you may see an error like:

Uncaught exception
oracle.classloader.util.AnnotatedNoClassDefFoundError:
      Missing class: oracle.tip.tools.ide.adapters.cloud.wizard.CloudAdapterWizard

You will want OPatch 11.1.0.x.x.  Make sure you download the correct 6880880: it must be for 11.1.x, as that is the version of JDeveloper and SOA Suite, and it must be for the platform you are running on.

Apart from getting the right OPatch the installation is very straight forward.

So don’t be afraid of SFDC integration any more; cloud integration is clear with the SFDC adapter.


Developing Mobile Web Applications with Oracle Application Express 4.2

Marc Sewtz - Fri, 2014-01-17 15:33
If you’re interested in developing Mobile Web Apps on the Oracle Database – and if you’re following my Blog, chances are that you are – then here’s a great opportunity to get up to speed on the latest trends and learn how you can quickly and easily develop your own mobile web apps and extend your existing APEX apps for mobile use: attend my upcoming free ODTUG Webinar on February 20th:


Here’s what I will be covering: 

Smartphones and tablets are now commonly used throughout the enterprise. Thanks to easy to use mobile apps, ready access to the Internet, and widespread use of web- and cloud-based applications, mobile devices are a logical choice for many employees. Businesses need to respond to these trends by modernizing their public facing websites for mobile use and providing employees with access to in-house applications that support a variety of different devices and screen sizes. Solely building for desktop use and standardizing on IE only won’t cut it anymore. 



Oracle Application Express (APEX) provides you with powerful tools to quickly and efficiently build mobile web apps and to easily extend your existing desktop applications for mobile use. Regardless of whether these applications were built with APEX originally or use other technologies—as long as they’re running on top of an Oracle Database, APEX will enable you to quickly build mobile apps that provide access to your data and interact with your business processes. 



During this session, we’ll discuss mobile development trends and responsive web design, walk you through the basics of the jQuery Mobile framework, and provide you with an overview of the mobile development capabilities built into the current release of Oracle Application Express 4.2. We will include several live demos to get you started, so have your phone or tablet ready to try out our sample apps live while they are being built.

Additional information on ODTUG Webinars and a list of other upcoming Webinars can be found here:


REST, SSE or WebSockets on WebLogic 10.3.6

Edwin Biemond - Wed, 2014-01-15 14:10
WebLogic 10.3.6 comes with Jersey 1.9 and has no support for Server-Sent Events or WebSockets. But for one of our projects we are making an HTML5 / AngularJS application, which needs to invoke some RESTful services, and we also want to make use of SSE or WebSockets. Of course we could use WebLogic 12.1.2, but we already have an OSB / SOA Suite WebLogic 10.3.6 environment. So when you want to pimp your

What is behind these recent acquisitions by Palo Alto Networks and FireEye ? Domain Talent and Virtualization

Arvind Jain - Tue, 2014-01-14 20:17

Security is a red-hot, fascinating sector right now. Acquisitions are happening left and right, and I have stopped trying to do a financial valuation; there is something else happening. When money is cheap, I see these acquisitions as a race to get ahead with talent and new technology. But the payoff will come for those who are first to reach economies of scale.

The two outstanding reasons for these acquisitions, in my opinion, are virtualization in security and talent with domain expertise. Many security startups are focusing on the use of in-situ virtual sandboxes to investigate suspicious files and detect malware before letting them loose on the main network.

Blue Coat Systems acquired Norman Shark, which had developed a sandboxing technology platform for malware analysis.  Palo Alto Networks acquired Morta Security (CEO Raj Shah), a Silicon Valley-based security startup, to bolster its cloud-based WildFire malware inspection technology; the aim was to get NSA talent as well as the virtualization technology. A week earlier, FireEye acquired Mandiant, which provides endpoint security software and is well known for its threat intelligence research and incident response services.

So what next ….. I am waiting to see some big "Big Data plus security" related acquisitions, and they are coming sooner than you expect ….


Safe Surfing …
