
Feed aggregator

FBI warns organizations of dangerous malware

Chris Foot - Thu, 2014-12-18 14:40

Transcript

Hi, welcome to RDX! The Federal Bureau of Investigation recently sent a five-page document to businesses, warning them of a particularly destructive type of malware. It is believed the program was the same one used to infiltrate Sony's databases.

The FBI report detailed the malware's capabilities. Apparently, the software overwrites all information on computer hard drives, including the master boot record. This could prevent servers from loading critical software, such as operating systems or enterprise applications.

Database data can be lost or corrupted for many reasons. Regardless of whether the data loss was due to a hardware failure, human error or the deliberate act of a cybercriminal, database backups ensure that critical data can be quickly restored. RDX's backup and recovery experts are able to design well-thought-out strategies that help organizations protect their databases from any type of unfortunate event.

Thanks for watching!

The post FBI warns organizations of dangerous malware appeared first on Remote DBA Experts.

Oracle 12.1.0.2 Bundle Patching

Jason Arneil - Thu, 2014-12-18 09:37

I’ve spent a few days playing with patching 12.1.0.2 with the so-called “Database Patch for Engineered Systems and Database In-Memory”. Let’s skip over why these not necessarily related feature sets should be bundled together into what is effectively a Bundle Patch.

First I was testing going from 12.1.0.2.1 to BP2 or 12.1.0.2.2. Then as soon as I’d done that of course BP3 was released.

So this is our starting position with BP1:

GI HOME:

[oracle@rac2 ~]$ /u01/app/12.1.0/grid_1/OPatch/opatch lspatches
19392604;OCW PATCH SET UPDATE : 12.1.0.2.1 (19392604)
19392590;ACFS Patch Set Update : 12.1.0.2.1 (19392590)
19189240;DATABASE BUNDLE PATCH : 12.1.0.2.1 (19189240)

DB Home:

[oracle@rac2 ~]$ /u01/app/oracle/product/12.1.0.2/db_1/OPatch/opatch lspatches
19392604;OCW PATCH SET UPDATE : 12.1.0.2.1 (19392604)
19189240;DATABASE BUNDLE PATCH : 12.1.0.2.1 (19189240)

Simple enough, right? BP1 and the individual patch components within BP1 give you 12.1.0.2.1. Even I can follow this.

Let’s try to apply BP2 to the above. We will use opatchauto for this, and to begin with we will run an analyze:

[root@rac2 ~]# /u01/app/12.1.0/grid_1/OPatch/opatchauto apply -analyze /tmp/BP2/19774304 -ocmrf /tmp/ocm.rsp 
OPatch Automation Tool
Copyright (c) 2014, Oracle Corporation.  All rights reserved.

OPatchauto version : 12.1.0.1.5
OUI version        : 12.1.0.2.0
Running from       : /u01/app/12.1.0/grid_1

opatchauto log file: /u01/app/12.1.0/grid_1/cfgtoollogs/opatchauto/19774304/opatch_gi_2014-12-18_13-35-17_analyze.log

NOTE: opatchauto is running in ANALYZE mode. There will be no change to your system.

Parameter Validation: Successful

Grid Infrastructure home:
/u01/app/12.1.0/grid_1
RAC home(s):
/u01/app/oracle/product/12.1.0.2/db_1

Configuration Validation: Successful

Patch Location: /tmp/BP2/19774304
Grid Infrastructure Patch(es): 19392590 19392604 19649591 
RAC Patch(es): 19392604 19649591 

Patch Validation: Successful

Analyzing patch(es) on "/u01/app/oracle/product/12.1.0.2/db_1" ...
Patch "/tmp/BP2/19774304/19392604" analyzed on "/u01/app/oracle/product/12.1.0.2/db_1" with warning for apply.
Patch "/tmp/BP2/19774304/19649591" analyzed on "/u01/app/oracle/product/12.1.0.2/db_1" with warning for apply.

Analyzing patch(es) on "/u01/app/12.1.0/grid_1" ...
Patch "/tmp/BP2/19774304/19392590" analyzed on "/u01/app/12.1.0/grid_1" with warning for apply.
Patch "/tmp/BP2/19774304/19392604" analyzed on "/u01/app/12.1.0/grid_1" with warning for apply.
Patch "/tmp/BP2/19774304/19649591" analyzed on "/u01/app/12.1.0/grid_1" with warning for apply.

SQL changes, if any, are analyzed successfully on the following database(s): TESTRAC

Apply Summary:

opatchauto ran into some warnings during analyze (Please see log file for details):
GI Home: /u01/app/12.1.0/grid_1: 19392590, 19392604, 19649591
RAC Home: /u01/app/oracle/product/12.1.0.2/db_1: 19392604, 19649591

opatchauto completed with warnings.

Well, that does not look promising. I have no “one-off” patches in this home to cause a conflict, so it should be simple BP1->BP2 patching without any issues.

Digging into the logs we find the following:

.
.
.
[18-Dec-2014 13:37:08]       Verifying environment and performing prerequisite checks...
[18-Dec-2014 13:37:09]       Patches to apply -> [ 19392590 19392604 19649591  ]
[18-Dec-2014 13:37:09]       Identical patches to filter -> [ 19392590 19392604  ]
[18-Dec-2014 13:37:09]       The following patches are identical and are skipped:
[18-Dec-2014 13:37:09]       [ 19392590 19392604  ]
.
.

Essentially, out of the three patches in the home at BP1, only the Database Bundle Patch 19189240 is superseded by BP2. Maybe this annoys me more than it should. I like my patches applied by BP2 to end in 2. I also don’t like the fact that the analyze throws a warning about this.

Let’s patch:

[root@rac2 ~]# /u01/app/12.1.0/grid_1/OPatch/opatchauto apply /tmp/BP2/19774304 -ocmrf /tmp/ocm.rsp 
OPatch Automation Tool
Copyright (c) 2014, Oracle Corporation.  All rights reserved.

OPatchauto version : 12.1.0.1.5
OUI version        : 12.1.0.2.0
Running from       : /u01/app/12.1.0/grid_1

opatchauto log file: /u01/app/12.1.0/grid_1/cfgtoollogs/opatchauto/19774304/opatch_gi_2014-12-18_13-54-03_deploy.log

Parameter Validation: Successful

Grid Infrastructure home:
/u01/app/12.1.0/grid_1
RAC home(s):
/u01/app/oracle/product/12.1.0.2/db_1

Configuration Validation: Successful

Patch Location: /tmp/BP2/19774304
Grid Infrastructure Patch(es): 19392590 19392604 19649591 
RAC Patch(es): 19392604 19649591 

Patch Validation: Successful

Stopping RAC (/u01/app/oracle/product/12.1.0.2/db_1) ... Successful
Following database(s) and/or service(s)  were stopped and will be restarted later during the session: testrac

Applying patch(es) to "/u01/app/oracle/product/12.1.0.2/db_1" ...
Patch "/tmp/BP2/19774304/19392604" applied to "/u01/app/oracle/product/12.1.0.2/db_1" with warning.
Patch "/tmp/BP2/19774304/19649591" applied to "/u01/app/oracle/product/12.1.0.2/db_1" with warning.

Stopping CRS ... Successful

Applying patch(es) to "/u01/app/12.1.0/grid_1" ...
Patch "/tmp/BP2/19774304/19392590" applied to "/u01/app/12.1.0/grid_1" with warning.
Patch "/tmp/BP2/19774304/19392604" applied to "/u01/app/12.1.0/grid_1" with warning.
Patch "/tmp/BP2/19774304/19649591" applied to "/u01/app/12.1.0/grid_1" with warning.

Starting CRS ... Successful

Starting RAC (/u01/app/oracle/product/12.1.0.2/db_1) ... Successful

SQL changes, if any, are applied successfully on the following database(s): TESTRAC

Apply Summary:

opatchauto ran into some warnings during patch installation (Please see log file for details):
GI Home: /u01/app/12.1.0/grid_1: 19392590, 19392604, 19649591
RAC Home: /u01/app/oracle/product/12.1.0.2/db_1: 19392604, 19649591

opatchauto completed with warnings.

I do not like to see warnings when I’m patching. The log file for the apply is similar to the analyze: identical patches are skipped.

Checking where we are with GI and DB patches now:

[oracle@rac2 ~]$ /u01/app/12.1.0/grid_1/OPatch/opatch lspatches
19649591;DATABASE BUNDLE PATCH : 12.1.0.2.2 (19649591)
19392604;OCW PATCH SET UPDATE : 12.1.0.2.1 (19392604)
19392590;ACFS Patch Set Update : 12.1.0.2.1 (19392590)

[oracle@rac2 ~]$ /u01/app/oracle/product/12.1.0.2/db_1/OPatch/opatch lspatches
19649591;DATABASE BUNDLE PATCH : 12.1.0.2.2 (19649591)
19392604;OCW PATCH SET UPDATE : 12.1.0.2.1 (19392604)

The only one changed is the DATABASE BUNDLE PATCH.

The one MOS document I effectively have on “speed dial” is 888828.1, and it showed BP3 as being available on 17th December. It also had the following warning:

Before install on top of 12.1.0.2.1DBBP or 12.1.0.2.2DBBP, first rollback patch 19392604 OCW PATCH SET UPDATE : 12.1.0.2.1

[root@rac2 ~]# /u01/app/12.1.0/grid_1/OPatch/opatchauto apply -analyze /tmp/BP3/20026159 -ocmrf /tmp/ocm.rsp 
OPatch Automation Tool
Copyright (c) 2014, Oracle Corporation.  All rights reserved.

OPatchauto version : 12.1.0.1.5
OUI version        : 12.1.0.2.0
Running from       : /u01/app/12.1.0/grid_1

opatchauto log file: /u01/app/12.1.0/grid_1/cfgtoollogs/opatchauto/20026159/opatch_gi_2014-12-18_14-13-58_analyze.log

NOTE: opatchauto is running in ANALYZE mode. There will be no change to your system.

Parameter Validation: Successful

Grid Infrastructure home:
/u01/app/12.1.0/grid_1
RAC home(s):
/u01/app/oracle/product/12.1.0.2/db_1

Configuration Validation: Successful

Patch Location: /tmp/BP3/20026159
Grid Infrastructure Patch(es): 19392590 19878106 20157569 
RAC Patch(es): 19878106 20157569 

Patch Validation: Successful
Command "/u01/app/12.1.0/grid_1/OPatch/opatch prereq CheckConflictAgainstOH -ph /tmp/BP3/20026159/19878106 -invPtrLoc /u01/app/12.1.0/grid_1/oraInst.loc -oh /u01/app/12.1.0/grid_1" execution failed

Log file Location for the failed command: /u01/app/12.1.0/grid_1/cfgtoollogs/opatch/opatch2014-12-18_14-14-50PM_1.log

Analyzing patch(es) on "/u01/app/oracle/product/12.1.0.2/db_1" ...
Patch "/tmp/BP3/20026159/19878106" analyzed on "/u01/app/oracle/product/12.1.0.2/db_1" with warning for apply.
Patch "/tmp/BP3/20026159/20157569" analyzed on "/u01/app/oracle/product/12.1.0.2/db_1" with warning for apply.

Analyzing patch(es) on "/u01/app/12.1.0/grid_1" ...
Command "/u01/app/12.1.0/grid_1/OPatch/opatch napply -phBaseFile /tmp/OraGI12Home2_patchList -local  -invPtrLoc /u01/app/12.1.0/grid_1/oraInst.loc -oh /u01/app/12.1.0/grid_1 -silent -report -ocmrf /tmp/ocm.rsp" execution failed: 
UtilSession failed: After skipping conflicting patches, there is no patch to apply.

Log file Location for the failed command: /u01/app/12.1.0/grid_1/cfgtoollogs/opatch/opatch2014-12-18_14-15-30PM_1.log

Following step(s) failed during analysis:
/u01/app/12.1.0/grid_1/OPatch/opatch prereq CheckConflictAgainstOH -ph /tmp/BP3/20026159/19878106 -invPtrLoc /u01/app/12.1.0/grid_1/oraInst.loc -oh /u01/app/12.1.0/grid_1
/u01/app/12.1.0/grid_1/OPatch/opatch napply -phBaseFile /tmp/OraGI12Home2_patchList -local  -invPtrLoc /u01/app/12.1.0/grid_1/oraInst.loc -oh /u01/app/12.1.0/grid_1 -silent -report -ocmrf /tmp/ocm.rsp


SQL changes, if any, are analyzed successfully on the following database(s): TESTRAC

Apply Summary:

opatchauto ran into some warnings during analyze (Please see log file for details):
RAC Home: /u01/app/oracle/product/12.1.0.2/db_1: 19878106, 20157569

Following patch(es) failed to be analyzed:
GI Home: /u01/app/12.1.0/grid_1: 19392590, 19878106, 20157569

opatchauto analysis reports error(s).

Looking at the log file, we see that patch 19392604, already in the home, conflicts with patch 19878106 from BP3. 19392604 is the OCW PATCH SET UPDATE in BP1 (and BP2), while 19878106 is the Database Bundle Patch in BP3. The log contains the following:

Patch 19878106 has Generic Conflict with 19392604. Conflicting files are :
                             /u01/app/12.1.0/grid_1/bin/diskmon

That seems messy. It definitely annoys me that to apply BP3 I have to take the additional step of rolling back a previous BP. I don’t recall having to do this with previous Bundle Patches, and I’ve applied a fair few of them.

I rolled the lot back with opatchauto rollback, then applied BP3 on top of the unpatched homes I was left with.
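
For reference, the two steps boil down to something like the following, run as root from the GI home. This is a sketch only: it assumes the same patch staging locations and OCM response file used earlier, and depending on what is still installed you may need to repeat the rollback against the BP1 location as well.

[root@rac2 ~]# /u01/app/12.1.0/grid_1/OPatch/opatchauto rollback /tmp/BP2/19774304
[root@rac2 ~]# /u01/app/12.1.0/grid_1/OPatch/opatchauto apply /tmp/BP3/20026159 -ocmrf /tmp/ocm.rsp

Now let’s look at what BP3 on top of 12.1.0.2 gives you: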

GI Home:

[oracle@rac1 ~]$ /u01/app/12.1.0/grid_1/OPatch/opatch lspatches
20157569;OCW Patch Set Update : 12.1.0.2.1 (20157569)
19878106;DATABASE BUNDLE PATCH: 12.1.0.2.3 (19878106)
19392590;ACFS Patch Set Update : 12.1.0.2.1 (19392590)

DB Home:

[oracle@rac1 ~]$ /u01/app/12.1.0/grid_1/OPatch/opatch lspatches
20157569;OCW Patch Set Update : 12.1.0.2.1 (20157569)
19878106;DATABASE BUNDLE PATCH: 12.1.0.2.3 (19878106)

So for BP2 we had patch 19392604, OCW PATCH SET UPDATE : 12.1.0.2.1. Now we still have a 12.1.0.2.1 OCW Patch Set Update with BP3, but it has a new patch number!

That really irritates.


Oracle revises its database administrator accreditation portfolio

Chris Foot - Thu, 2014-12-18 06:14

Enterprises looking for Oracle experts knowledgeable about the software giant's latest database solutions may discover some DBAs' certifications are no longer valid. 

In addition, companies using SAP applications would do well to hire DBAs who know how to optimize Oracle's server solutions. Many SAP programs leverage Oracle 12c databases as the foundation of their functionality, so ensuring that SAP's software can use and secure data within these environments efficiently is a must. 

Releasing new accreditation standards 
TechTarget's Jessica Sirkin commented on Oracle's certification requirements, which now state that DBAs must take tests within one year in order to revamp their accreditations. The exams vet a person's familiarity with more recent versions of Oracle Database. Those with certifications in 7.3, 8, 8i and 9i must undergo tests to obtain accreditations in 10g, 11g or 12c to retain their active status within Oracle's CertView portal system. 

These rules apply to any professional holding a Certified Associate, Professional, Expert or Master credential in the aforementioned solutions. Those already possessing accreditation in 12c or 11g will be able to retain their active statuses for the foreseeable future, but DBAs with 10g certifications will be removed from the CertView list on March 1 of next year. 

As for the company's reasons, Oracle publicly stated that the measures are intended to "have qualified people implementing, maintaining and troubleshooting our software." 

SAP gets closer to Oracle 
Those who use Oracle's databases to power their SAP software are in luck. Earlier this year, SAP certified its solutions to coincide with the rollout of an updated version of Oracle 12c, specifically build 12.1.0.2. One reason SAP is supporting Oracle's flagship database is that the company wants to provide its customers with more flexible upgrade plans from 11g Release 2 to Oracle's latest release. SAP's supporting features will include the following:

  • A multitenancy option for 12c, which allows multiple pluggable databases to be consolidated and managed within a single container database. 
  • Hybrid columnar compression technology, which will help those who are trying to engineer back-end databases to store more information in less space.

Most importantly, the news source acknowledged the fact that many businesses use Oracle's database products in conjunction with SAP's enterprise software. Incompatibility between the two has been a persistent headache for IT departments working with these two solutions, but SAP's move will improve the matter.

Overall, hiring a team of database experts experienced in working with different software running on top of Oracle is a safe bet for organizations wary of this change. 

The post Oracle revises its database administrator accreditation portfolio appeared first on Remote DBA Experts.

PostgreSQL isn’t just another database engine

Chris Foot - Thu, 2014-12-18 06:13

While Oracle's database engine and Microsoft's SQL Server are among the top three database solutions used by enterprises, by no means is PostgreSQL being left in the dust. 

This year, The PostgreSQL Global Development Group released PostgreSQL 9.4, which was equipped with several bug fixes as well as a few new capabilities, such as:

  • An ALTER SYSTEM command can be used to change configuration file entries
  • Materialized views can now be refreshed without blocking concurrent reads (illustrated in the SQL sketch after this list)
  • Logical decoding for WAL data was added
  • Background worker processes can now be registered, started and terminated dynamically
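
To make the first two items concrete, here is a minimal SQL sketch. It assumes a 9.4 server and an existing materialized view called sales_summary; the view name and the setting chosen are purely illustrative.

ALTER SYSTEM SET work_mem = '64MB';                    -- written to postgresql.auto.conf
SELECT pg_reload_conf();                               -- pick up the change without a restart
REFRESH MATERIALIZED VIEW CONCURRENTLY sales_summary;  -- requires a unique index on the view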

PostgreSQL 9.4 is currently available for download on the developer's website. Users can access versions that are compatible with specific operating systems, including SUSE, Ubuntu, Solaris, Windows and Mac OS X. 

Rising to fame? 
ZDNet contributor Toby Wolpe spoke with EnterpriseDB chief architect Dave Page, who is also a member of PostgreSQL's core team, on the open source solution's popularity. He maintained that PostgreSQL's capabilities are catching up to Oracle's, an assertion that may not be shared by everyone but one worth acknowledging nonetheless. 

Page referenced PostgreSQL as "one of those well kept secrets that people just haven't cottoned on to." One of the reasons PostgreSQL has gained so much traction lately is MySQL's purchase by Sun, which was later acquired by Oracle. According to Page, people are apprehensive regarding Oracle's control of MySQL, another relational database engine. 

Features galore? 
Throughout his interview with Wolpe, Page noted a general sentiment among DBAs who are switching from MySQL to PostgreSQL because of the latter solution's "feature-rich" content. It makes sense that PostgreSQL would have plenty of functions to offer DBAs, given its open source development model. When users customize aspects of PostgreSQL in ways that complement not only their own workflows but those of other DBAs as well, such capabilities tend to be integrated into the next release permanently. 

One particular feature that has managed to stick in PostgreSQL is foreign data wrappers. Wolpe noted that data wrappers enable remote information to be categorized as a table within PostgreSQL, meaning queries can be run across both PostgreSQL tables and foreign data as if it were native. 

Another feature is support for the JSONB data type, which allows JSON documents to be stored within PostgreSQL in a binary format. The advantage of this format is that it comes with new, fast index operators. 
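
As a rough illustration, assuming a hypothetical events table:

CREATE TABLE events (id serial PRIMARY KEY, payload jsonb);
CREATE INDEX events_payload_idx ON events USING gin (payload);  -- GIN supports the jsonb operators
SELECT id FROM events WHERE payload @> '{"type": "login"}';     -- containment query that can use the index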

While PostgreSQL may not be the engine of choice for some DBAs, it is a solution worth acknowledging. 

The post PostgreSQL isn’t just another database engine appeared first on Remote DBA Experts.

Yes, SMBs should pay attention to disaster recovery

Chris Foot - Thu, 2014-12-18 06:12

Effective disaster recovery plans admittedly take a lot of time, resources and attention to develop, which may cause some small and mid-sized businesses to shy away from the practice. While it's easy to think "it could never happen to me," that's certainly not a good mindset to possess. 

While the sole proprietor of a small graphic design operation may want to set up a disaster recovery plan, he or she may not know where to start. It's possible that the application used to create designs resides in-house, but the system used to deliver content to customers may be hosted through the cloud. It's a confusing situation, especially if one doesn't have experience in IT.

SMEs need DR, but don't have robust plans 
To understand how strong small and mid-sized enterprises' DR strategies are, Dimensional Research and Axcient conducted a survey of 453 IT professionals working at companies employing between 50 and 1,000 workers. The study found that 71 percent of respondents back up both information and software, but only 24 percent back up all their data and applications. Other notable discoveries are listed below:

  • A mere 7 percent of survey participants felt "very confident" that they could reboot operations within two hours of an incident occurring. 
  • More than half (53 percent) of respondents asserted company revenues would be lost until critical systems could be rebooted.  
  • Exactly 61 percent of SMBs use multiple backup and recovery tools that perform the same functions. 
  • Almost three-fourths maintain that using multiple DR assets can increase the risk of endeavors failing. 
  • Eighty-nine percent surveyed view cloud-based DR strategies as incredibly desirable. The same percentage acknowledged that business workers are much less productive during outages. 

What measures can SMBs take? 
For many IT departments at SMEs, taking advantage of cloud-based DR plans can be incredibly advantageous. IT Business Edge's Kim Mays noted that decision-makers should pay close attention to the information and applications employees access most often to perform day-to-day tasks. Allowing these IT assets to transition to cloud infrastructures in the event of a disaster will allow workers to continue with their responsibilities. 

Of course, using a cloud-based strategy isn't the be-all, end-all to a solid DR blueprint. For instance, it's possible that personnel residing in affected areas may not have Internet access. This is where a business' senior management comes into play: Set guidelines that will allow staff to make decisions that will benefit the company during an outage. 

The post Yes, SMBs should pay attention to disaster recovery appeared first on Remote DBA Experts.

Linux users may need experts to reinforce malware detection functions

Chris Foot - Thu, 2014-12-18 04:57

Enterprises using Linux operating systems to run servers or desktops may want to consider hiring specialists to prevent malicious actions initiated through the "less" command. 

In addition, Linux users should also be aware that they have been targeted by a dangerous cyberespionage operation that is believed to be headquartered in Russia. If these two threats go unacknowledged, enterprises that use Linux may sustain grievous data breaches. 

A bug in the "less" command 
The vulnerability concerning less was detailed by Lucian Constantin, a contributor to Computerworld. Constantin noted that less presents itself as a "harmless" command that enables users to view the contents of files downloaded from the Web. However, the less command could also allow perpetrators to execute code remotely. 

Less is typically used to page through files without loading them entirely into a computer's memory, a huge help for those simply browsing large documents. However, lesspipe is a script that automatically invokes third-party tools to process files with miscellaneous extensions such as .pdf, .gz, .xpi, and so on. 
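
For context, lesspipe is normally hooked into less through the LESSOPEN environment variable, so the preprocessor runs automatically whenever a file is viewed. A typical setup looks roughly like this; the script name and path vary by distribution.

export LESSOPEN="| /usr/bin/lesspipe.sh %s"
less archive.cpio   # the cpio helper invoked by lesspipe, not less itself, parses the file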

One such tool, the cpio file archiver, could enable a cybercriminal to initiate an arbitrary code execution exploit. Essentially, this would give him or her control over a machine, enabling them to manipulate it at will. This particular bug was discovered by Michal Zalewski, a Google security engineer. 

"While it's a single bug in cpio, I have no doubt that many of the other lesspipe programs are equally problematic or worse," said Zalewski, as quoted by Constantin. 

Taking aim and firing 
The less command isn't the only thing Linux users should be concerned with. In a separate piece for PCWorld, Constantin noted that Russian cyberespionage group Epic Turla has directed its attention toward infiltrating machines running Linux.

Kaspersky Lab asserted Epic Turla is taking advantage of cd00r, an open-source backdoor program that was created in 2000. This particular tool enables users to execute arbitrary commands, as well as "listen" for commands received via the transmission control protocol or user datagram protocol, a combination that makes it a dangerous espionage asset. 

"It can't be discovered via netstat, a commonly used administrative tool," said Kaspersky researchers, as quoted by Constantin. "We suspect that this component was running for years at a victim site, but do not have concrete data to support that statement just yet." 

If Linux users want to secure their systems, consulting with specialists certified in the OS may not be a bad idea. 

The post Linux users may need experts to reinforce malware detection functions appeared first on Remote DBA Experts.

Data management challenges, concerns for health care companies

Chris Foot - Thu, 2014-12-18 04:56

Volume and velocity are two words analysts are associating with health care data, motivating CIOs to assess the scalability and security of their current database infrastructures. 

Protecting the sensitive information contained within electronic health records has always been a concern, but the greatest issue at hand is that some health care providers don't have the personnel, assets or time required to effectively manage and defend their databases. These concerns may drive wider adoption of outsourced database administration services.

Greater volume at a faster rate 
CIO.com's Kenneth Corbin referenced a report conducted by EMC and research firm IDC, which discovered that the amount of health information is expected to increase 48 percent on an annual basis for the foreseeable future. In 2013, 153 exabytes of health care data existed. By 2020, that figure is anticipated to expand to 2,314 exabytes. 

EMC and IDC analysts proposed a scenario in which all of that information was stored on a stack of tablets. Referencing the 2020 statistic, they asserted that stack would be more than 82,000 miles high, reaching a third of the way to the moon. IDC Health Insights Research Vice President Lynne Dunbrack maintained that health care companies can prepare for this explosion of data by identifying who owns the information and classifying it. 

"Understanding what the data means is key to making data governance and interoperability work, and is essential for analytics, big data initiatives and quality reporting initiatives, among other things," wrote Dunbrack in an email, as quoted by Corbin. 

More data means greater security concerns
As hospitals, insurance providers, clinics and other such organizations implement EHR software and increase their data storage capacities, hackers will likely place the health care industry at the top of their list of targets. Health care records contain a plethora of valuable data, from Social Security numbers to checking account information. 

Health IT Security cited the problems Aventura Hospital and Medical Center in South Florida has encountered. Over the past two years, the institution has sustained three data breaches, one of which was caused by a vendor's employee who stole information on an estimated 82,000 patients. Worst of all, the worker was an employee of Valesco Ventures, Aventura's Health Insurance Portability and Accountability Act business associate. 

With this particular instance in mind, finding a database administration service with trustworthy employees is essential. In addition, contracting a company that can provide remote database monitoring 24/7/365 is a must – there can be no compromises. 

The post Data management challenges, concerns for health care companies appeared first on Remote DBA Experts.

Notable updates of SUSE Linux Enterprise 12

Chris Foot - Fri, 2014-12-12 13:03

Transcript

Hi, welcome to RDX! Using SUSE Linux Enterprise Server to manage your workstations, servers and mainframes? SUSE recently released a few updates to the solution, dubbed Linux Enterprise Server 12, that professionals should take note of.

For one thing, SUSE addressed the problem with the GNU Bourne-Again Shell (Bash), also known as the “Shellshock” bug. This is a key fix, as it prevents hackers from placing malicious code onto servers from remote computers.

As far as disaster recovery capabilities are concerned, Linux Enterprise Server 12 is equipped with snapshot and full-system rollback features. These two functions enable users to revert to a previous configuration of a system if it happens to fail.

Want a team of professionals that can help you capitalize on these updates? Look no further than RDX’s Linux team – thanks for watching!

The post Notable updates of SUSE Linux Enterprise 12 appeared first on Remote DBA Experts.

UKOUG Tech14 slides – Exadata Security Best Practices

Dan Norris - Tue, 2014-12-09 04:54

I think 2 years is long enough to wait between posts!

Today I delivered a session about Oracle Exadata Database Machine Best Practices and promised to post the slides for it (though no one asked about them :). I’ve also posted them to the Tech14 agenda.

Direct download: UKOUG Tech14 Exadata Security slides

Turkish Hadoop User Group (TRHUG) 2014 meeting

H.Tonguç Yılmaz - Sun, 2014-12-07 08:58
The Turkish Hadoop User Group (TRHUG) 2014 annual meeting will be held on Monday, December 22, in Levent, İstanbul. Microsoft TR is the sponsor of the meeting this year. Turkcell has two slots on the agenda this year: one on an interesting project called Curio, based on Kafka, Storm and Cassandra, the real-time side of the ecosystem. The other […]

Preparing for the end of Windows Server 2003

Chris Foot - Mon, 2014-12-01 07:24

Although support for Windows Server 2003 doesn't end until July of next year, enterprises that have used the operating system since its inception are transitioning to the solution's latest iteration, Windows Server 2012 R2.

Preliminary considerations
Before diving into the implications of transitioning from Server 2003 to Server 2012 R2, it's important to answer a valid question: Why not simply make the switch to Windows Server 2008 R2?

It's a conundrum that Windows IT Pro contributor Orin Thomas has ruminated on since the announcement of Microsoft's discontinuation of Server 2003. While he acknowledged various reasons why some professionals are hesitant to make the leap from Server 2003 to Server 2012 R2 (such as application compatibility issues and the "Windows 8-style interface"), he pointed to a key concern: time.

Basically, Server 2008 R2 will cease to receive updates and support on Jan. 14, 2020. Comparatively, Server 2012 R2's end of life is slated for Jan. 10, 2023.

In the event organizations have difficulty making the transition, there's always the option of seeking assistance from experts with certifications in Server 2012 R2. On top of migration and integration, these professionals can provide continued support throughout the duration of the solution's usage.

Key considerations
As companies using Windows Server 2003 will be moving to either Server 2008 R2 or Server 2012 R2, a number of implications must be taken into account. ZDNet contributor Ken Hess outlined several recommendations for those preparing for the migration:

  1. Identify how many Server 2003 systems you have in place.
  2. Aggregate and organize the hardware specifications for each system (CPU, memory, disk space, etc.).
  3. Assess how heavily these solutions were utilized over the years, then correlate them with projected growth and future workloads.
  4. Do away with systems that are no longer applicable to operations.
  5. Determine which applications running on top of Server 2003 are critical to the business model.
  6. Deduce how virtual machines can be leveraged to host underutilized processes.
  7. Collaborate with a database administration firm to outline and implement a migration plan (provide the partner with the data mentioned above).

These are just a few starting points on which to base a comprehensive migration plan. Also, it's important to be aware of unexpected spikes in server utilization. Although upsurges of 100 percent may occur infrequently, systems must be able to handle them effectively. As always, be sure to troubleshoot the renewed solution after implementation.

The post Preparing for the end of Windows Server 2003 appeared first on Remote DBA Experts.

Database active monitoring a strong defense against SQL injections

Chris Foot - Mon, 2014-12-01 07:24

SQL injections have been named as the culprits of many database security woes, including the infamous Target breach that occurred at the commencement of last year's holiday season.

Content management system compromised
One particular solution was recently flagged as vulnerable to such hacking techniques. Chris Duckett, a contributor to ZDNet, referenced a public service announcement released by Drupal, an open source content management solution used to power millions of websites and applications.

The developer noted that, unless users patched their sites against SQL injection attacks before October 15, "you should proceed under the assumption that every Drupal 7 website was compromised." Drupal added that updating to 7.32 will patch the vulnerability, but websites that have already been exposed are still compromised – the reason being that hackers have already obtained back-end information.

There is one way in which websites that sustained attacks could have remained protected. Database monitoring, regardless of the system being used, can alert administrators of problems as they arise, giving them ample time to respond to breaches.

Why database monitoring works
Although access permissions, anti-malware tools and other assets are designed to dismantle and eradicate intrusions, some of their detection features leave something to be desired. Therefore, in order for programs capable of deterring SQL injections to operate to the best of their ability, they must work in conjunction with surveillance tools that constantly assess all database actions.

The Ponemon Institute polled 595 database experts on the matter, asking them about the effectiveness of server monitoring tools. While Chairman Larry Ponemon acknowledged the importance of using continuous monitoring to look for anomalous behavior, Secure Ideas CEO Kevin Johnson said some tools can miss SQL injections because the attacks are designed to appear legitimate. Therefore, it's important for surveillance programs to also be directed toward identifying vulnerabilities. Paul Henry, senior instructor at the SANS Institute, also weighed in on the matter. 

"I believe in a layered approach that perhaps should include a database firewall to mitigate the risk of SQL injection, combined with continuous monitoring of the database along with continuous monitoring of normalized network traffic flows," said Henry, as quoted by the source.

At the end of the day, having a team of professionals on standby to address SQL injections if and when they occur is the only way to guarantee that these attacks don't escalate into massive consequences.

The post Database active monitoring a strong defense against SQL injections appeared first on Remote DBA Experts.

Securing Sensitive Database Data Stores

Chris Foot - Mon, 2014-11-24 10:18

Introduction

Database administrators, since the inception of their job descriptions, have been responsible for the protection of their organization’s most sensitive database assets. They are tasked with ensuring that key data stores are safeguarded against any type of unauthorized data access.

Since I’ve been a database tech for 25 years now, this series of articles will focus on the database system and some of the actions we can take to secure database data. We won’t be spending time on the multitude of perimeter protections that security teams are required to focus on. Once those mechanisms are breached, the last line of defense for the database environments will be the protections the database administrator has put in place.

You will notice that I will often refer to the McAfee database security protection product set when I describe some of the activities that will need to be performed to protect your environments. If you are truly serious about protecting your database data, you’ll quickly find that partnering with a security vendor is an absolute requirement and not “something nice to have.”

I could go into an in-depth discussion on RDX’s vendor evaluation criteria, but the focus of this series of articles will be on database protection, not product selection. After an extensive database security product analysis, we felt that the breadth and depth of McAfee’s database security offering provided RDX with the most complete solution available.

This is serious business, and you are up against some extremely proficient opponents. To put it lightly, “they are one scary bunch.” Hackers can be classified as intelligent, inquisitive, patient, thorough, driven and more often than not, successful. This combination of traits makes database data protection a formidable challenge. If they target your systems, you will need every tool at your disposal to prevent their unwarranted intrusions.

Upcoming articles will focus on the following key processes involved in the protection of sensitive database data stores:

Evaluating the Most Common Threats and Vulnerabilities

In the first article of this series, I’ll provide a high level overview of the most common threat vectors. Some of the threats we will be discussing will include unpatched database software vulnerabilities, unsecured database backups, SQL Injection, data leaks and a lack of segregation of duties. The spectrum of tactics used by hackers could result in an entire series of articles dedicated to database threats. The scope of these articles is on database protection activities and not a detailed threat vector analysis.

Identifying Sensitive Data Stored in Your Environment

You can’t protect what you don’t know about. The larger your environment, the more susceptible you will be to data being stored that hasn’t been identified as being sensitive to your organization. In this article, I’ll focus on how RDX uses McAfee’s vulnerability scanning software to identify databases that contain sensitive data such as credit card or Social Security numbers stored in clear text. The remainder of the article will focus on identifying other objects that may contain sensitive, and unprotected data, such as test systems cloned from production, database backups, load input files, report output, etc…

Initial and Ongoing Vulnerability Analysis

Determining how the databases are currently configured from a security perspective is the next step to be performed. Their release and patch levels will be identified and compared to vendor security patch distributions. An analysis of how closely support teams adhere to industry and internal security best practices is evaluated at this stage. The types of vulnerabilities will range the spectrum, from weak and default passwords to unpatched (and often well known) database software weaknesses.

Ranking the vulnerabilities allows the highest priority issues to be addressed more quickly than their less important counterparts. After the vulnerabilities are addressed, the configuration is used as a template for future database implementations. Subsequent scans, run on a scheduled basis, will ensure that no new security vulnerabilities are introduced into the environment.

Database Data Breach Monitoring

Most traditional database auditing mechanisms are designed to report data access activities after they have occurred. There is no alerting mechanism. Auditing is activated, the data is collected and reports are generated that allow the various activities performed in the database to be analyzed for the collected time period.

Identifying a data breach after the fact is not database protection. It is database reporting. To protect databases we are tasked with safeguarding, we need a solution that has the ability to alert or alert and stop the unwarranted data accesses from occurring.

RDX found that McAfee’s Database Activity Monitoring product provides the real time protection we were looking for. McAfee’s product has the ability to identify, terminate and quarantine a user that violates a predefined set of database security policies.

To be effective, database breach protection must be configured as a stand-alone, and separated, architecture. Otherwise, internal support personnel could deactivate the breach protection service by mistake or deliberate intention. This separation of duties is an absolute requirement for most industry compliance regulations such as HIPAA, PCI DSS and SOX. The database must be protected from both internal and external threat vectors.

In an upcoming article of this series, we’ll learn more about real-time database activity monitoring and the benefits it provides to organizations that require a very high level of protection for their database data stores.

Ongoing Database Security Strategies

Once the database vulnerabilities have been identified and addressed, the challenge is to ensure that the internal support team’s future administrative activities do not introduce any additional security vulnerabilities into the environment.

In this article, I’ll provide recommendations on a set of robust, documented security controls and best practices that will assist you in your quest to safeguard your database data stores.

A documented plan to quickly address new database software vulnerabilities is essential to their protection. The hacker’s “golden window of zero day opportunity” exists from when the software’s weakness is identified until the security patch that addresses it is applied.

Separation of duties must also be considered. Are the same support teams that are responsible for your vulnerability scans, auditing and administering your database breach protection systems also accessing your sensitive database data stores?

Reliable controls will need to be implemented that include support role separation and the generation of audit records, ensuring proper segregation of duties so that even privileged users cannot bypass security.

Wrap-up

Significant data breach announcements are publicized on a seemingly daily basis. External hackers and rogue employees continuously search for new ways to steal sensitive information. There is one component that is common to many thefts – the database data store. You need a plan to safeguard them. If not, your organization may be the next one that is highlighted on the evening news.

The post Securing Sensitive Database Data Stores appeared first on Remote DBA Experts.

Visualization shows hackers behind majority of data breaches

Chris Foot - Wed, 2014-11-19 10:18

Transcript

Hi, welcome to RDX! Amid constant news of data breaches, ever wonder what's causing all of them? IBM and Ponemon's Global Breach Analysis can give you a rundown. 

While some could blame employee mishaps or poor security, hacking is the number one cause of many data breaches, most of which are massive in scale. For example, when Adobe was hacked, approximately 152 million records were compromised.

As you can imagine, databases were prime targets. When eBay lost 145 million records to perpetrators earlier this year, hackers used the login credentials of just a few employees and then targeted databases holding user information.

To prevent such trespasses from occurring, organizations should employ active database monitoring solutions that scrutinize login credentials to ensure the appropriate personnel gain entry.

Thanks for watching! Visit us next time for more news and tips about database protection!

The post Visualization shows hackers behind majority of data breaches appeared first on Remote DBA Experts.

Setting up Xubuntu in Lenovo Flex2 14D

Vattekkat Babu - Tue, 2014-09-16 00:29

Lenovo Flex2 14D is a good laptop with decent build quality, light weight, a 14" screen and a touch screen for those who like it. With the AMD A6 processor version, it is reasonably priced too.

It comes pre-loaded with Windows 8.1 and a bunch of Lenovo software. If you want to get this to dual boot with Ubuntu Linux, here are the specific fixes you need to do.

Slicing the EDG

Antony Reynolds - Tue, 2014-08-19 20:24
Different SOA Domain Configurations

In this blog entry I would like to introduce three different configurations for a SOA environment.  I have omitted load balancers and OTD/OHS as they introduce a whole new round of discussion.  For each possible deployment architecture I have identified some of the advantages.

Super Domain

This is a single EDG style domain for everything needed for SOA/OSB.   It extends the standard EDG slightly but otherwise assumes a single “super” domain.

This is basically the SOA EDG.  I have broken out JMS servers and Coherence servers to improve scalability and reduce dependencies.
Key Points

  • Separate JMS allows those servers to be kept up separately from rest of SOA Domain, allowing JMS clients to post messages even if rest of domain is unavailable.
  • JMS servers are only used to host application specific JMS destinations, SOA/OSB JMS destinations remain in relevant SOA/OSB managed servers.
  • Separate Coherence servers allow OSB cache to be offloaded from OSB servers.
  • Use of Coherence by other components as a shared infrastructure data grid service.
  • Coherence cluster may be managed by WLS but more likely run as a standalone Coherence cluster.
Benefits
  • Single Administration Point (1 Admin Server)
  • Closely follows EDG with addition of application specific JMS servers and standalone Coherence servers for OSB caching and application specific caches.
  • Coherence grid can be scaled independent of OSB/SOA.
  • JMS queues provide for inter-application communication.
Drawbacks
  • Patching is an all or nothing affair.
  • Startup time for SOA may be slow if large number of composites deployed.
Multiple Domains

This extends the EDG into multiple domains, allowing separate management and update of these domains.  I see this type of configuration quite often with customers, although some don't have OWSM, others don't have separate Coherence etc.

SOA & BAM are kept in the same domain as little benefit is obtained by separating them.
Key Points

  • Separate JMS allows those servers to be kept up separately from rest of SOA Domain, allowing JMS clients to post messages even if other domains are unavailable.
  • JMS servers are only used to host application specific JMS destinations, SOA/OSB JMS destinations remain in relevant SOA/OSB managed servers.
  • Separate Coherence servers allow OSB cache to be offloaded from OSB servers.
  • Use of Coherence by other components as a shared infrastructure data grid service.
  • Coherence cluster may be managed by WLS but more likely run as a standalone Coherence cluster.
Benefits
  • Follows EDG but in separate domains and with addition of application specific JMS servers and standalone Coherence servers for OSB caching and application specific caches.
  • Coherence grid can be scaled independent of OSB/SOA.
  • JMS queues provide for inter-application communication.
  • Patch lifecycle of OSB/SOA/JMS are no longer lock stepped.
  • JMS may be kept running independently of other domains allowing applications to insert messages for later consumption by SOA/OSB.
  • OSB may be kept running independent of other domains, allowing service virtualization to continue independent of other domains availability.
  • All domains use same OWSM policy store (MDS-WSM).
Drawbacks
  • Multiple domains to manage and configure.
  • Multiple Admin servers (single view requires use of Grid Control)
  • Multiple Admin servers/WSM clusters waste resources.
  • Additional homes needed to enjoy benefits of separate patching.
  • Cross domain trust needs setting up to simplify cross domain interactions.
  • Startup time for SOA may be slow if large number of composites deployed.
Shared Service Environment

This model extends the previous multiple domain arrangement to provide a true shared service environment.

This extends the previous model by allowing multiple additional SOA domains and/or other domains to take advantage of the shared services.  Only one non-shared domain is shown, but there could be multiple, allowing groups of applications to share patching independent of other application groups.
Key Points

  • Separate JMS allows those servers to be kept up separately from rest of SOA Domain, allowing JMS clients to post messages even if other domains are unavailable.
  • JMS servers are only used to host application specific JMS destinations, SOA/OSB JMS destinations remain in relevant SOA/OSB managed servers.
  • Separate Coherence servers allow OSB cache to be offloaded from OSB servers.
  • Use of Coherence by other components as a shared infrastructure data grid service
  • Coherence cluster may be managed by WLS but more likely run as a standalone Coherence cluster.
  • Shared SOA Domain hosts
    • Human Workflow Tasks
    • BAM
    • Common "utility" composites
  • Single OSB domain provides "Enterprise Service Bus"
  • All domains use same OWSM policy store (MDS-WSM)
Benefits
  • Follows EDG but in separate domains and with addition of application specific JMS servers and standalone Coherence servers for OSB caching and application specific caches.
  • Coherence grid can be scaled independent of OSB/SOA.
  • JMS queues provide for inter-application communication.
  • Patch lifecycle of OSB/SOA/JMS are no longer lock stepped.
  • JMS may be kept running independently of other domains allowing applications to insert messages for later consumption by SOA/OSB.
  • OSB may be kept running independent of other domains, allowing service virtualization to continue independent of other domains availability.
  • All domains use same OWSM policy store (MDS-WSM).
  • Supports large numbers of deployed composites in multiple domains.
  • Single URL for Human Workflow end users.
  • Single URL for BAM end users.
Drawbacks
  • Multiple domains to manage and configure.
  • Multiple Admin servers (single view requires use of Grid Control)
  • Multiple Admin servers/WSM clusters waste resources.
  • Additional homes needed to enjoy benefits of separate patching.
  • Cross domain trust needs setting up to simplify cross domain interactions.
  • Human Workflow needs to be specially configured to point to shared services domain.
Summary

The alternatives in this blog allow for patching to have different impacts, depending on the model chosen.  Each organization must decide the tradeoffs for itself.  One extreme is to go for the shared services model and have one domain per SOA application.  This requires a lot of administration of the multiple domains.  The other extreme is to have a single super domain.  This makes the entire enterprise susceptible to an outage at the same time due to patching or other domain level changes.  Hopefully this blog will help your organization choose the right model for you.

How to beat workday blues?

Vattekkat Babu - Fri, 2014-08-15 06:37

Let us face it - all of us feel like we have achieved or done very little after spending a long day away from family. Then you look back and find that you could've spent at least some of that time with family!

I've been observing my work habits a lot and I think I have found out something that works for me.

I am summarizing these as a NOT-TODO list of 3 items. I am a software engineer by profession and by passion.

Has this worked for me? Absolutely much better than when I was not following these rules.

Oracle CPU July 2014 + Oracle Exploit CVE-2013-3751

Alexander Kornbrust - Wed, 2014-07-16 10:03

Yesterday, Oracle released a new critical patch update (CPU Jul 2014) for July 2014. This CPU contains fixes for 5 database vulnerabilities. The most critical one, CVE-2013-3751, has a base score of 9.0 and affects Oracle 12.1 only. The same issue was already fixed for Oracle 11.2 in July 2013 (CPU Jul 2013).

After a short search on the web (Google and Twitter, less than 5 minutes), I found an exploit for CVE-2013-3751.

This vulnerability was found by Nicolas Grégoire, who released an exploit nearly a year after the patch was published by Oracle. It seems, however, that he was not aware that Oracle forgot to fix this issue in Oracle 12.1.

Timeline of CVE-2013-3751:

  • January 2012: Vulnerability found (fuzzing)
  • February 2012: Vulnerability reported to ZDI
  • March 2012: Vulnerability contracted for $500
  • November 2012: Reported to Oracle by ZDI
  • July 2013: Patch published by Oracle
  • March 2014: Oracle’s Cloud still not patched
  • June 2014: Exploit released at INS#14 conference
  • July 2014: Patch for Oracle 12.1 published by Oracle

 

Exploit:

———-

select * from dual where xmltype(q'{<aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
abbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbcccccccccccccccccccccccccccccccccccccccccccccccc
ddddddddddddddddddddddddddddddddddddddddddddddddeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee
ffffffffffffffffffffffffffffffffffffffffffffffffhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhh
iiiiiiiiiiiiiiiiiiiiiiiiii foo="bar[a < b]"/>}') like '0wn3d_again';

———-

Coherence Adapter Configuration

Antony Reynolds - Wed, 2014-07-02 23:05
SOA Suite 12c Coherence Adapter

The release of SOA Suite 12c sees the addition of a Coherence Adapter to the list of Technology Adapters that are licensed with the SOA Suite.  In this entry I provide an introduction to configuring the adapter and using the different operations it supports.

The Coherence Adapter provides access to Oracle’s Coherence Data Grid.  The adapter provides access to the cache capabilities of the grid; it does not currently support the many other features of the grid, such as entry processors – more on this at the end of the blog.

Previously if you wanted to use Coherence from within SOA Suite you either used the built in caching capability of OSB or resorted to writing Java code wrapped as a Spring component.  The new adapter significantly simplifies simple cache access operations.

Configuration

When creating a SOA domain the Coherence adapter is shipped with a very basic configuration that you will probably want to enhance to support real requirements.  In this section I look at the configuration required to use Coherence adapter in the real world.

Activate Adapter

The Coherence Adapter is not targeted at the SOA server by default, so this targeting needs to be performed from within the WebLogic console before the adapter can be used.

Create a cache configuration file

The Coherence Adapter provides a default connection factory to connect to an out-of-box Coherence cache and also a cache called adapter-local.  This is helpful as an example, but it is good practice to have only a single type of object within a Coherence cache, so we will need more than one.  Without multiple caches it is hard to clean out all the objects of a particular type.  Having multiple caches also allows us to specify different properties for each cache.  The following is a sample cache configuration file used in the example.

<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">
<cache-config>
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>TestCache</cache-name>
      <scheme-name>transactional</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>
  <caching-schemes>
    <transactional-scheme>
      <scheme-name>transactional</scheme-name>
      <service-name>DistributedCache</service-name>
      <autostart>true</autostart>
    </transactional-scheme>
  </caching-schemes>
</cache-config>

This defines a single cache called TestCache.  This is a distributed cache, meaning that the entries in the cache will be distributed across the grid.  This enables you to scale the storage capacity of the grid by adding more servers.  Additional caches can be added to this configuration file by adding additional <cache-mapping> elements.
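
For example, a second cache for a different object type could be added inside the <caching-scheme-mapping> element with a mapping like the one below.  The cache name is illustrative, and it reuses the transactional scheme already defined in the file.

    <cache-mapping>
      <cache-name>CustomerCache</cache-name>
      <scheme-name>transactional</scheme-name>
    </cache-mapping>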

The cache configuration file is referenced by the adapter connection factory and so needs to be on a file system accessible to all servers running the Coherence Adapter.  It is not referenced from the composite.

Create a Coherence Adapter Connection Factory

We find the correct cache configuration by using a Coherence Adapter connection factory.  The adapter ships with a few sample connection factories, but we will create a new one.  To create a new connection factory we do the following:

  1. On the Outbound Connection Pools tab of the Coherence Adapter deployment we select New to create the adapter.
  2. Choose the javax.resource.cci.ConnectionFactory group.
  3. Provide a JNDI name. Although you can use any name, something along the lines of eis/Coherence/Test is good practice (EIS tells us this is an adapter JNDI, Coherence tells us it is the Coherence Adapter, and then we can identify which adapter configuration we are using).
  4. If requested to create a Plan.xml then make sure that you save it in a location available to all servers.
  5. From the outbound connection pool tab select your new connection factory so that you can configure it from the properties tab.
    • Set the CacheConfigLocation to point to the cache configuration file created in the previous section.
    • Set the ClassLoaderMode to CUSTOM.
    • Set the ServiceName to the name of the service used by your cache in the cache configuration file created in the previous section.
    • Set the WLSExtendProxy to false unless your cache configuration file is using an extend proxy.
    • If you plan on using POJOs (Plain Old Java Objects) with the adapter rather than XML then you need to point the PojoJarFile at the location of a jar file containing your POJOs.
    • Make sure to press enter in each field after entering your data.  Remember to save your changes when done.

You may need to stop and restart the adapter to get it to recognize the new connection factory.

Operations

To demonstrate the different operations I created a WSDL with the following operations:

  • put – put an object into the cache with a given key value.
  • get – retrieve an object from the cache by key value.
  • remove – delete an object from the cache by key value.
  • list – retrieve all the objects in the cache.
  • listKeys – retrieve all the keys of the objects in the cache.
  • removeAll – remove all the objects from the cache.

I created a composite based on this WSDL that calls a different adapter reference for each operation.  Details on configuring the adapter within a composite are provided in the Configuring the Coherence Adapter section of the documentation.

I used a Mediator to map the input WSDL operations to the individual adapter references.

Schema

The input schema has the structure described below.

This type of pattern is likely to be used in all XML types stored in a Coherence cache.  The XMLCacheKey element represents the cache key; in this schema it is a string, but it could be another primitive type.  The other fields in the cached object are represented by a single XMLCacheContent field, but in a real example you are likely to have multiple fields at this level.  Wrapper elements are provided for lists of elements (XMLCacheEntryList) and lists of cache keys (XMLCacheEntryKeyList).  XMLEmpty is used for operations that don’t require an input.
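
Since the schema itself is not reproduced here, the following is a rough XSD sketch of the shape just described.  The element names follow the text; the namespace and the use of xsd:string for the content field are assumptions.

<?xml version="1.0" encoding="UTF-8"?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            targetNamespace="http://example.com/cache"
            xmlns="http://example.com/cache"
            elementFormDefault="qualified">
  <!-- a single cached entry: key plus content -->
  <xsd:element name="XMLCacheEntry">
    <xsd:complexType>
      <xsd:sequence>
        <xsd:element name="XMLCacheKey" type="xsd:string"/>
        <xsd:element name="XMLCacheContent" type="xsd:string"/>
      </xsd:sequence>
    </xsd:complexType>
  </xsd:element>
  <!-- wrapper for a list of cached entries -->
  <xsd:element name="XMLCacheEntryList">
    <xsd:complexType>
      <xsd:sequence>
        <xsd:element ref="XMLCacheEntry" minOccurs="0" maxOccurs="unbounded"/>
      </xsd:sequence>
    </xsd:complexType>
  </xsd:element>
  <!-- wrapper for a list of cache keys -->
  <xsd:element name="XMLCacheEntryKeyList">
    <xsd:complexType>
      <xsd:sequence>
        <xsd:element name="XMLCacheKey" type="xsd:string" minOccurs="0" maxOccurs="unbounded"/>
      </xsd:sequence>
    </xsd:complexType>
  </xsd:element>
  <!-- empty input for operations that take no arguments -->
  <xsd:element name="XMLEmpty">
    <xsd:complexType/>
  </xsd:element>
</xsd:schema>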

Put Operation

The put operation takes an XMLCacheEntry as input and passes this straight through to the adapter.  The XMLCacheKey element in the entry is also assigned to the jca.coherence.key property.  This sets the key for the cached entry.  The adapter also supports automatically generating a key, which is useful if you don’t have a convenient field in the cached entity.  The cache key is always returned as the output of this operation.

Get Operation

The get operation takes an XMLCacheKey as input and assigns this to the jca.coherence.key property. This sets the key for the entry to be retrieved.

Remove Operation

The remove operation takes an XMLCacheKey as input and assigns this to the jca.coherence.key property. This sets the key for the entry to be deleted.

RemoveAll Operation

This is similar to the remove operation but instead of using a key as input to the remove operation it uses a filter.  The filter could be overridden by using the jca.coherence.filter property but for this operation it was permanently set in the adapter wizard to be the following query:

key() != ""

This selects all objects whose key is not equal to the empty string.  All objects should have a key so this query should select all objects for deletion.

Note that there appears to be a bug in the return value.  The return value is empty rather than the expected RemoveResponse element with a Count child element.  Note that the documentation states:

When using a filter for a Remove operation, the Coherence Adapter does not report the count of entries affected by the remove operation, regardless of whether the remove operation is successful.

When using a key to remove a specific entry, the Coherence Adapter does report the count, which is always 1 if a Coherence Remove operation is successful.

Although this could be interpreted as meaning an empty part is returned, an empty part is a violation of the WSDL contract.

List Operation

The list operation takes no input and returns the result list returned by the adapter.  The adapter also supports querying using a filter.  This filter is essentially the where clause of a Coherence Query Language statement.  When using XML types as cached entities then only the key() field can be tested, for example using a clause such as:

key() LIKE "Key%1"

This filter would match all entries whose key starts with “Key” and ends with “1”.

ListKeys Operation

The listKeys operation is essentially the same as the list operation except that only the keys are returned rather than the whole object.

Testing

To test the composite I used the new 12c Test Suite wizard to create a number of test suites.  The test suites should be executed in the following order:

  1. CleanupTestSuite has a single test that removes all the entries from the cache used by this composite.
  2. InitTestSuite has 3 tests that insert a single record into the cache.  The returned key is validated against the expected value.
  3. MainTestSuite has 5 tests that list the elements and keys in the cache and retrieve individual inserted elements.  This tests that the items inserted in the previous test are actually in the cache.  It also tests the get, list and listKeys operations and makes sure they return the expected results.
  4. RemoveTestSuite has a single test that removes an element from the cache and tests that the count of removed elements is 1.
  5. ValidateRemoveTestSuite is similar to MainTestSuite but verifies that the element removed by the previous test suite has actually been removed.
Use Case

One example of using the Coherence Adapter is to create a shared memory region that allows SOA composites to share information.  An example of this is provided by Lucas Jellema in his blog entry First Steps with the Coherence Adapter to create cross instance state memory.

However there is a problem in creating global variables that can be updated by multiple instances at the same time.  In this case the get and put operations provided by the Coherence adapter support a last write wins model.  This can be avoided in Coherence by using an Entry Processor to update the entry in the cache, but currently entry processors are not supported by the Coherence Adapter.  In this case it is still necessary to use Java to invoke the entry processor.

Sample Code

The sample code I refer to above is available for download and consists of two JDeveloper projects, one with the cache config file and the other with the Coherence composite.

  • CoherenceConfig has the cache config file that must be referenced by the connection factory properties.
  • CoherenceSOA has a composite that supports the WSDL introduced at the start of this blog along with the test cases mentioned at the end of the blog.

The Coherence Adapter is a really exciting new addition to the SOA developer’s toolkit; hopefully this article will help you make use of it.

Integration Hub – Branding

Kasper Kombrink - Mon, 2014-06-23 04:57
The Integration Hub has come a long way since I first saw it as the Enterprise Portal 8.8. The biggest selling point in my opinion has always been the branding features. Even though the options never really changed, they did evolve

Continue reading