
Feed aggregator

Partner News: Portal Architects Release New Connector to Enable Hybrid Storage

WebCenter Team - Fri, 2015-02-20 06:00

Portal Architects, Inc., the creators of SkySync, announced this week the release of a new connector for Oracle WebCenter Content. The new SkySync connector, along with the existing SkySync Oracle Documents Cloud connector, enables organizations to quickly and tightly integrate any existing on-premises or cloud content storage system with Oracle WebCenter Content or Oracle Documents Cloud Service.

SkySync provides bi-directional, fully-synchronized integration across Oracle WebCenter Content on-premises and Oracle Documents Cloud Service, giving organizations the ability to leverage both their on-premises content management solution and an enterprise-grade document sync and share solution in the cloud. That means no content islands or silos, comprehensive security, and the ability to leverage your existing investments in content management infrastructure.

Here is a quick video from Portal Architects to show how it all comes together. 

Connections Types in SQLcl

Barry McGillin - Fri, 2015-02-20 05:07

SQLcl supports many ways to connect, including many inherited from SQL*Plus, which we need to support to make sure all your SQL*Plus scripts work exactly the same way in SQLcl as in SQL*Plus.
I've added several examples of how to connect to SQLcl.  If there is one you want to see added that is not here, let me know and I'll add it to the list.  So far, we have the following:
  • EZConnect
  • TWO_TASK
  • Local Naming (tnsnames.ora)
  • LDAP
At any time when connected, you can use the command 'SHOW JDBC' to display what the connection is and how we are connected.  Here are some details of the types above.

The easy connect naming method eliminates the need for service name lookup in the tnsnames.ora files for TCP/IP environments.  It extends the functionality of the host naming method by enabling clients to connect to a database server with an optional port and service name in addition to the host name of the database:
 $sql barry/oracle@localhost:1521/orcl  
SQLcl: Release 4.1.0 Beta on Fri Feb 20 10:15:12 2015
Copyright (c) 1982, 2015, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release - 64bit Production
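As an aside, the structure of that easy-connect string can be illustrated with a short sketch (plain Python, purely illustrative — the function name is made up and it assumes the optional port is present):

```python
# Toy parser for a user/password@host:port/service easy-connect argument.
# Illustration only: assumes the optional port is present.
def parse_ez_connect(arg):
    credentials, _, address = arg.partition("@")
    user, _, password = credentials.partition("/")
    host, _, rest = address.partition(":")
    port, _, service = rest.partition("/")
    return {"user": user, "password": password,
            "host": host, "port": port, "service": service}

print(parse_ez_connect("barry/oracle@localhost:1521/orcl"))
```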

The TWO_TASK (on UNIX) or LOCAL (on Windows) environment variable can be set to a connection identifier. This removes the need to explicitly enter the connection identifier whenever a connection is made in SQL*Plus or SQL*Plus Instant Client.
In SQLcl, we can set this up as a JDBC-style connection like this:

$export TWO_TASK=localhost:1521/orcl  
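Conceptually, the lookup works like the following sketch (not SQLcl source — just a model of the fallback, with a made-up function name):

```python
import os

# An explicit connect identifier wins; otherwise fall back to the
# TWO_TASK (UNIX) or LOCAL (Windows) environment variable.
def resolve_connect_identifier(explicit, env=os.environ):
    if explicit:
        return explicit
    return env.get("TWO_TASK") or env.get("LOCAL")

print(resolve_connect_identifier(None, {"TWO_TASK": "localhost:1521/orcl"}))
```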


Local Naming resolves a net service name stored in a tnsnames.ora file stored on a client.  We can set the location of that in the TNS_ADMIN variable.

 $export TNS_ADMIN=~/admin  

An example tnsnames.ora entry is shown below.

 $cat tnsnames.ora   
BLOG =
 (DESCRIPTION =
  (ADDRESS=(PROTOCOL=tcp)(HOST=localhost)(PORT=1521) )
  (CONNECT_DATA=
   (SERVICE_NAME=orcl) ) )

We can then use the entry to connect to the database.

 $sql barry/oracle@BLOG  
SQLcl: Release 4.1.0 Beta on Fri Feb 20 10:29:14 2015
Copyright (c) 1982, 2015, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release - 64bit Production


We've already written about LDAP connections here.  Here's a quick review.


 $export LDAPCON=jdbc:oracle:thin:@ldap://,cn=OracleContext,dc=ldapcdc,dc=lcom   
$sql /nolog
SQLcl: Release 4.1.0 Beta on Fri Feb 20 10:37:02 2015
Copyright (c) 1982, 2015, Oracle. All rights reserved.
SQL> connect barry/oracle@orclservice_test(Emily's Desktop)

If we have more types to add, then they will appear here.  Let us know what you want to see.

Oracle MAF - assigning value to a backing bean property from javascript

Communicating between layers in MAF is quite easy with the provided APIs. For calling Java method for javascript we have the invokeMethod js function, which is quite straight forward: ...


Screaming at Each Other

Scott Spendolini - Thu, 2015-02-19 20:20
Every time I attend a conference, the Twitter traffic about said conference is obviously higher.  It starts a couple of weeks or even months before, builds steadily as the conference approaches, and then hits a crescendo during the conference.  For the past few conferences, I've started my sessions by asking who in the audience uses Twitter.  Time and time again, only about 10-20% of the participants say that they do.  That means that up to 90% of the participants don't.  That's a lot of people.  My informal surveys also indicate a clear generation gap.  Of those that do use Twitter, they tend to be around 40 years old or younger.  There are of course exceptions to this rule, but by and large this is the evidence that I have seen.

I actually took about 10 minutes before my session today to attempt to find out why most people don’t care about Twitter.  The answer was very clear and consistent: there’s too much crap on there.  And they are correct.  I’d guess that almost 100% of all Tweets are useless or at least irrelevant to an Oracle professional.
I then took a few minutes to explain the basics of how it worked - hash tags, followers, re-tweets and the like.  Lots of questions and even more misconceptions.  “So does someone own a hash tag?” and “Can I block someone that I don’t care for” were some of the questions that I addressed.  
After a few more questions, I started to explain how it could benefit them as Oracle professionals.  I showed them that most of the Oracle APEX team had accounts.  I also highlighted some of the Oracle ACEs.  I even showed them the RMOUG hash tag and all of the tweets associated with it.  Light bulbs were starting to turn on.
But enough talking.  It was time for a demo.  To prove that people are actually listening, I simply tweeted this:

Please reply if you follow #orclapex - want to see how many people will in the next 30 mins. Thanks!
— Scott Spendolini (@sspendol) February 19, 2015

Over the next 30 minutes, I had 10 people reply.  At the end of the session, I went through the replies, and said what I knew about those who did reply.  Oracle Product Manager, Oracle Evangelist, Oracle ACE, APEX expert, etc.  The crowd was stunned.  This proved that Twitter as a medium to communicate with Oracle experts was in fact, real.  
More questions.  “Can I Tweet to my power company if I have an issue with them?” and “Do people use profanity on Twitter?” were some of the others.  People were clearly engaged and interested.  Mission accomplished.
The bigger issue here is that I strongly feel that the vast majority of the Oracle community is NOT on Twitter.  And that is a problem, because so much energy is spent tweeting about user groups and conferences.  It's like we’re just screaming at each other, and not at those who need to listen.  
We can fix this.  I encourage everyone who presents at a conference to take 5 minutes at the beginning or end of their session to talk about the benefits of Twitter.  Demonstrate that if you follow Oracle experts, the content that will be displayed is not about Katy Perry, but rather about new features, blog posts or other useful tidbits that can help people with their jobs. Take the time to show them how to sign up, how to search for content, and who to follow.  I think that if we all put forth a bit of effort, we can recruit many of those to join the ranks of Twitter for all the right reasons, and greatly increase the size of the Oracle community that’s connected via this medium.

Running Carsten Czarski's node-oracledb WebSocket Example

Christopher Jones - Thu, 2015-02-19 17:40

My colleague Carsten Czarski recently presented on the node-oracledb driver for Node.js. One of his demos used WebSockets. It was a live demo, not captured in slides. I thought I'd explain how I got it to run in my Oracle Linux 64 bit environment.

  • Download and extract the Node 0.10.36 bundle from here. (At the time of writing, the node-oracledb driver requires Node.js 0.10). Add the bin directory to your PATH, for example:

    $ export PATH=/opt/node-v0.10.36-linux-x64/bin:$PATH
  • Download and install the 'basic' and 'devel' Instant Client RPMs from OTN:

    # rpm -ivh oracle-instantclient12.1-basic-
    # rpm -ivh oracle-instantclient12.1-devel-
  • Download Carsten's demo code from here and extract it:

    $ cd /home/cjones
    $ mkdir wsdemo
    $ cd wsdemo
    $ mv $HOME/Downloads/ .
    $ unzip
  • Create a new package.json file:

        "name": "ccwebsockets",
        "version": "1.0.0",
        "description": "Carsten's WebSocket Demo application using node-oracledb 0.3.1.",
        "scripts": {
    	"start": "node 05-websockets.js"
        "dependencies": {
    	"oracledb": "oracle/node-oracledb#619e9a8fa6625a2c5ca3e1a2ba10dbdaab5ae900",
    	"websocket": "^1.0",
    	"express": "^4.11"
  • Edit 05-websockets.js and change the database credentials at line 111. The schema needs to have the EMP table.

        user          : "scott",
        password      : "tiger",
        connectString : "localhost/pdborcl",
        poolMin       : 5,
        poolMax       : 10
  • Also in 05-websockets.js, change the path name at line 65 to your current directory name:

    filename = path.join("/home/cjones/wsdemo", uri);
  • Use npm to automatically install the node-oracledb driver and the "websocket" and "express" dependencies listed in package.json:

    $ npm install
  • To run the demo, use the package.json script "start" target to load 05-websockets.js:

    $ npm start

    The server will start:

    > ccwebsockets@1.0.0 start /home/cjones/wsdemo
    > node 05-websockets.js
    Websocket Control Server listening at
    Database connection pool established
  • Open a couple of browser windows to the demo URL. These are the clients listening for messages.

    The output is the starting point of the demo. Let's send a message to those clients.

  • Open a third browser window for the query URL. The two listening windows will be updated with the "message" containing the query result payload. My screenshot shows this, and also has evidence that I had previously visited:

You might have noticed the screen shots were made on OS X. If you are not on Linux, refer to INSTALL to see how to install Node.js and node-oracledb. The package.json file I created will download node-oracledb 0.3.1 so you don't need to manually get it from GitHub. You will have to set OCI_LIB_DIR and OCI_INC_DIR during installation, and then set LD_LIBRARY_PATH, DYLD_LIBRARY_PATH or PATH when you want to run node.

You can follow Carsten at @cczarski.

Code Insight on SQLcl

Barry McGillin - Thu, 2015-02-19 17:29
Here's a little preview of the code insight we have in SQLcl.  These changes are part of EA2, which is coming out very soon.  This also shows the buffer and cursor management which was introduced in SQLcl.

This allows you to move around the buffer easily and add and change text as you would in a normal text editor, not a console window like this.

We're also adding hotkeys to run the buffer from anywhere or to jump out of the buffer to do something else without losing the contents of the buffer.

Stay tuned for this soon.

What TechCrunch Got Wrong (and Right) About Instructure Entering Corporate Learning Market

Michael Feldstein - Thu, 2015-02-19 17:08

By Phil Hill

After yesterday’s “sources say” report from TechCrunch about Instructure – maker of the Canvas LMS – raising a new round of financing and entering the corporate LMS space, Instructure changed plans and made their official announcement today. The funding is to both expand the Canvas team and to establish the new corporate LMS team. I’m not a fan of media attempts to get a scoop based purely on rumors, and in this case TechCrunch got a few items wrong that are worth correcting.

  • Instructure raised $40 million in new financing (series E), not “between $50 to $70 million”. TechCrunch did hedge their bets with “low end of the range at over $40 million”.
  • The primary competition in the corporate LMS space is Saba, SumTotal, Skillsoft, Cornerstone – and not Blackboard.
  • The Canvas LMS was launched in 2010, not 2011. (OK, I’ll give them this one, as even Instructure seems to use the 2011 date).

TechCrunch did get the overall story of fund-raising and new corporate product right, but these details matter.

Instructure’s new product for the corporate learning market is called Bridge, with its web site here. This is an entirely new product, although it shares a similar product architecture with Canvas, the LMS designed for the education market (including being based on Ruby on Rails). Unlike Canvas, Bridge was designed mobile-first, with all mobile capabilities embedded in the product and not as separate applications. In an interview, Josh Coates, CEO of Instructure, described their motivation for this new product.

We like the idea of building software that helps people get smarter. Post education there is a void, with bad corporate software.

The design goal of Bridge is to make the creation and consumption of learning content easy, although future directions for the company will emphasize employee engagement and two-way conversations within companies. According to Coates, this focus on engagement parallels their research for future emphasis in the education market.


The Bridge product line will have a separate sales team and product team. From the press release:

Foundation partners include CLEARLINK, OpenTable and Oregon State University.

Oregon State University is an interesting customer of both products – they are adopting Canvas as part of their Unizin membership, and they are piloting Bridge as an internal HR system for training staff. This move will likely be adopted by other Canvas education customers.

Given the self-paced nature of both Competency-Based Education (CBE) and corporate learning systems, I asked if Bridge is targeted to get Instructure into the CBE land grab. Coates replied that they are researching whether and how to get into CBE, but they are first exploring if this can be done with Canvas. In other words, Bridge truly is aimed at the corporate learning market.

While Instructure has excelled at maintaining product focus and simplicity of user experience, this move outside of education raises the question of whether they can maintain company focus. The corporate market is very different from the education market – different product needs, a fragmented vendor market, different buying patterns. Many companies have tried to cross over between education and corporate learning, but most have failed. Blackboard, D2L and Moodle have made a footprint in the corporate space using one product for both markets. Instructure’s approach is different.

As for the fund-raising aspects, Instructure has made it very clear they are planning to go public with an IPO sometime soon, as reported by Buzzfeed today.

CEO Josh Coates told BuzzFeed today that the company had raised an additional $40 million in growth funding ahead of a looming IPO, confirming a rumor that was first reported by Tech Crunch yesterday. The company has now raised around $90 million.

Given their cash, a natural question is whether Instructure plans to use this to acquire other companies. Coates replied that they get increasingly frequent inbound requests (for Instructure to buy other companies) that they evaluate, but they are not actively pursuing M&A as a key corporate strategy.

I have requested a demo of the product for next week, and I’ll share the results on e-Literate as appropriate.

Update: Paragraph on organization corrected to point out separate product team. Also added sentence on funding to go to both Canvas and Bridge.

The post What TechCrunch Got Wrong (and Right) About Instructure Entering Corporate Learning Market appeared first on e-Literate.

Oracle Priority Support Infogram for 19-FEB-2015

Oracle Infogram - Thu, 2015-02-19 15:51

Three good articles recently from Upgrade your Database – NOW!:
Grid Infrastructure PSU Jan 2015 - Am I too intolerant?
Oracle Fail Safe 4.1.1 released - supports Multitenant
Is it always the Optimizer? Should you disable Group By Elimination in 12c?
From Robert G. Freeman on Oracle: Oracle Multitenant - Common Users
Oracle Support
Proactive Analysis Center (PAC), from the My Oracle Support blog.
MySQL Enterprise Monitor 2.3.20 has been released, from the MySQL Enterprise Tools Blog.
From the SOA & BPM Partner Community Blog:
Patching the Service Bus 12.1.3 unknown protocol deployment error
Service Bus 12c – Series of Articles
Using OSB 12.1.3 Resequencer
From New Generation Database Access: Oracle REST Data Services EA2 has just shipped!!!!!!
From Jeff Taylor’s Weblog: "ipadm show-addr" with name resolution on Solaris 11.
Profiling the kernel, from Darryl Gove's blog.
Big Data
Extending Oracle Database Security to Hadoop, from the Oracle Big Data blog.
From Data Integration: Hive, Pig, Spark - Choose your Big Data Language with Oracle Data Integrator.
Ops Center
From the Ops Center blog: Creating Networks for Server Pools
From the adapters blog: Security Configuration in LDAP Adapter.
From Business Analytics - Proactive Support:
Statement of Direction: EPM and Oracle Data Integrator
Patch Set Update: Hyperion Strategic Finance
OBIEE 11g: Troubleshooting Crash Issues
OBIA 11g - Bundle Patches
From Proactive Support - Java Development using Oracle Tools: Top 10 Documents Linked to SRs for JDeveloper/ADF/MAF Issues 10/2014 thru 2/2015
From the Oracle E-Business Suite Support Blog:
Webcast: Setup and Integration of Work in Process with Oracle Quality
New Shipping Execution Analyzer!!
Webcast: Oracle Time and Labor Timecard Layout Configuration
Webcast: How to Process CTO Bills with Warning: The Configured Bills will be Created with Dropped Components
From the Oracle E-Business Suite Technology blog:
Tutorial: Publishing EBS 12.2 PL/SQL APIs as REST Services

Reminder: EBS 11i Reverts to Sustaining Support on January 1, 2016

12c Parallel Execution New Features: Hybrid Hash Distribution - Part 2

Randolf Geist - Thu, 2015-02-19 15:08
In the second part of this post (go to part 1) I want to focus on the hybrid distribution for skewed join expressions.

2. Hybrid Distribution For Skewed Join Expressions
The HYBRID HASH distribution allows addressing, to some degree, data distribution skew in case of HASH distributions, which I've described in detail already in the past. A summary post that links to all other relevant articles regarding Parallel Execution Skew can be found here, an overview of the relevant feature can be found here and a detailed description can be found here.
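To make the skew problem concrete, here is a toy model (plain Python, not Oracle internals): route rows to parallel consumers by hashing the join key, and watch one consumer receive nearly all rows when a single key value dominates:

```python
from collections import Counter

# Route each row to a consumer slave by hash(join_key) % n_consumers.
def distribute(keys, n_consumers):
    return Counter(hash(k) % n_consumers for k in keys)

# 90 of 100 rows share the same join key value...
keys = [1] * 90 + list(range(2, 12))
load = distribute(keys, n_consumers=4)
# ...so a single consumer ends up with at least those 90 rows.
print(sorted(load.values()))
```

The HYBRID HASH (SKEW) handling described below sidesteps exactly this by switching to BROADCAST / ROUND-ROBIN for the skewed values.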

One other side effect of the truly hybrid distribution in case of skew (a mixture of BROADCAST / HASH for one row source and ROUND-ROBIN / HASH for the other row source) is that HASH distributions following such a hybrid distribution need to redistribute again, even if the same join / distribution keys are used by following joins. If these were regular HASH distributions, the data would already be suitably distributed and no further redistribution would be required.

Here's an example of this, using the test case setup mentioned here:

-- Here the HYBRID SKEW distribution works for B->C
-- But the (B->C)->A join is affected by the same skew
-- So the HASH re-distribution of the resulting B.ID is skewed, too
-- And hence the HASH JOIN/SORT AGGREGATE (operation 4+5) are affected by the skew
-- The big question is: Why is there a re-distribution (operation 12+11)?
-- The data is already distributed on B.ID??
-- If there wasn't a re-distribution no skew would happen
-- In 11.2 no-redistribution happens no matter if C is probe or hash row source
-- So it looks like a side-effect of the hybrid distribution
-- Which makes sense as it is not really HASH distributed, but hybrid
select count(t_2_filler) from (
select /*+ monitor
leading(b c a)
use_hash(c a)
pq_distribute(a hash hash)
pq_distribute(c hash hash)
*/ a.id as t_1_id
, a.filler as t_1_filler
, c.id as t_2_id
, c.filler as t_2_filler
from t_1 a
, t_1 b
, t_2 c
where c.fk_id_skew = b.id
and b.id = a.id
);

-- 11.2 plan
| Id | Operation | Name | TQ |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | | | |
| 1 | SORT AGGREGATE | | | | |
| 2 | PX COORDINATOR | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10003 | Q1,03 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | Q1,03 | PCWP | |
|* 5 | HASH JOIN | | Q1,03 | PCWP | |
| 6 | PX RECEIVE | | Q1,03 | PCWP | |
| 7 | PX SEND HASH | :TQ10000 | Q1,00 | P->P | HASH |
| 8 | PX BLOCK ITERATOR | | Q1,00 | PCWC | |
| 9 | TABLE ACCESS FULL | T_1 | Q1,00 | PCWP | |
|* 10 | HASH JOIN | | Q1,03 | PCWP | |
| 11 | PX RECEIVE | | Q1,03 | PCWP | |
| 12 | PX SEND HASH | :TQ10001 | Q1,01 | P->P | HASH |
| 13 | PX BLOCK ITERATOR | | Q1,01 | PCWC | |
| 14 | TABLE ACCESS FULL| T_1 | Q1,01 | PCWP | |
| 15 | PX RECEIVE | | Q1,03 | PCWP | |
| 16 | PX SEND HASH | :TQ10002 | Q1,02 | P->P | HASH |
| 17 | PX BLOCK ITERATOR | | Q1,02 | PCWC | |
| 18 | TABLE ACCESS FULL| T_2 | Q1,02 | PCWP | |

-- 12.1 plan
| Id | Operation | Name | TQ |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | | | |
| 1 | SORT AGGREGATE | | | | |
| 2 | PX COORDINATOR | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10004 | Q1,04 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | Q1,04 | PCWP | |
|* 5 | HASH JOIN | | Q1,04 | PCWP | |
| 6 | PX RECEIVE | | Q1,04 | PCWP | |
| 7 | PX SEND HYBRID HASH | :TQ10002 | Q1,02 | P->P | HYBRID HASH|
| 8 | STATISTICS COLLECTOR | | Q1,02 | PCWC | |
| 9 | PX BLOCK ITERATOR | | Q1,02 | PCWC | |
| 10 | TABLE ACCESS FULL | T_1 | Q1,02 | PCWP | |
| 11 | PX RECEIVE | | Q1,04 | PCWP | |
| 12 | PX SEND HYBRID HASH | :TQ10003 | Q1,03 | P->P | HYBRID HASH|
|* 13 | HASH JOIN BUFFERED | | Q1,03 | PCWP | |
| 14 | PX RECEIVE | | Q1,03 | PCWP | |
| 15 | PX SEND HYBRID HASH | :TQ10000 | Q1,00 | P->P | HYBRID HASH|
| 16 | STATISTICS COLLECTOR | | Q1,00 | PCWC | |
| 17 | PX BLOCK ITERATOR | | Q1,00 | PCWC | |
| 18 | TABLE ACCESS FULL | T_1 | Q1,00 | PCWP | |
| 19 | PX RECEIVE | | Q1,03 | PCWP | |
| 20 | PX SEND HYBRID HASH (SKEW)| :TQ10001 | Q1,01 | P->P | HYBRID HASH|
| 21 | PX BLOCK ITERATOR | | Q1,01 | PCWC | |
| 22 | TABLE ACCESS FULL | T_2 | Q1,01 | PCWP | |
Note that both joins to A and C are based on B.ID. As you can see from the 11.2 plan, the final hash join (operation Id 5) therefore doesn't need the output of the previous hash join (operation Id 10) redistributed, since the data is already distributed in a suitable way (as a consequence both joins will be affected by the skewed values in T_2.FK_ID_SKEW, but no BUFFERED join variant is required).

Now look at the 12c plan when SKEW is detected: since the SKEW handling in fact leads to a potential mixture of HASH / BROADCAST and HASH / ROUND-ROBIN distribution, the data gets redistributed again for the final join (operation Ids 11 + 12), which has several bad side effects. First, it adds the overhead of an additional redistribution; as a side effect, this then turns one of the hash joins into its BUFFERED variant; and since the SKEW distribution (at present) is only supported if the right side of the join is a table (and not the result of another join), this following join will actually be affected by the very skew that was just addressed by the special SKEW handling in the join before (assuming the HYBRID HASH distributions in operations 6+7 / 11+12 operate in HASH / HASH, not BROADCAST / ROUND-ROBIN mode)...

ADF BC Range Paging and REST Pagination

Andrejus Baranovski - Thu, 2015-02-19 13:14
In this post I would like to explore and integrate two things - ADF BC Range Paging and REST service pagination. It would be inefficient to retrieve the entire data set in the REST service; ideally there should be an option to specify the number of rows and the range number to fetch. ADF BC allows querying a VO in Range Paging mode - the SQL query will be constructed with row numbers, to query data in a certain range of rows (this allows fetching less data from the DB). We can combine this with a REST service and provide a lightweight interface to access data.

Here you can download sample application - (compiled with ADF 12c). I'm translating ADF BC VO structure into HashMap, this allows to publish unified structure through REST, without creating a separate POJO object. There is a special generic method called toHashMap, it iterates over VO attributes and constructs a HashMap with attribute names and values:

A generic AM method accepts parameters for the page number and range size. Here we enforce Range Paging mode for the VO and use ADF BC API methods to scroll to the requested page and set the range size (the number of rows to fetch). It is important to get results from the default rowset, otherwise ADF BC will generate a separate default SQL query:
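The arithmetic behind that scrolling can be sketched as follows (plain Python, just to make the page-to-row-window mapping concrete; this is not the ADF BC API):

```python
# Map a 1-based page number and a range size to the first/last row
# numbers that the generated ROWNUM query needs to cover.
def range_window(page, range_size):
    start = (page - 1) * range_size + 1
    end = page * range_size
    return start, end

print(range_window(1, 10))  # first page of 10 rows covers rows 1..10
```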

ViewController project contains a list of references to the REST and Jersey related libraries, these extra libraries are required to transform HashMap to the REST response:

Here is the REST method. I'm accessing ADF BC Application Module and invoking custom method, with range size and page number coming from the REST request. Result is a list of HashMaps - a set of VO rows:

Make sure there is a Jersey servlet defined in web.xml; the REST request will not work without it:

Here is the example, where I perform a request through REST for rangePage = 1 and rangeSize = 10. This means 10 rows from the first page of rows are fetched:

You should check the SQL query in the log. The REST request above generates special SQL with ROWNUM. This means we are retrieving less data from the DB - only the data we need to display in the current page:

Stock brokers do a fine job of securing assets [VIDEO]

Chris Foot - Thu, 2015-02-19 08:17


Hi, welcome to RDX! Recently purchased stock? Worried about hackers stealing your investment data? Have no fear – your broker’s cybersecurity plan is likely up to par.

The Securities and Exchange Commission recently surveyed 106 financial advisories, assessing their ability to protect client information. Each firm’s software, back-end systems, devices and strategies were scrutinized. The SEC discovered 89 percent of brokers audited their own cybersecurity policies to make sure they complied with federal standards.

Brokerage firms didn’t stop there – 71 percent of them outlined cybersecurity requirements in their contracts with vendors and business partners. In addition, more than half of brokers currently pay for cybersecurity insurance.

This approach is a prime example of what makes a complete data protection plan. Want to know how RDX can help you develop a strategy? Check out our database security monitoring page to learn more!

The post Stock brokers do a fine job of securing assets [VIDEO] appeared first on Remote DBA Experts.

QlikView Tips & Tricks

Yann Neuhaus - Thu, 2015-02-19 08:00

For several months now, I've been working on some QlikView projects, which has been quite an interesting discovery for me. Generally, these projects are limited to the management of QlikView at the administrator level (installation, upgrade, configuration of the QlikView Management Console, and so on), but I was still able to accumulate some knowledge that I want to share with you today. In this blog entry, I will try to explain how to debug the QlikView Offline Service, how to properly configure access to remote Shared Folders, and how to enable Single Sign-On between QlikView and a third-party software. I will try to describe the required steps as best I can to avoid any problems.

I. QlikView Offline Service for QlikView 11.2 SR7 or below

In a complete QlikView environment that uses SSL (I don't know if it can happen without SSL), if you try to set up the QlikView Offline Service, you may face an issue where the Offline Service doesn't work at all. This happens even if the component was installed successfully and even if there are no errors in the QlikView log files. This issue comes from the fact that, by default, QlikView enforces FIPS compliance when using the Offline Service, which can cause problems depending on your enterprise network restrictions. After we reported this to the QlikView Support Team, they confirmed that it was a bug and fixed it in the next QlikView version (11.2 SR8 and above). A simple workaround for this issue can be set up by following these steps:

  1. SSL must be properly configured
  2. The QlikView Offline Service must be properly installed
  3. Login to the Windows Server with any Administrator account
  4. Open the file: C:/Windows/Microsoft.NET/Framework64/v4.0.30319/Config/machine.config
    1. Find the line with: ˂runtime /˃
    2. Replace this line with:
                         ˂runtime˃˂enforceFIPSPolicy enabled="false" /˃˂/runtime˃
  5. Save the file
  6. Open a command prompt as Administrator and execute the command: services.msc
  7. Restart all QlikView Services

Modification of the machine.config file to disable the FIPS enforcement

After doing so, you should be able to access your QlikView documents from a smartphone or a tablet to work offline.

II. Access to remote Shared Folders

As before, depending on your Windows Server GPOs, you may face some issues accessing files stored on a remote Shared Folder (accessed via the user who runs QlikView). By remote I mean another city, country, continent or whatever. This tip can help solve some Shared Folder access issues even if you aren't using QlikView; it's more of a Windows Server tip ;). Regarding QlikView, this issue can easily be found in the log file because you will see something like this during a task execution:


The configuration I will show you below worked for me but, depending on your network restrictions, it may not work as-is. The important thing here is to understand each parameter and the consequences of this configuration:

  1. Login to the Windows Server with any Administrator account
  2. Open a command prompt as Administrator and execute the command: regedit
  3. Open: HKLM ˃ SYSTEM ˃ CurrentControlSet ˃ Services ˃ LanmanServer ˃ Parameters
    1. Set "enablesecuritysignature" to 1
    2. Set "requiresecuritysignature" to 1
  4. Open: HKLM ˃ SYSTEM ˃ CurrentControlSet ˃ Services ˃ LanmanWorkstation ˃ Parameters
    1. Set "EnableSecuritySignature" to 1
    2. Set "RequireSecuritySignature" to 0
  5. Reboot the Windows Server

Configuration of the LanmanServer registry keys to 1-1

Configuration of the LanmanWorkstation registry keys to 1-0

As you can see, there are two different sections named "LanmanServer" and "LanmanWorkstation":

  • LanmanServer controls the parameters of the current Windows Server when it acts as a server
  • LanmanWorkstation controls the parameters of the current Windows Server when it acts as a client

For example, if you access a remote Shared Folder from the QlikView Windows Server, then you are acting as a client, and therefore with this configuration you can access everything regardless of the LanmanServer configuration of the Shared Folder's Windows Server. Indeed, the local SecuritySignature is enabled but not required (Enable=1, Require=0; I will shorten this as "1-0"), so it is the most generic case, covering all possible LanmanServer configurations of the remote host (three options: 0-0, 1-0 or 1-1).

In the same way, if a user tries to access a Shared Folder on the QlikView Server, then the QlikView Server will act as a LanmanServer and therefore the configuration taken into account is 1-1. This configuration can be changed, but if the LanmanWorkstation configuration of the user's laptop is 1-1, then the LanmanServer configuration will need to be 1-1, otherwise the user will not be able to access the Shared Folder of the QlikView Server. The 1-1 configuration is of course the most secure and therefore it's often (always?) chosen on the user's workstation. That's why it's generally a good idea to set the LanmanServer of the QlikView Server to 1-1 too.
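The way these Enable/Require pairs interact can be modelled with a small check (a simplified sketch of signing negotiation, not the full SMB specification; the function name is mine):

```python
# A session fails when one side requires signing and the other side
# has not even enabled it (simplified model of SMB signing negotiation).
def can_connect(client_enable, client_require, server_enable, server_require):
    if client_require and not server_enable:
        return False
    if server_require and not client_enable:
        return False
    return True

# A 1-0 client (enabled, not required) can reach 0-0, 1-0 and 1-1 servers:
assert all([can_connect(1, 0, 0, 0), can_connect(1, 0, 1, 0), can_connect(1, 0, 1, 1)])
# A 1-1 workstation cannot reach a server with signing disabled:
assert not can_connect(1, 1, 0, 0)
```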

It's really hard to explain this kind of stuff but I hope I was clear enough!

III. SSO between QlikView and "X"

Again, this configuration isn't something related only to QlikView, but it can be useful if you need, for example, to allow QlikView to automatically store some documents in another system "X", which can be a Document Management System (Alfresco, SharePoint, Documentum, and so on) or something else. You may not need to do this because it may already be allowed by default in your enterprise, but it's generally a good practice to restrict the SSO features on Windows Servers and therefore this kind of configuration is often required. For this configuration, let's define X as a third party software and as the URL related to it.

From the Windows Server, if you try to access the real URL of your third party software (e.g. for Alfresco Share it would be: ) and you get a pop-up window asking you for credentials, then follow the steps below:

  1. Ensure that your Active Directory is properly configured for SSO (this is a very large topic and I will not describe it here)
  2. Login to the Windows Server with the account under which QlikView is running
  3. Open: Internet Explorer ˃ Internet Options ˃ Security ˃ Trusted Sites ˃ Sites
    1. Write:
    2. Click on: Add
    3. Write: about:blank (this step may not be mandatory)
    4. Click on: Add (this step may not be mandatory)
    5. Close the window
  4. Click on: Custom Level...
    1. Scroll down and find: User Authentication
    2. Set "Logon" to "Automatic logon with current username and password"
    3. Click on: OK
    4. Click on: Apply
    5. Click on: OK
  5. Restart Internet Explorer
  6. Ensure that the account under which QlikView is running has the proper permissions on the third party software

Trusted sites configuration with the list of URLs for all "X"

SSO2.png: Enable the automatic logon with current user name and password for the Trusted Sites

After that configuration, if you try again to access the real URL of your third party software, the pop-up window should not be displayed and the login should succeed automatically. Of course, the last step is important because the account that is running QlikView must have access to the third party software, otherwise the SSO is useless...

This concludes this first approach of some QlikView Tips & Tricks. I'm sure that more will come soon but I will need to find some time to share that with you. I hope you will find this blog entry useful and don't hesitate to give me your feedback using the comments below!

Throwback Thursday: Middleware Newsletter Feature on Documents Cloud Service

WebCenter Team - Thu, 2015-02-19 06:00

Originally Published in Middleware Newsletter November 2014

Introducing Oracle Documents Cloud Service

The digital economy runs on a constant flow of information, and smart companies know that limiting access to information reduces the ability to make split-second decisions that drive competitive advantage. The challenge is to provide access to files for people constantly on the move, simply and without compromising security.

Solving that challenge is the driver behind Oracle Documents Cloud Service, announced at Oracle OpenWorld 2014.

"We wanted organizations to be able to sync and share files with the same simplicity found in consumer-oriented services, but with enterprise-grade security and ease of integration with their existing IT infrastructure and applications," says Oracle's Senior Director of Product Management David Le Strat.

Oracle Documents Cloud Service is organized around four key concepts.

  • Simplicity. Collaboration is an organic, intuitive process that succeeds best with tools that help processes unfold without imposing technical restrictions. Oracle Documents Cloud Service is available to you wherever you are, using whatever device you have handy. That means you can work with the tools you're comfortable using—a web browser or mobile app—without any special training.

  • Security. Enterprises cannot risk security vulnerabilities with mission-critical information. Oracle Documents Cloud Service provides enterprise-grade security. Enterprise encryption for files at rest or in transit, secure sharing with auditing, tracking, permission controls, and automatic backups keep your information safe. Your data and documents are completely isolated and secure, with separate database schemas and identity domains for each tenant.

  • Ubiquity. With support for iPhone, iPad, and Android mobile devices, along with desktop sync for both Mac and Windows computers, it doesn't matter how you access your documents. The service is always available, anytime—for you and everyone you collaborate with.

  • Integration. Oracle Documents Cloud Service has been designed with a rich set of APIs for integrating and extending business applications and processes with content collaboration. "For example," says Le Strat, "you can embed the Oracle Documents Cloud Service web interface into your business applications, letting your team manage their content directly within the context of existing tasks."

The ability to easily and securely access files and collaborate on business initiatives is a key underpinning to enterprise agility. With Oracle Documents Cloud Service, you can support the nimble business while still maintaining vital IT control and security.

Visit the Oracle Documents Cloud Service resource pages to view an e-book and access more information, including demos, data sheets, videos, and an FAQ.

PeopleTools 8.54: Global Temporary Tables

David Kurtz - Thu, 2015-02-19 05:11
This is part of a series of articles about new features and differences in PeopleTools 8.54 that will be of interest to the Oracle DBA.

Database Feature Overview

Global Temporary tables were introduced in Oracle 8i.  They can be used where an application temporarily needs a working storage table.  They are named
  • Global because the content is private
  • Temporary because the definition is permanent
Or if you prefer
  • Global because the definition is available to everyone
  • Temporary because 
    • physical instantiation of the table is temporary, in the temporary segment (so it isn't redo logged and so isn't recoverable),
    • but it does generate undo in the undo segment, and there is redo on the undo.
    • Each session gets its own private copy of the table in the temp segment.  So you cannot see what is in another session's temporary table, which can make application debugging difficult.
    • The physical instantiation of the table is removed either 
      • when the session disconnects - on commit preserve
      • or when the transaction is terminated with a commit or rollback - on commit delete
This is a very useful database feature (I have been using it in PeopleSoft applications ever since it was introduced). 
  • Can be used for temporary records in Application Engines where restart is disabled.
  • Can be implemented without any application code change.
  • Only Application Designer temporary records can be built as global temporary tables.  You cannot make a SQL Table record global temporary.
  • The reduction in redo generation during intensive batch processes, such as payroll processing, can bring significant performance benefits.  There is no point logging redo information for temporary working storage tables that you do not ever need to restore.
  • Shared temporary tables, such as those in the GP calculation process GPPDPRUN that is written in COBOL.  If using payroll streaming (multiple concurrent processes to process data in parallel), then concurrent delete/update can cause read consistency problems when using a normal table; but with global temporary tables, each session has its own physical table, so there is never any need for read-consistent recovery when reading a global temporary table.
  • Global temporary tables are also an effective way to resolve table high water mark issues that can occur on non-shared temporary tables in on-line application engine.  The PeopleTools %TruncateTable macro still resolves to delete.  You never get high water mark problems with global temporary tables because they are physically created afresh for each new session.  
  • There is often a reduction in database size because the tables are not retained after the session terminates.  Although there will be an increased demand for temporary tablespace while the global temporary tables are in use.
  • I have occasionally seen performance problems when PeopleSoft systems very frequently truncate tables and experience contention on the RO enqueue.  This problem does not occur with global temporary tables.
Global temporary tables are not a separately licensed database feature and are also available in Standard Edition.
Global Temporary Tables in PeopleTools

This is the create table DDL created by Application Designer:
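The DDL itself did not survive extraction, so here is a representative sketch of what Application Designer generates (the record and column names are made up for illustration; the GLOBAL TEMPORARY, ON COMMIT and TABLESPACE clauses are the significant parts):

```sql
-- Illustrative sketch only: PS_MY_TMP_TBL and its columns are hypothetical.
CREATE GLOBAL TEMPORARY TABLE PS_MY_TMP_TBL
( PROCESS_INSTANCE DECIMAL(10)  NOT NULL
, EMPLID           VARCHAR2(11) NOT NULL
, SEQ_NBR          INTEGER      NOT NULL
) ON COMMIT PRESERVE ROWS
TABLESPACE PSGTT01
/
```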
The first thing to point out is the specification of a tablespace.  This is a new feature in Oracle 11g.  It is not mandatory in Oracle, but it is coded into the PeopleSoft DDL model, so you must specify a temporary tablespace on the record, otherwise it will fail to build.  A new temporary tablespace PSGTT01 is delivered by Oracle when you upgrade to 8.54, or you could just use an existing temporary tablespace.

This new feature has been implemented using 2 new DDL models (statement types 6 and 7).

SELECT * FROM psddlmodel WHERE statement_type IN(6,7);

STATEMENT_TYPE PLATFORMID SIZING_SET  PARMCOUNT
-------------- ---------- ---------- ----------
             6          2          0          0
             7          2          0          0
  • All tables are created ON COMMIT PRESERVE ROWS, but on-line instances could be ON COMMIT DELETE ROWS (a theory subject to testing), and that would apply to ALL Application Engine programs even if restart is enabled, because commits are suppressed in on-line Application Engines.  Instead, the commit is done by the component.
If you try adding a global temporary table to an Application Engine program that is not restart-disabled, you quite rightly get the following error message. The table will be added, but the program will not execute correctly.

"Global Temporary Tables allocated to this restart enabled AE program will not retain any data when program exits."

Problems:
  • There has always been a 13 character limit on temporary record names, because there used to be a maximum of 99 non-shared instances, and 2 characters were reserved for the instance number.  If you try to set the number of instances to greater than 99 in an Application Engine program (I tried GP_GL_PREP) you now get the warning message
"Do not support more than 99 instances when select the Temp Table which are not attributed as GTT"
  • There is now a maximum length of 11 characters for the name of a record built as a global temporary table, because from PeopleTools 8.54 there can be up to 9999 non-shared instances of the record.  The restriction applies irrespective of how many instances you are actually using. 
    • I have yet to encounter a system where I need more than 99 instances of a temporary table.  I can just about imagine needing 100 non-shared instances, but not 1000.  
    • This means that I cannot retrofit global temporary tables into existing Application Engine processes without changing record names.  There are existing delivered Application Engine programs with 12 and 13 character temporary record names that cannot now be switched to use global temporary tables managed by Application Designer.  I don't need to support more instances just because the table is global temporary.
      • For example, GP_GL_SEGTMP in GP_GL_PREP is a candidate to be made global temporary because that is a streamed Global Payroll process.  When I tried, I got a record name too long error!
"Record Name is too long. (47,67)"
      • Really, if the table is global temporary you don't need lots of instances.  Everyone could use the shared instance, because Oracle gives each session a private physical copy of the table anyway. 
        • You could do this by removing the record name from the list of temporary records in the application engine, then the %Table() macro will generate the table name without an instance number.
        • There would be a question of how to handle optimizer statistics.  Optimizer statistics collected on a global temporary table in one session could end up being used in another session because there is only one place to store them in the data dictionary.
        • The answer is not to collect statistics at all and to use Optimizer Dynamic Sampling.  There is a further enhancement in Oracle 12c where the dynamically sampled stats from different sessions are kept separate.
    • When Application Designer builds an alter script, it can't tell whether it is global temporary or a normal table, so doesn't rebuild the table if you change it from one to the other.
    • The only real runtime downside of global temporary tables is that if you want to debug a process, the data is not left behind after the process terminates.  Even while the process is running, you cannot query the contents of a global temporary table in use by another session from your own session.
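One way to implement the "do not collect statistics at all" approach mentioned above is to delete any existing statistics and lock them, so that gathering is prevented and the optimizer falls back to dynamic sampling. A sketch (the record name is made up for illustration):

```sql
-- PS_MY_TMP_TBL is a hypothetical global temporary record name.
BEGIN
  dbms_stats.delete_table_stats(ownname => user, tabname => 'PS_MY_TMP_TBL');
  dbms_stats.lock_table_stats  (ownname => user, tabname => 'PS_MY_TMP_TBL');
END;
/
```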
My Recommendation

Support for global temporary tables is welcome and long overdue.  It can bring significant run time performance and system benefits due to the reduction in redo and read consistency work.  It can be implemented without any code change. 

We just need to sort out the 11 character record name length restriction.

©David Kurtz, Go-Faster Consultancy Ltd.

What has Angelo been doing? What's this marketplace all about?

    Angelo Santagata - Thu, 2015-02-19 04:39

About two years ago my role changed from focusing on Fusion Middleware enablement to SaaS integration enablement. Simply put, my team started looking at how to get partners integrated with our SaaS applications (Sales Cloud - CRM, HCM, ERP) using PaaS where needed, and more recently also looking at the pure PaaS enablement model.

The market is growing and we now have an enterprise app store aimed at partners where they can host their apps and integrations. Check out this video recently released featuring my VP, Sanjay Sinha, where he explains the ISV partner eco-system, how it's changing the way we do business and the key benefits for ISV and OEM partners.


    Snippet : How to query the Sales Cloud users username in groovy and in EL

    Angelo Santagata - Thu, 2015-02-19 04:16
I've decided to create a new category called "Snippets" to capture small chunks of information which might be useful to other people.

    In Groovy
    // Get the security context
    def secCtx = adf.context.getSecurityContext()
    // Check if user has a given role
    if (secCtx.isUserInRole('MyAppRole')) {
      // get the current user's name
      def user = secCtx.getUserName()
      // Do something if user belongs to MyAppRole
    }

    In an EL Expression

    Test MySQL on AWS quickly

    Kubilay Çilkara - Thu, 2015-02-19 02:20
    Using sysbench to performance-test AWS RDS MySQL hardware is an easy three-step operation. Sysbench creates synthetic tests, and they are run against a 1 million row 'sbtest' table that sysbench creates in the MySQL database you indicate. The test doesn't intrude on your database schema and it doesn't use your data, so it is quite safe. The test is an OLTP test trying to simulate event operations in the database as it runs various SELECT, INSERT, UPDATE and DELETE requests on its own 'sbtest' table. The results of the tests are metrics like transactions per second, number of events, elapsed time, etc. See the man pages for a description and Google it; it is a popular testing tool. Other things you can set it up to do are to control how many requests (events) you want it to execute in a given time, or to keep on executing requests until you stop it (destruction testing). It is a very flexible testing tool with many options, including throttling concurrency.

    You can be up and running with 3 commands on a Unix system as follows.

    Download the sysbench tool (doing this on Ubuntu):

    sudo apt-get install sysbench

    Create a table with 1 million rows:

    sysbench --test=oltp --oltp-table-size=1000000 --mysql-host={your rds host url} --db-driver=mysql --mysql-user={your rds root user} --mysql-password={password} --mysql-db={your mysql database name} prepare

    Test with different parameters:

    sysbench --test=oltp --oltp-table-size=1000000 --mysql-host={your rds host url} --db-driver=mysql --mysql-user={your rds root user} --mysql-password={password} --mysql-db={your mysql database name} --max-time=60 --num-threads=550 run

    Warning: synthetic tests will just show you the capability of the hardware under a given standard set of requests and DML operations. They are in no way an indication of what will happen to your database if the real workload increases because of the applications. Application load testing is something else; applications are complex!

    Database workload depends on the application-generated workload from real users using the system, and that is very hard to simulate in a test. It is not impossible if you use a database such as Oracle, which has the capability of recording and replaying its production database workload (the Database Replay feature, which works alongside the Automatic Workload Repository, AWR). In MySQL I couldn't find a way to do this so far. But sysbench synthetic tests gave me the ability to quickly benchmark and baseline a MySQL database's capabilities on different AWS Amazon hardware; something is better than nothing I suppose.
    Categories: DBA Blogs

    255 columns

    Jonathan Lewis - Wed, 2015-02-18 18:45

    Here’s a quick note, written and some strange time in (my) morning in Hong Kong airport as I wait for my next flight – all spelling, grammar, and factual errors will be attributed to jet-lag or something.

    And a happy new year to my Chinese readers.

    You all know that having more than 255 columns in a table is a Bad Thing ™ – and surprisingly you don’t even have to get to 255 to hit the first bad thing about wide tables. If you’ve ever wondered what sorts of problems you can have, here are a few:

    • If you’re still running 10g and gather stats on a table with more than roughly 165 columns then the query Oracle uses to collect the stats will only handle about 165 of them at a time; so you end up doing multiple (possibly sampled) tablescans to gather the stats. The reason why I can’t give you an exact figure for the number of columns is that it depends on the type and nullity of the columns – Oracle knows that some column types are fixed length (e.g. date types, char() types) and if any columns are declared not null then Oracle doesn’t have to worry about counting nulls – so for some of the table columns Oracle will be able to eliminate one or two of the related columns it normally includes in the stats-gathering SQL statement – which means it can gather stats on a few more table columns.  The 165-ish limit doesn’t apply in 11g – though I haven’t checked to see if there’s a larger limit before the same thing happens.
    • If you have more than 255 columns in a row Oracle will split it into multiple row pieces of 255 columns each plus one row piece for “the rest”; but the split counts from the end, so if you have a table with 256 columns the first row-piece has one column and the second row-piece has 255 columns. This is bad news for all sorts of operations because Oracle will have to expend extra CPU chasing the row pieces to make use of any column not in the first row piece. The optimists among you might have expected “the rest” to be in the last row piece. If you want to be reminded how bad row-chaining can get for wide tables, just have a look at an earlier blog note of mine (starting at this comment).
    • A particularly nasty side effect of the row split comes with direct path tablescans – and that’s what Oracle does automatically when the table is large. In many cases all the row pieces for a row will be in the same block; but they might not be, and if a continuation row-piece is in a different block Oracle will do a “db file sequential read” to read that block into the buffer cache.  As an indication of how badly this can affect performance, the results I got at a recent client site showed “select count(col1) from wide_table” taking 10 minutes while “select count(column40) from wide_table” took 22 minutes because roughly one row in a hundred required a single block read to follow the chain. An important side effect of the split point is that you really need to put the columns you’re going to index near the start of the table to minimise the risk of this row chaining overhead when you create or rebuild an index.
    • On top of everything else, of course, it takes a surprisingly large amount of extra CPU to load a large table if the rows are chained. Another client test reported 140 CPU seconds to load 5M rows of 256 columns, but only 20 CPU seconds to load 255.
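A quick way to see why "the split counts from the end" matters: the sketch below (Python, purely illustrative) splits a column count into row pieces the way described above, so a 256-column table yields a 1-column first piece followed by a 255-column piece.

```python
def row_pieces(n_cols, piece_size=255):
    """Split n_cols columns into row pieces, counting from the end,
    as described above: the remainder ends up in the FIRST piece."""
    if n_cols <= piece_size:
        return [n_cols]          # no split needed
    pieces = []
    remainder = n_cols % piece_size
    if remainder:
        pieces.append(remainder)  # small leading piece
    pieces.extend([piece_size] * (n_cols // piece_size))
    return pieces

print(row_pieces(255))   # [255] - a single row piece
print(row_pieces(256))   # [1, 255] - the 1-column piece comes first
print(row_pieces(600))   # [90, 255, 255]
```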

    If you are going to have tables with more than 255 columns, think very carefully about column order – if you can get all the columns that are almost always null at the end of the row you may get lucky and find that you never need to create a secondary row piece. A recent client had about 290 columns in one table of 16M rows, and 150 columns were null for all 16M rows – unfortunately they had a mandatory “date_inserted” column at the end of the row, but with a little column re-arrangement they eliminated row chaining and saved (more than) 150 bytes storage per row.  Of course, if they have to add and back-fill a non-null column to the table they’re going to have to rebuild the table to insert the column “in the middle”, otherwise all new data will be chained and wasting 150 bytes per row, and any old data that gets updated will suffer a row migration/chain catastrophe.

    The most frequently abused feature in APEX

    Denes Kubicek - Wed, 2015-02-18 16:02
    Today I participated in Scott's survey:

    and there was one really interesting question:

    "What do you think is most frequently abused? (and why)"

    If APEX has any weak points then it is definitely the fact that you can place your code almost everywhere, especially PL/SQL chunks of code. If I start counting and list all the places, it will be a very long list:

  • Page Read Only and Cache Page Condition
  • Region Conditional Display and Read Only Condition
  • Buttons Conditional Display
  • Items Conditional Display
  • Items Source value or expression
  • Items Post Calculation Computation
  • Items Default value
  • Items Conditional Display and Read Only Condition
  • Computations
  • Processes on Submit, on Load, on Demand
  • Validations and Validation Conditions
  • ...and yes, Dynamic Actions as my favorite.

    There is of course more but I will stop counting here. If you start working with APEX this is great - you will easily get on target and have an application up and running in no time. A little bit of code here, a little bit there and there we go.

    This problem becomes obvious if your application is more than just a small and temporary solution shared between a couple of people. As an application grows it will start suffering from performance issues. Furthermore, it will be hard to maintain. It will have dozens of complex pages with many items, many computations and processes, conditional buttons, validations and dynamic actions. If you practice writing anonymous PL/SQL blocks and you paste those wherever possible, your code will become redundant and slow. You will probably repeat the same code many times on your page. This means your pages will require more processing time than they should. I would even go so far as to say this is almost as bad as putting your business logic into triggers. Have you ever had a chance to debug such applications (when you didn't know the code was placed there)?

    The possibility to have so many options is of course great and useful. But it doesn't mean you should use all of them. The other important thing to know is that you should never write anonymous PL/SQL blocks in your applications. Never! Whenever you need to use PL/SQL, you should use packages and place your code there. The probability that you will then repeat your code is quite low, and your application will certainly run much faster.
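As a minimal sketch of what that refactoring looks like (the package, table and item names are all made up for illustration), a "PL/SQL Function Body Returning Boolean" condition that gets pasted into several places:

```sql
-- Before: the same anonymous block pasted into several conditions
DECLARE
  l_count PLS_INTEGER;
BEGIN
  SELECT COUNT(*) INTO l_count
  FROM   orders
  WHERE  created_by = :APP_USER;
  RETURN l_count > 0;
END;
```

becomes a single packaged function:

```sql
-- After: the logic lives once, in a package
CREATE OR REPLACE PACKAGE app_logic_pkg AS
  FUNCTION user_has_orders (p_user IN VARCHAR2) RETURN BOOLEAN;
END app_logic_pkg;
/
```

and every condition shrinks to one reusable, easily debugged call: app_logic_pkg.user_has_orders(:APP_USER).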

    My proposal for the future version of APEX would be to have a subtitle for each of those containers saying "Handle with care" and providing some explanations why. I am serious here. Really.

    Categories: Development

    Greenplum is being open sourced

    DBMS2 - Wed, 2015-02-18 15:51

    While I don’t find the Open Data Platform thing very significant, an associated piece of news seems cooler — Pivotal is open sourcing a bunch of software, with Greenplum as the crown jewel. Notes on that start:

    • Greenplum has been an on-again/off-again low-cost player since before its acquisition by EMC, but open source is basically a commitment to having low license cost be permanently on.
    • In most regards, “free like beer” is what’s important here, not “free like speech”. I doubt non-Pivotal employees are going to do much hacking on the long-closed Greenplum code base.
    • That said, Greenplum forked PostgreSQL a long time ago, and the general PostgreSQL community might gain ideas from some of the work Greenplum has done.
    • The only other bit of newly open-sourced stuff I find interesting is HAWQ. Redis was already open source, and I’ve never been persuaded to care about GemFire.

    Greenplum, let us recall, is a pretty decent MPP (Massively Parallel Processing) analytic RDBMS. Various aspects of it were oversold at various times, and I’ve never heard that they actually licked concurrency. But Greenplum has long had good SQL coverage and petabyte-scale deployments and a columnar option and some in-database analytics and so on; i.e., it’s legit. When somebody asks me about open source analytic RDBMS to consider, I expect Greenplum to consistently be on the short list.

    Further, the low-cost alternatives for analytic RDBMS are adding up.

    • Amazon Redshift has considerable traction.
    • Hadoop (even just with Hive) has offloaded a lot of ELT (Extract/Load/Transform) from analytic RDBMS such as Teradata.
    • Now Greenplum is in the mix as well.

    For many analytic RDBMS use cases, at least one of those three will be an appealing possibility.

    By no means do I want to suggest those are the only alternatives.

    • Smaller-vendor offerings, such as CitusDB or Infobright, may well be competitive too.
    • Larger vendors can always slash price in specific deals.
    • MonetDB is still around.

    But the three possibilities I cited first should suffice as proof for almost all enterprises that, for most use cases not requiring high concurrency, analytic RDBMS need not cost an arm and a leg.

    Related link

    Categories: Other