
Feed aggregator

MapR Sandbox for Hadoop Learning

Surachart Opun - Mon, 2014-03-31 10:49
I got an email about the MapR Sandbox, which is a fully functional Hadoop cluster running in a virtual machine (CentOS 6.5) that provides an intuitive web interface for both developers and administrators to get started with Hadoop. I believe it's a good way to learn about Hadoop and its ecosystem. Users can download it for VMware or VirtualBox. I downloaded the VirtualBox image and imported it, changed the network adapter to "Bridged Adapter", and after starting the VM, connected to http://ip-address:8443
Then, I selected "Launch HUE" and "Launch MCS", got some errors, and fixed them.
Finally, I could use HUE and MCS.


Hue is an interface for interacting with web applications that access the MapR File System (MapR-FS). Use the applications in HUE to access MapR-FS, work with tables, run Hive queries, MapReduce jobs, and Oozie workflows.

The MapR Control System (MCS) is a graphical, programmatic control panel for cluster administration that provides complete cluster monitoring functionality and most of the functionality of the command line.
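
As a quick illustration of the kind of query you can run from HUE's Hive editor, here is a minimal, hypothetical HiveQL sketch (the table and data are made up for the example and are not shipped with the sandbox):

-- Hypothetical toy table, created from HUE's Hive query editor
CREATE TABLE web_logs (ip STRING, url STRING, hits INT);

-- Top 10 URLs by total hits
SELECT url, SUM(hits) AS total_hits
FROM web_logs
GROUP BY url
ORDER BY total_hits DESC
LIMIT 10;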

After reviewing the MapR Sandbox for VirtualBox, I found that the "maprdev" account is a development account that can sudo to root.
login as: maprdev
Server refused our key
Using keyboard-interactive authentication.
Password:
Welcome to your Mapr Demo virtual machine.
[maprdev@maprdemo ~]$ sudo -l
Matching Defaults entries for maprdev on this host:
    !visiblepw, always_set_home, env_reset, env_keep="COLORS DISPLAY HOSTNAME HISTSIZE INPUTRC KDEDIR LS_COLORS", env_keep+="MAIL PS1 PS2 QTDIR USERNAME LANG
    LC_ADDRESS LC_CTYPE", env_keep+="LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES", env_keep+="LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE",
    env_keep+="LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY", secure_path=/sbin\:/bin\:/usr/sbin\:/usr/bin
User maprdev may run the following commands on this host:
    (ALL) NOPASSWD: ALL
[maprdev@maprdemo ~]$
[maprdev@maprdemo ~]$ sudo showmount -e localhost
Export list for localhost:
/mapr                *
/mapr/my.cluster.com *
[maprdev@maprdemo ~]$
Written By: Surachart Opun http://surachartopun.com
Categories: DBA Blogs

Innovation Springs to Life Again - Award Nominations Now Open

WebCenter Team - Mon, 2014-03-31 07:00


2014 Oracle Excellence Awards: Oracle Fusion Middleware Innovation

Oracle is pleased to announce the call for nominations for the 2014 Oracle Excellence Awards: Oracle Fusion Middleware Innovation.  The Oracle Excellence Awards for Oracle Fusion Middleware Innovation honor organizations using Oracle Fusion Middleware to deliver unique business value.  This year, the awards will recognize customers across nine distinct categories:

Customers may submit separate nomination forms for multiple categories; the 2014 Fusion Middleware categories are as follows (subject to change):

If you consider yourself a pioneer using these solutions in innovative ways to achieve significant business value, submit your nomination for the 2014 Oracle Excellence Awards for Oracle Fusion Middleware Innovation by Friday, June 20, 2014, for a chance to win a FREE registration to Oracle OpenWorld 2014 (September 28 - October 2) in San Francisco, California. Top customers will be showcased at Oracle OpenWorld 2014, get a chance to mingle with Oracle executives, network with their peers, and be featured in Oracle publications.

NOTE: All nominations are also considered for Oracle OpenWorld Session presentations and speakers. Speakers chosen receive FREE registration to Oracle OpenWorld. 

For additional details, see: Call for Nominations: Oracle Fusion Middleware Innovation

Last year’s winners can be seen here: 2013 FMW Innovation Winners

Come See Integrigy at Collaborate 2014

Come see Integrigy’s sessions at Collaborate 2014 in Las Vegas (http://collaborate14.com/). Integrigy is presenting the following papers:

IOUG - #526 Oracle Security Vulnerabilities Dissected, Wednesday, April 9, 11:00am

OAUG – #14365 New Security Features in Oracle E-Business Suite 12.2, Friday April 11, 9:45am

OAUG – #14366 OBIEE Security Examined, Friday, April 11, 12:15pm

If you are going to Collaborate 2014, we would also be more than happy to talk with you about your Oracle security projects or questions. If you would like to talk with us while at Collaborate, please contact us at info@integrigy.com.

Tags: ConferencePresentation
Categories: APPS Blogs, Security Blogs

APEX World 2014

Rob van Wijk - Mon, 2014-03-31 03:23
The fifth edition of OGh APEX World took place last Tuesday at Hotel Figi, Zeist in the Netherlands. Again it was a beautiful day full of great APEX sessions. Every year I think we've reached the maximum number of people interested in APEX and we'll never attract more participants. But, after welcoming 300 attendees last year, 347 people showed up this year. Michel van Zoest and Denes Kubicek

Health care executives find challenges in new IT adoption

Chris Foot - Mon, 2014-03-31 01:59

As the United States Centers for Medicare and Medicaid Services push health care providers toward electronic health record adoption, many industry leaders are finding the process to be much more difficult than the federal government anticipated. Many physicians are claiming that their in-house IT departments are struggling with implementation, while others are relying on database administration services to successfully deploy EHR programs. 

Forcing deployment 
As outlined by CMS, Stage 2 Meaningful Use requires all health care companies to utilize EHR systems by the end of this year. While some organizations have had better luck than others, the general consensus among professionals is that the industry was taken off guard by the mandate. Creed Wait, a family-practice doctor living in Texas, spoke with The Atlantic contributor James Fallows on a few of the issues hospital IT departments are facing. 

In general, Wait noted that if the health care industry was ready to deploy EHR systems, participants would have done so of their own accord. By forcing hospitals and treatment centers to acclimate to software that – in a number of respects – is poorly designed, Wait claimed the current approach is counterproductive to achieving better care delivery. 

"Our IT departments are swimming upstream trying to implement and maintain software that they do not understand while mandated changes to this software are being released before we can get the last update debugged and working," said Wait, as quoted by Fallows. 

Let someone else handle it 
In an effort to abide by stringent government regulations, some health care CIOs are turning to database support services capable of implementing and managing EHR programs better than in-house IT teams. According to Healthcare IT News, Central Maine Healthcare CIO Denis Tanguay noted that his workload nearly quadrupled once CMS' regulations came into effect. With just a staff of 70 employees to manage IT operations for three hospitals and 85 physician practices, Tanguay claimed that his department was buckling under the pressure. 

"My CEO has a line: We're not in the IT business, we're in the health care business," said Tanguay, as quoted by Healthcare IT News. "This allows me to focus more on making sure that we're focused on the hospital."

In order to resolve the issue, Tanguay advised his fellow executives that investing in a third-party database administration firm would be the most efficient way to streamline the EHR adoption process. The source reported that an outsourced entity specializing in network maintenance would be able to dedicate more resources and personnel to abiding by stringent CMS standards. 

SQL Server: transparent data encryption, key management, backup strategies

Yann Neuhaus - Mon, 2014-03-31 01:55

Transparent Data Encryption requires the creation of a database encryption key. The database encryption key is part of the hierarchy of the SQL Server encryption tree, with the DPAPI at the top of the tree. Going through the tree from top to bottom, we find the service master key, the database master key, the server certificate or the asymmetric key, and finally the database encryption key (AKA the DEK). In this hierarchy, each encryption key is protected by its parent. Encryption key management is one of the toughest tasks in cryptography: improperly managing the encryption keys can compromise the entire security strategy.
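
To make the hierarchy concrete, here is a minimal sketch of how it might be built from scratch on the AdventureWorks2012 demo database used below. This is an illustration only: the master key password is a placeholder and the AES_256 algorithm is an assumption, neither taken from the original setup.

USE [master];
GO

-- Database master key, protected by the service master key (and DPAPI above it)
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Pl@ceholder_P@ssw0rd!';
GO

-- Server certificate that will protect the database encryption key
CREATE CERTIFICATE TDE_Cert WITH SUBJECT = 'TDE Certificate';
GO

USE AdventureWorks2012;
GO

-- Database encryption key (DEK), protected by the server certificate
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_256
ENCRYPTION BY SERVER CERTIFICATE TDE_Cert;
GO

-- Turn transparent data encryption on
ALTER DATABASE AdventureWorks2012 SET ENCRYPTION ON;
GO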

Here is some basic advice on encryption keys:

  • Limit encryption key access to only those who really need it
  • Back up encryption keys and secure them. This is important to be able to restore them in case of corruption or disaster recovery scenarios (see the sketch just after this list)
  • Rotate the encryption keys on a regular basis. Key rotation based on a regular schedule should be part of the IT policy. Leaving the same encryption key in place for lengthy periods of time gives hackers and other malicious persons time to attack it. By rotating your keys regularly, your keys become a moving target, much harder to hit.
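
To illustrate the second point, here is a sketch of how each server certificate might be backed up right after its creation. The directory and password mirror the restore commands used further down in this post; adjust them to your environment.

USE [master];
GO

-- Back up the server certificate and its private key to files;
-- these are the files restored later in this post
BACKUP CERTIFICATE TDE_Cert
TO FILE = 'E:\SQLSERVER\ENCRYPTEDBACKUP\TDE_Cert.cer'
WITH PRIVATE KEY (
    FILE = 'E:\SQLSERVER\ENCRYPTEDBACKUP\TDE_Cert.pvk',
    ENCRYPTION BY PASSWORD = 'P@$$w0rd'
);
GO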

SQL Server uses the ANSI X9.17 hierarchical model for key management, which has certain advantages over a flat single-key model, particularly in terms of key rotation. With SQL Server, rotating the encryption key that protects the database encryption key requires decrypting and re-encrypting only an insignificantly small amount of symmetric key data, not the entire database.

However, managing the rotation of the encryption keys is very important. Imagine a scenario with a rotation schedule of every day (yes, we are paranoid!!!) and a backup strategy with a full backup every Sunday and a transaction log backup every night from Monday to Saturday:

 

Day          Sunday      Monday      Tuesday     Wednesday   Thursday    Friday      Saturday
Backup       FULL        LOG         LOG         LOG         LOG         LOG         LOG
Certificate  TDE_Cert1   TDE_Cert2   TDE_Cert3

 

Here is an interesting question I had to answer: if I have a database page corruption on Tuesday morning that requires restoring the concerned page from the full backup plus the two log backups from Monday and Tuesday, does it work with only the third encryption key? In short: do I need all the certificates TDE_Cert1, TDE_Cert2 and TDE_Cert3 in this case?

To answer this, let’s try with the AdventureWorks2012 database and the table Person.Person.

First, we can see the current server certificate used to protect the DEK of the AdventureWorks2012 database (we can correlate this with the certificate thumbprint):

SELECT
    name AS certificate_name,
    pvt_key_encryption_type_desc AS pvt_key_encryption,
    thumbprint
FROM master.sys.certificates
WHERE name LIKE 'TDE_Cert%';
GO

 

billet5_tde_certificate_1

 

SELECT
    DB_NAME(database_id) AS database_name,
    key_algorithm,
    key_length,
    encryptor_type,
    encryptor_thumbprint
FROM sys.dm_database_encryption_keys
WHERE database_id = DB_ID('AdventureWorks2012');

 

billet5_tde_dek_1

 

Now, we perform a full backup of the AdventureWorks2012 database followed by the database log backup:

BACKUP DATABASE [AdventureWorks2012]
TO DISK = 'E:\SQLSERVER\ENCRYPTEDBACKUP\ADVENTUREWORKS2012_DB.BAK'
WITH INIT, STATS = 10;

BACKUP LOG [AdventureWorks2012]
TO DISK = 'E:\SQLSERVER\ENCRYPTEDBACKUP\ADVENTUREWORKS2012_DB.TRN'
WITH INIT, STATS = 10;

 

billet5_tde_bckp_1

 

Then, according to our rotation strategy, we replace the old server certificate TDE_Cert with the new one, TDE_Cert_2, to protect the DEK:

-- Create a new server certificate
USE [master];
GO

CREATE CERTIFICATE TDE_Cert_2 WITH SUBJECT = 'TDE Certificate 2';

-- Encrypt the DEK with the new server certificate TDE_Cert_2
USE AdventureWorks2012;
GO

ALTER DATABASE ENCRYPTION KEY
ENCRYPTION BY SERVER CERTIFICATE TDE_Cert_2;
GO

-- Drop the old server certificate TDE_Cert
USE [master];
GO

DROP CERTIFICATE TDE_Cert;
GO

SELECT
    name AS certificate_name,
    pvt_key_encryption_type_desc AS pvt_key_encryption,
    thumbprint
FROM master.sys.certificates
WHERE name LIKE 'TDE_Cert%';
GO

 

billet5_tde_dek_2

 

SELECT
    DB_NAME(database_id) AS database_name,
    key_algorithm,
    key_length,
    encryptor_type,
    encryptor_thumbprint
FROM sys.dm_database_encryption_keys
WHERE database_id = DB_ID('AdventureWorks2012');

 

billet5_tde_certificate_2

 

We then perform a new log backup:

BACKUP LOG [AdventureWorks2012]
TO DISK = 'E:\SQLSERVER\ENCRYPTEDBACKUP\ADVENTUREWORKS2012_DB_2.TRN'
WITH INIT, STATS = 10;

 

billet5_tde_bckp_2

 

Finally, we repeat the same steps as above one last time (rotate the server certificate and perform a new log backup):

-- Create a new server certificate
USE [master];
GO

CREATE CERTIFICATE TDE_Cert_3 WITH SUBJECT = 'TDE Certificate 3';

-- Encrypt the DEK with the new server certificate TDE_Cert_3
USE AdventureWorks2012;
GO

ALTER DATABASE ENCRYPTION KEY
ENCRYPTION BY SERVER CERTIFICATE TDE_Cert_3;
GO

-- Drop the old server certificate TDE_Cert_2
USE [master];
GO

DROP CERTIFICATE TDE_Cert_2;
GO

SELECT
    name AS certificate_name,
    pvt_key_encryption_type_desc AS pvt_key_encryption,
    thumbprint
FROM master.sys.certificates
WHERE name LIKE 'TDE_Cert%';
GO

 

billet5_tde_certificate_3

 

SELECT
    DB_NAME(database_id) AS database_name,
    key_algorithm,
    key_length,
    encryptor_type,
    encryptor_thumbprint
FROM sys.dm_database_encryption_keys
WHERE database_id = DB_ID('AdventureWorks2012');

 

billet5_tde_dek_3

 

BACKUP LOG [AdventureWorks2012]
TO DISK = 'E:\SQLSERVER\ENCRYPTEDBACKUP\ADVENTUREWORKS2012_DB_3.TRN'
WITH INIT, STATS = 10;

 

billet5_tde_bckp_3

 

So we have achieved our backup strategy with a full backup and a sequence of three transaction log backups, before initiating a database corruption next. At the same time, we have performed a rotation of three server certificates as encryption keys. Now it's time to corrupt a data page that belongs to the table Person.Person in the AdventureWorks2012 database:

-- First we check the IAM page to get a page ID that belongs to the Person.Person table
DBCC IND(AdventureWorks2012, 'Person.Person', 1);
GO

 

billet5_tde_dbcc_ind_person_person

 

Then we randomly take a page from the result, with the ID 2840. To quickly corrupt the page, we use the undocumented DBCC WRITEPAGE as follows (warning: don't use DBCC WRITEPAGE in a production environment!):

 

ALTER DATABASE AdventureWorks2012 SET SINGLE_USER;
GO

DBCC WRITEPAGE(AdventureWorks2012, 1, 2840, 0, 2, 0x1111, 1);
GO

ALTER DATABASE AdventureWorks2012 SET MULTI_USER;
GO


We corrupt the page with ID 2840 by writing two bytes with the value 0x1111 at offset 0. The last parameter (directOrBufferpool) allows page checksum failures to be simulated by bypassing the buffer pool and flushing the concerned page directly to disk. We have to switch the AdventureWorks2012 database to single-user mode in order to use this option.

Now let's try to get data from the Person.Person table:

USE AdventureWorks2012;
GO

SELECT * FROM Person.Person;
GO

 

As expected, a logical consistency I/O error (incorrect checksum) occurs while reading the Person.Person table, with the following message:

 

billet5_tde_error_consistency

 

At this point, we have two options:

  • Run DBCC CHECKDB with the REPAIR option, but we will likely lose data in this case
  • Restore the page ID 2840 from a consistent full backup and the necessary sequence of transaction log backups, after taking a tail log backup

We are reasonable and decide to restore the page 2840 from the necessary backups, but first, we have to perform a tail log backup:

 

USE [master];
GO

-- tail log backup
BACKUP LOG [AdventureWorks2012]
TO DISK = 'E:\SQLSERVER\ENCRYPTEDBACKUP\ADVENTUREWORKS2012_TAILLOG.TRN'
WITH NORECOVERY, INIT, STATS = 10;

...

Now we begin our restore process by trying to restore the concerned page from the full backup, but we encounter the first problem:

 

-- Restore the page ID 2840 from the full backup
RESTORE DATABASE AdventureWorks2012 PAGE = '1:2840'
FROM DISK = 'E:\SQLSERVER\ENCRYPTEDBACKUP\ADVENTUREWORKS2012_DB.BAK'
WITH NORECOVERY, STATS = 10;
GO


billet5_tde_restore_page_full_backup_error


According to the above error message, we can't restore the page from this full backup media because it is protected by a server certificate. The displayed thumbprint corresponds to the TDE_Cert certificate, which was deleted during the rotation. At this point, we can understand why it is important to have a backup of the server certificate stored somewhere - this is where the basics of encryption key management come back to us.

Of course, we were on the safe side and performed a backup of each server certificate after creating it. Thus, we can restore the server certificate TDE_Cert:

 

USE [master];
GO

CREATE CERTIFICATE TDE_Cert
FROM FILE = 'E:\SQLSERVER\ENCRYPTEDBACKUP\TDE_Cert.cer'
WITH PRIVATE KEY (
    FILE = 'E:\SQLSERVER\ENCRYPTEDBACKUP\TDE_Cert.pvk',
    DECRYPTION BY PASSWORD = 'P@$$w0rd'
);
GO


Then, if we try to restore the page from the full database backup, it now works:

billet5_tde_restore_page_full_backup_success

 

To continue with the restore process, we now have to restore the transaction log backup sequence, beginning with the ADVENTUREWORKS2012_DB.TRN media:

 

RESTORE LOG [AdventureWorks2012]
FROM DISK = 'E:\SQLSERVER\ENCRYPTEDBACKUP\ADVENTUREWORKS2012_DB.TRN'
WITH NORECOVERY, STATS = 10;
GO


billet5_tde_restore_page_full_backup_success


Then we try to restore the second transaction log backup, ADVENTUREWORKS2012_DB_2.TRN, and we face the same problem as with the full backup earlier. To open the backup media, we first have to restore the certificate with the thumbprint displayed below:

 

RESTORE LOG [AdventureWorks2012]
FROM DISK = 'E:\SQLSERVER\ENCRYPTEDBACKUP\ADVENTUREWORKS2012_DB_2.TRN'
WITH NORECOVERY, STATS = 10;
GO


billet5_tde_restore_page_tran_log_backup_1_success


Ok, we have to restore the TDE_Cert_2 certificate …

 

CREATE CERTIFICATE TDE_Cert_2
FROM FILE = 'E:\SQLSERVER\ENCRYPTEDBACKUP\TDE_Cert_2.cer'
WITH PRIVATE KEY (
    FILE = 'E:\SQLSERVER\ENCRYPTEDBACKUP\TDE_Cert_2.pvk',
    DECRYPTION BY PASSWORD = 'P@$$w0rd'
);
GO


… and we retry restoring the second transaction log. As expected, it works:

 

billet5_tde_restore_page_tran_log_backup_2_success

At this point, we have only two transaction log backups to restore: ADVENTUREWORKS2012_DB_3.TRN and the tail log backup ADVENTUREWORKS2012_TAILLOG.TRN. Fortunately, these last two backup media are encrypted with TDE_Cert_3, which is the current server certificate protecting the DEK.

 

RESTORE LOG [AdventureWorks2012]
FROM DISK = 'E:\SQLSERVER\ENCRYPTEDBACKUP\ADVENTUREWORKS2012_DB_3.TRN'
WITH NORECOVERY, STATS = 10;
GO


billet5_tde_restore_page_tran_log_backup_3_success

 

RESTORE LOG [AdventureWorks2012]
FROM DISK = 'E:\SQLSERVER\ENCRYPTEDBACKUP\ADVENTUREWORKS2012_TAILLOG.TRN'
WITH RECOVERY, STATS = 10;
GO


billet5_tde_restore_page_tran_log_backup_4_success

 

The restore process is now finished and we can read the data from the Person.Person table without any problem:

 

USE AdventureWorks2012;
GO

SELECT * FROM Person.Person;
GO


billet5_tde_select_person_person_table

 

To summarize, we have seen in this post the importance of good key management in a backup/restore strategy. Of course, we chose a paranoid scenario to highlight the problem quickly, but you can easily transpose it to a normal context with a normal rotation schedule of the encryption keys - whether you use a server certificate, an asymmetric key, or a third-party tool.

So what about you, how do you manage your backup strategy with the rotation of the encryption keys?

IT outsourcing a popular trend in business practices

Chris Foot - Mon, 2014-03-31 01:45

Due to the complexity of contemporary IT infrastructures, many enterprises are turning toward database administration services to efficiently manage and secure their digital assets. Between the sophistication of modern hackers and the amount of endpoint devices that are connecting to corporate networks, executives are deducing that on-premise IT departments are not capable of ensuring operability as well as outsourced services.

Common problems 
As long as businesses continue to store critical information in their databases, hackers are sure to attempt to exploit them. Charlie Osborne, a contributor to ZDNet, claimed that these malevolent individuals and groups don't discriminate based on what kind of data enterprises harbor. Whether to obtain financial information, intellectual property or confidential information, cybercriminals look for the following vulnerabilities in an organization's network:

  • Companies without the assistance of third-party database administration often only test for what the system should be doing as opposed to what it should not. If unauthorized activity isn't identified, it can compromise an infrastructure.
  • Unnecessary database features employees neglect to use are often exploited by hackers capable of accessing the hub through legitimate credentials and then forcing the service to run malicious code. 
  • Database experts realize that encryption keys don't need to be held on a disk, but in-house IT teams may not be aware of this option, effectively giving infiltrators the ability to quickly decrypt vital information. 

Moving ahead 
While some corporations choose to stick to in-house management techniques, others are looking for ways to solidify the operability of their systems. The Minneapolis/St. Paul Business Journal reported that Cargill, a company specializing in the food procurement process, recently announced that it intends to hire the expertise of a database administration service to manage and oversee all IT operations for the worldwide organization. Although the move could potentially take away 300 jobs from the Twin Cities, some of Cargill's personnel will be hired by the outsourced company. 

As Cargill conducts operations in 67 countries and has more than 142,000 employees, its data collection methods are quite vast. Overall, the enterprise currently has 900 workers supporting IT operations, meaning that a mere 0.63 percent of staff is responsible for maintaining database functionality for the entire company. Furthermore, it's likely that many of these professionals don't have the industry-specific knowledge required to adequately manage its system. 

In Cargill's case, consulting with a company to undertake all database administration duties seems like the more secure option. Having a team of professionals well versed in the environment focusing all their energy toward one task can provide the food logistics expert with the protection necessary to conduct global operations. 

Bash infinite loop script

Vattekkat Babu - Sun, 2014-03-30 23:42

There are times when cron won't do for automated jobs. A classic example is when you need to start a script, enter some data, and then let it run like a cron job. Now, in most cases this can be handled with env variables or config files. However, what if someone needs to enter a secret value? You don't want that stored in the filesystem anywhere. For situations like these, you can get inspiration from the following script.

Deterministic functions, result_cache and operators

XTended Oracle SQL - Sun, 2014-03-30 16:51

In previous posts about the caching mechanism of deterministic functions, I wrote that cached results are kept only between fetch calls, but there is one exception to this rule: if all function parameters are literals, the cached result will not be flushed on every fetch call.
A little example showing the difference:

SQL> create or replace function f_deterministic(p varchar2)
  2     return varchar2
  3     deterministic
  4  as
  5  begin
  6     dbms_output.put_line(p);
  7     return p;
  8  end;
  9  /
SQL> set arrays 2 feed on;
SQL> set serverout on;
SQL> select
  2     f_deterministic(x) a
  3    ,f_deterministic('literal') b
  4  from (select 'not literal' x
  5        from dual
  6        connect by level<=10
  7       );

A                              B
------------------------------ ------------------------------
not literal                    literal
not literal                    literal
not literal                    literal
not literal                    literal
not literal                    literal
not literal                    literal
not literal                    literal
not literal                    literal
not literal                    literal
not literal                    literal

10 rows selected.

not literal
literal
not literal
not literal
not literal
not literal
not literal

As you can see, ‘literal’ was printed once, but ‘not literal’ was printed 6 times, so it was returned from cache 4 times.

Also I want to show the differences in consistency between:
1. Calling a function with deterministic and result_cache;
2. Calling an operator for a function with result_cache;
3. Calling an operator for a function with deterministic and result_cache.

In this example I will do updates in autonomous transactions to emulate updates from another session during query execution:
Tables and procedures with updates:

drop table t1 purge;
drop table t2 purge;
drop table t3 purge;

create table t1 as select 1 id from dual;
create table t2 as select 1 id from dual;
create table t3 as select 1 id from dual;

create or replace procedure p1_update as
  pragma autonomous_transaction;
begin
   update t1 set id=id+1;
   commit;
end;
/
create or replace procedure p2_update as
  pragma autonomous_transaction;
begin
   update t2 set id=id+1;
   commit;
end;
/
create or replace procedure p3_update as
  pragma autonomous_transaction;
begin
   update t3 set id=id+1;
   commit;
end;
/


Variant 1:

create or replace function f1(x varchar2) return number result_cache deterministic
as
  r number;
begin
   select id into r from t1;
   p1_update;
   return r;
end;
/


Variant 2:

create or replace function f2(x varchar2) return number result_cache
as
  r number;
begin
   select id into r from t2;
   p2_update;
   return r;
end;
/
create or replace operator o2
binding(varchar2)
return number
using f2
/


Variant 3:

create or replace function f3(x varchar2) return number result_cache deterministic
as
  r number;
begin
   select id into r from t3;
   p3_update;
   return r;
end;
/
create or replace operator o3
binding(varchar2)
return number
using f3
/


Test:

SQL> set arrays 2;
SQL> select
  2     f1(dummy) variant1
  3    ,o2(dummy) variant2
  4    ,o3(dummy) variant3
  5  from dual
  6  connect by level<=10;

  VARIANT1   VARIANT2   VARIANT3
---------- ---------- ----------
         1          1          1
         2          1          1
         2          1          1
         3          1          1
         3          1          1
         4          1          1
         4          1          1
         5          1          1
         5          1          1
         6          1          1

10 rows selected.

SQL> /

  VARIANT1   VARIANT2   VARIANT3
---------- ---------- ----------
         7         11         11
         8         11         11
         8         11         11
         9         11         11
         9         11         11
        10         11         11
        10         11         11
        11         11         11
        11         11         11
        12         11         11

10 rows selected.

We can see that function F1 returns the same result for every 2 executions - this is equal to the fetch size ("set arraysize 2"),
while operators O2 and O3 return the same result for all rows in the first query execution; in the second execution we can see the values incremented by 10 - equal to the number of rows.
What we can learn from that:
1. Calling function F1 with result_cache and deterministic reduces function executions, but the function results are inconsistent within the query;
2. Operator O2 returns consistent results, but the function is always executed because the result_cache is invalidated on every execution;
3. Operator O3 works just like operator O2, regardless of the function being deterministic.

All test scripts: tests.zip

Categories: Development

Partner Webcast – Oracle ADF Mobile - Implementing Data Caching and Syncing for Working Off Line

Mobile access to enterprise applications is fast becoming a standard part of corporate life. Such applications increase organizational efficiency because mobile devices are more readily at hand...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Visualising OBIEE DMS metrics with Graphite

Rittman Mead Consulting - Sun, 2014-03-30 13:50

Assuming you have set up obi-metrics-agent and collectl as described in my previous post, you have a wealth of data at your disposal for graphing and exploring in Graphite, including:

  • OS (CPU, disk, network, memory)
  • OBIEE’s metrics
  • Metrics about DMS itself
  • Carbon (Graphite’s data collector agent) metrics

In this post I’ll show you some of the techniques we can use to put together a simple dashboard.

Building graphs

First off, let’s see how Graphite actually builds graphs. When you select a data series from the Metrics pane it is added to the Graphite composer where you can have multiple metrics. They’re listed in a legend, and if you click on Graph Data you can see the list of them.

Data held in Graphite (or technically, held in whisper) can be manipulated and pre-processed in many ways before Graphite renders it. These can be mathematical transforms of the data (eg Moving Average), but also changes to how the data and its label are shown. Here I'll take the example of several of the CPU metrics (via collectl) to see how we can manipulate them.

To start with, I’ve just added idle, wait and user from the cputotals folder, giving me a nice graph thus:

We can do some obvious things like add in a title, from the Graph Options menu

Graphite functions

Looking at the legend there’s a lot of repeated text (the full qualification of the metric name) which makes the graph more cluttered and less easy to read. We can use a Graphite function to fix this. Click on Graph Data, and use ctrl-click to select all three metrics:

Now click on Apply Function -> Set Legend Name By Metric. The aliasByMetric function is wrapped around the metrics, and the legend on the graph now shows just the metric names which is much smarter:

You can read more about Graphite functions here.

Another useful technique is being able to graph out metrics using a wildcard. Consider the ProcessInfo group of metrics that DMS provides about some of the OBIEE processes:

Let’s say we want a graph that shows cpuTime for each of the processes (not all are available). We could add each metric individually:

But that’s time consuming, and assumes there are only two processes. What if DMS gives us data for other processes? Instead we can use a wildcard in place of the process name:

obieesample.DMS.dms_cProcessInfo.ProcessInfo.*.cpuTime

You can do this by selecting a metric and then amending it in the Graph Data view, or from the Graph Data view itself click on Add and use the auto-complete to manually enter it.

But now the legend is pretty unintelligible, and this time using the aliasByMetric function won't help because the metric name is constant (cpuTime). Instead, use the Set Legend Name By Node function. In this example we want the third node (the name of the process). Combined with a graph title this gives us:

This aliasByNode method works well for Connection Pool data too. However, it can be sensitive to certain characters (including brackets) in the metric name, throwing an IndexError: list index out of range error. The latest version of obi-metrics-agent should work around this by modifying the metric names before sending them to carbon.

The above graph shows a further opportunity for using Graphite functions. The metric is a cumulative one - the amount of CPU time that the process has used, in total. What would be more useful is to show the delta between each occurrence. For this, the derivative function is appropriate:

Sometimes you’ll get graphs with gaps in; maybe the server was busy and the collector couldn’t keep up.

2014-03-28_07-29-47

To “gloss over” these, use the Keep Last Value function:

2014-03-28_07-30-51

Saving graphs

You don’t have to login to Graphite by default, but to save and return to graphs and dashboards between sessions you’ll want to. If you used the obi-metrics-agent installation script then Graphite will have a user oracle with password Password01. Click the Login button in the top right of the Graphite screen and enter the credentials.

Once logged in, you should see a Save icon (for you young kids out there, that’s a 3.5″ floppy disk…).

You can return to saved graphs from the Tree pane on the left:

flot

As well as the standard Graphite graphing described above, you also have the option of using flot, which is available from the link in the top-right options, or the icon on an existing graph:

2014-03-30_21-44-43

Graphlot/Flot is good for things like examining data values at specific times:

2014-03-30_21-47-36

Creating a dashboard

So far we’ve seen individual graphs in isolation, which is fine for ad-hoc experimentation but doesn’t give us an overall view of a system. Click on Dashboard in the top-right of the Graphite page to go to the dashboards area, ignoring the error about the default theme.

You can either build Graphite dashboards from scratch, or you can bring in graphs that you have prepared already in the Graphite Composer and saved.

At the top of the Graphite Dashboard screen are the metrics available to you. Clicking on them drills down the metric tree, as does typing in the box underneath.

Selecting a metric adds it in a graph to the dashboard, and selecting a second adds it into a second graph:

You can merge graphs by dragging and dropping one onto the other:

Metrics within a graph can be modified with functions in exactly the same way as in the Graphite Composer discussed above:

To add in a graph that you saved from Graphite Composer, use the Graphs menu

You can resize the graphs shown on the dashboard, again using the Graphs menu:

To save your dashboard, use the Dashboard -> Save option.

Example Graphite dashboards

Here are some examples of obi-metrics-agent/Graphite being used in anger. Click on an image to see the full version.

  • OS stats (via collectl)
    OS stats from collectl
  • Presentation Services sessions, cache and charting
    Presentation Services sessions, cache and charting
  • BI Server (nqserver) Connection and Thread Pools
    BI Server (nqserver) Connection and Thread Pools
  • Response times vs active users (via JMeter)
    Response times vs active users (via JMeter)
Categories: BI & Warehousing

Ubuntu upgrade to 12.04: grub timeout does not work anymore

Dietrich Schroff - Sun, 2014-03-30 13:11
After doing the upgrade and solving some issues with my screen-resolution, another grub problem hit me:
The timeout for booting the standard kernel did not work anymore.
Inside /etc/default/grub I had

GRUB_TIMEOUT=10

but even after running update-grub, grub behaved like

GRUB_TIMEOUT=-1

If you need a good manual, just look here, but it did not help me either.

After some tries, I did the following:

In /boot/grub/grub.cfg I changed from

terminal_output gfxterm
if [ "${recordfail}" = 1 ] ; then
  set timeout=-1
else
  if [ x$feature_timeout_style = xy ] ; then
    set timeout_style=menu
    set timeout=10
  fi
fi

to

terminal_output gfxterm
recordfail=0
if [ "${recordfail}" = 1 ] ; then
  set timeout=-1
else
  if [ x$feature_timeout_style = xy ] ; then
    set timeout_style=menu
    set timeout=10
  fi
fi

This works, but I have to redo the change every time update-grub is run...

Ubuntu upgrade to 12.04: screen only works with 1024x768

Dietrich Schroff - Sun, 2014-03-30 12:57
After upgrading a laptop-system to 12.04 (precise pangolin) X only starts with a resolution of 1024x768 and not with 1366x768.
I followed many postings and tried many things:
  • add new resolution with xrandr (here)
  • create a xorg.conf file (here)
  • add new drivers (here)
  • ...
But there was only one thing wrong: /etc/default/grub

After changing the following line

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash i915.modeset=0"

to

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"

everything worked. (I had added i915.modeset=0 because with the older ubuntu version hibernate did not work and this configuration fixed it. [link])

300 : Rise of an Empire

Tim Hall - Sun, 2014-03-30 10:32

I went to see 300 : Rise of an Empire yesterday.

My feelings on this were a bit of a mixed bag. I was not the biggest fan of the original 300 movie at the cinema, but I have subsequently warmed to it. This film adds a bit more story about the lead up to the first film and fills in more details about what happened after, so it is kind-of like a combined prequel-sequel. Visually it was quite similar to 300, but it felt a little more low budget to me, like a longer, high budget episode of the Spartacus TV series…

The action sequences tended to follow a similar pattern of,

  • Fast shaky camera with no focus.
  • Slow motion slash.
  • Ultra-slow motion blood splash.
  • Repeat.

I did like the back story of how Xerxes became a God King. During that sequence I got really into the film, but after that the film started to meander and drag on a bit. By the end I was starting to nod off, so too was my pregnant friend, but her husband thought it was brilliant.

As is often the case, if you like the first you will probably like this. Just don’t hold out any hope that it will push the envelope, because it won’t.

Cheers

Tim…

PS. While Spartans had Scottish accents (see first film), it appears Athenians had Australian accents! You learn something new every day… :)


ASUS Notebook Terbaik dan Favoritku

Daniel Fink - Sun, 2014-03-30 08:56
If you're like many folks, you want to see some water to feel like you're really on vacation - lots and lots of water! Italy boasts some of the most lovely beaches and resorts in all of Europe.

The Amalfi coast is just south of Naples, and you will want to visit this area as part of your Italy vacation. Amalfi is a resort town also known for its lemon trees. Not only do you get the sea here, but this is the region that has celebrated cities and the Mount Vesuvius volcano. During your vacation here you'll find that public transportation is very easy to use, particularly around the Bay of Naples.

If you do attempt to drive around here on your vacation, bear in mind that the roads on the coast tend to be narrow, so you have to be very careful. There are many beach hotels and resorts, and along with the sandy shores and green-blue waters this promises to deliver a beautiful Italy vacation.

If you'd like a coastal vacation away from most other tourists, you'd do better to visit the old fishing villages on the Italian Riviera. The Italian Riviera is a portion of the Italian coast that lies between the Ligurian Sea and the mountains. The capital of the Liguria region is Genova. During your vacation in the area, you'll find that fishing is a huge part of the economy here. The fishing village of Portofino is world famous and tends to be visited by many tourists, but it will not be as crowded as the Amalfi Coast.

Here you'll find the sculpture of Christ of the Abyss, placed about seventeen meters underwater to protect the fishermen and others. Beach resorts in Japan and Orlando, Florida have been built to recreate the details of Portofino Harbor on the Italian Riviera.

For more lovely coastal scenery during your Italy vacation, you will want to visit Bordighera. This is also a fishing village on the Italian Riviera. There will probably be fewer crowds here, and this place is known as a retreat for the rich in Europe. The palms used on Palm Sunday in St. Peter's Basilica in Rome are from Bordighera.

If you like seafood, you'll have plenty of chances to enjoy delicious delicacies while visiting the coast on your Italy vacation. Seafood is a big part of these regions, and you can taste the freshness in the meals prepared by the restaurants you eat at on the coast during your vacation. Some of the best seafood restaurants in Italy are found in the coastal regions.

During your vacation here you'll get the chance to sample some appetizing herb-crusted shrimp, scallop pasta and traditional halibut with leeks. Olive oil, garlic and Italian flat-leaf parsley are widely used in seafood dishes here, along with other Italian herbs like basil, oregano and rosemary.

Running an Airport

Pete Scott - Sun, 2014-03-30 08:26
If Manston Airport is up for sale (and the current owner's motivation to make money may mean that she has other plans) I'd buy the place. The price has to be right though, as a lot of extra money will need to be invested to give a chance of success. Sadly, I can't stretch to […]

Installation of Oracle Identity Management (OID/OVD/ODSM/OIF) 11gR1(11.1.1.7) – Part 1

Online Apps DBA - Sun, 2014-03-30 04:56
This post covers installation of OID/OVD 11gR1 (11.1.1.7), which will be used as the user repository (Identity Store) for our Oracle Access Manager (OAM) 11gR2 Admin Training (training starts on 3rd May and the fee is 699 USD). If you are new to Oracle Identity & Access Management, then first check Identity Management Products from Oracle   1. [...]

This is a content summary only. Visit my website http://onlineAppsDBA.com for full links, other content, and more!
Categories: APPS Blogs

Embanet and 2U: More financial insight into Online Service Providers

Michael Feldstein - Sat, 2014-03-29 11:24

While I have written recently about UF Online and 2U, there is actually very little insight into the operations and finances of the market segment for Online Service Providers (OSP, also known as School-as-a-Service, Online Program Management). Thanks to 2U going public yesterday and the Gainesville Sun doing investigative work on UF Online, we have more information on one of the highest growth segments for educational technology and online learning.

2U’s IPO

2U went public yesterday, initially offered at $13.00 per share and closing the day at $13.98 (a 7.5% gain). The following is not intended to be a detailed stock market evaluation - just the basics, to show the general scale of the company as insight into the market. While there is no direct comparison, this IPO is a much better showing than the most recent ed tech offering, Chegg (down 2.7% on its first day and down 26% to date). Based on 2U's first day of trading and the IPO filing:

  • 2U’s market valuation is $547 million, and the company raised $120 million from the IPO;
  • 2U’s annual revenue for 2013 was $83.1 million with $28.0 million in net losses, representing a revenue growth of 49% per year;
  • 69% of this revenue ($57 million) came from one client, USC, with two programs – masters of education (Rossier Online) and social work;
  • Across all 9 customers, 2U makes $10,000 – $15,000 in revenue per student per year;
  • Across all 9 customers, 2U makes an average of $10 million in revenue per customer per year;
  • Across all 9 customers, 2U’s customers make an average of $10 million in net revenue per year; and
  • Across all 9 customers, 2U’s customers are charging $17,000 – $45,000 per student per year in tuition.
Pearson Embanet’s Contract with UF Online

Meanwhile, the Gainesville Sun has been doing some investigative work on the University of Florida Online (UF Online) contract with Pearson Embanet. Embanet is the largest OSP in the market and was purchased by Pearson for $650 million in 2012. From yesterday’s article in the Sun we get some specific information on the UF Online contract.

The University of Florida will pay Pearson Embanet an estimated $186 million over the life of its 11-year contract — a combination of direct payments and a share of tuition revenue — to help launch and manage the state’s first fully online, four-year degree program.

Initially the financial terms of the contract were hidden by University of Florida officials due to “trade secrets”, but the Sun was persistent and found a presentation with similar information, eventually leading to UF providing the contract with most redactions removed.

According to the article and its interview with Associate Provost Andy McCollough (who took over the executive director position at UF Online when the first one resigned after just two and a half months), Pearson Embanet will be paid $9.5 million over the first five years to help with startup costs. After this point, Pearson Embanet's pay will come from revenue sharing (similar to 2U and most OSP contracts).

Gov. Rick Scott signed a bill last year tapping UF to create an online university that would offer a full, four-year degree program at 75 percent of the tuition that residential students pay. The Legislature gave UF $35 million in startup funds for the first five years, and also gave the university six months to get the program up and running.

The program started in January 2014 with 583 transfer students, with the first freshmen expected in September 2014. What we don't know about the program startup is how much Pearson Embanet will invest in the program. Typically an OSP loses money for the first 3 - 5 years of a program startup ($9.5 million will not cover costs), which is one of the rationales for long-term contracts of 10 years or more. The model is that up front the provider loses money (see 2U's losses for comparison) and makes a profit on the back end of the contract. For UF Online, the state legislature plans to stop subsidies by 2019, assuming the program will be self-sustaining.

For the fall term (first term not purely based on transfer students), UF Online is planning on 1,000 students, and so far 91 have signed up. I do not know if this is on target or not.

Under its new contract with UF, Pearson is responsible for creating “proprietary digital content,” providing admission and enrollment support, generating leads and signing new students, tracking retention rates, engaging in joint research and development, and providing on-demand student support.

Note that this set of services is not as comprehensive as what 2U provides. For example, UF Online will use the Canvas LMS from Instructure, like the rest of the University of Florida, whereas 2U provides its own learning platform built on top of Moodle and Adobe Connect.

After 2018, UF will also stop paying Pearson directly and Pearson’s income will come entirely from its share of tuition revenue and any fees it charges. UF projects it will have over 13,000 students in UF Online generating $43 million in tuition revenue by 2019 — of which Pearson will get close to $19 million.

By 2024, with 24,000 students anticipated, revenues generated will be about $76 million, with $28 million going to Pearson, McCollough said.

Based on the 2024 projections ($76 million in revenue and $28 million to Pearson across 24,000 students), UF Online expects to generate approximately $3,167 per student in tuition revenue, with Pearson Embanet taking about $1,167 per student and UF keeping roughly $2,000.

Notes

Below are some additional notes the 2U and Pearson Embanet examples.

  • It is important to recognize the difference in target markets here. 2U currently targets high-tuition master’s programs, and the UF Online example is an undergraduate program with the goal of charging students 75% of face-to-face UF costs.
  • While the total contract values seem high, the argument for this model is that without the massive investment and startup capability of OSP companies, the school either would not be able to create the online program by itself or at least would not have been able to do so as quickly.
  • Despite the difference in market and in services, the difference in revenue per student between 2U and Pearson Embanet is still remarkable: $10 - 15k for 2U vs. roughly $1.2k for Pearson Embanet.

Full disclosure: Pearson is a client of MindWires Consulting but not for OSP. All information here is from public sources.

The post Embanet and 2U: More financial insight into Online Service Providers appeared first on e-Literate.

OGh APEX Conference

Denes Kubicek - Sat, 2014-03-29 04:32
Last week I was presenting at OGh (ORACLE GEBRUIKERSCLUB HOLLAND) APEX World. My topic was "APEX 4.2 Application Deployment and Application Management". I can only recommend this conference to all the APEX users in Europe. This is definitely the biggest APEX conference on our continent. If you don't travel to ODTUG, then this is something you shouldn't miss. They have an international track where you can listen to well-known APEX developers and book authors - this time Dan McGhan, Martin Giffy D'Souza, Joel Kallman, Dietmar Aust, Roel Hartman, and Peter Raganitsch. For the tracks in Dutch, speakers are also willing to switch to English at any time if there are visitors who don't understand Dutch. Altogether, Dutch people are open minded and I admire their sense for organizing such events - they definitely know how to do it.

Categories: Development

Java Cookbook 3rd Edition

Surachart Opun - Sat, 2014-03-29 01:10
Java is a programming language and computing platform. Lots of applications and websites use it. The latest Java version is Java 8, which Oracle announced on March 25, 2014. I'd like to mention a book - Java Cookbook by Ian F. Darwin - and this book covers Java 8.
It isn't a book for someone who is new to Java (readers should know a bit of syntax to write Java), but it is a book that will help readers learn from real-world examples. Readers, and people who work in Java development, can use this book as a reference or pick examples to use in their work. In the book, readers will find 24 chapters - "Getting Started: Compiling, Running, and Debugging", "Interacting with the Environment", "Strings and Things", "Pattern Matching with Regular Expressions", "Numbers", "Dates and Times - New API", "Structuring Data with Java", "Object-Oriented Techniques", "Functional Programming Techniques: Functional Interfaces, Streams, Spliterators, Parallel Collections", "Input and Output", "Directory and Filesystem Operations", "Media: Graphics, Audio, Video", "Graphical User Interfaces", "Internationalization and Localization", "Network Clients", "Server-Side Java", "Java and Electronic Mail", "Database Access", "Processing JSON Data", "Processing XML", "Packages and Packaging", "Threaded Java", "Reflection, or "A Class Named Class"", "Using Java with Other Languages".

Each example is useful for learning and practicing Java programming. Anyone who knows a bit about Java can read and use it. Anyway, I suggest readers know the basics of Java programming before starting with this book.

Written By: Surachart Opun http://surachartopun.com
Categories: DBA Blogs