
Feed aggregator

Coding in PL/SQL in C style, UKOUG, OUG Ireland and more

Pete Finnigan - 3 hours 28 min ago

My favourite language is hard to pinpoint; is it C or is it PL/SQL? My first language was C and I love the elegance and expression of C. Our product PFCLScan has its main functionality written in C. The....[Read More]

Posted by Pete On 23/07/14 At 08:44 PM

Categories: Security Blogs

Integrating PFCLScan and Creating SQL Reports

Pete Finnigan - 3 hours 28 min ago

We were asked by a customer whether PFCLScan can generate SQL reports instead of the normal HTML, PDF, MS Word reports so that they could potentially scan all of the databases in their estate and then insert either high level....[Read More]

Posted by Pete On 25/06/14 At 09:41 AM

Categories: Security Blogs

Automatically Add License Protection and Obfuscation to PL/SQL

Pete Finnigan - 3 hours 28 min ago

Yesterday we released the new version 2.0 of our product PFCLObfuscate. This is a tool that allows you to automatically protect the intellectual property in your PL/SQL code (your design secrets) using obfuscation and now in version 2.0 we....[Read More]

Posted by Pete On 17/04/14 At 03:56 PM

Categories: Security Blogs

Twitter Oracle Security Open Chat Thursday 6th March

Pete Finnigan - 3 hours 28 min ago

I will be co-chairing/hosting a Twitter chat on Thursday 6th March at 7pm UK time with Confio. The details are here. The chat is done over Twitter so it is a little like the Oracle security round table sessions....[Read More]

Posted by Pete On 05/03/14 At 10:17 AM

Categories: Security Blogs

PFCLScan Reseller Program

Pete Finnigan - 3 hours 28 min ago

We are going to start a reseller program for PFCLScan and we have started the planning and recruitment process for this program. I have just posted a short blog on the PFCLScan website titled "PFCLScan Reseller Program". If....[Read More]

Posted by Pete On 29/10/13 At 01:05 PM

Categories: Security Blogs

PFCLScan Version 1.3 Released

Pete Finnigan - 3 hours 28 min ago

We released version 1.3 of PFCLScan, our enterprise database security scanner for Oracle, a week ago. I have just posted a blog entry on the PFCLScan product site blog that describes some of the highlights of the over 220 new....[Read More]

Posted by Pete On 18/10/13 At 02:36 PM

Categories: Security Blogs

PFCLScan Updated and Powerful features

Pete Finnigan - 3 hours 28 min ago

We have just updated PFCLScan, our company's database security scanner for Oracle databases, to version 1.2 and added some new features, some new content and more. We are working to release another service update also in the next couple....[Read More]

Posted by Pete On 04/09/13 At 02:45 PM

Categories: Security Blogs

Oracle Security Training, 12c, PFCLScan, Magazines, UKOUG, Oracle Security Books and Much More

Pete Finnigan - 3 hours 28 min ago

It has been a few weeks since my last blog post but don't worry, I am still interested in blogging about Oracle 12c database security and indeed have nearly 700 pages of notes in MS Word related to 12c security....[Read More]

Posted by Pete On 28/08/13 At 05:04 PM

Categories: Security Blogs

Business Rules in Enterprise Apps and Data Mining Applications

Dylan's BI Notes - 7 hours 52 min ago
Transaction systems have several places to capture decisions and several places to capture business rules. These are the potential places where a data mining scoring engine can be deployed. Here are typical “decisions” captured in the enterprise apps that I can think of: Marketing - Marketing Strategy and Planning - the 4 Ps are the decisions […]
Categories: BI & Warehousing

Nymi Band Impressions

Oracle AppsLab - Tue, 2015-05-26 13:35

Editor’s note: Here’s the first post from Osvaldo Villagrana (@vaini11a), one of our AppsLab Mexico team members. Enjoy.

During the last week I've been playing with the Nymi Discovery Kit I got back at our AT&T hackathon, and here are my impressions, both from a developer's point of view on the Nymi SDK and as a user.

For those who don't know, this band is a wearable biometric identity device that lets you use your heart's unique signature (a.k.a. electrocardiogram or ECG) to authenticate and validate your identity.

The main problem they want to solve is saving users from having to remember all the passwords, PINs and security codes we use on a daily basis.

First off, the Discovery Kit includes the band, a Bluetooth dongle for Windows and a USB cable for charging the band. The Bluetooth dongle is included because at the beginning the Nymi band could only be paired with Windows, but it can now be paired with OS X and Android as well.


The Nymi band's material at first feels cheap and easy to bend or break, but it really fits very well on my wrist. The connection terminals at both ends of the strap are quite exposed to water and dust, though they say the band is water resistant but not waterproof.


The band is adjustable and can accommodate wrist sizes up to 7.5” in circumference. A full charge takes approximately two hours when you use a wall outlet or computer, and the battery lasts three days.


Setting up the band requires some steps; the band must be enrolled and authenticated with your own ECG using the NCA (Nymi Companion App), available on Windows, OS X and Android. I decided to use the Android app this time. I tried OS X and Windows too, but it's the same. Once the band is clasped on your wrist it will confirm the charge level and immediately enter broadcast mode.

I found this step a bit confusing as there's no feedback when the band is already in broadcast mode, so you are not quite sure whether your band is ready to be discovered. The funny thing is there's no way to turn it off.

After the band is clasped, the Android app asks you to put your finger over the sensor on the band. It takes about a minute for the app to analyze and save your ECG info. After that, you're ready to pair your Nymi with any NEA (Nymi Enabled App, i.e. third-party apps). The band supports up to 7 different app profiles (they say more will be supported in coming updates).


Any time the clasp is opened, the band must be authenticated once again, and with the same NCA app as before. If you want to use any other NCA app (OS X or Windows), the band has to be reset and set up all over again. This is not ideal.

NEAs must provision a unique key-value (profile) that is saved in the band for future use, and this happens only once for each NEA. The NEA should store the provision returned from the band for future communication. On subsequent usage, NEAs validate against the provisioned Nymi band. Once validation is successful, the NEA can assume an authenticated user. All those steps must be implemented by the developer using the SDKs for the different platforms.


To complete the exercise, I wrote an Android app that runs the provisioning and validation flow and finally gets the user authenticated if he or she is close enough to the device, in this case a phone or tablet. After I got authenticated my wife wore the band and tried to get authenticated, but authentication failed every time, as expected.

The SDK is good but needs some enhancements, though. Even at Nymi they are having a hard time with problems in their own NEAs, like the unlock app for Mac OS X that currently is not working, and I have posted a couple of issues and bugs I found.

As a first attempt at this new authentication automation niche, I like it, and I think it is good enough.

I see a lot of potential and possible use cases for this band in the enterprise. I would definitely use it, but what I would really love is a band that can handle authentication, sport and motion tracking, notifications and time in the same device. That's probably too much for now, but I'm looking forward to seeing that device soon.

MySQL Query Profiling with Performance Schema

Pythian Group - Tue, 2015-05-26 12:34

One of my favorite tools for query optimization is profiling. But recently I noticed this warning:

mysql> set profiling=1;
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> show warnings;
+---------+------+----------------------------------------------------------------------+
| Level   | Code | Message                                                              |
+---------+------+----------------------------------------------------------------------+
| Warning | 1287 | '@@profiling' is deprecated and will be removed in a future release. |
+---------+------+----------------------------------------------------------------------+

After looking through the documentation, I should indeed start using the Performance Schema to get this information.

Okay, so let’s give that a try.

I confirmed that I started MySQL 5.6.23 with the default of Performance Schema = ON:

mysql> show global variables like '%perf%';
+--------------------------------------------------------+-------+
| Variable_name                                          | Value |
+--------------------------------------------------------+-------+
| performance_schema                                     | ON    |
...

I’ll be using a development server for doing query profiling, so I can turn all of these on:

mysql> update performance_schema.setup_instruments set enabled='YES', timed='YES'; #you want the stage* ones enabled
mysql> update performance_schema.setup_consumers set enabled='YES'; #you want the events_statements_history* and events_stages_history* enabled
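
If you prefer not to switch on every instrument and consumer on a busier server, a narrower filter along these lines should be enough for stage profiling (a sketch; it assumes the default global/thread consumers are still enabled, and the LIKE patterns can be tightened further):

mysql> update performance_schema.setup_instruments set enabled='YES', timed='YES' where name like 'stage/%' or name like 'statement/%';
mysql> update performance_schema.setup_consumers set enabled='YES' where name like 'events_statements_%' or name like 'events_stages_%';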

Start with fresh collection tables:

mysql> truncate performance_schema.events_stages_history_long;
mysql> truncate performance_schema.events_statements_history_long;

Then turn the profiler on:

mysql> set profiling=1;

Now run a sample query:

mysql> select distinct(msa) from zip.codes;

And find the resulting event IDs to use in the query below:

mysql> select event_id, end_event_id, sql_text from performance_schema.events_statements_history_long where sql_text like '%msa%';
...
|      41 |       938507 | select distinct(msa) from zip.codes                                                                  |
...

Insert those beginning and ending event IDs, and here’s the new profiling output on my test query from Performance Schema:

mysql> select substring_index(event_name,'/',-1) as Status, truncate((timer_end-timer_start)/1000000000000,6) as Duration from performance_schema.events_stages_history_long where event_id>=41 and event_id<=938507;

+----------------------+----------+
| Status               | Duration |
+----------------------+----------+
| init                 | 0.000103 |
| checking permissions | 0.000006 |
| Opening tables       | 0.000051 |
| init                 | 0.000014 |
| System lock          | 0.000007 |
| optimizing           | 0.000003 |
| statistics           | 0.000011 |
| preparing            | 0.000011 |
| Creating tmp table   | 0.000048 |
| executing            | 0.000002 |
| Sending data         | 1.251331 |
| end                  | 0.000003 |
| removing tmp table   | 0.000008 |
| query end            | 0.000006 |
| closing tables       | 0.000009 |
| freeing items        | 0.000111 |
| cleaning up          | 0.000002 |
+----------------------+----------+

Compare this with the legacy profiling output available for the query:

mysql> show profile for query 1;
+----------------------+----------+
| Status               | Duration |
+----------------------+----------+
| starting             | 0.000125 |
| checking permissions | 0.000007 |
| Opening tables       | 0.000020 |
| init                 | 0.000014 |
| System lock          | 0.000007 |
| optimizing           | 0.000003 |
| statistics           | 0.000011 |
| preparing            | 0.000011 |
| Creating tmp table   | 0.000027 |
| executing            | 0.000001 |
| Sending data         | 1.353825 |
| end                  | 0.000005 |
| removing tmp table   | 0.000007 |
| end                  | 0.000002 |
| query end            | 0.000006 |
| closing tables       | 0.000009 |
| freeing items        | 0.000069 |
| cleaning up          | 0.000028 |
+----------------------+----------+

The obvious question is: why would I want to be limited to this information when the Performance Schema has so much more available?

But this proves we can still get profiler information in the format we're used to once MySQL removes the profiling tool entirely.
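
If you would rather not copy event IDs by hand, the stage rows can also be tied back to their statement through the NESTING_EVENT_ID column. A sketch (it assumes the statement of interest is still present in events_statements_history_long and that the LIKE pattern matches only that one statement):

mysql> select substring_index(event_name,'/',-1) as Status, truncate(timer_wait/1000000000000,6) as Duration from performance_schema.events_stages_history_long where nesting_event_id = (select event_id from performance_schema.events_statements_history_long where sql_text like '%msa%' limit 1);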

 

Learn more about Pythian’s expertise in MySQL.

Categories: DBA Blogs

Making Existing SQLPLUS Scripts 12c and Container DB (PDB) Compatible

Pythian Group - Tue, 2015-05-26 12:21

Oracle 12c introduces new catalog features including CDB_ dictionary views (which include a CON_ID column) superseding the DBA_ views that most DBA sqlplus scripts are based upon.

However, existing DBA sqlplus scripts can easily be modified using just a few simple sqlplus techniques to be compatible with 11g, as well as all types of 12c databases including legacy and container databases.

The following simple SQL and sqlplus techniques can be used to make a “universal script” that is compatible with all versions.

Illustrating the Issue

Let's say for the sake of example that we have a simple 10g/11g monitoring script that's checking the amount of free space in each tablespace by querying the DBA_TABLESPACE_USAGE_METRICS view.

On our 10g or 11g database the following query gives the necessary information:

SQL> select version from v$instance;

VERSION
-----------------
11.2.0.4.0

SQL> select tablespace_name, tablespace_size, used_percent
  2  from DBA_TABLESPACE_USAGE_METRICS
  3  order by tablespace_name;

TABLESPACE_NAME                TABLESPACE_SIZE USED_PERCENT
------------------------------ --------------- ------------
FCCDEV                                  256000      .053125
SYSAUX                                 1024000   31.0617188
SYSTEM                                 1024000   9.19453125
TEMP                                   1024000            0
UNDOTBS1                               1024000      .015625
USERS                                   256000        1.275

6 rows selected.

SQL>

 

Now will the same query work on a 12c database? Of course it will:

SQL> select version from v$instance;

VERSION
-----------------
12.1.0.2.0

SQL> select tablespace_name, tablespace_size, used_percent
  2  from DBA_TABLESPACE_USAGE_METRICS
  3  order by tablespace_name;

TABLESPACE_NAME                TABLESPACE_SIZE USED_PERCENT
------------------------------ --------------- ------------
SYSAUX                                 4194302   .773048769
SYSTEM                                 4194302   1.05991414
TEMP                                   4194302            0
UNDOTBS1                               4194302   .031280532
USERS                                  4194302   .003051759

SQL>

 

It executes successfully on the 12c database but there's a problem: the query is only returning the data from the root container (or more accurately, from the container in which the statement was executed). The PDB data is missing; I have both open and closed PDBs in this database:

SQL> select con_id, name, open_mode from V$CONTAINERS order by con_id;

    CON_ID NAME                           OPEN_MODE
---------- ------------------------------ ----------
         1 CDB$ROOT                       READ WRITE
         2 PDB$SEED                       READ ONLY
         3 TEST1                          READ WRITE
         4 LDB3                           MOUNTED

SQL>

 

The LDB3 PDB is closed (mounted), so I'm not interested in monitoring the tablespace free space in it, but I am interested in the details from the open TEST1 PDB.

To get the required information we need to make two or three (the third being optional) changes:

1) Change the view from DBA_ to CDB_
2) Add the CON_ID column to the output
3) Add the CON_ID column to the ORDER BY clause

Hence (executing from CDB$ROOT) the query becomes:

SQL> select con_id, tablespace_name, tablespace_size, used_percent
  2  from CDB_TABLESPACE_USAGE_METRICS
  3  order by con_id, tablespace_name;

    CON_ID TABLESPACE_NAME                TABLESPACE_SIZE USED_PERCENT
---------- ------------------------------ --------------- ------------
         1 SYSAUX                                 4194302   .773048769
         1 SYSTEM                                 4194302   1.05991414
         1 TEMP                                   4194302            0
         1 UNDOTBS1                               4194302   .031280532
         1 USERS                                  4194302   .003051759
         3 AUDIT_DATA                               64000        .2875
         3 SYSAUX                                 4194302   .410843091
         3 SYSTEM                                 4194302   .474167096
         3 TPCCTAB                                1024000   5.63203125

9 rows selected.

SQL>

 

So that works fine, but as it stands we have two versions of the query and therefore we need two monitoring scripts.

 

Building Blocks for the Universal Script

Applying a number of simple sqlplus techniques can help us with this and will allow us to make a single universal version of the sqlplus script.

1) Use a SQLPLUS variable:

sqlplus lets us define substitution variables, either explicitly with the DEFINE command or, as below, populated from a query using COLUMN ... NEW_VALUE. We can easily define a variable that tells us which view prefix to use depending on whether the database version is 11g or 12c.

SQL> COLUMN view_prefix NEW_VALUE view_prefix
SQL> SELECT DECODE(SUBSTR(version,1,INSTR(version,'.')-1),'12','CDB','DBA') view_prefix FROM v$instance;

VIE
---
CDB

SQL>

 

2) Dynamically build the view name:

The second tip is that in sqlplus to concatenate a variable with a string a period must be used to show where the variable name ends:

SQL> prompt &view_prefix
CDB

SQL> prompt &view_prefix._TABLESPACE_USAGE_METRICS
CDB_TABLESPACE_USAGE_METRICS

SQL>

 

Plugging that into the original query gives:

SQL> select tablespace_name, tablespace_size, used_percent
  2  from &view_prefix._TABLESPACE_USAGE_METRICS
  3  order by tablespace_name;
old   2: from &view_prefix._TABLESPACE_USAGE_METRICS
new   2: from CDB_TABLESPACE_USAGE_METRICS

TABLESPACE_NAME                TABLESPACE_SIZE USED_PERCENT
------------------------------ --------------- ------------
AUDIT_DATA                               64000        .2875
SYSAUX                                 4194302   .410843091
SYSAUX                                 4194302   .773048769
SYSTEM                                 4194302   1.05991414
SYSTEM                                 4194302   .474167096
TEMP                                   4194302            0
TPCCTAB                                1024000   5.63203125
UNDOTBS1                               4194302   .031280532
USERS                                  4194302   .003051759

9 rows selected.

SQL>

But we’re missing the container ID column.

 

3) Add columns dynamically using additional sqlplus variables:

We can “optionally” include columns such as the CON_ID column using the same technique:

SQL> COLUMN view_prefix NEW_VALUE view_prefix NOPRINT
SQL> SELECT DECODE(SUBSTR(version,1,INSTR(version,'.')-1),'12','CDB','DBA') view_prefix FROM v$instance;

SQL> COLUMN con_id_col NEW_VALUE con_id_col NOPRINT
SQL> SELECT DECODE(SUBSTR(version,1,INSTR(version,'.')-1),'12','con_id,','') con_id_col FROM v$instance;

SQL> select &con_id_col tablespace_name, tablespace_size, used_percent
  2  from &view_prefix._TABLESPACE_USAGE_METRICS
  3  order by &con_id_col tablespace_name;
old   1: select &con_id_col tablespace_name, tablespace_size, used_percent
new   1: select con_id, tablespace_name, tablespace_size, used_percent
old   2: from &view_prefix._TABLESPACE_USAGE_METRICS
new   2: from CDB_TABLESPACE_USAGE_METRICS
old   3: order by &con_id_col tablespace_name
new   3: order by con_id, tablespace_name

    CON_ID TABLESPACE_NAME                TABLESPACE_SIZE USED_PERCENT
---------- ------------------------------ --------------- ------------
         1 SYSAUX                                 4194302   .773239504
         1 SYSTEM                                 4194302   1.05991414
         1 TEMP                                   4194302            0
         1 UNDOTBS1                               4194302   .003814699
         1 USERS                                  4194302   .003051759
         3 AUDIT_DATA                               64000        .2875
         3 SYSAUX                                 4194302   .410843091
         3 SYSTEM                                 4194302   .474167096
         3 TPCCTAB                                1024000   5.63203125

9 rows selected.

SQL>

 

Note that the comma is in the variable and not in the column list in the SQL SELECT or ORDER BY clauses.

The script is now dynamically determining whether to use the CDB_ or DBA_ view and similarly dynamically adding the CON_ID column to the SELECT and ORDER BY clauses. (And of course should be executed from the root container.)

And the exact same script still works on the 11g database using the 11g version of sqlplus!

Similarly the optional column (including the comma) defined in the sqlplus variable could be used in an aggregation GROUP BY clause. However, if the query has no other aggregation columns then we might need to add a constant to the GROUP BY clause (and ORDER BY), otherwise the GROUP BY would have no columns listed when the universal sqlplus script is executed against an 11g database.

For example:

SQL> COLUMN view_prefix NEW_VALUE view_prefix NOPRINT
SQL> SELECT DECODE(SUBSTR(version,1,INSTR(version,'.')-1),'12','CDB','DBA') view_prefix FROM v$instance;

SQL> COLUMN con_id_col NEW_VALUE con_id_col NOPRINT
SQL> SELECT DECODE(SUBSTR(version,1,INSTR(version,'.')-1),'12','con_id,','') con_id_col FROM v$instance;

SQL> select &con_id_col min(extended_timestamp), max(extended_timestamp)
  2  from &view_prefix._AUDIT_TRAIL
  3  group by &con_id_col 1 order by &con_id_col 1;
old   1: select &con_id_col min(extended_timestamp), max(extended_timestamp)
new   1: select con_id, min(extended_timestamp), max(extended_timestamp)
old   2: from &view_prefix._AUDIT_TRAIL
new   2: from CDB_AUDIT_TRAIL
old   3: group by &con_id_col 1 order by &con_id_col 1
new   3: group by con_id, 1 order by con_id, 1

    CON_ID MIN(EXTENDED_TIMESTAMP)                  MAX(EXTENDED_TIMESTAMP)
---------- ---------------------------------------- ----------------------------------------
         3 13-MAY-15 11.54.52.106301 AM -06:00      13-MAY-15 12.16.18.941308 PM -06:00

SQL>

 

Finally, once we’re done testing and debugging, we can get rid of the ugly “old” and “new” statements using:

SET VERIFY OFF

Implementing these techniques will allow most existing DBA sqlplus scripts to be modified into universal versions, which will be compatible with 11g (and likely earlier) databases as well as 12c legacy and container databases.
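
Pulling those pieces together, a complete universal version of the tablespace script might look like the following (a sketch only; save it as, say, ts_usage.sql and run it from CDB$ROOT on a 12c container database, or as a DBA user on 11g):

SET VERIFY OFF
COLUMN view_prefix NEW_VALUE view_prefix NOPRINT
SELECT DECODE(SUBSTR(version,1,INSTR(version,'.')-1),'12','CDB','DBA') view_prefix FROM v$instance;
COLUMN con_id_col NEW_VALUE con_id_col NOPRINT
SELECT DECODE(SUBSTR(version,1,INSTR(version,'.')-1),'12','con_id,','') con_id_col FROM v$instance;

select &con_id_col tablespace_name, tablespace_size, used_percent
from &view_prefix._TABLESPACE_USAGE_METRICS
order by &con_id_col tablespace_name;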

 

Deeper Dive

What if our monitoring query is based on an underlying catalog table and not a dictionary view?

For example, let’s say that our objective is to report on users and the last time the database password was changed. The password change date isn’t presented in the DBA_USERS or CDB_USERS view, but it is in the underlying SYS.USER$ table. Hence the monitoring query might be something like:

SQL> select name, ptime from SYS.USER$
  2  where type#=1 order by name;

NAME                     PTIME
------------------------ ---------
ANONYMOUS                23-APR-15
...
SYSTEM                   23-APR-15
XDB                      23-APR-15
XS$NULL                  23-APR-15

 

If we look at the view definition of any of the CDB_ views it is apparent that the view traverses the open PDBs by using the new 12c “CONTAINERS” function which accepts a table name as the only argument.

When run from the root container the CONTAINERS() function will traverse all open PDBs (assuming the common user used has local PDB permission to access the referenced table).

NOTE: Prior to 12.1.0.2 the CONTAINERS function was called CDB$VIEW.

Thus, we can use the new function as follows:

SQL> select con_id, name, ptime from CONTAINERS(SYS.USER$)
  2  where type#=1 order by con_id, name;

    CON_ID NAME                     PTIME
---------- ------------------------ ---------
         1 ANONYMOUS                23-APR-15
...
         1 SYSTEM                   23-APR-15
         1 XDB                      23-APR-15
         1 XS$NULL                  23-APR-15
         3 ANONYMOUS                23-APR-15
...
         3 SYSTEM                   23-APR-15
         3 XDB                      23-APR-15
         3 XS$NULL                  23-APR-15
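
Incidentally, because the result set of CONTAINERS() carries the CON_ID column, the same query can be restricted to a single PDB when that is all you are interested in, for example (assuming, as above, that TEST1 is container 3):

SQL> select con_id, name, ptime from CONTAINERS(SYS.USER$)
  2  where type#=1 and con_id=3 order by name;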

 

Or to make the script universal so the single script can be run on both 11g and 12c:

SQL> COLUMN view_prefix NEW_VALUE view_prefix NOPRINT
SQL> SELECT DECODE(SUBSTR(version,1,INSTR(version,'.')-1),'12','CONTAINERS(SYS.USER$)','SYS.USER$') view_prefix FROM v$instance;

SQL> COLUMN con_id_col NEW_VALUE con_id_col NOPRINT
SQL> SELECT DECODE(SUBSTR(version,1,INSTR(version,'.')-1),'12','con_id,','') con_id_col FROM v$instance;

SQL> select &con_id_col name, ptime from &view_prefix.
  2  where type#=1 order by &con_id_col name;
old   1: select &con_id_col name, ptime from &view_prefix.
new   1: select con_id, name, ptime from CONTAINERS(SYS.USER$)
old   2: where type#=1 order by &con_id_col name
new   2: where type#=1 order by con_id, name

    CON_ID NAME                     PTIME
---------- ------------------------ ---------
         1 ANONYMOUS                23-APR-15
...
         1 XDB                      23-APR-15
         1 XS$NULL                  23-APR-15
         3 ANONYMOUS                23-APR-15
...
         3 XDB                      23-APR-15
         3 XS$NULL                  23-APR-15

SQL>

 

A final question might be: why isn’t the PDB$SEED database shown in the results?

The answer is that a new 12c initialization parameter EXCLUDE_SEED_CDB_VIEW controls whether the seed database is displayed in CDB_ views (or CONTAINERS() function calls). EXCLUDE_SEED_CDB_VIEW is dynamic and session modifiable:

SQL> show parameter EXCLUDE_SEED_CDB_VIEW

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
exclude_seed_cdb_view                boolean     TRUE

SQL> select con_id, count(1) from cdb_users group by con_id;

    CON_ID   COUNT(1)
---------- ----------
         1         18
         3         20

SQL> alter session set EXCLUDE_SEED_CDB_VIEW=FALSE;

Session altered.

SQL> select con_id, count(1) from cdb_users group by con_id;

    CON_ID   COUNT(1)
---------- ----------
         1         18
         2         17
         3         20

SQL>

 

Other tools

Another question is whether this technique will still work if the SQL script is run through other tools. The answer is: "it depends".

It depends on whether the other tools support the "define" command and the use of script variables. Specifically, Oracle SQL Developer and the newer sqlcl tool do. The above examples work fine in SQL Developer and sqlcl using the standard sqlcl "default" sqlformat. Other sqlformat options in sqlcl show some issues (testing was with sqlcl version 4.2.0.15.121.1046).

 

Learn more about Pythian’s expertise in Oracle and MySQL.

Categories: DBA Blogs

Gilbane Webcast: Delivering Next-Gen Digital Experiences

WebCenter Team - Tue, 2015-05-26 12:02
EContent Sponsored Web Events | Live Streaming Audio
Delivering Next-Gen Digital Experiences
Thursday, May 28, 2015 - 11:00am PT / 2:00pm ET

Becoming a digital business is an imperative for all organizations to deliver the next wave of revenue growth, service excellence and brand loyalty. Today's enterprises need to connect "experiences" to outcomes, encompassing the entire customer engagement lifecycle. And the stakes are high: 94% of customers discontinue communications because of irrelevant messages and experiences.

The key to success is leveraging enterprise-level marketing technology to unlock breakthrough innovations for audience engagement and connecting experiences to business outcomes:
  • Deliver omni-channel experiences that are seamless, tailored and innovative across Paid, Owned and Earned media
  • Convert unknown audiences to known and involved customers
  • Extend reach and orchestrate engagement across all channels and devices
  • Attribute Marketing performance to revenue across channels and target audiences
REGISTER NOW to reserve your seat for this special webinar event.

Log Buffer #424: A Carnival of the Vanities for DBAs

Pythian Group - Tue, 2015-05-26 11:45

This Log Buffer Edition covers various valuable blog posts from the fields of Oracle, SQL Server and MySQL.

Oracle:

  • Oracle Big Data Appliance X5-2 with Big Data SQL for the DBA.
  • Loading, Updating and Deleting From HBase Tables using HiveQL and Python.
  • In keeping with the ODA quarterly patching strategy, Appliance Manager 12.1.2.3 is now available.
  • From time to time someone publishes a query on the OTN database forum and asks how to make it go faster, and you look at it and think, “it’s a nice example to explain a couple of principles because it’s short, easy to understand, obvious what sort of things might be wrong, and easy to fix.”
  • Optimizing the PL/SQL Challenge IV: More OR Condition Woes.

SQL Server:

  • Will RDBMs be obsolete? Should Data Professionals care about Big Data technologies? What is NoSQL? What is Hadoop?
  • In a development team, there are times when the relationships between developers and testers can become strained. How can you turn this potential conflict into something more positive?
  • Michael Fal is a huge advocate of automation and many ways it can improve the lives of developers and DBAs alike, but you can’t just automate all your problems away.
  • One way to handle a very complex database project with several databases and cross references.
  • Building the Ideal VMware-based SQL Server Virtual Machine.

MySQL:

  • Optimizing Out-of-order Parallel Replication with MariaDB 10.0.
  • General-purpose MySQL applications should read MySQL option files like /etc/my.cnf, ~/.my.cnf, … and ~/.mylogin.cnf. But ~/.mylogin.cnf is encrypted.
  • Creating and Restoring Database Backups With mysqldump and MySQL Enterprise Backup.
  • If you don’t know much about bash shell, you should start with the prior post to learn about bash arrays.
  • Installing Kubernetes Cluster with 3 minions on CentOS 7 to manage pods and services.

Learn more about Pythian's expertise in Oracle, SQL Server and MySQL.

Categories: DBA Blogs

Worth Considering: Faculty perspective on student-centered pacing

Michael Feldstein - Tue, 2015-05-26 11:43

By Phil Hill

Over the weekend I wrote a post based on the comment thread at Friday’s Chronicle article on e-Literate TV.

One key theme coming through from comments at the Chronicle is what I perceive as an unhealthy cynicism that prevents many people from listening to students and faculty on the front lines (the ones taking redesigned courses) on their own merits.

Sunday's post highlighted two segments of students describing their experiences with re-designed courses, but we also need to hear directly from faculty. Too often the public discussion of technology-enabled initiatives focuses on the technology itself, often assuming that the faculty involved are bystanders or technophiles. But what about the perspectives of faculty members – you know, those who are in the classrooms working with real students – on what challenges they face and what changes are needed from an educational perspective? There is no single perspective from faculty, but we could learn a great deal through their unique, hands-on experiences.

Consider the specific case of why students might need to work at their own pace.

The first example is from a faculty member at Middlebury College describing the need for a different, more personalized approach for his geographic information system (GIS) course.

Jeff Howarth: And what I would notice is that there would be some students who would like me to go a little bit faster but had to wait and kind of daydream because they were just waiting. And then there were some students that desperately wanted me slow down. Then you get into that kind of slowest-car-on-the-freeway, how-fast-can-you-really-go type of thing. So, I would slow down, which would lose part of the group.

Then there would be some folks that probably would want me to slow down but would never ask because they don’t want to call attention to themselves as being the kind of—the slow car on the freeway.

Michael Feldstein: At this point, Jeff realized that even his small class might not be as personalized as it could be with the support of a little more technology.

Jeff Howarth: What I realized is that, if I just started packaging that instruction, the worked example, I could deliver the same content but allow students to first—if I made videos and posted it on something like YouTube, I was putting out the same content, but students could now watch it at their own pace and in the privacy of being able to go as slow as they need to without the social hang-ups of being considered different.

So, that was really the first step of—I did all of this, and then I told another colleague in languages what I was doing. And he said, “Well, that’s called ‘flipping the classroom.’” And I thought, “OK.” I mean, but that’s not really why. I did it without knowing that I was flipping the classroom, but then that’s how it happened.

Compare this description with an example from an instructor at Essex County College teaching developmental math.

Pamela Rivera: When I was teaching the traditional method, I’ll have students coming in and they didn’t know how to multiply. They didn’t know how to add and subtract. Rarely would those students be able to stay throughout the semester, because after the third—no, even after the second week, everyone else was already in division and they’re still stuck.

And the teacher can’t stop the class and say, “OK, let’s continue with multiplication,” because you have a syllabus to stick to. You have to continue teaching, and so those students will be frustrated, and so they drop the class.

At the same time, you had students who—the first couple of weeks they’ll be extremely bored because they already know all of that. And so, unfortunately, what would happen is eventually you would get to a point in the content that—they don’t know that, but because they have been zoning out for weeks, they don’t get that “OK, now, I actually have to start paying attention.” And so, yes, they should have been able to do that, but they still are not very successful because they were used to not paying attention.

Remarkably Similar Descriptions

Despite these two examples coming from very different cases, the actual descriptions that faculty offer on the need for course designs that allow students to control their own pacing are remarkably similar. These isolated examples are not meant to end debate on personalized learning or on what role technology should play (rather they should encourage debate), but it is very useful to listen to faculty members describe the challenges they face on an educational level.


Leveraging Oracle Developer Cloud Service in SQL and PL/SQL Projects - lifecycle and team collaboration

Shay Shmeltzer - Tue, 2015-05-26 11:37

Usually my demos are targeted at Java developers, but I realize that a lot of developers out there are not using Java; for example, in the Oracle install base there is a huge section of PL/SQL developers. This, however, doesn't change their requirements for a development platform. They can still benefit from version management and code review functionality. They still need to track bugs/issues and requirements from their users, and they still need to collaborate in a team environment.

So I decided to try it out and see what the development lifecycle experience would be for PL/SQL developers if they leverage the services provided by the Oracle Developer Cloud Service - here is a demo that shows a potential experience.

What you'll see in the demo:

  • Using JDeveloper to create DB Diagrams, Tables and PL/SQL code
  • Version manage PL/SQL and SQL with Git
  • Defining a cloud project and adding users
  • Check code in, and branch PL/SQL functions
  • Tracking tasks for developers
  • Code review by team members
  • Build automation (with Ant) - and almost a deploy to the DB

As you can see it is quite a nice complete solution that is very quick to setup and use.

It seems that the concepts of continuous integration are not yet a common thing in the world of PL/SQL development. In the demo I use the Ant SQL task to show how you could run a SQL script you created to create the objects directly in the database - which is probably the equivalent of doing a deployment in the world of Java. However, if you prefer, you can use Ant to copy files, zip them, or do many other tasks, such as running automated testing frameworks.

The Ant task I used is this:

  <path id="antclasspath">
    <fileset dir=".">
      <include name="ojdbc7.jar"/>
    </fileset>
  </path>
   <target name="deploy">
    <sql driver="oracle.jdbc.OracleDriver" userid="hr2" password="hr"
         url="jdbc:oracle:thin:@//server:1521/sid" src="./script1.sql" 
	classpathref="antclasspath"/>
  </target> 

I had both the ojdbc7.jar file and the script file at the root of the project for convenience. 
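
The contents of script1.sql are not important for the demo - any SQL or DDL you keep under version control would do. A hypothetical example is below; note that the Ant sql task splits statements on ";" by default, so deploying PL/SQL blocks this way would also need the task's delimiter/delimitertype attributes set accordingly:

create table demo_log (
  log_id    number primary key,
  message   varchar2(200),
  logged_at date default sysdate
);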

While my demo uses JDeveloper, you should be able to achieve similar functionality with any tool that supports Git. In fact, if you would rather not use a tool, you can simply use the command line to check your files directly into the cloud.

Categories: Development

OBIEE 11.1.1.9: New Features for Front-End Developers and Business Users

Rittman Mead Consulting - Tue, 2015-05-26 11:13

OBIEE 11.1.1.9 was released this May, and Robin and Mark wrote great posts about the new features for system administrators and the new support for HiveServer2 and Cloudera Impala respectively. In this post, we will see some useful new features for front-end development. Some of these features, like treemap, styles, etc., were included in the OBIEE 11.1.1.7.10 version, but I'm including them here anyway because if you are moving from an earlier release of 11.1.1.7 you probably haven't seen them yet.

Search & Sort options inside the Subject Area

There is a new option to search inside the subject area when we create an analysis. If we press the magnifying glass icon in the Subject Areas pane, a search box appears and OBIEE will return all the objects that contain the text that we entered. We can also use the % wildcard.


In addition there is also an option to order the objects inside the subject area, sorting the folders and the presentation columns inside them. The default is to show the objects in the order in which they are saved in the repository.


Save Calculated Items in the Web Catalog

A very useful feature for business users is the ability to create calculated items in an analysis and save them in the catalog in order to reuse them in other analyses for the same subject area.

In the Results tab there is a new icon for creating a new measure. The calculation is created in the same way the column formulas are created.


After the new measure is created you can go to the Criteria tab and, in the column pop-up menu, select the Save Column As option. In this manner you save the new measure as a new object in the catalog to be reused in other analyses. The feature of creating a new calculated measure for tables and pivot tables was included in the 11.1.1.7.10 version, but the possibility to save the column in the catalog is a new feature of OBIEE 11.1.1.9.


You can also change the properties of the new column in the Criteria tab but be aware that conditional formatting and conditional action links for the column are not saved to the catalog. Regular action links are saved for the column as well as the format properties.

The saved column can be edited from the catalog pane and catalog page. All the changes that you make in the catalog are applied to every analysis that uses this column. If you modify it inside a particular analysis, these changes are valid only for the analysis that you are working on.


To be able to save new measures as columns you should have the Save Column privilege.


Global Variables

In this version we find a new type of variable: the Global Variable. Global Variables can be defined in the context of an analysis and can be used in other analyses. They are useful for intermediate, reusable calculations.

To create a new global variable, select the Edit Formula option for the required column. When you press the Variable button in the Formula Editor you will see the new Global option. The Insert Global Variable dialog appears and you can select an existing global variable to be used in the formula, or you can create a new one. To create a new one you need to enter a name, type and value. If you want to use an expression as the value, like Min(Revenue) in the example, the data type should be Text.


To reference a global variable you need to use the fully qualified name including the context: @{global.variables.variablename}.
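
For example, a column formula that subtracts the value held in a (hypothetical) global variable named gv_min_revenue from a revenue measure could be written as "Base Facts"."Revenue" - @{global.variables.gv_min_revenue}, assuming a Revenue column in a Base Facts folder.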


Also to manage global variables you should have the Manage Global Variables privilege.

New Visualisation: Treemap

There is a new visualisation called Treemap, available since OBIEE 11.1.1.7.10. Treemap groups the dimensional data that you selected in the analysis into tiles. By default, tile size is based on the content of the first measure of the analysis and tile colour is based on the content of the second measure. If you have only one measure in your analysis, it is used for both size and colour.


You can edit it like any other view and change the group-by information as well as the measures which affect the size and colour of the tiles and the range of colours that you want to use. You can also choose between showing the information in coloured bins or using a continuous colour fill. If you select the first option you can also choose the number of different coloured bins you want to use.


New Preview Styles

When you want to preview an analysis in the Results tab, instead of immediately showing how the analysis will look, OBIEE offers you different styles in which to see your analysis. This feature has also been included since the 11.1.1.7.10 version. In case you want to create a custom style, Oracle recommends using the new Skyros style as a starting point.


Horizontal Layout for Radio Button and Check Box Dashboard Prompts

Radio button and check box dashboard prompts can be shown horizontally. When you create a new dashboard prompt using a check box or radio button, you can select between Horizontal and Vertical Layout under Options.


 

Enhancements in Export & Print Options

In this version, you will find more options in the Print and Export Options dialog. You can select whether you want to include charts, images and formatting, or specify column properties such as the column width and whether to wrap the text in columns.


In summary, these features are simple but quite useful for business users and front-end developers, and give more flexibility to create better data visualisations.

 

Categories: BI & Warehousing

We Couldn't Have Said It Any Better

Linda Fishman Hoyle - Tue, 2015-05-26 09:54

We can tell Oracle’s cloud story, but it’s even more credible when customers do it for us.

Six of them did just that at the Oracle Cloud Analyst Summit in Half Moon Bay last month. In fact, the international SaaS customer panel was a bellwether on the day’s agenda.

Moderator Shawn Price (pictured left), senior vice president of Oracle’s Global Cloud Business Unit, said, “I firmly believe the most important voice in this market as we transition to the cloud is our customers’ voice and the role it plays in advocacy.”

The six individuals on the panel represented a broad section of companies who use Oracle Cloud applications. They talked about their journeys to the cloud, the benefits, Oracle’s innovation, cross-cloud integration, and the future.

You can view a webcast replay of the SaaS panel and hear from these six customers:

  • Panasonic, Rhiannon Ainge, Marketing Manager
  • SePro, Stephen Barkley, Manager – Information and Data Services
  • Ovation Brands, Patrick Benson, CIO
  • Solairus Aviation, Mark Dennen, CFO
  • CorePower Yoga, Kathryn Palin, HR and Payroll Administrator
  • Hardware.com, Paul Wignall, Group Head of Business Development

Lab Report: Oracle Database on EMC XtremIO. A Compression Technology Case Study.

Kevin Closson - Tue, 2015-05-26 01:26

If you are interested in array-level data reduction services and how such technology mixes with Oracle Database application-level compression (such as Advanced Compression Option), I offer the link below to an EMC Lab Report on this very topic.

To read the entire Lab Report please click the following link:   Click Here.

The following is an excerpt from the Lab Report:

Executive Summary
EMC XtremIO storage array offers powerful data reduction features. In addition to thin provisioning, XtremIO applies both deduplication and compression algorithms to blocks of data when they are ingested into the array. These features are always on and intrinsic to the array. There is no added licensing, no tuning nor configuration involved when it comes to XtremIO data reduction.

Oracle Database also supports compression. The most common form of Oracle Database compression is the Advanced Compression Option—commonly referred to as ACO. With Oracle Database most “options” are separately licensed features and ACO is one such option. As of the publication date of this Lab Report, ACO is licensed at $11,000 per processor core on the database host. Compressing Oracle Database blocks with ACO can offer benefits beyond simple storage savings. Blocks compressed with ACO remain compressed as they pass through the database host. In short, blocks compressed with ACO will hold more rows of data per block. This can be either a blessing or a curse. Allowing Oracle to store more rows per block has the positive benefit of caching more application data in main memory (i.e., the Oracle SGA buffer pool). On the other hand, compacting more data into each block often results in increased block-contention.

Oracle offers tuning advice to address this contention in My Oracle Support note 1223705.1. However, the tuning recommendations for reducing block contention with ACO also lower the compression ratios. Oracle also warns users to expect higher CPU overhead with ACO as per the following statement in the Oracle Database product documentation:

Compression technology uses CPU. Ensure that you have enough available CPU to handle the additional load.

Application vendors, such as SAP, also produce literature to further assist database administrators in making sensible choices about how and when to employ Advanced Compression Option. The importance of understanding the possible performance impact of ACO is made quite clear in such publications as SAP Note 1436352, which states the following about SAP performance with ACO:

Overall system throughput is not negatively impacted and may improve. Should you experience very long runtimes (i.e. 5-10 times slower) for certain operations (like mass inserts in BW PSA or ODS tables/partitions) then you should set the event 10447 level 50 in the spfile/init.ora. This will reduce the overhead for insertion into compressed tables/partitions.

The SAP note offers further words of caution regarding transaction logging (a.k.a., redo) in the following quote:

Amount of redo data generated can be up to 30% higher

Oracle Database Administrators, with prior ACO experience, are largely aware of the trade-offs where ACO is concerned. Database Administrators who have customarily used ACO in their Oracle Database deployments may wish to continue to use ACO after adopting EMC XtremIO. For this reason Database Administrators are interested in learning how XtremIO compression and Advanced Compression Option interact.

This Lab Report offers an analysis of space savings with and without ACO on XtremIO. In addition, a performance characterization of an OLTP workload manipulating the same application data in ACO and non-ACO tablespaces will be covered…please click the link above to continue reading…

 


Filed under: oracle

IT-centric notes on the future of health care

DBMS2 - Mon, 2015-05-25 23:02

It’s difficult to project the rate of IT change in health care, because:

  • Health care is suffused with technology — IT, medical device and biotech alike — and hence has the potential for rapid change. However, it is also the case that …
  • … health care is heavily bureaucratic, political and regulated.

Timing aside, it is clear that health care change will be drastic. The IT part of that starts with vastly comprehensive electronic health records, which will be accessible (in part or whole as the case may be) by patients, care givers, care payers and researchers alike. I expect elements of such records to include:

  • The human-generated part of what’s in ordinary paper health records today, but across a patient’s entire lifetime. This of course includes notes created by doctors and other care-givers.
  • Large amounts of machine-generated data, including:
    • The results of clinical tests. Continued innovation can be expected in testing, for reasons that include:
      • Most tests exploit electronic technology. Progress in electronics is intense.
      • Biomedical research is itself intense.
      • In particular, most research technologies (for example gene sequencing) can be made cheap enough over time to be affordable clinically.
    • The output of consumer health-monitoring devices — e.g. Fitbit and its successors. The buzzword here is “quantified self”, but what it boils down to is that every moment of our lives will be measured and recorded.

These vastly greater amounts of data cited above will allow for greatly changed analytics.

  • Right now, medical decisions are made based on research that looks at a few data points each for a specially-recruited sample of patients, then draws conclusions based on simplistic and questionable statistical methods.
  • More sophisticated analytic methods are commonly used, but almost always still to aid in the discovery and formation of hypotheses that will then be validated, if at all, using the bad old analytic techniques.
  • State of the art predictive modeling, applied to vastly more data, will surely yield greatly better results.

And so I believe that health care itself will be revolutionized.

  • Diagnosis will be much more accurate, pretty much across the board, except in those limited areas where it’s already excellent today.
  • Medication regimens will be much more personalized. (Pharma manufacturing may have to change greatly as a result.) So will other treatments. So will diet/fitness regimens.
  • The vulnerable (elderly, hospital patients) will be more accurately and comprehensively monitored. Also, their care will likely be aided by robotics.
  • Some of the same things will be true of infants and toddlers. (In other cases, they get such close attention today that I can’t imagine how it could be greatly increased. :) )

I believe that this will all happen because I believe that it will make health care vastly more successful. And if I’m right about that, no obstacles will be able to prevent it from coming into play — not cost (which will keep going down in a quasi-Moore’s-Law way), not bureaucratic inertia (although that will continue to slow things greatly), and not privacy fears (despite the challenges cited below).

So what are the IT implications of all this?

  • I already mentioned the need for new (or newly-used) kinds of predictive modeling.
  • Probably in association with those, event detection — which in many but not all cases will amount to anomaly detection — will be huge. If one goal is to let the elderly and ailing live independently, but receive help when it’s needed — well, recognizing when that help is needed will be crucial. Similar dynamics will occur in hospitals.
  • And in support of that, there will be great amount of monitoring, and hence strong demands upon sensors and recognition. Potentially, all five human senses will be mimicked, among others. These technologies will become even more important in health care if I’m right that robotics will play a big role.
  • Data quality will be a major challenge, especially in the doctors’-notes parts of health records. Reasons start:
    • Different medical professionals might evaluate the same situation differently; diagnosis is a craft rather than a dumb, repeatable skill.
    • If entries are selected from a predefined set of options, none may be a perfect match to the doctor’s actual opinion.
    • Doctors often say what’s needful to have their decisions (care, tests, etc.) approved, whether or not it precisely matches what they really think. Thus, there are significant incentives to enter bad data.
    • Free-text data is more central to health care than to many other application areas, and text data is inherently dirty.
    • Health records are decades later than many other applications in moving from paper to IT.
  • Data integration problems will also be and indeed already are huge, because different health care providers have addressed the tough challenges of record-keeping in different ways.

As for data management — well, almost everything discussed in this blog could come into play.

  • A person’s entire medical record resembles the kind of mess increasingly often dumped these days into NoSQL — typically MongoDB, Cassandra, or HBase.
  • There are plenty of business-transaction records in the mix, of the kind that have long been managed by RDBMS.
  • There are a whole lot of diverse machines in the mix, and managing the data to keep such a menagerie running is commonly the job of Splunk or streaming-enhanced Hadoop.
  • There’s a lot of free text in medical records. Also images, video and so on.
  • Since graph analytics is used in research today, it might at some point make its way into clinical use.

Finally, let me say:

  • Data-driven medicine cannot live up to its potential unless researchers can investigate data sets comprising private information of large numbers of people.
  • Researchers will not have the appropriate permissions unless privacy law moves toward a basis in data use, rather than exclusively regulating data possession.


Categories: Other