
Feed aggregator

DRM Free Downloads - Is That OK?

Bradley Brown - 9 hours 58 min ago
I talked to a prospect today that was using a competitive platform to deliver their video products.  We talked extensively about differentiation of InteliVideo over their current platform (from brand to analytics to customers to apps and so on).  But one thing that came up had to do with DRM free downloads.  You might think that's a good thing if you don't understand what it means...as they did.  I said "so you're OK with your customers downloading the MP4 and putting it on YouTube or emailing it to their friends or throwing it on a thumb drive."  He said, "no, our customers can't do that."  So I showed him the language on their website (DRM free downloads), to which he replied, "right, it's protected."  When I looked puzzled, he said "what is DRM?"  I said DRM stands for Digital Rights Management.  You can think of it as protection.  So that wording is saying "you get this video that has NO protection, use it as you wish."  Of course there's likely wording in the click-through contract that says the customer is only supposed to use the content on their own devices.  But...you've just handed them a risky piece of content.
So...in my humble opinion, I would say "no" - DRM free downloads are not OK.

How the Big Boys Do It!

Bradley Brown - 15 hours 56 min ago
We have a number of VERY large (video on demand) customers.  We've broken our customer base into 3 categories:

1. Infopreneurs - these are companies who do something and have video, but video is not their primary business.  For example, RunBare.  Their primary business is teaching people how to run in their bare feet.  They do have a video on this topic, along with books and other items.  Another example is BDB Software, which is my original side consulting business.  I started it to sell online Oracle Application Express training.  So all that business does is video, yet I'm the CEO at InteliVideo.  That's my primary business - so BDB is just a side business.  You could say that Infopreneurs are wannabes.

2. Enterprise Content Providers (ECP) - these are companies whose primary business revolves around selling digital goods, education, etc. online.  These are the "big boys" of our industry.  In this blog post I'm going to talk more about how an Infopreneur can become an ECP.

3. Corporations - these are companies that use video content to educate their own employees.  They typically aren't interested in selling their videos, they are interested in protecting their videos.

So how do the big boys become so big?  Like any business, it typically happens over an extended period of time.  Even those who focus on infomercials are not usually overnight successes.  Take P90X or Beachbody.  They've been selling DVDs for MANY years.  Infomercials contribute to their success, but figuring out how to do affiliate marketing is another large piece of it.

So how do they do it?  Creative marketing, marketing, and more marketing - lots of it!  Promotions, A/B split tests, refining the message and analyzing what works and building on it.  Said another way, they are professional marketers!  You might ask - social marketing or pay per click marketing or SEO or.... - the answer is yes.

You want to get big?  Well then...it's time to focus.  On what?  One guess!

Commit Puzzle

Bobby Durrett's DBA Blog - Fri, 2014-08-22 11:54

This graph represents commit time compared to CPU utilization and redo log write time.  I’ve included only the hourly intervals with more than 1,000,000 commits.  At these peaks the number of commits ranges from 1 to 1.6 million commits per hour, so each point on the graph represents roughly the same commit rate.  I’m puzzled by why the commit time bounces around, peaking above 5 milliseconds, when I can’t see any corresponding peaks in I/O or CPU.

[Chart: commit time vs. CPU utilization and redo log write time]

I derived CPU% from DBA_HIST_OSSTAT.  I got the other values by getting wait events from DBA_HIST_SYSTEM_EVENT.  Commit time is log file sync wait time.  Redo write time is log file parallel write wait time.  I converted the wait times to milliseconds so they fit nicely on the chart with CPU%.
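
For reference, here is a rough sketch of the kind of query behind the commit-time numbers (not the exact script; that is in the zip below). It takes deltas between consecutive AWR snapshots, assuming a single instance:

-- Average log file sync (commit) time in ms per AWR snapshot interval,
-- computed from deltas between consecutive snapshots.
select sn.end_interval_time,
       (e.time_waited_micro
          - lag(e.time_waited_micro) over (order by sn.snap_id))
       / nullif(e.total_waits
          - lag(e.total_waits) over (order by sn.snap_id), 0)
       / 1000 as avg_commit_ms
from   dba_hist_system_event e
join   dba_hist_snapshot sn
  on   sn.snap_id = e.snap_id
 and   sn.dbid = e.dbid
 and   sn.instance_number = e.instance_number
where  e.event_name = 'log file sync'
order  by sn.snap_id;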

I thought I would pass this along as a puzzle that I haven’t figured out.

Here is a zip of the script I used to get the data, its raw output, and the spreadsheet I used to make the chart: zip

- Bobby

P.S.  This is on HP-UX 11.31, Itanium, Oracle 11.2.0.3

P.P.S.  Did some more work on this today.  Looks like the high commit time periods have short spikes of long redo log writes even though the average over the hour is still low.  I’m looking at DBA_HIST_EVENT_HISTOGRAM to get a histogram of the log file parallel write waits, and there are a number in the 1024 bucket when the log file sync time is high on average.

END_INTERVAL_TIME   LFPW_MILLI LFPW_COUNT AVG_COMMIT_MS AVG_WRITE_MS
------------------- ---------- ---------- ------------- ------------
21-AUG-14 11.00 AM           1     268136    9.14914833   2.45438987
21-AUG-14 11.00 AM           2     453913    9.14914833   2.45438987
21-AUG-14 11.00 AM           4     168370    9.14914833   2.45438987
21-AUG-14 11.00 AM           8      24436    9.14914833   2.45438987
21-AUG-14 11.00 AM          16       5675    9.14914833   2.45438987
21-AUG-14 11.00 AM          32       6122    9.14914833   2.45438987
21-AUG-14 11.00 AM          64       3369    9.14914833   2.45438987
21-AUG-14 11.00 AM         128       2198    9.14914833   2.45438987
21-AUG-14 11.00 AM         256       1009    9.14914833   2.45438987
21-AUG-14 11.00 AM         512        236    9.14914833   2.45438987
21-AUG-14 11.00 AM        1024         19    9.14914833   2.45438987
21-AUG-14 11.00 AM        2048          0    9.14914833   2.45438987
21-AUG-14 02.00 PM           1     522165    2.97787777   1.64840599
21-AUG-14 02.00 PM           2     462917    2.97787777   1.64840599
21-AUG-14 02.00 PM           4     142612    2.97787777   1.64840599
21-AUG-14 02.00 PM           8      17014    2.97787777   1.64840599
21-AUG-14 02.00 PM          16       4656    2.97787777   1.64840599
21-AUG-14 02.00 PM          32       5241    2.97787777   1.64840599
21-AUG-14 02.00 PM          64       1882    2.97787777   1.64840599
21-AUG-14 02.00 PM         128        820    2.97787777   1.64840599
21-AUG-14 02.00 PM         256        149    2.97787777   1.64840599
21-AUG-14 02.00 PM         512         10    2.97787777   1.64840599
21-AUG-14 02.00 PM        1024          2    2.97787777   1.64840599
21-AUG-14 02.00 PM        2048          0    2.97787777   1.64840599

There were 19 waits over half a second in the first hour and only 2 in the second hour.  Maybe all the log file sync waits pile up waiting for those long writes.  Here is a graph that compares the number of waits over half a second – the 1024 ms bucket – to the average log file sync and log file parallel write times for the hour:

[Chart: count of redo log writes over half a second (1024 ms bucket) vs. average log file sync and log file parallel write times]

You can see that the average redo write time goes up a little but the commit time goes up more.  Maybe commit time is more affected by a few long spikes than by a lot of slightly longer write times.

Found a cool blog post that seems to explain exactly what we are seeing: blog post

 

Categories: DBA Blogs

Best of OTN - Week of August 17th

OTN TechBlog - Fri, 2014-08-22 11:43
Architect Community

The Top 3 most popular OTN ArchBeat video interviews of all time:
  1. Oracle Coherence Community on Java.net | Brian Oliver and Randy Stafford [October 24, 2013]
    Brian Oliver (Senior Principal Solutions Architect, Oracle Coherence) and Randy Stafford (Architect At-Large, Oracle Coherence Product Development) discuss the evolution of the Oracle Coherence Community on Java.net and how developers can actively participate in product development through Coherence Community open projects. Visit the Coherence Community at: https://java.net/projects/coherence.

  2. The Raspberry Pi Java Carputer and Other Wonders | Simon Ritter [February 13, 2014]
    Oracle lead Java evangelist Simon Ritter talks about his Raspberry Pi-based Java Carputer IoT project and other topics he presented at QCon London 2014.

  3. Hot Features in Oracle APEX 5.0 | Joel Kallman [May 14, 2014]
    Joel Kallman (Director, Software Development, Oracle) shares key points from his Great Lakes Oracle Conference 2014 session on new features in Oracle APEX 5.0.

Friday Funny from OTN Architect Community Manager Bob Rhubart:
Comedy legend Steve Martin entertains dogs in this 1976 clip from the Carol Burnett Show.

Database Community

OTN Database Community Home Page - See all tech articles, downloads etc. related to Oracle Database for DBA's and Developers.

Java Community

JavaOne Blog - JRuby and JVM Languages at JavaOne!  In this video interview, Charles shared the JRuby features he presented at the JVM Language Summit. He'll be at JavaOne - read the blog to see all the sessions.

Java Source Blog - IoT: Wearables! Wearables are a subset of the Internet of Things that has gained a lot of attention. Learn More.

I love Java FaceBook - Java Advanced Management Console demo - Watch as Jim Weaver, Java Technology Ambassador at Oracle, walks through a demonstration of the new Java Advanced Management Console (AMC) tool.

Systems Community

OTN Garage Blog - Why Wouldn't Root Be Able to Change a Zone's IP Address in Oracle Solaris 11? - Read and learn the answer.

OTN Garage FaceBook - Securing Your Cloud-Based Data Center with Oracle Solaris 11 - Overview of the security precautions a sysadmin needs to take to secure data in a cloud infrastructure, and how to implement them with the security features in Oracle Solaris 11.


SQL Server techniques that mitigate data loss

Chris Foot - Fri, 2014-08-22 10:58

Natural disasters, cybercriminals and costly internal mistakes can cause companies to lose critical information. If the appropriate business continuity strategies aren't employed, organizations could lose customers or be subject to government penalties.

These concerns have motivated database administrators to use programs with strong recovery tools, such as replication, mirroring and failover clustering.

SQL Server Pro contributor and DH2i Chief Technology Officer Thanh Ngo outlined a number of SQL Server functions that improve DR and ensure applications are highly available. Some of the most useful features he named are detailed below.

AlwaysOn Availability Groups
This particular feature allows a set of user databases to fail over as a cohesive unit. A primary replica of the group is made available for read-write tasks. From there, one to eight secondary replicas can be created for read-only access, although they must be configured to allow it.
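
As a rough T-SQL sketch (not from Ngo's article; the server, endpoint, and database names are placeholders, and it assumes the Windows cluster and mirroring endpoints already exist), creating such a group looks something like this:

-- Create an availability group with one readable secondary replica
CREATE AVAILABILITY GROUP [AG_Sales]
FOR DATABASE [SalesDB]
REPLICA ON
  N'NODE1' WITH (
      ENDPOINT_URL = N'TCP://node1.example.com:5022',
      AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
      FAILOVER_MODE = AUTOMATIC),
  N'NODE2' WITH (
      ENDPOINT_URL = N'TCP://node2.example.com:5022',
      AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
      FAILOVER_MODE = AUTOMATIC,
      SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));  -- readable secondary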

Availability Groups builds on the concept of database mirroring, which enhances a database's safety and accessibility. The feature essentially copies transactions from a principal server to its mirrored counterpart. Ngo outlined how mirroring functions through one of the following modes (a rough T-SQL sketch follows the list):

  • High safety mode with automatic failover: Transactions carried out by two or more partners are synchronized while a "witness partner" orchestrates automated failover. 
  • High safety mode without automatic failover: The same operation detailed above is executed, but without the presence of a witness partner.
  • High performance mode: A primary database will commit a transaction without waiting for the mirrored counterpart to write the log to disk.
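
The modes above map to T-SQL roughly as follows (a sketch with placeholder names; it assumes the mirror copy was restored WITH NORECOVERY and the mirroring endpoints exist):

-- On the mirror instance, point back to the principal:
ALTER DATABASE SalesDB SET PARTNER = 'TCP://principal.example.com:5022';
-- On the principal instance, point to the mirror:
ALTER DATABASE SalesDB SET PARTNER = 'TCP://mirror.example.com:5022';
-- High safety (synchronous); use SAFETY OFF for high performance mode:
ALTER DATABASE SalesDB SET PARTNER SAFETY FULL;
-- Adding a witness enables high safety mode with automatic failover:
ALTER DATABASE SalesDB SET WITNESS = 'TCP://witness.example.com:5022';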

Replication 
Ngo also covered replication, which consists of a primary server (also known as the publisher) distributing data to one or more secondary databases (known as subscribers). Replication can be executed in one of three ways:

  • Transactional allows for real-time data availability because it enables the publisher to distribute information to subscribers immediately or at regular intervals.
  • Snapshot copies and provisions data in the primary server and sends the cloned data to the secondary database once it's created.
  • Merge allows bi-directional replication, meaning all changes made in both the subscriber and publisher databases are synchronized automatically.

A contemporary need 
With these techniques in mind, if a company chooses to outsource its database administration needs, it should detail which government standards it needs to abide by. From there, DBAs can carry out a thorough risk assessment of how much customer data is vulnerable – a task MSPmentor contributor CJ Arlotta described as critical.

Employing SQL Server's replication and availability group features helps ensure data is retained even if a database failure or breach occurs.

The post SQL Server techniques that mitigate data loss appeared first on Remote DBA Experts.

Remote DML with DBMS_PARALLEL_EXECUTE

Dominic Brooks - Fri, 2014-08-22 10:20

An example of sucking data into a table over a db link using DBMS_PARALLEL_EXECUTE.

This particular example is based on something I needed to do in the real world: copying data from one database into another over a db link. Datapump is not available to me. The tables in question happen to be partitioned by a date-like number (boo!), hence some of the specific actions in the detail.

I think it’s a good example of how to use dbms_parallel_execute but also it might be interesting to see how we might combine that functionality with parallel sessions each operating on a single numeric partition.

For setup, let’s create a suitable source table on a remote db.
In this example, I’m recreating the entries in dba_objects for every day for a couple of years.

CREATE TABLE remote_px_test
(dt,owner,object_name,subobject_name,object_id,data_object_id,object_type,created,last_ddl_time,timestamp,status,temporary,generated,secondary,namespace,edition_name)
PARTITION BY RANGE(dt) INTERVAL(1)
(PARTITION p_default VALUES LESS THAN (20120101))
AS
WITH days AS
(SELECT TO_NUMBER(TO_CHAR(TO_DATE(20120101,'YYYYMMDD') + ROWNUM - 1,'YYYYMMDD')) dt
 FROM   dual
 CONNECT BY ROWNUM <= (TRUNC(SYSDATE) - TO_DATE(20120101,'YYYYMMDD')))
SELECT d.dt, o.*
FROM   dba_objects o
CROSS JOIN days d;
SELECT /*+ parallel(16) */ COUNT(*) FROM remote_px_test;
209957272

SELECT round(sum(bytes)/power(1024,3)) FROM user_segments WHERE segment_name = 'REMOTE_PX_TEST';
31

First step is to see how long it takes to do a parallel INSERT SELECT over a db link.

The benefits of parallelisation in such an operation are severely limited because we have a single session over the db link.

Back to the target database.

First create an empty destination table, same as remote.

CREATE TABLE remote_px_test
(dt,owner,object_name,subobject_name,object_id,data_object_id,object_type,created,last_ddl_time,timestamp,status,temporary,generated,secondary,namespace,edition_name)
PARTITION BY RANGE(dt) INTERVAL(1)
(PARTITION p_default VALUES LESS THAN (20100101))
AS
WITH days AS
(SELECT TO_NUMBER(TO_CHAR(TO_DATE(20120101,'YYYYMMDD') + ROWNUM - 1,'YYYYMMDD')) dt
 FROM   dual
 WHERE 1=0)
SELECT d.dt, o.*
FROM   dba_objects o
CROSS JOIN days d;

Now, let’s see how long it takes to do an INSERT SELECT over a db link.
Time is often not a good measure but in this case I’m primarily interested in how long it takes to copy a whole bunch of tables from A to B over a db link.

insert /*+ append */ into remote_px_test l
select * 
from   remote_px_test@d1 r;

209,957,272 rows inserted.

commit;

This executed in 20 minutes.

As mentioned, you could parallelise bits of it on either side, but the benefit is limited; it might even make things worse thanks to the BUFFER SORT operation.

Next let’s compare that to the DBMS_PARALLEL_EXECUTE method.

We want some parallel threads to work on independent partitions, doing direct path inserts, concurrently.

First I’m just going to create a view on the SOURCE DB to make my chunking on daily partition interval simpler.

I could create this on the TARGET DB with references to the dictionary tables over db link but it could be significantly slower depending on the number of partitioned tables and whether predicates are being pushed.

CREATE OR REPLACE VIEW vw_interval_partitions
AS
SELECT table_name, partition_name, partition_position, hi
FROM   (SELECT table_name, partition_name, partition_position
        ,      to_char(
                 extractvalue(
                   dbms_xmlgen.getxmltype
                  ('select high_value from user_tab_partitions x'
                 ||' where x.table_name   = '''||t.table_name||''''
                 ||' and   x.partition_name = '''|| t.partition_name|| ''''),'//text()')) hi
        FROM   user_tab_partitions t);

Secondly, I’m going to create a little helper package which will generate the dynamic SQL for our inserts into specific partitions (PARTITION FOR clause not able to use binds).

		
CREATE OR REPLACE PACKAGE sid_data_pkg
AS
  --
  PROCEDURE sid_ipt (
    i_table_name                 IN     VARCHAR2,
    i_table_owner                IN     VARCHAR2,
    i_column_name                IN     VARCHAR2,
    i_dblink                     IN     VARCHAR2,
    i_start_id                   IN     NUMBER,
    i_end_id                     IN     NUMBER
  );
  --
END sid_data_pkg;
/

CREATE OR REPLACE PACKAGE BODY sid_data_pkg
AS
  PROCEDURE sid_ipt (
    i_table_name                 IN     VARCHAR2,
    i_table_owner                IN     VARCHAR2,
    i_column_name                IN     VARCHAR2,
    i_dblink                     IN     VARCHAR2,
    i_start_id                   IN     NUMBER,
    i_end_id                     IN     NUMBER
  )
  AS
    --
    l_cmd CLOB;
    --
  BEGIN
     --
     l_cmd :=
     q'{INSERT /*+ APPEND */}'||chr(10)||
     q'{INTO   }'||i_table_name||chr(10)||
     q'{PARTITION FOR (}'||i_start_id||')'||chr(10)||
     q'{SELECT *}'||chr(10)||
     q'{FROM   }'||CASE WHEN i_table_owner IS NOT NULL THEN i_table_owner||'.' END
                 ||i_table_name
                 ||CASE WHEN i_dblink IS NOT NULL THEN '@'||i_dblink END
                 ||chr(10)||
     q'{WHERE  }'||i_column_name||' < '||i_end_id||chr(10)||
     CASE WHEN i_start_id IS NOT NULL THEN q'{AND   }'||i_column_name||' >= '||i_start_id END;
     --
     --DBMS_OUTPUT.PUT_LINE(l_cmd);
     --
     EXECUTE IMMEDIATE l_cmd;
     --
     COMMIT;
     --
  END sid_ipt;
  --
END sid_data_pkg;
/

Next, truncate our target table again.

Then create our parallel execute task:

begin
  DBMS_PARALLEL_EXECUTE.CREATE_TASK(task_name => 'PX_TEST_TASK');
end;
/

Create the chunks of work to be executed concurrently:

declare
 l_chunk_sql varchar2(1000);
begin
  l_chunk_sql := q'{select (hi - 1) AS partval, hi }'||chr(10)||
                 q'{from   vw_interval_partitions@d1 v }'||chr(10)||
                 q'{where  table_name = 'REMOTE_PX_TEST' }'||chr(10)||
                 q'{order  by partition_position }';
  DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_SQL(task_name => 'PX_TEST_TASK',sql_stmt => l_chunk_sql, by_rowid => false);
end;
/

Check our task and our chunks:

select * from dba_parallel_execute_tasks;

TASK_OWNER TASK_NAME    CHUNK_TYPE   STATUS  TABLE_OWNER TABLE_NAME NUMBER_COLUMN TASK_COMMENT JOB_PREFIX SQL_STMT LANGUAGE_FLAG EDITION APPLY_CROSSEDITION_TRIGGER FIRE_APPLY_TRIGGER PARALLEL_LEVEL JOB_CLASS
---------- ------------ ------------ ------- ----------- ---------- ------------- ------------ ---------- -------- ------------- ------- -------------------------- ------------------ -------------- ---------
ME_DBA     PX_TEST_TASK NUMBER_RANGE CHUNKED 
select * from dba_parallel_execute_chunks order by chunk_id;

  CHUNK_ID TASK_OWNER TASK_NAME    STATUS     START_ROWID END_ROWID START_ID END_ID   JOB_NAME START_TS END_TS ERROR_CODE ERROR_MESSAGE
---------- ---------- ------------ ---------- ----------- --------- -------- -------- -------- -------- ------ ---------- -------------
      3053 ME_DBA     PX_TEST_TASK UNASSIGNED                       20120100 20120101 
      3054 ME_DBA     PX_TEST_TASK UNASSIGNED                       20120101 20120102 
        ...
      4017 ME_DBA     PX_TEST_TASK UNASSIGNED                       20140821 20140822 

 965 rows selected 

Then we run our parallel tasks thus, each executing the helper package and working on individual partitions:

set serveroutput on
DECLARE
  l_task     VARCHAR2(24) := 'PX_TEST_TASK';
  l_sql_stmt VARCHAR2(1000);
BEGIN
  --
  l_sql_stmt := q'{begin sid_data_pkg.sid_ipt ('REMOTE_PX_TEST','ME_DBA','DT','D1',:start_id,:end_id); end;}';
  --
  DBMS_PARALLEL_EXECUTE.RUN_TASK(l_task, l_sql_stmt, DBMS_SQL.NATIVE,parallel_level => 16);
  --
  dbms_output.put_line(DBMS_PARALLEL_EXECUTE.TASK_STATUS(l_task));
  --
end;
/

This executed in 2 minutes and returned code 6 which is FINISHED (without error).

Status of individual chunks can be checked via DBA_PARALLEL_EXECUTE_CHUNKS.
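
If any chunks had failed, a follow-up along these lines would identify and re-run them (RESUME_TASK reprocesses the chunks that did not complete cleanly):

select chunk_id, status, start_id, end_id, error_code, error_message
from   dba_parallel_execute_chunks
where  task_name = 'PX_TEST_TASK'
and    status = 'PROCESSED_WITH_ERROR'
order  by chunk_id;

begin
  dbms_parallel_execute.resume_task(task_name => 'PX_TEST_TASK');
end;
/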


On ECAR data and ed tech purgatory

Michael Feldstein - Fri, 2014-08-22 09:24

Recently I wrote a post about many ed tech products being stuck in pilots without large-scale adoption.

In our consulting work Michael and I often help survey institutions to discover what technologies are being used within courses, and typically the only technologies that are used by a majority of faculty members or in a majority of courses are the following:

  • AV presentation in the classroom;
  • PowerPoint usage in the classroom (obviously connected with the projectors);
  • Learning Management Systems (LMS);
  • Digital content at lower level than a full textbook (through open Internet, library, publishers, other faculty, or OER); and
  • File sharing applications. [snip]

This stuck process ends up as an ed tech purgatory – with promises and potential of the heaven of full institutional adoption with meaningful results to follow, but also with the peril of either never getting out of purgatory or outright rejection over time.

With the Chronicle’s Almanac coming out this week, there is an interesting chart that on the surface might contradict the above information, showing ~20 technologies with above 50% adoption.

[Chart: ed tech adoption by institution, Educause Center for Analysis and Research]

Note: Data are drawn from responses by a subset of more than 500 of the nearly 800 institutions that participated in a survey conducted from June to October 2013. Reported statistics are either an estimated proportion of the population or an estimated median.
Source: Educause Center for Analysis and Research [ECAR]

The difference, however, is that ECAR (through The Chronicle) asked how many institutions have different ed tech products and our survey asked how many courses within an institution use different ed tech products.

There are plenty of technologies being piloted but few hitting the mainstream, and adoption within an institution is one of the key indicators to watch.

The post On ECAR data and ed tech purgatory appeared first on e-Literate.

Log Buffer #385, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-08-22 08:00

This Log Buffer edition combs through top notch blog posts from Oracle, MySQL and SQL Server postings around the globe.

Oracle:

You want to test the Oracle Java Cloud? You can get your own 30 day test account & instance. Or you can get a Java account with our permanent test system.

Some ramblings about Oracle R Distribution 3.1.1.

Scott is demystifying Oracle Unpivot.

Java 8 for Tablets, Pis, and Legos at Silicon Valley JUG – 8/20/2014

A new version of Oracle BPM Suite 11.1.1.7 with Adaptive Case Management (ACM) is now available.

SQL Server:

Data Mining: Part 14 Export DMX results with Integration Services

Should you be planning to move from Exchange to Office 365? If so, why?

Stairway to T-SQL DML Level 12: Using the MERGE Statement

From SQL Server Management Studio it’s hard to look through the first few rows of a whole lot of tables in a database.

Special Characters can lead to many problems. Identifying and reporting on them can save a lot of headache down the road.

MySQL:

MariaDB Galera Cluster 5.5.39 now available

A closer look at the MySQL ibdata1 disk space issue and big tables

How-To: Guide to Database Migration from MS Access using MySQL Workbench

Using resource monitoring to avoid user service overload

How to use MySQL Global Transaction IDs (GTIDs) in production

Categories: DBA Blogs

Oracle Priority Service Infogram for 21-AUG-2014

Oracle Infogram - Thu, 2014-08-21 18:19

OpenWorld
Each week leading up to OpenWorld we will be publishing various announcements, schedules, tips, etc. related to the event.
This week:
The OpenWorld Facebook page.
RDBMS
Oracle Restart to autostart your oracle database, listener and services on linux, from the Amis Technology Blog.
Removing passwords from Oracle scripts: Wallets and Proxy Users, from the DBA Survival Blog.
Restore datafile from service: A cool #Oracle 12c Feature, from The Oracle Instructor.
Performance
A list of excellent past postings on Execution Plans from Oracle Scratchpad.
Fusion
New Whitepaper: Building a Custom Fusion Application, from Fusion Applications Developer Relations.
SOA
Setup GMail as mail provider for SOA Suite 12c – configure SMTP certificate in trust store, from AMIS Technology Blog.
APEX
From grassroots – oracle: APEX Printer Friendliness using CSS Media Queries.

Big Data Appliance
Rittman Mead and Oracle Big Data Appliance, from….Rittman Mead, of course.
WLS
Setting V$SESSION for a WLS Datasource, from The WebLogic Server Blog.

Adapt - Learn New Things

Floyd Teter - Thu, 2014-08-21 16:56
Nothing lasts forever.  Sand-piles crumble.  Companies rise and fall.  Relationships change.  Markets come and go.  It’s just the nature of things.  Adapt or die.  Personally, I like this feature of life…can’t imagine anything worse than stagnation.  There’s nothing better to me than exploring and learning new things.

As Oracle continues their push into cloud-based enterprise applications, we’re seeing some of that fundamental change play out in the partner community; with both partner companies and with the individuals who make up the partners.  This is especially true with the technology-based services partners.  Companies have merged or faded away.  Individuals, many of whom have stellar reputations within the partner and customer communities, have accepted direct employment with Oracle or moved into other technology eco-systems.

Why the big change?  Frankly, it’s because the cloud doesn’t leave much room for the traditional offerings of those technology-based services partners.  As we shift from on-premise to cloud, the need for those traditional offerings is drying up.  Traditional installations, patching, heavy custom development…all those things seem headed the way of the buggy-whip in the enterprise applications space.  It’s time to adapt.

As a guy involved in providing services and complementary products myself, I’ve watched this change unfold over the past few years with more than a little self-interest and excitement - hey, ya gotta pay the bills, right?  As a result, I’ve identified three adaptation paths for individuals involved with services in the Oracle technology-based eco-system:

1.  Leave.  Find another market where your skills transfer well and take the leap.  This isn’t a bad option at all, especially if you’ve developed leadership and/or “soft skills”.  Believe it or not, there’s a big world outside the Oracle eco-system.

2.  Play the long tail.  The base of traditional, on-premise work will not disappear overnight.  It’s shrinking, granted, but it’s a huge base even while shrinking.  I also think there will be a big uptick in small, lightweight projects with traditional footprints that will complement large cloud-based enterprise application systems (for further information, see “Oracle Apex”).

3.  Learn new things.  Apply your background to build skills in new technologies.  If you’re an Oracle DBA or systems administrator (two skill sets that are rapidly merging into one), dig into Oracle Enterprise Manager…and remember that private clouds will continue to flourish with Oracle’s larger customers.  If you’re a developer, begin building skills in middle-tier integration - connecting cloud offerings in new and creative ways is very much an in-demand skill.  Or get smart with building lightweight complementary applications (ADF, BPM, SOA) - especially mobile (MAF).  If you’re a business analyst or functional type, get familiar with Functional Setup Manager and the Oracle Composers.  Maybe get a solid understanding of User Experience and how you can apply UX in your job.  As a solution architect, I’ve spent a great deal of time learning how the various Oracle Cloud solutions work together from data, integration, business process, and information perspectives…and if I can do it, so can you!

Obviously, my approach has been to explore and learn new things relevant to the market changes.  The opportunities I saw for myself consisted of connecting things together and adding value around the edges.  It’s been a hoot so far and I’m nowhere near done yet.  YMMV.


With Oracle OpenWorld coming up as a huge opportunity to learn new things, it seemed timely to share these thoughts now.  So there you have it.  My worm’s eye view of how the Oracle partner market (or at least the Oracle technology-based services partner market) is changing.  Maybe I nailed it.  Maybe I’m all wet.  Either way, let me know what you think.

Introduction to MongoDB Geospatial feature

Tugdual Grall - Thu, 2014-08-21 15:30
This post is a quick and simple introduction to the Geospatial feature of MongoDB 2.6, using a simple dataset and queries.

Storing Geospatial Information

As you know, you can store any type of data, but if you want to query it you need to use coordinates and create an index on them. MongoDB supports three types of indexes for geospatial queries, starting with the 2d index, which uses simple coordinates (longitude, latitude).

Up and Running with HCM 9.2 on PT 8.54 via PUM "Demo Image"

Jim Marion - Thu, 2014-08-21 11:57

Yes, you read that correctly. PUM is the new demo image. According to MOS Doc ID 1464619.1 "As of HCM 9.2.006, the PeopleSoft Image and PeopleSoft Demo Image have been combined into one PeopleSoft Update Image. You can use the PeopleSoft Update Image for both your patching and demonstration purposes." If you are current on your PUM images, you probably already knew that. If you are like me, however, and haven't downloaded a demo image for a while, then you may have been looking for demo images on the old MOS demo image page.

Since I use the image for prototyping and demonstrations, I turned off SES and reduced the memory requirements to 2048 MB. It is working great at this lower memory setting.

There are a lot of new and great features in the PT 8.54 PUM:

  • Attribute based branding,
  • Component Branding (add your own CSS and JavaScript to components without hacking a delivered HTML definition)
  • REST Query Access Service,
  • Mobile Application Platform (MAP), and
  • Fluid homepages

Tip: Access the fluid homepages by visiting the URL http://<yourserver>:8000/psc/ps/EMPLOYEE/HRMS/c/NUI_FRAMEWORK.PT_LANDINGPAGE.GBL. For example, if you have a hosts entry mapping your PUM image to the hostname hcmdb.example.com, then use the URL http://hcmdb.example.com:8000/psc/ps/EMPLOYEE/HRMS/c/NUI_FRAMEWORK.PT_LANDINGPAGE.GBL.

Panduit Delivers on the Digital Business Promise

WebCenter Team - Thu, 2014-08-21 11:53
How a 60-Year-Old Company Transformed into a Modern Digital Business

Connecting with audiences through a robust online experience across multiple channels and devices is a nonnegotiable requirement in today’s digital world. Companies need a digital platform that helps them create, manage, and integrate processes, content, analytics, and more.

Panduit, a company founded nearly 60 years ago, needed to simplify and modernize its enterprise application and infrastructure to position itself for long-term growth. Learn how it transformed into a digital business using Oracle WebCenter and Oracle Business Process Management.

Join this webcast for an in-depth look at how these Oracle technologies helped Panduit:
  • Increase self-service activity on their portal by 75%
  • Improve number and quality of sales leads through increased customer interactions and registration over the web and mobile
  • Create multichannel self-service interactions and content-enabled business processes
Register now for this webcast.

Presented by:

Andy Kershaw
Senior Director, Oracle WebCenter, Oracle BPM and Oracle Social Network Product Management, Oracle

Vidya Iyer
IT Delivery Manager, Panduit

Patrick Garcia
IT Solutions Architect, Panduit

Quiz night

Jonathan Lewis - Thu, 2014-08-21 11:05

Here’s a script to create a table with an index and collect stats on it. Once I’ve collected stats I check the execution plan to discover that a hint has been ignored (for a well-known reason):

create table t2
as
select
        mod(rownum,200)         n1,
        mod(rownum,200)         n2,
        rpad(rownum,180)        v1
from
        all_objects
where
        rownum <= 3000
;

create index t2_i1 on t2(n1);

begin
        dbms_stats.gather_table_stats(
                user,
                't2',
                method_opt => 'for all columns size 1'
        );
end;
/

explain plan for
select  /*+ index(t2) */
        n1
from    t2
where   n2 = 45
;

select * from table(dbms_xplan.display);

----------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost  |
----------------------------------------------------------
|   0 | SELECT STATEMENT  |      |    15 |   120 |    15 |
|*  1 |  TABLE ACCESS FULL| T2   |    15 |   120 |    15 |
----------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("N2"=45)

Of course we don’t expect the optimizer to use the index because we didn’t declare n1 to be not null, so there may be rows in the table which do not appear in the index. The only option the optimizer has for getting the right answer is to use a full tablescan. So the question is this – how come Oracle will obey the hint in the following SQL statement:


explain plan for
select
        /*+
                leading (t2 t1)
                index(t2) index(t1)
                use_nl(t1)
        */
        t2.n1, t1.n2
from
        t2      t2,
        t2      t1
where
        t2.n2 = 45
and     t2.n1 = t1.n1
;

select * from table(dbms_xplan.display);

-------------------------------------------------------------------------------
| Id  | Operation                             | Name  | Rows  | Bytes | Cost  |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |       |   225 |  3600 |  3248 |
|   1 |  NESTED LOOPS                         |       |   225 |  3600 |  3248 |
|   2 |   NESTED LOOPS                        |       |   225 |  3600 |  3248 |
|*  3 |    TABLE ACCESS BY INDEX ROWID BATCHED| T2    |    15 |   120 |  3008 |
|   4 |     INDEX FULL SCAN                   | T2_I1 |  3000 |       |     8 |
|*  5 |    INDEX RANGE SCAN                   | T2_I1 |    15 |       |     1 |
|   6 |   TABLE ACCESS BY INDEX ROWID         | T2    |    15 |   120 |    16 |
-------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("T2"."N2"=45)
   5 - access("T2"."N1"="T1"."N1")

I ran this on 11.2.0.4, but it does the same on earlier versions.

Update:

This was clearly too easy – posted at 18:04, answered correctly at 18:21. At some point in its evolution the optimizer acquired a rule that allowed it to infer unwritten “is not null” predicates from the join predicate.
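
To see the point another way (an addition to the original quiz): write the predicate yourself and the single-table hint should then be obeyed, because the rows that might be missing from the index are explicitly excluded:

explain plan for
select  /*+ index(t2) */
        n1
from    t2
where   n2 = 45
and     n1 is not null
;

select * from table(dbms_xplan.display);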

 

 

 


Identifying Deadlocks Using the SQL Server Error Log

Chris Foot - Thu, 2014-08-21 09:30

Deadlocking in SQL Server can be one of the more time-consuming issues to resolve. The script below can reduce the time it takes to gather the necessary information and troubleshoot the cause of the deadlocks. Using this script requires your SQL Server version to be 2005 or newer and Trace Flag 1222 to be enabled so the deadlocking information is captured in the error log.
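
If the trace flag is not already on, something like the following enables it globally (it will not survive a restart unless -T1222 is also added as a startup parameter):

-- Write deadlock detail to the error log for all sessions:
DBCC TRACEON (1222, -1);
-- Verify the flag is active globally:
DBCC TRACESTATUS (1222, -1);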

The first portion of the script collects the data written to the error log and parses it for the information needed. With this data, the script can return many different data points for identifying the root cause of your deadlocks. It begins with a query to return the number of deadlocks in the current error log.

select
distinct top 1 deadlockcount
from @results
order by deadlockcount desc

The next script will allow you to review all of the deadlock information in the current error log. It will output the raw InputBuffer details, but if the queries running in your environment have extraneous tabs or spaces, you can modify the commented portion to remove them.

select 
deadlockcount, logdate, processinfo, 
logtext
--,rtrim(ltrim(replace(replace(replace(replace(replace(replace(replace(replace(logtext,'               ',' '),'       ',' '),'     ',' '),'   	',' '),'    ',' '),'  ',' '),'  ',' '),'	',' '))) as logtext_cleaned
from @results
order by id

An important piece of information when identifying and resolving deadlocks is the resource locks. This next query returns all of the error log records containing details for the locks associated with deadlocks. In some situations, the object and/or index name may not be included in this output.

select distinct
logtext
from @results 
where 
logtext like '%associatedobjectid%'

In order to find the objects involved in the deadlock occurrences, set the next query to output its results to text and run it. Then, copy the output into a new query window and remove the ‘union’ from the end. When run, it will return the object and index names.

select distinct
'SELECT OBJECT_NAME(i.object_id) as objectname, i.name as indexname
      FROM sys.partitions AS p
      INNER JOIN sys.indexes AS i ON i.object_id = p.object_id AND i.index_id = p.index_id
      WHERE p.partition_id = '+convert(varchar(250),REVERSE(SUBSTRING(REVERSE(logtext),0,CHARINDEX('=', REVERSE(logtext)))))+'
	  union
	  '
from @results 
where logtext like '   keylock hobtid=%'
union
select distinct
'SELECT OBJECT_NAME(i.object_id) as objectname, i.name as indexname
      FROM sys.partitions AS p
      INNER JOIN sys.indexes AS i ON i.object_id = p.object_id AND i.index_id = p.index_id
      WHERE p.partition_id = '+convert(varchar(250),REVERSE(SUBSTRING(REVERSE(logtext),0,CHARINDEX('=', REVERSE(logtext)))))+'
	  union
	  '
from @results
where logtext like '   pagelock fileid=%'

In my experience, situations can arise where there are a large number of deadlocks but only a few queries involved. This portion of the script will return the distinct queries participating in the deadlocks. The commented lines can be modified to remove extra tabs and spaces. To avoid issues caused by the InputBuffer data being on multiple lines, you should cross-reference these results with the results of the next query.

select
max(deadlockcount) as deadlockcount, max(id) as id, 
logtext
--rtrim(ltrim(replace(replace(replace(replace(replace(replace(replace(replace(logtext,'               ',' '),'       ',' '),'     ',' '),'   	',' '),'    ',' '),'  ',' '),'  ',' '),'	',' '))) as logtext_cleaned
from @results
where logtext not in (
'deadlock-list',
'  process-list',
'    inputbuf',
'    executionStack',
'  resource-list',
'    owner-list',
'    waiter-list'
)
and logtext not like '     owner id=%'
and logtext not like '     waiter id=%'
and logtext not like '   keylock hobtid=%'
and logtext not like '   pagelock fileid%'
and logtext not like ' deadlock victim=%'
and logtext not like '   process id=%'
and logtext not like '     frame procname%'
group by 
logtext
--rtrim(ltrim(replace(replace(replace(replace(replace(replace(replace(replace(logtext,'               ',' '),'       ',' '),'     ',' '),'   	',' '),'    ',' '),'  ',' '),'  ',' '),'	',' ')))
order by id asc, deadlockcount asc

This query will return the execution stack and InputBuffer details for each deadlock.

select 
deadlockcount, logdate, processinfo, logtext
--rtrim(ltrim(replace(replace(replace(replace(replace(replace(replace(replace(logtext,'               ',' '),'       ',' '),'     ',' '),'   	',' '),'    ',' '),'  ',' '),'  ',' '),'	',' '))) as logtext_cleaned
from @executionstack 
WHERE logtext not like '%process id=%'
and logtext not like '%executionstack%'
order by id asc

For documentation purposes, this query will return the distinct InputBuffer output for the deadlock victims. If the InputBuffer data is on multiple lines, you should cross-reference these results with the results of the next query.

select max(d.deadlockcount) as deadlockcount, max(d.executioncount) executioncount, max(d.id) as id, logtext
--rtrim(ltrim(replace(replace(replace(replace(replace(replace(replace(replace(d.logtext,'               ',' '),'       ',' '),'     ',' '),'   	',' '),'    ',' '),'  ',' '),'  ',' '),'	',' '))) as logtext_cleaned
from @executionstack d
right join (
	select e.executioncount
	from @results r
	join (
		select deadlockcount, logtext, convert(varchar(250),REVERSE(SUBSTRING(REVERSE(logtext),0,CHARINDEX('=', REVERSE(logtext))))) victim
		from @results
		where logtext like ' deadlock victim=%'
	) v on r.deadlockcount=v.deadlockcount
	left join (
		select id, logtext, substring(logtext, charindex('=', logtext)+1,50) processidstart,
		substring(substring(logtext, charindex('=', logtext)+1,50),0, charindex(' ', substring(logtext, charindex('=', logtext)+1,50))) processid
		from @results
		where logtext like '   process id=%'
	) p on r.id=p.id
	join @executionstack e on r.id=e.id
	where v.victim=p.processid
) q on d.executioncount=q.executioncount
where d.logtext not like '   process id=%'
and d.logtext <> '    executionStack'
and d.logtext not like '     frame%'
group by logtext
--rtrim(ltrim(replace(replace(replace(replace(replace(replace(replace(replace(logtext,'               ',' '),'       ',' '),'     ',' '),'   	',' '),'    ',' '),'  ',' '),'  ',' '),'	',' ')))
order by id asc, deadlockcount asc, executioncount asc

This query will return the execution stack and InputBuffer details for each victim.

select d.deadlockcount, d.logdate, d.processinfo, logtext
--rtrim(ltrim(replace(replace(replace(replace(replace(replace(replace(replace(d.logtext,'               ',' '),'       ',' '),'     ',' '),'   	',' '),'    ',' '),'  ',' '),'  ',' '),'	',' '))) as logtext_cleaned
from @executionstack d
right join (
	select e.executioncount
	from @results r
	join (
		select deadlockcount, logtext, convert(varchar(250),REVERSE(SUBSTRING(REVERSE(logtext),0,CHARINDEX('=', REVERSE(logtext))))) victim
		from @results
		where logtext like ' deadlock victim=%'
	) v on r.deadlockcount=v.deadlockcount
	left join (
		select id, logtext, substring(logtext, charindex('=', logtext)+1,50) processidstart,
		substring(substring(logtext, charindex('=', logtext)+1,50),0, charindex(' ', substring(logtext, charindex('=', logtext)+1,50))) processid
		from @results
		where logtext like '   process id=%'
	) p on r.id=p.id
	join @executionstack e on r.id=e.id
	where v.victim=p.processid
	--order by r.id
) q on d.executioncount=q.executioncount
where d.logtext not like '   process id=%'
and d.logtext <> '    executionStack'
order by d.id asc

The script, which can be downloaded here, includes all of these queries for you to use. Each one is independent, so if you are only interested in the results for a single query, the other sections can be commented out.

Any feedback you have is always appreciated. In my opinion, that is one of the best parts about writing T-SQL! Don’t forget to check back for my next post in which I will be using the AdventureWorks2008R2 database to provide an in-depth deadlock analysis.

The post Identifying Deadlocks Using the SQL Server Error Log appeared first on Remote DBA Experts.

Oracle 12.1.0.2.1 Set to Join Conversion

Yann Neuhaus - Thu, 2014-08-21 01:17

Recently I described the Partial Join Evaluation transformation that appeared last year in 12c. I did it as an introduction to another transformation that appeared a long time ago, in 10.1.0.3, but is not used by default. And even in the latest 12c patchset 1 (aka 12.1.0.2.0) it is still not enabled. But it's there and you can use it if you set optimizer_features_enable to 12.1.0.2.1 (that's not a typo!)
Yes, that number looks like the future PSU for the 12c Release 1 Patchset 1 that was made available recently and has no PSU yet. Lost in the release numbers? No problem. This only changes the default value of the _convert_set_to_join parameter, but you can also use a hint to get that transformation, which is available in previous versions as well.
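
To recap the ways of enabling it (each shown later in this post), a quick sketch; note that the underscore parameter is hidden, so it should normally only be set on Oracle Support's advice:

-- Raise the optimizer feature level (changes the default):
alter session set optimizer_features_enable = '12.1.0.2.1';
-- Or set the hidden parameter directly:
alter session set "_convert_set_to_join" = true;
-- Or hint an individual statement:
select /*+ SET_TO_JOIN(@"SET$1") */ * from DEMO1 intersect select * from DEMO2;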

So what does that transformation do?  It transforms an INTERSECT or MINUS into a join. When the tables are large but the result is small, that transformation can bring new access paths, avoiding full table scans and deduplication for each branch. And thanks to Partial Join Evaluation the performance is even better in 12c. Let's look at an example.



SQL*Plus: Release 12.1.0.2.0 Production on Sun Jul 27 22:10:57 2014

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> create table DEMO1(n constraint DEMO1_N primary key) as select rownum n from (select * from dual connect by level <= 1000),(select * from dual connect by level <= 100);
Table created.

SQL> create table DEMO2(n constraint DEMO2_N primary key) as select rownum n from dual connect by level <= 10;
Table created.

SQL> alter session set statistics_level=all;
Session altered.

So I have two tables, one with 100,000 rows and one with only 10. And I want the rows from DEMO1 which are also in DEMO2:


SQL> alter session set optimizer_features_enable='12.1.0.2.1';
Session altered.

SQL> select * from DEMO1 intersect select * from DEMO2;

         N
----------
         1
         2
         3
         4
         5
         6
         7
         8
         9
        10

10 rows selected.

Let's have a look at the plan:


SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------
SQL_ID 9fpg8nyjaqb5f, child number 0
-------------------------------------
select * from DEMO1 intersect select * from DEMO2

Plan hash value: 4278239763

------------------------------------------------------------------------------
| Id  | Operation           | Name    | Starts | E-Rows | A-Rows || Used-Mem |
------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |         |      1 |        |     10 ||          |
|   1 |  INTERSECTION       |         |      1 |        |     10 ||          |
|   2 |   SORT UNIQUE       |         |      1 |    100K|    100K|| 4078K (0)|
|   3 |    TABLE ACCESS FULL| DEMO1   |      1 |    100K|    100K||          |
|   4 |   SORT UNIQUE NOSORT|         |      1 |     10 |     10 ||          |
|   5 |    INDEX FULL SCAN  | DEMO2_N |      1 |     10 |     10 ||          |
------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

   1 - SET$1
   3 - SEL$1 / DEMO1@SEL$1
   5 - SEL$2 / DEMO2@SEL$2


This is the expected plan. There is an INTERSECTION operation that implements our INTERSECT. But look: each branch had to be deduplicated (SORT UNIQUE). Note that the SORT UNIQUE NOSORT has a funny name - it's just a SORT UNIQUE that doesn't have to sort because its input comes from an index. Each branch had to read all the rows. Look at the big table: we read 100,000 rows and use 4MB of memory to sort them in order to deduplicate them. But it's an intersection and we have a small table that has only 10 rows. We know that the result cannot be large. A more efficient way would then be to read the small table and, for each row, check whether it is in the big one - through an index access. We still have to deduplicate, but we do that at the end, on the small rowset.

And this is exactly what the Set to Join Conversion is doing. Let's force it with a hint:


SQL> select /*+ SET_TO_JOIN(@"SET$1") */ * from DEMO1 intersect select * from DEMO2;

         N
----------
         1
         2
         3
         4
         5
         6
         7
         8
         9
        10

10 rows selected.

SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------
SQL_ID        01z69x8w7fmu0, child number 0
-------------------------------------
select /*+ SET_TO_JOIN(@"SET$1") */ * from DEMO1 intersect select *
from DEMO2

Plan hash value: 169945296

------------------------------------------------------------------
| Id  | Operation           | Name    | Starts | E-Rows | A-Rows |
------------------------------------------------------------------
|   0 | SELECT STATEMENT    |         |      1 |        |     10 |
|   1 |  SORT UNIQUE NOSORT |         |      1 |     10 |     10 |
|   2 |   NESTED LOOPS SEMI |         |      1 |     10 |     10 |
|   3 |    INDEX FULL SCAN  | DEMO2_N |      1 |     10 |     10 |
|*  4 |    INDEX UNIQUE SCAN| DEMO1_N |     10 |    100K|     10 |
------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   4 - access("DEMO1"."N"="DEMO2"."N")

The intersect has been transformed to a join thanks to the Set to Join transformation, and the join has been transformed to a semi-join thanks to the Partial Join Evaluation transformation. The result is clear here:

  • No full table scan on the big table because the join is able to access with an index
  • No deduplication which needs a large workarea
  • The join can stop as soon as one row matches thanks to the semi-join
  • Deduplication occurs only on the result, which is small. And here it does not even require a workarea because the rows come sorted from the index.

We can see the SET_TO_JOIN and PARTIAL_JOIN hints in the outline:


Outline Data
-------------

  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('12.1.0.2')
      DB_VERSION('12.1.0.2')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$02B15F54")
      MERGE(@"SEL$1")
      MERGE(@"SEL$2")
      OUTLINE(@"SET$09AAA538")
      SET_TO_JOIN(@"SET$1")
      OUTLINE(@"SEL$1")
      OUTLINE(@"SEL$2")
      OUTLINE(@"SET$1")
      INDEX(@"SEL$02B15F54" "DEMO2"@"SEL$2" ("DEMO2"."N"))
      INDEX(@"SEL$02B15F54" "DEMO1"@"SEL$1" ("DEMO1"."N"))
      LEADING(@"SEL$02B15F54" "DEMO2"@"SEL$2" "DEMO1"@"SEL$1")
      USE_NL(@"SEL$02B15F54" "DEMO1"@"SEL$1")
      PARTIAL_JOIN(@"SEL$02B15F54" "DEMO1"@"SEL$1")
      END_OUTLINE_DATA
  */

So we are in 12.1.0.2 and we need a hint for that. Let's go to 12.1.0.2.1 (which implicitly sets _convert_set_to_join=true).


PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------
SQL_ID        9fpg8nyjaqb5f, child number 1
-------------------------------------
select * from DEMO1 intersect select * from DEMO2

Plan hash value: 118900122

------------------------------------------------------------------------------
| Id  | Operation           | Name    | Starts | E-Rows | A-Rows || Used-Mem |
------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |         |      1 |        |     10 ||          |
|   1 |  HASH UNIQUE        |         |      1 |     10 |     10 || 1260K (0)|
|   2 |   NESTED LOOPS SEMI |         |      1 |     10 |     10 ||          |
|   3 |    INDEX FULL SCAN  | DEMO2_N |      1 |     10 |     10 ||          |
|*  4 |    INDEX UNIQUE SCAN| DEMO1_N |     10 |    100K|     10 ||          |
------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   4 - access("DEMO1"."N"="DEMO2"."N")

Note
-----
   - this is an adaptive plan


Ok, we have the Set to Join Conversion here in 12.1.0.2.1.

But don't you see another difference?
.
.
.
.
The deduplication needs a workarea here. It is not a NOSORT operation - even if the result comes from the index. It seems that the CBO cannot guarantee that the result comes sorted. The clue is in the execution plan Note.
But that's for a future blog post.

Personal Assistant or Creepy Stalker? The Rise of Cognitive Computing

Oracle AppsLab - Wed, 2014-08-20 23:44

I just got back to my hotel room after attending the first of a two day Cognitive Computing Forum, a conference running in parallel to the Semantic Technology (SemTech) Business Conference and the NoSQL Conference here in San Jose. Although the forum attracts less attendees and has only a single track, I cannot remember attending a symposium where so many stimulating ideas and projects were presented.

What is cognitive computing? It refers to computational systems that are modeled on the human brain – either literally by emulating brain structure or figuratively through using reasoning and semantic associations to analyze data. Research into cognitive computing has become increasingly important as organizations and individuals attempt to make sense of the massive amount of data that is now commonplace.

The first forum speaker was Chris Welty, who was an instrumental part of IBM’s Watson project (the computer that beat the top human contestants on the gameshow Jeopardy). Chris gave a great overview of how cognitive computing changes the traditional software development paradigm. Specifically, he argued that rather than focus on perfection, it is ok to be wrong as long as you succeed often enough to be useful (he pointed to search engine results as a good illustration of this principle). Development should focus on incremental improvement – using clearly defined metrics to measure whether new features have real benefit. Another important point he made was that there is no one best solution – rather, often the most productive strategy is to apply several different analytical approaches to the same problem, and then use a machine learning algorithm to mediate between (possibly) conflicting results.

There were also several interesting – although admittedly esoteric – talks by Dave Sullivan of Ersatz Labs (@_DaveSullivan) on deep learning, Subutai Ahmad of Numenta on cortical computing (which attempts to emulate the architecture of the neocortex) and Paul Hofmann (@Paul_Hofmann) of Saffron Technology on associative memory and cognitive distance. Kristian Hammond (@KJ_Hammond) of Narrative Science described technology that can take structured data and use natural language generation (NLG) to automatically create textual narratives, which he argued are often much better than data visualizations and dashboards in promoting understanding and comprehension.

However, the highlight of this first day was the talk entitled ‘Expressive Machines’ by Mark Sagar from the Laboratory for Animate Technologies. After showing some examples of facial tracking CGI from the movies ‘King Kong’ and ‘Avatar’, Mark described a framework modeled on human physiology that emulates human emotion and learning. I’ve got to say that even though I have a solid appreciation and understanding of the underlying science and technology, Mark’s BabyX – who is now really more a virtual toddler than an infant – blew me away. It was amazing to see Mark elicit various emotions from BabyX. Check out this video about BabyX from TEDxAuckland 2013.

At the end of the day, the presentations helped crystallize some important lines of thought in my own carbon-based ‘computer’.

First, it is no surprise that human computer interactions are moving towards more natural user interfaces (NUIs), where a combination of artificial intelligence, fueled by semantics and machine learning and coupled with more natural ways of interacting with devices, results in more intuitive experiences.

Second, while the back end analysis is extremely important, what is particularly interesting to me is the human part of the human-computer interaction. Specifically, while we often focus on how humans manipulate computers, an equally interesting question is how computers can be used to ‘manipulate’ humans in order to enhance our comprehension of information by leveraging how our brains are wired. After all, we do not view the world objectively, but through a lens that is the result of idiosyncrasies from our cultural and evolutionary history – a fact exploited by the advertising industry.

For example, our brains are prone to anthropomorphism, and will recognize faces even when faces aren’t there. Furthermore, we find symmetrical faces more attractive than unsymmetrical ones.  We are also attracted to infantile features – a fact put to good use by Walt Disney animators, who made Mickey Mouse appear more infant-like over the years to increase his popularity (as documented by paleontologist Stephen Jay Gould). In fact, we exhibit a plethora of cognitive biases (ever experience the Baader-Meinhof phenomenon?), including the “uncanny valley”, which describes a rapid drop-off in comfort level as computer agents become almost – but not quite perfectly – human-looking.  And as Mark Sagar’s work demonstrates, emotional, non-verbal cues are extremely important (the most impressive part of Sagar’s demo was not the A.I. – after all, there is a reason why BabyX is a baby and not a fully conversant adult – but rather the emotional response it elicited in the audience).

The challenge in designing intelligent experiences is to build systems that are informative and predictive but not presumptuous, tending towards the helpful personal assistant rather than the creepy stalker. Getting it right will depend as much on understanding human psychology as it will on implementing the latest machine learning algorithms.

Oracle 12C - In-Memory Option Resources

Karl Reitschuster - Wed, 2014-08-20 23:17

Hi folks,

Introduced as an option, Oracle's In-Memory option will change the database world much as SAP HANA has. The release has been out since July, but resources and documentation are still scarce.

Here are some useful links I found.

First, the home of Oracle In-Memory.

Employee or Member Training

Bradley Brown - Wed, 2014-08-20 19:12
Do you have a group of employees or members that you would like to train?  Would you like to make the training available for a limited time only - such as for 2 weeks?  Would you like the ability to revoke access to the training at your discretion (such as when an employee leaves the company)?  Would you like to know who watched which videos?  For example, did Jim watch the introductory video on Tuesday as he said he did?

If you answered yes to any of these questions, I have great news for you!  The InteliVideo platform supports all of these needs and more.  In fact, you can upload any number of videos, group them as you wish, and grant or deny access at any time (even if people have downloaded videos to their iPad, they will no longer be able to watch them once you deny access).

Below I'm going to run you through an actual use case for a company that's using our platform to train their employees.

Signing Up for an InteliVideo Account

Signing up for InteliVideo is an easy, painless and free process.  First, go to the InteliVideo site and click on "Sign Up:"


You will be asked for your subdomain name (typically something like your company name), first name, last name and email address.  Finally, fill in the Captcha (anti-spam protection) and click on "Create Account."


You will then receive an email that will provide your InteliVideo account information.  Congratulations!  You're getting closer!

Customizing the Site with Your Logo, Color Scheme, etc.

Once you create your account, you'll be taken to the "Display" page within the InteliVideo administration/backend pages.  This is where you can update your subdomain name, logo, color scheme, and page header, choose a template (look and feel), and much more.  We work with most of our customers to make sure that the look and feel of their main website matches the look and feel of their InteliVideo website.  If you want to point your own domain to our website, we can do that for you too.  For example, if you signed up for coolvideos.intelivideo.com and you want to change it to videos.coolvideos.com, we can do that for you!

Signing Up for a Paid Account

Under "Account Settings" and then "My Plan" you can sign up for a paid account.  The highest value is the $249/year account, which includes more videos, storage, and so on.  You can always go over the number of hours or minutes provided; we just charge by the minute for those overages.


Uploading Your Video(s)

Uploading your videos is easy!  Simply drag and drop your videos into the platform and we do the rest for you!  When you click on the "Videos" tab, you'll see an "Upload Videos" button.  Click on this button and you'll be presented with a window like this one:


You can drag and drop any video file into the "Drag file here" section, or you can choose files, import them from Dropbox, download from a URL, and more.  If you have hundreds of video files, we will work directly with you to get your videos into the platform.  Most of our customers who have more than 100 videos send us a hard drive with the videos and we upload them.

Once the videos are uploaded, we take care of transcoding.  Transcoding is just a fancy way of saying that we convert just about any source video file format (MOV file, AVI file, VOB, etc.) into a number of different resolutions and formats so that your video will play on any device.  Another way of explaining this is that we take care of the hard stuff for you.
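For the curious, here is a rough sketch of what transcoding one source file into several renditions can look like using the open source ffmpeg tool. It is illustrative only; the resolutions, bitrates, and file layout are assumptions made for the example, not InteliVideo's actual pipeline or settings.

```python
# Illustrative only: produce several MP4 renditions of one source file with
# ffmpeg. The rendition ladder below is an assumption, not real platform settings.
import subprocess
from pathlib import Path

RENDITIONS = [          # (height in pixels, video bitrate)
    (1080, "5000k"),
    (720, "2500k"),
    (480, "1000k"),
]

def transcode(source: str, out_dir: str = "renditions") -> None:
    Path(out_dir).mkdir(exist_ok=True)
    for height, bitrate in RENDITIONS:
        target = Path(out_dir) / f"{Path(source).stem}_{height}p.mp4"
        subprocess.run([
            "ffmpeg", "-y", "-i", source,
            "-vf", f"scale=-2:{height}",   # keep aspect ratio, even width
            "-c:v", "libx264", "-b:v", bitrate,
            "-c:a", "aac",
            str(target),
        ], check=True)

transcode("lecture.mov")  # MOV, AVI, VOB, etc. -- anything ffmpeg can read
```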

You'll see your videos in the list along with a status.  If a video file is corrupt, you will see an error message, but most of the time, once your videos are transcoded, you'll see that you can watch them, as shown below:


You can also edit the details (i.e. the description, if it's a public or private video, etc.) by clicking the edit button:


As you can see, you can edit the short (title) and long description here.  You can also indicate whether a video is public or private here.  Public means anyone can view it for free.  Private means you must be a member (or buyer) to view the video.  The override image allows you to upload an image that should be used as the default background image for the video.  If you don't upload an override image, we extract the first frame of the video and we use that image.

If there is a 1:1 relationship between a video and a product, you can click on "Create Product" in the list of videos page above.  Most of the time a product is made up of more than one video, but sometimes this is a good starting point for a product.  You can always add more videos to a product.
Grouping Your Video(s) into a Product

If you didn't click the "Create Product" button above, you'll need to create a product.  A product is simply a bundle of videos that you wish to offer to the public, for sale, or to members.

Click on the "Products" and then click on "New Product."  You'll see that there are a number of options here:


Again, you can set a short (title) and long description for your product.  You can determine whether the product is available to the public, to members only, or for sale.  If it's for sale, you can choose whether the product is a one-time payment, a rental, a subscription, or an installment plan.
Offering Products for Sale

If you want to sell your products, you must connect your InteliVideo account with a Stripe account.  If you don't have an existing Stripe account, you can create one through our platform.  If you already have a Stripe account, you can connect to that account.  Either way, click on "Connect with Stripe" and we'll walk you through, step by step, what is required to connect InteliVideo and Stripe.

Granting Access to a Product / Video

Access to any product that is available for sale or for members only can be granted (or revoked/denied) manually.  Click on the "Customers" tab, which will show you a list of your existing customers/members.  To add a new customer or member, click on "New Customer:"


Enter the person's first and last name along with their email address, and select any products that you would like them to have access to.  The "IPs allow" setting indicates how many unique locations the user can access your videos from.  The default is 8.  If you wanted them to be able to access the videos from one computer only, you could change this to 1.
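As a purely hypothetical sketch of how a unique-locations limit like "IPs allow" could be enforced on the server side (this is not InteliVideo's actual code; the names and the in-memory storage are made up for the example):

```python
# Hypothetical sketch of an "IPs allow" check: a customer may watch from at
# most N distinct IP addresses; requests from additional new IPs are refused.
# The dictionary stands in for whatever persistent store a real platform uses.
seen_ips = {}  # customer email -> set of IP addresses seen so far

def allow_playback(customer, ip, ips_allowed=8):
    ips = seen_ips.setdefault(customer, set())
    if ip in ips:
        return True            # a location we have already counted
    if len(ips) < ips_allowed:
        ips.add(ip)            # remember the new location
        return True
    return False               # too many unique locations for this customer

print(allow_playback("jim@example.com", "203.0.113.5", ips_allowed=1))   # True
print(allow_playback("jim@example.com", "198.51.100.7", ips_allowed=1))  # False
```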

You can view any customer/member by clicking on their name from the customer page.  You can click on "Edit User" to disable the user.  As you can see here, when you drill into a user, you'll see their usage stats:

When editing a user, you can disable their access at any time.
Timed or Dripped Content

Within the details of every product, you can change the order in which videos are displayed by dragging each video into position:


You can also set up a delayed delivery schedule or "drip schedule" for each video.  In other words, if you want Module 1 to be available for days 0 through 7 (first week), you can set that schedule up.  If you wanted all of the videos to be available for 3 weeks, you could set each video to 0 through 21.
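To make the day-range arithmetic concrete, here is a tiny hypothetical check of whether a video is open under such a drip schedule, where day 0 is the purchase date (an illustrative sketch, not the platform's actual implementation):

```python
# Hypothetical drip-schedule check: a video is visible only while the number of
# days since purchase falls inside its configured [start_day, end_day] window.
from datetime import date

def is_drip_open(purchased_on, start_day, end_day, today=None):
    days_since_purchase = ((today or date.today()) - purchased_on).days
    return start_day <= days_since_purchase <= end_day

# Module 1 available for the first week (days 0 through 7):
print(is_drip_open(date(2014, 8, 1), 0, 7, today=date(2014, 8, 5)))    # True
print(is_drip_open(date(2014, 8, 1), 0, 7, today=date(2014, 8, 15)))   # False

# Every video available for three weeks (days 0 through 21):
print(is_drip_open(date(2014, 8, 1), 0, 21, today=date(2014, 8, 15)))  # True
```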


Knowing Who's Watched What

The InteliVideo platform tracks all of the usage for every video, whether it's streamed or downloaded and watched on a plane.  You saw one example of those usage statistics for a specific customer/member above.  There are MANY other ways of slicing and dicing the data to know what your customers/members are watching, what they are having a difficult time with (i.e. what they are watching repeatedly), and what they aren't watching.  You can see where (in the world) they were when they were watching, what devices they watched from, and so much more.  We are data guys, so believe me when I say "we track everything!"
Employee / Member's Viewing Options

We support just about every device out there for viewing.  Most video platforms only support streaming video, which limits your members to watching when they have an Internet connection.  We have apps for iOS (iPhone and iPad), Android (phones and tablets), Chromecast, Roku, Windows computers, Apple computers and much more.

The apps manage which content users can access, keep track of usage statistics while people are disconnected from the Internet, and upload those stats when the person's phone, tablet, or laptop "phones" home.  This allows your customers or members to download a video without ever having access to the source video file.  They can't archive it to watch later, email it to their friends, or post it on Facebook or YouTube.  It also allows you to control when they can no longer watch the product: if you deny a user access, they won't be able to watch the content any more.  The bottom line is that we protect your content and we don't allow people to access it once you're done allowing them to have access.
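Conceptually, this offline tracking works like a store-and-forward queue: watch events are recorded locally and flushed the next time the device phones home. The sketch below illustrates that pattern only; the event fields and the transport are assumptions, not the real apps' code.

```python
# Hypothetical store-and-forward sketch: watch events are queued while offline
# and flushed the next time the device "phones home".
import json
import time
from collections import deque

pending_events = deque()

def record_watch(video_id, seconds_watched):
    pending_events.append({
        "video": video_id,
        "seconds": seconds_watched,
        "recorded_at": time.time(),
    })

def phone_home(send):
    """Flush queued events using the provided send(payload) callable."""
    while pending_events:
        event = pending_events[0]
        send(json.dumps(event))   # in a real app this would be an HTTPS POST
        pending_events.popleft()  # drop the event only after it was sent

record_watch("module-1", 540)     # recorded while the viewer was on a plane
phone_home(send=print)            # stand-in transport for this sketch
```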
Sign Up Today!

If you're ready to sign up for an account at InteliVideo, please do so now!  We would love to have you as a customer!