
Feed aggregator

Open and Migrate Microsoft Access in Oracle SQL Developer 4

DBASolved - Mon, 2014-03-24 12:40

For many people, getting started with databases revolves around using Microsoft Access (MS Access). MS Access is an entry-level “database” (if you can call it a database) that Microsoft has been putting out for years. Often people want to move older MS Access “databases” into enterprise databases as they become reliant on the information stored in them. Oracle has recognized this and has enabled Oracle SQL Developer to interact with MS Access and allow for a quick copy of data from MS Access to Oracle.

I have been a baseball fan for as long as I can remember; however, I don’t dwell on stats and win-loss records. I honestly just like to watch the game and watch my boys learn the game at a competitive level. With this in mind I went looking for baseball stats that I could put into a database and use for demo purposes. What I found was an MS Access “database” full of information from 1996 up through 2013, thanks to Sean Lahman (here).

Now that I have data I want for testing, I really want to test it in Oracle! Using Oracle SQL Developer I’m able to access the data stored in MS Access and with a right-click of the mouse move the data into Oracle. Sounds simple, right!?  Let’s take a look.

The tools I’m using to do this migration are:

  • Microsoft Access 2010
  • Oracle SQL Developer 4.0  (4.0.1.14.48 w/ JDK 1.7.0_51)
  • Oracle Database 12c (12.1.0.1)
Setup Microsoft Access

In order to access the data in an MS Access “database”, you need to enable the viewing of system objects in Access. In MS Access 2010 this can be accomplished by doing the following once the MS Access database is open.

Open the options for the database.


Once the Access Options dialog is open, go to the Navigation button.


After clicking on the navigation button, the Navigation Options dialog will open. On this dialog you want to enable “Show System Objects”.


After enabling “Show System Objects”, click OK all the way out to the “database”. You will notice in the All Access Objects tree there are some system tables that appear to be greyed out.


These are the system tables that Oracle SQL Developer needs access to in order to connect.
Connect to MS Access from SQL Developer

Setting up a connection to MS Access in Oracle SQL Developer is just like setting up a connection for Oracle. From the Connections panel, click on the green plus sign. This will open the connection dialog box.


You will see a tab that says Access. Click on the tab to open the dialog for using an MS Access MDB file. Use the Browse button to locate the MDB file and give it a connection name. A username is not needed since connections to MS Access are made as Admin by default; the dot in the password field simply appears upon connection.


Once connected to the MS Access database from SQL Developer, you will see the connection and tables that are in the database.


From here, you can see all the tables in the database by expanding the TABLE category of the connection.


With the connection to MS Access set and usable, I wanted to move all these tables into an Oracle Database 12c for testing.
Quickly moving data to Oracle

All the baseball data can be quickly moved to Oracle in a few simple steps. No, this process does not involve using the migration features of SQL Developer; instead it is a simple right-click.

Start by highlighting one or more tables you would like to copy to Oracle.


Next, right-click on the highlighted tables. You will see an option for Copy to Oracle. Click this option.


After clicking Copy To Oracle, a connection dialog will open. For my Oracle connection, I’m connecting to a PDB with a local user named SCOUT. Click Apply.


At this point, SQL Developer is copying all the metadata and data over to my Oracle 12c PDB under the user SCOUT.


When the process is done copying, I can verify that all the tables are there by looking at my Oracle connection pane and opening the Tables node on the tree.

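If you would rather verify from SQL (SQL*Plus or a SQL Developer worksheet) instead of the connection tree, a quick query as the SCOUT user does the same job. This is just a minimal sketch; NUM_ROWS is only populated once statistics have been gathered on the copied tables.

-- Connected to the PDB as SCOUT, list the copied tables and their
-- statistics-based row counts (NULL until stats are gathered)
select table_name, num_rows
from   user_tables
order  by table_name;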

Now I have all the baseball data I want to play with loaded into Oracle Database 12c (PDB). Let the fun times begin!

Enjoy!

twitter: @dbasolved

blog: http://dbasolved.com


Filed under: Database
Categories: DBA Blogs

What's New in Dodeca 6.7.1?

Tim Tow - Mon, 2014-03-24 10:28
Last week, we released Dodeca version 6.7.1.4340 which focuses on some new relational functionality. The major new features in 6.7.1 are:
  • Concurrent SQL Query Execution
  • Detailed SQL Timed Logging
  • Query and Display PDF format
  • Ability to Launch Local External Processes from Within Dodeca
Concurrent SQL Query Execution

Dodeca has a built-in SQLPassthroughDataSet object that supports queries to a relational database.  The SQLPassthroughDataSet functionality was engineered such that a SQLPassthroughDataSet object can include multiple queries that get executed and returned on a single trip to the server and several of our customers have taken great advantage of that functionality.  We have at least one customer, in fact, that has some SQLPassthroughDataSets that execute up to 20 queries in a single trip to the server.  The functionality was originally designed to run the queries sequentially, but in some cases it would be better to run the queries concurrently.  Because this is Dodeca, of course concurrent query execution is configurable at the SQLPassthroughDataSet level.

Detailed SQL Timed Logging

In Dodeca version 6.0, we added detailed timed logging for Essbase transactions.  In this version, we have added similar functionality for SQL transactions and have formatted the logs in pipe-delimited format so they can easily be loaded into Excel or into a database for further analysis.  The columns of the log include the log message level, timestamp, sequential transaction number, number of active threads, transaction GUID, username, action, description, and time to execute in milliseconds.

Below is an example of the log message.

[Screenshot: sample pipe-delimited log message]

Query and Display PDF format

The PDF View type now supports the ability to load the PDF directly from a relational table via a tokenized SQL statement.  This functionality will be very useful for those customers who have contextual information, such as invoice images, stored relationally and need a way to display that information.  We frequently see this requirement as the end result of a drill-through operation from either Essbase or relational reports.

Ability to Launch Local External Processes from Within Dodeca

Certain Dodeca customers store data files for non-financial systems in relational data stores and use Dodeca as a central access point.  The new ability to launch a local process from within Dodeca is implemented as a Dodeca Workbook Script method which provides a great deal of flexibility in how the process is launched.

The new 6.7.1 functionality follows closely on the 6.7.0 release, which introduced proxy server support and new MSAD and LDAP authentication services.  If you are interested in seeing all of the changes in Dodeca, highly detailed Dodeca release notes are available on our website at http://www.appliedolap.com/resources/downloads/dodeca-technical-docs.

Categories: BI & Warehousing

Index suggestion from the access advisor

Laurent Schneider - Mon, 2014-03-24 08:04

Test case :


create table t(x varchar2(8) primary key, 
  y varchar2(30));
insert into t(x,y) select 
  to_char(rownum,'FM00000000'), 
  object_name from all_objects where rownum<1e4;
commit;
exec dbms_stats.gather_table_stats(user,'T')

One user wants to filter on x but does not do the casting properly


SQL> select * from t where x=00000001;

X        Y                             
-------- ------------------------------
00000001 CON$

He received the expected data.

Let’s check his plan

 
SQL> explain plan for 
  select * from t where x=00000001;
SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
----------------------------------
Plan hash value: 2153619298
----------------------------------
| Id  | Operation         | Name |
----------------------------------
|   0 | SELECT STATEMENT  |      |
|*  1 |  TABLE ACCESS FULL| T    |
----------------------------------

Predicate Information 
  (identified by operation id):
-----------------------------------
   1 - filter(TO_NUMBER("X")=00000001)

Obviously, he is not using the primary key index. He should use a single-quoted literal:


select * from t where x='00000001'
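
For comparison, with the properly quoted literal the optimizer can use the primary key index directly. You would expect a plan along these lines (a sketch of the expected shape, not actual output from this test system):

explain plan for select * from t where x = '00000001';
select * from table(dbms_xplan.display);
-- expected: an INDEX UNIQUE SCAN of the primary key index of T,
-- followed by TABLE ACCESS BY INDEX ROWID, instead of the full scan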

Okay, let’s tune ;)


SQL> VAR start_time VARCHAR2(32)
SQL> VAR end_time VARCHAR2(32)
SQL> exec select to_char(sysdate, 
  'MM-DD-YYYY HH24:MI:SS') into :start_time 
  from dual
SQL> select * from t where x=00000001;

X        Y                             
-------- ------------------------------
00000001 CON$
SQL> exec select to_char(sysdate, 
  'MM-DD-YYYY HH24:MI:SS') into :end_time
   from dual
SQL> VAR task_id NUMBER
SQL> VAR task_name VARCHAR2(32)
SQL> EXEC :task_name := 'ADV01'
SQL> EXEC DBMS_ADVISOR.CREATE_TASK (
  DBMS_ADVISOR.SQLACCESS_ADVISOR, 
  :task_id, :task_name)
SQL> exec DBMS_ADVISOR.SET_TASK_PARAMETER 
  (:task_name, 'EXECUTION_TYPE', 'INDEX_ONLY')
SQL> exec DBMS_ADVISOR.SET_TASK_PARAMETER 
  (:task_name, 'VALID_TABLE_LIST', 'SCOTT.T')
SQL> exec DBMS_ADVISOR.SET_TASK_PARAMETER 
  (:task_name, 'START_TIME', :start_time)
SQL> exec DBMS_ADVISOR.SET_TASK_PARAMETER 
  (:task_name, 'END_TIME', :end_time)
SQL> exec DBMS_SQLTUNE.CREATE_SQLSET ('STS01')
SQL> declare
  c DBMS_SQLTUNE.SQLSET_CURSOR;
begin
  open c for select value(t) from table(
    DBMS_SQLTUNE.SELECT_CURSOR_CACHE) t;
  DBMS_SQLTUNE.LOAD_SQLSET('STS01', c);
end;
/
SQL> exec DBMS_ADVISOR.ADD_STS_REF
  (:task_name, null, 'STS01')
SQL> EXEC DBMS_ADVISOR.EXECUTE_TASK (:task_name)
SQL> select
  dbms_advisor.get_task_script(:TASK_NAME)
  from dual;

DBMS_ADVISOR.GET_TASK_SCRIPT(:TASK_NAME)
----------------------------------------------
Rem  SQL Access Advisor: Version 11.2.0.4.0 - 
Rem
Rem  Username:        SCOTT
Rem  Task:            TASK_54589
Rem  Execution date:
Rem

CREATE INDEX "SCOTT"."T_IDX$$_D53D0000"
    ON "SCOTT"."T"
    (TO_NUMBER("X"))
    COMPUTE STATISTICS;

I have retrieved the index suggestion from the SQL Cache for the table T.

Let’s blindly implement it…


SQL> CREATE INDEX "SCOTT"."T_IDX$$_D5150000"
    ON "SCOTT"."T"
    (TO_NUMBER("X"))
    COMPUTE STATISTICS;
SQL> explain plan for 
  select * from t where x=00000001
Explain complete.
SQL> select * from table(dbms_xplan.display)

PLAN_TABLE_OUTPUT
---------------------------------------------
Plan hash value: 4112678587

-----------------------------------------------
| Id  | Operation                   | Name    |
-----------------------------------------------
|   0 | SELECT STATEMENT            |         |
|   1 |  TABLE ACCESS BY INDEX ROWID| T       |
|*  2 |   INDEX RANGE SCAN | T_IDX$$_D5150000 |
-----------------------------------------------

Predicate Information 
  (identified by operation id): 
-----------------------------------------------
   2 - access(TO_NUMBER("X")=00000001)

Much better. But …


SQL> insert into t(x) values('UNKNOWN');
insert into t(x) values('UNKNOWN')
Error at line 1
ORA-01722: invalid number

Adding a function-based index on to_number(x) to the table also implies that only values that can be converted to a number are allowed in that column; non-numeric strings are now rejected. This is an application change. Be aware…
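
To tidy up after experimenting, the advisor task, the SQL tuning set and the suggested index can all be dropped. A minimal sketch, matching the names used above (delete the task first so the tuning set is no longer referenced):

SQL> exec DBMS_ADVISOR.DELETE_TASK(:task_name)
SQL> exec DBMS_SQLTUNE.DROP_SQLSET('STS01')
SQL> drop index "SCOTT"."T_IDX$$_D5150000";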

Watch for Pythian Speakers at Collaborate 14

Pythian Group - Mon, 2014-03-24 07:50

This year COLLABORATE 14 is being held at The Venetian and Sands Expo Center in Las Vegas, April 7-11. Some Pythian folks will be attending and speaking at the event, so be sure to watch for their presentations:

Session Date | Start Time | Session Room | Session Title | Presenter Name
April 9, 2014 | 8:30 AM | Level 3, Lido 3101B | Database Private Clouds with Oracle Database 12c | Marc Fielding
April 9, 2014 | 8:30 AM | Level 3, Lido 3005 | DBA 101 : Calling all New Database Administrators | René Antunez
April 9, 2014 | 4:30 PM | Sands, Level 1 – 309 | What’s New in Oracle E-Business Suite R12.2 for Database Administrators? | Vasu Balla
April 10, 2014 | 3:00 PM | Level 3, San Polo 3403 | Accelerate Your Exadata Deployment with the Skills You Already Have | Marc Fielding
April 10, 2014 | 9:45 AM | Level 3, Lido 3101B | 12c Multi-Tenancy and Exadata IORM: An Ideal Cloud Based Resource Management | Fahd Mirza
April 10, 2014 | 4:15 PM | Level 3, Murano 3306 | Thou Shalt Not Steal: Securing Your Infrastructure in the Age of Snowden | Paul Vallee
April 10, 2014 | 11:00 AM | Level 3, San Polo 3403 | My First 100 days with an Exadata | René Antunez
April 11, 2014 | 9:45 AM | Level 3, Lido 3005 | Ensuring Your Physical Standby is Usable | Michael Abbey
April 11, 2014 | 8:30 AM | Level 3, Lido 3103 | RMAN in 12c: The Next Generation | René Antunez
April 11, 2014 | 9:45 AM | Level 3, San Polo 3405 | Practical Machine Learning for DBAs | Alex Gorbachev
Categories: DBA Blogs

Vim::X – VimL is Eldritch, Let’s Write Perl!

Pythian Group - Mon, 2014-03-24 07:49

Last week, I finally got around to writing a few macros to help with conflict resolution in Vim:


" conflict resolution - pick this one / option 1 / option 2
map ,. $?\v^[<=]{7}jy/\v^[=>]{7}?\v^[<]{7}"_d/\v^\>{7}"_ddP
map ,<  $?\v^[<]{7}<,>.
map ,>  $?\v^[<]{7}/\v^[=]{7}<,>. 
" ... or keep'em both
map ,m  $?\v^[<]{7}"_dd/\v[=]{7}cc/\v[>]{7}"_dd

With that, I can go from conflict to conflict and pick sides with the ease of two keystrokes, and never have to manually delete those <<<<<<<, ======= and >>>>>>> lines again. Sweet, eh?

Now, any sane person would have stopped there. I found myself thinking it’d be nice to transform that line of garbage into a neat little function.

There is an obvious problem, though: my VimL-fu is pretty weak. However, my vim is always compiled with Perl support. Sure, the native interface is kinda sucky, but… maybe we can improve on that?

Interfacing Vim with Perl

That’s where Vim::X enters the picture (yes, I know, rather a poor name. Anybody have a better suggestion?). The module serves two purposes:

  1. give us a bunch of helper functions to interact with Vim as painlessly as possible.
  2. deal with all the fiddly bridgey things required to give us access to functions defined in Perl modules from Vim.
Putting the ‘V’ back in ‘DWIM’

Vim::X comes with a small, but growing, bag of helper functions, as well as with helper classes — Vim::X::Window, Vim::X::Buffer, Vim::X::Line — that provide nice wrappers to the Vim entities. I still have to document them all, but the implementation of my ‘ResolveConflict’ function should give you an idea of how to use them:


package Vim::X::Plugin::ResolveConflict;

use strict;
use warnings;

use Vim::X;

sub ResolveConflict {
        my $side = shift;

        my $here   = vim_cursor;
        my $mine   = $here->clone->rewind(qr/^<{7}/);
        my $midway = $mine->clone->ff( qr/^={7}/ );
        my $theirs = $midway->clone->ff( qr/^>{7}/ );

        $here = $side eq 'here'   ? $here
              : $side eq 'mine'   ? $mine
              : $side eq 'theirs' ? $theirs
              : $side eq 'both'   ? $midway
              : die "side '$side' is invalid"
              ;

        vim_delete( 
                # delete the marker
            $mine, $midway, $theirs, 
                # and whichever side we're not on
            ( $midway..$theirs ) x ($here < $midway), 
            ( $mine..$midway )   x ($here > $midway),
        );
};

1;

Sure, it’s more verbose than the original macros. But now, we have a fighting chance to understand what is going on. As is my habit, I am overloading the heck out of my objects. For example, the line objects will be seen as their line number, or their content, depending on the context. Evil? Probably. But it makes for nice, succinct code:



sub Shout {
    my $line = vim_cursor;
    $line <<= uc $line;
}

Fiddly bridgey things

This is where I expect a few ‘oooh’s and ‘aaaah’s. So we have ‘ResolveConflict’ in a Perl module. How do we make Vim see it?

First, you add a ‘:Vim’ attribute to the function:


sub ResolveConflict :Vim(args) {
    ...

Then, in your .vimrc:


" only if the modules aren't already in the path
perl push @INC, '/path/to/modules/';

perl use Vim::X::Plugin::ResolveConflict

map ,<  :call ResolveConflict('mine')<CR>
map ,>  :call ResolveConflict('theirs')<CR>
map ,.  :call ResolveConflict('here')<CR>
map ,m  :call ResolveConflict('both')<CR>

Isn’t that way more civilized than the usual dark invocations?

One more step down the rabbit hole

Once I had my new ‘ResolveConflict’ written, it goes without saying that I wanted to test it. At first, I wrote a vspec test suite:


describe 'basic'

    perl push @INC, './lib'
    perl use Vim::X::Plugin::ResolveConflict

    before
        new
        read conflict.txt
    end

    after
        close!
    end

    it 'here mine'
        normal 3G
        call ResolveConflict('here')

        Expect getline(1) == "a"
        Expect getline(2) == "b"
        Expect getline(3) == "c"
    end

    it 'here theirs'
        normal 6G
        call ResolveConflict('here')

        Expect getline(1) == "d"
        Expect getline(2) == "e"
        Expect getline(3) == "f"
    end

    it 'mine'
        normal 6G
        call ResolveConflict('mine')

        Expect getline(1) == "a"
        Expect getline(2) == "b"
        Expect getline(3) == "c"
    end

    it 'theirs'
        normal 6G
        call ResolveConflict('theirs')

        Expect getline(1) == "d"
        Expect getline(2) == "e"
        Expect getline(3) == "f"
    end

    it 'both'
        normal 6G
        call ResolveConflict('both')

        Expect getline(1) == "a"
        Expect getline(2) == "b"
        Expect getline(3) == "c"
        Expect getline(4) == "d"
        Expect getline(5) == "e"
        Expect getline(6) == "f"
    end

end

But then I found myself missing my good ol’ TAP. If only there was an interface to run those Perl modules within v–

oh.

So I changed the test suite to now look like:


package ResolveConflictTest;

use Vim::X;
use Vim::X::Plugin::ResolveConflict;

use Test::Class::Moose;

sub test_setup {
    vim_command( 'new', 'read conflict.txt' );
}

sub test_teardown {
    vim_command( 'close!' );
}

sub here_mine :Tests {
    vim_command( 'normal 3G' );
    vim_call( 'ResolveConflict', 'here' );

    is join( '', vim_lines(1..3) ) => 'abc', "here, mine";
    is vim_buffer->size => 3, "only 3 lines left";
};

sub here_theirs :Tests { 
    vim_command( 'normal 6G' );
    vim_call( 'ResolveConflict', 'here' );

    is join( '', vim_lines(1..3) ) => 'def';
    is vim_buffer->size => 3, "only 3 lines left";
};

sub mine :Tests {
    vim_call( 'ResolveConflict', 'mine' );

    is join( '', vim_lines(1..3) ) => 'abc';
    is vim_buffer->size => 3, "only 3 lines left";
};

sub theirs :Tests {
    vim_call( 'ResolveConflict', 'theirs' );

    is join( '', vim_lines(1..3) ) => 'def';
    is vim_buffer->size => 3, "only 3 lines left";
};

sub both :Tests {
    vim_call( 'ResolveConflict', 'both' );

    is join( '', vim_lines(1..6) ) => 'abcdef';
    is vim_buffer->size => 6, "only 6 lines left";
};

__PACKAGE__->new->runtests;

I also wrote a little vim_prove script to run the show:


#!perl -s

exec 'vim', qw/ -V -u NONE -i NONE -N -e -s /,
    ( map { 1; '-c' => "perl push \@INC, '$_'" } split ":", $I ),
    '-c', "perl do '$ARGV[0]' or die $@",
    '-c', "qall!";

Aaaand whatdyaknow:


$ perl bin/vim_prove -I=lib contrib/test.vim
#
# Running tests for ResolveConflictTest
#
    1..5
        ok 1
        ok 2 - only 6 lines left
        1..2
    ok 1 - both
        ok 1 - here, mine
        ok 2 - only 3 lines left
        1..2
    ok 2 - here_mine
        ok 1
        ok 2 - only 3 lines left
        1..2
    ok 3 - here_theirs
        ok 1
        ok 2 - only 3 lines left
        1..2
    ok 4 - mine
        ok 1
        ok 2 - only 3 lines left
        1..2
    ok 5 - theirs
ok 1 - ResolveConflictTest
What’s Next?

The current prototype is on GitHub. I’ll try to push it to CPAN once I have a little bit of documentation and a little more order in the code. But if you are interested, please, fork away, write plugins, and PR like there is no tomorrow.

Categories: DBA Blogs

Why Doctors Still Use Pen and Paper

WebCenter Team - Mon, 2014-03-24 07:03

Earlier this month, The Atlantic published an interview with David Blumenthal in which he discussed the challenges of getting the American healthcare system modernized.  The article sparked some debate amongst the medical community and provoked some interesting questions for all of us in the information management arena.

Think about your work environment and attempts to modernize processes there.  Sometimes these upgrades go well and the benefits are so obvious that everyone jumps on board and embraces the updated system without looking back.  But in other cases, the new technology or solution is not so readily adopted, whether it be due to economic issues, complexity or a lack of understanding about the benefits.

The healthcare issue in America is a classic example of a technological solution that faces significant business and cultural hurdles. One of the doctors that replied stated, "There is a very American tendency to look for technological fixes for significant problems.  In general, technological fixes only work in the context of appropriate institutional structures."

What institutional structures exist in your environment that keep your business from modernizing?  The next time you visit the doctor and he or she pulls out a clipboard and starts filling out a paper form about your health, consider what your company does that is analogous to this behavior.  Does it make sense to modernize and go digital? If you did, would this change in approach be readily adopted?

In healthcare solutions, as with almost every other technical solution out there, it all comes down to the "Digital Experience" provided to the users.  If they can see the benefits in an obvious fashion and the software is pleasing to use from any device, they will come running. We hope the next technological advancement in your organization is a great success!

Mobile device users try to contend with new threat

Chris Foot - Mon, 2014-03-24 04:47

Cybercriminals are beginning to realize that many business employees are now accessing company data through smartphones and tablets, providing them with a new avenue to exploit. Database administration services have worked to deter a malicious program known as CryptoLocker, a ransomware strain that convinces victims that failure to pay the software author's demands will result in serious real-world consequences.

How it works
According to a report conducted by Dell SecureWorks, the ransomware traditionally encrypts files stored on a PC and informs the user that all control will be returned to them once the ransom is paid. The earliest versions of CryptoLocker were delivered through spam emails targeting business professionals, masking itself as a "consumer complaint" against recipients. The objective of this particular species of malware is to connect with a command and control (C2) server and encipher the files located on related drives, causing a major headache for those without database administration support to identify the hidden problem before it reveals itself.

"The threat actors have offered various payment methods to victims since the inception of CryptoLocker," the source reported, citing its appearance in early 2013. "The methods are all anonymous or pseudo-anonymous, making it difficult to track the origin and final destination of payments."

Extending its reach
If such a program could be engineered to hold entire databases hostage, the financial consequences could be catastrophic for multimillion-dollar enterprises. As if this prospect wasn't intimidating enough, CryptoLocker and other related ransomware are now targeting mobile device users, diverting database experts' attention toward those access points. Because the average business employee now uses more than one remote-access machine, organizations may have to halt operations in the event these assets are compromised.

CIO reported that malevolent figures employing this technology are more interested in the data smartphones and tablets handle than the devices themselves. Thankfully, there are a number of simple, routine steps business professionals and remote database support personnel can follow to protect the information:

  • Educate those utilizing mobile technology on data loss prevention. If employees are aware of the techniques implemented by hackers, then they'll be well-prepared for attacks.
  • Regularly perform data backups.
  • Create and deploy a data classification standard so that workers know how to treat particular kinds of information, whether it's highly sensitive or public knowledge.
  • Develop a security policy that establishes requirements on how to handle all types of media.
  • Get a remote DBA group to constantly monitor all mobile connections and actions.

If these points are implemented into a company's general practices, it will provide a solid framework for mobile device management.

WebLogic Server - Using OSGi Bundles with Java EE Applications

Steve Button - Sun, 2014-03-23 23:07
The WLS 12c (12.1.2) release includes a new feature that enables OSGi bundles to be installed and used by deployed applications.

The full documentation is here:

http://docs.oracle.com/middleware/1212/wls/WLPRG/osgi.htm

In short this feature enables WLS to create an OSGi framework (Apache Felix 4.03) in which OSGi Bundles are installed and accessed by applications.

Applications provide an XML reference element to define the named OSGi framework and Bundle of interest, which is then  published into the JNDI tree from where it can be referenced and used.

An OSGi Bundle can be included as part of the application archive being deployed or it can be provided as part of the server library set.

To provide a simple example, the Apache Felix tutorial Dictionary Service was first implemented and packaged as an OSGi Bundle.

The Bundle manifest looks as follows:

Manifest-Version: 1.0
Bnd-LastModified: 1378960044511
Build-Jdk: 1.7.0_17
Built-By: sbutton
Bundle-Activator: tutorial.example2.Activator
Bundle-ManifestVersion: 2
Bundle-Name: DictionaryService OSGi Bundle
Bundle-SymbolicName: tutorial.example2.service.DictionaryService
Bundle-Version: 1.0.0.SNAPSHOT
Created-By: Apache Maven Bundle Plugin
Export-Package: tutorial.example2.service;version="1.0.0.SNAPSHOT"
Import-Package: org.osgi.framework;version="[1.6,2)"
Tool: Bnd-1.50.0

The Bundle archive contains only a few classes: the DictionaryService interface and an Activator implementation, which provides an inner-class implementation of the DictionaryService that is registered when the Bundle is activated.


 
A small web application was then developed to make use of the DictionaryService Bundle using a Servlet.

The Servlet performs the following tasks:
  • Injects the Bundle reference from JNDI using the @Resource annotation
  • Looks up the ServiceReference for the DictionaryService.class to be used 
  • Obtains an instance of the DictionaryService from the ServiceReference
  • Makes calls on the DictionaryService to check whether a word is in the known word list

Inject Bundle reference from its JNDI location:

@WebServlet(name = "TestServlet", urlPatterns = {"/TestServlet", "/test", "/osgi"})
public class TestServlet extends HttpServlet {

@Resource(lookup = "java:app/osgi/Bundle")
Bundle bundle;
...
}

The Bundle provides access to the registered Services, in this case the DictionaryService:

if (bundle != null) {

    BundleContext bc = bundle.getBundleContext();

    ServiceReference dictionaryServiceRef =
        bc.getServiceReference(DictionaryService.class);
    DictionaryService dictionaryService =
        (DictionaryService) bc.getService(dictionaryServiceRef);

    ...

}
 
The methods on the DictionaryService can then be used:


out.printf("<div>wordlist: %s </div>",
Arrays.toString(dictionaryService.wordlist()));

out.printf("<div>checkWord(\"%s\"): %s</div>",
wordToCheck, dictionaryService.checkWord(wordToCheck));


The Bundle then needs to be defined for the web application to use, which is done using the WebLogic deployment descriptor.  In this example, the weblogic.xml file contains the following entry:

<osgi-framework-reference>
    <name>test</name>
    <application-bundle-symbolic-name>
        tutorial.example2.service.DictionaryService
    </application-bundle-symbolic-name>
</osgi-framework-reference>

The bundle-symbolic-name is used to specify the bundle to be used.

The name element specifies the name of the OSGi framework that has been configured.  With WebLogic Server 12c (12.1.2) there are several ways to define and configure OSGi frameworks such as the Admin Console, WLST, programmatically with Java or by directly editing the domain config.xml file.

With this small web application, the DictionaryService Bundle was deployed as part of the WAR file itself.  This is performed by placing the JAR file in the WEB-INF/osgi-lib directory, whereupon WLS will detect it and install it.  The DictionaryService Bundle could also be installed by copying it into the $ORACLE_HOME/wlserver/server/osgi-lib directory.

With the WAR file packaged and deployed to WLS, the console logs show the DictionaryService Bundle being activated; System.out.println() calls were inserted into the Activator start and stop methods to show them being called:


*** START Bundle org.apache.felix.framework.BundleContextImpl@21052189
tutorial.example2.service.DictionaryService ***
Finally the TestServlet is accessed, demonstrating the DictionaryService being accessed and used:



Using Oracle R Enterprise to Analyze Large In-Database Datasets

Rittman Mead Consulting - Sun, 2014-03-23 12:18

The other week I posted an article on the blog about Oracle R Advanced Analytics for Hadoop, part of Oracle’s Big Data Connectors and used for running certain types of R analysis over a Hadoop cluster. ORAAH lets you move data in and out of HDFS and Hive and into in-memory R data frames, and gives you the ability to create Hadoop MapReduce jobs but using R commands and syntax. If you’re looking to use R to analyse, prepare and explore your data, and you’ve got access to a large Hadoop cluster, ORAAH is a useful way to go beyond the normal memory constraints of R running on your laptop.

But what if the data you want to analyse is currently in an Oracle database? You can export the relevant tables to flat files and then import them into HDFS, or you can use a tool such as Sqoop to copy the data directly into HDFS and Hive tables. Another option you could consider though is to run your R analysis directly on the database tables, avoiding the need to move data around and taking advantage of the scalability of your Oracle database – which is where Oracle R Enterprise comes in.

Oracle R Enterprise is part of the Oracle Database Enterprise Edition “Advanced Analytics Option”, so it’s licensed separately to ORAAH and the Big Data Connectors. What it gives you is three things:


  • Some client packages to install locally on your desktop, installed into regular R (or ideally, Oracle’s R distribution)
  • Some database server-side R packages to provide a “transparency layer”, converting R commands into SQL ones, along with extra SQL stats functions to support R
  • The ability to spawn off R engines within the Oracle Database using the extproc mechanism, for performing R analysis directly on the data rather than through the client on your laptop

Where this gets interesting for us is that the ORE transparency layer makes it simple to move data in and out of the Oracle Database, but more importantly it allows us to use database tables and views as R “ore.frames” – proxies for “data frames”, R’s equivalent of database tables and the basic data set that R commands work on. Going down this route avoids the need to export the data we’re interested in out of the Oracle Database, with the ORE transparency layer converting most R function calls to Oracle Database SQL ones – meaning that we can use the data analyst-friendly R language whilst using Oracle under the covers for the heavy lifting.


There’s more to ORE than just the transparency layer, but let’s take a look at how you might use ORE and this feature, using the same “flight delays” dataset I used in my post a couple of months ago on Hadoop, Hive and Impala. We’ll use the OBIEE 11.1.1.7.1 SampleApp v309R2 that you can download from OTN as it’s got Oracle R Enterprise already installed, although you’ll need to follow step 10 in the accompanying deployment guide to install the R packages that Oracle couldn’t distribute along with SampleApp.

In the following examples, we’ll:

  • Connect to the main PERFORMANCE fact table in the BI_AIRLINES schema, read in its metadata (columns), and then set it up as a “virtual” R data frame that actually points through to the database table
  • Then we’ll perform some basic analysis, binning and totalling for that table, to give us a sense of what’s in it
  • And then we’ll run some more R analysis on the table, outputting the results in the form of graphs and answering questions such as “which days of the week are best to fly out on?” and “how has airlines’ relative on-time performance changed over time?”

Let’s start off, then, by starting the R console and connecting to the database schema containing the flight delays data.

[oracle@obieesample ~]$ R
 
Oracle Distribution of R version 2.15.1  (--) -- "Roasted Marshmallows"
Copyright (C)  The R Foundation for Statistical Computing
ISBN 3-900051-07-0
Platform: x86_64-unknown-linux-gnu (64-bit)
 
R is free software and comes with ABSOLUTELY NO WARRANTY.
You are welcome to redistribute it under certain conditions.
Type 'license()' or 'licence()' for distribution details.
 
  Natural language support but running in an English locale
 
R is a collaborative project with many contributors.
Type 'contributors()' for more information and
'citation()' on how to cite R or R packages in publications.
 
Type 'demo()' for some demos, 'help()' for on-line help, or
'help.start()' for an HTML browser interface to help.
Type 'q()' to quit R.
 
You are using Oracle's distribution of R. Please contact
Oracle Support for any problems you encounter with this
distribution.
 
[Previously saved workspace restored]
 
> library(ORE)
Loading required package: OREbase
 
Attaching package: ‘OREbase’
 
The following object(s) are masked from ‘package:base’:
 
    cbind, data.frame, eval, interaction, order, paste, pmax, pmin,
    rbind, table
 
Loading required package: OREstats
Loading required package: MASS
Loading required package: OREgraphics
Loading required package: OREeda
Loading required package: OREdm
Loading required package: lattice
Loading required package: OREpredict
Loading required package: ORExml
> ore.connect("bi_airlines","orcl","localhost","BI_AIRLINES",all=TRUE)
Loading required package: ROracle
Loading required package: DBI
> 

Note that “library(ORE)” loads up the Oracle R Enterprise R libraries, and “ore.connect” connects the R session to the relevant Oracle database.

I then synchronise R’s view of the objects in this database schema with its own metadata views, list out what tables are available to us in that schema, and attach that schema to my R session so I can manipulate them from there.

> ore.sync()
> ore.ls()
 [1] "AIRCRAFT_GROUP"           "AIRCRAFT_TYPE"           
 [3] "AIRLINE_ID"               "AIRLINES_USER_DATA"      
 [5] "CANCELLATION"             "CARRIER_GROUP_NEW"       
 [7] "CARRIER_REGION"           "DEPARBLK"                
 [9] "DISTANCE_GROUP_250"       "DOMESTIC_SEGMENT"        
[11] "OBIEE_COUNTY_HIER"        "OBIEE_GEO_AIRPORT_BRIDGE"
[13] "OBIEE_GEO_ORIG"           "OBIEE_ROUTE"             
[15] "OBIEE_TIME_DAY_D"         "OBIEE_TIME_MTH_D"        
[17] "ONTIME_DELAY_GROUPS"      "PERFORMANCE"             
[19] "PERFORMANCE_ENDECA_MV"    "ROUTES_FOR_LINKS"        
[21] "SCHEDULES"                "SERVICE_CLASS"           
[23] "UNIQUE_CARRIERS"         
> ore.attach("bi_airlines")
> 

Now although we know these objects as database tables, what ORE does is present them to R as “data frames”, the fundamental data structure in R that looks just like a table in the relational database world, using ore.frame objects as proxies. Behind the scenes, ORE maps these data frames to the underlying Oracle structures via the ore.frame proxy, and turns R commands into SQL function calls, including a bunch of new ones added specifically for ORE. Note that this is conceptually different to Oracle R Advanced Analytics for Hadoop, which doesn’t map (or overload) standard R functions to their Hadoop (MapReduce or Hive) equivalents – it instead gives you a set of new R functions that you can use to create MapReduce jobs, which you can then submit to a Hadoop cluster for processing, giving you a more R-native way of creating MapReduce jobs; ORE, in contrast, tries to map all of R’s functionality to Oracle database functions, allowing you to run normal R sessions but with the Oracle Database processing bigger R queries closer to the data.
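
To get a feel for what that pushdown means in practice, an R aggregation over an ore.frame ends up being run inside the database as ordinary SQL. Conceptually it looks something like the query below; this is only an illustrative sketch, not the literal statement ORE generates.

-- Counting flights per destination over the PERFORMANCE ore.frame is,
-- in effect, just a GROUP BY executed in the database
select dest, count(*) as flights
from   performance
group  by dest;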

Let’s use another two R commands to see how it views the PERFORMANCE table in the flight delays data set, and get some basic sizing metrics.

> class(PERFORMANCE)
[1] "ore.frame"
attr(,"package")
[1] "OREbase"
> dim(PERFORMANCE)
[1] 6362422     112

Now at this point I could pull the data from one of those tables directly into an in-memory R data frame, like this:

> carriers <- ore.pull(UNIQUE_CARRIERS)
Warning message:
ORE object has no unique key - using random order 
> class(UNIQUE_CARRIERS)
[1] "ore.frame"
attr(,"package")
[1] "OREbase"
> class(carriers)
[1] "data.frame"
> 

As you see, R sees the UNIQUE_CARRIERS object as an ore.frame, whilst carriers (into which data from UNIQUE_CARRIERS was loaded) is a regular data.frame object. In some cases you might want to load data from Oracle tables into a regular data.frame, but what’s interesting here is that we can work directly with ore.frame objects and let the Oracle database do the hard work. So let’s get to work on the PERFORMANCE ore.frame object and do some initial analysis and investigation.

> df <- PERFORMANCE[,c("YEAR","DEST","ARRDELAY")]
> class(df)
[1] "ore.frame"
attr(,"package")
[1] "OREbase"
> head(df)
  YEAR DEST ARRDELAY
1 2010  BOI      -13
2 2010  BUF       44
3 2010  BUF      -14
4 2010  BUR       -6
5 2010  BUR       -2
6 2010  BUR       -9
Warning messages:
1: ORE object has no unique key - using random order 
2: ORE object has no unique key - using random order 
> options(ore.warn.order = FALSE)
> head(PERFORMANCE[,c(1,4,23)])
  YEAR DAYOFMONTH DESTWAC
1 2010         16      83
2 2010         16      22
3 2010         16      22
4 2010         16      91
5 2010         16      91
6 2010         16      91
>

In the above script, the first command creates a temporary ore.frame object made up of just three of the columns from the PERFORMANCE table / ore.frame. Then I switch off the warning about these tables not having unique keys (“options(ore.warn.order = FALSE)”), and then I select three more columns directly from the PERFORMANCE table / ore.frame.

> aggdata <- aggregate(PERFORMANCE$DEST,
+                      by = list(PERFORMANCE$DEST),
+                      FUN = length)
> class(aggdata)
[1] "ore.frame"
attr(,"package")
[1] "OREbase"
 
> head(aggdata)
    Group.1     x
ABE     ABE  4104
ABI     ABI  2497
ABQ     ABQ 33298
ABR     ABR     5
ABY     ABY  1028
ACK     ACK   346
 
> (t <- table(PERFORMANCE$DAYOFWEEK))
 
     1      2      3      4      5      6      7 
943305 924442 932113 942066 956123 777203 887170
 
> dat = PERFORMANCE[PERFORMANCE$ARRDELAY<100 & PERFORMANCE$ARRDELAY>-100,]
> ad = with(dat, split(ARRDELAY,UNIQUECARRIER))
> boxplot(ad,col = "blue", notch = TRUE, cex = 0.5, varwidth = TRUE)

In the above set of scripts, I first aggregate flights by destination airports, then count flights by day of week. In the final set of commands I get a bit more advanced and create a box plot graph showing the range of flight delays by airline, which produces the following graph from the R console:

[Box plot: arrival delay by carrier]

whereas in the next one I create a histogram of flight delays (minutes), showing the vast majority of delays are just a few minutes.

> ad = PERFORMANCE$ARRDELAY
> ad = subset(ad, ad>-200&ad<200)
> hist(ad, breaks = 100, main = "Histogram of Arrival Delay")

[Histogram of arrival delay]

All of this so far, to be fair, you could do just as easily in SQL or in a tool like Excel, but they’re the sort of commands an R analyst would want to run before getting onto the interesting stuff, and it’s great that they could now do this on the full dataset in an Oracle database, not just on what they can pull into memory on their laptop. Let’s do something more interesting now, and answer the question “which day of the week is best for flying out, in terms of not hitting delays?”

> ontime <- PERFORMANCE
> delay <- ontime$ARRDELAY
> dayofweek <- ontime$DAYOFWEEK
> bd <- split(delay, dayofweek)
> boxplot(bd, notch = TRUE, col = "red", cex = 0.5,
+         outline = FALSE, axes = FALSE,
+         main = "Airline Flight Delay by Day of Week",
+         ylab = "Delay (minutes)", xlab = "Day of Week")

[Box plot: airline flight delay by day of week]

Looks like Tuesday’s the best. So how has a selection of airlines performed over the past few years?

> ontimeSubset <- subset(PERFORMANCE, UNIQUECARRIER %in% c("AA", "AS", "CO", "DL","WN","NW")) 
> res22 <- with(ontimeSubset, tapply(ARRDELAY, list(UNIQUECARRIER, YEAR), mean, na.rm = TRUE))
> g_range <- range(0, res22, na.rm = TRUE)
> rindex <- seq_len(nrow(res22))
> cindex <- seq_len(ncol(res22))
> par(mfrow = c(2,3))
> for(i in rindex) {
+   temp <- data.frame(index = cindex, avg_delay = res22[i,])
+   plot(avg_delay ~ index, data = temp, col = "black",
+        axes = FALSE, ylim = g_range, xlab = "", ylab = "",
+        main = attr(res22, "dimnames")[[1]][i])
+        axis(1, at = cindex, labels = attr(res22, "dimnames")[[2]]) 
+        axis(2, at = 0:ceiling(g_range[2]))
+        abline(lm(avg_delay ~ index, data = temp), col = "green") 
+        lines(lowess(temp$index, temp$avg_delay), col="red")
+ } 
>

[Average arrival delay by year for the selected carriers]

See this presentation from the BIWA SIG for more examples of ORE queries against the flight delays dataset, which you can adapt from the ONTIME_S dataset that ships with ORE as part of the install.

Now where R and ORE get really interesting, in the context of BI and OBIEE, is when you embed R scripts directly in the Oracle Database and use them to provide forecasting, modelling and other “advanced analytics” features using the database’s internal JVM and an R engine that gets spun-out on-demand. Once you’ve done this, you can expose the calculations through an OBIEE RPD, as Oracle have done in the OBIEE 11.1.1.7.1 SampleApp, shown below:

[Screenshot: OBIEE 11.1.1.7.1 SampleApp dashboard using ORE calculations]

But that’s really an article in itself – so I’ll cover this process and how you surface it all through OBIEE in a follow-up post soon.

Categories: BI & Warehousing

The consequences of NOLOGGING in Oracle

Yann Neuhaus - Sun, 2014-03-23 11:04

While answering a question on an Oracle forum about NOLOGGING consequences, I provided a test case that deserves a bit more explanation. Nologging operations are good for generating minimal redo on bulk operations (direct-path inserts, index creation/rebuild). But in case we have to restore a backup that was made before the nologging operation, we lose data. And even if we can accept that, we have some manual operations to do.

Here is the full testcase.

 

I create a tablespace and back it up:


RMAN> create tablespace demo datafile '/tmp/demo.dbf' size 10M;
Statement processed

RMAN> backup tablespace demo;
Starting backup at 23-MAR-14
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=30 device type=DISK
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00005 name=/tmp/demo.dbf
channel ORA_DISK_1: starting piece 1 at 23-MAR-14
channel ORA_DISK_1: finished piece 1 at 23-MAR-14
piece handle=/u01/app/oracle/fast_recovery_area/U1/backupset/2014_03_23/o1_mf_nnndf_TAG20140323T160453_9lxy0pfb_.bkp tag=TAG20140323T160453 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 23-MAR-14

 

 

I create a table and an index, both in NOLOGGING


RMAN> create table demo ( dummy not null ) tablespace demo nologging as select * from dual connect by level <= 1000;
Statement processed

RMAN> create index demo on demo(dummy) tablespace demo nologging;
Statement processed

 

Note how I like 12c for doing anything from RMAN...

Because I will need it later, I do a treedump of my index:


RMAN> begin
2>  for o in (select object_id from dba_objects where owner=user and object_name='DEMO' and object_type='INDEX')
3>   loop execute immediate 'alter session set tracefile_identifier=''treedump'' events ''immediate trace name treedump level '||o.object_id||'''';
4> end loop;
5> end;
6> /
Statement processed

 

Here is the content of my treedump trace file:


----- begin tree dump
branch: 0x140008b 20971659 (0: nrow: 2, level: 1)
   leaf: 0x140008c 20971660 (-1: nrow: 552 rrow: 552)
   leaf: 0x140008d 20971661 (0: nrow: 448 rrow: 448)
----- end tree dump

 

Because of the nologging operations, the tablespace is 'unrecoverable' and we will see what that means.


RMAN> report unrecoverable;
Report of files that need backup due to unrecoverable operations
File Type of Backup Required Name
---- ----------------------- -----------------------------------
5    full or incremental     /tmp/demo.dbf

 

RMAN tells me that I need to do a backup, which is the right thing to do after nologging operations. But here my goal is to show what happens when we have to restore a backup that was done before the nologging operations.
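
Where does RMAN get that from? The controlfile tracks the last unrecoverable operation for each datafile, so you can also see it with a query along these lines (a sketch, run as a privileged user):

select file#, name, unrecoverable_change#, unrecoverable_time
from   v$datafile
where  unrecoverable_change# > 0;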

I want to show that the issue does not only concern the data that I've loaded, but any data that may come later into the blocks that have been formatted by the nologging operation. So I'm deleting the rows and inserting a new one.


2> delete from demo;
Statement processed

RMAN> insert into demo select * from dual;
Statement processed

 

Time to restore the tablespace from the backup that has been done before the nologging operation:


RMAN> alter tablespace demo offline;
Statement processed

RMAN> restore tablespace demo;
Starting restore at 23-MAR-14
using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00005 to /tmp/demo.dbf
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/fast_recovery_area/U1/backupset/2014_03_23/o1_mf_nnndf_TAG20140323T160453_9lxy0pfb_.bkp
channel ORA_DISK_1: piece handle=/u01/app/oracle/fast_recovery_area/U1/backupset/2014_03_23/o1_mf_nnndf_TAG20140323T160453_9lxy0pfb_.bkp tag=TAG20140323T160453
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
Finished restore at 23-MAR-14

RMAN> recover tablespace demo;
Starting recover at 23-MAR-14
using channel ORA_DISK_1

starting media recovery
media recovery complete, elapsed time: 00:00:00

Finished recover at 23-MAR-14

RMAN> alter tablespace demo online;
Statement processed

 

We can check the unrecoverable tablespace


RMAN> report unrecoverable;
Report of files that need backup due to unrecoverable operations
File Type of Backup Required Name
---- ----------------------- -----------------------------------
5    full or incremental     /tmp/demo.dbf

 

but we don't know which objects are concerned until we try to read from them:


RMAN> select /*+ full(demo) */ count(*) from demo;
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of sql statement command at 03/23/2014 16:05:03
ORA-01578: ORACLE data block corrupted (file # 5, block # 131)
ORA-01110: data file 5: '/tmp/demo.dbf'
ORA-26040: Data block was loaded using the NOLOGGING option


RMAN> select /*+ index(demo) */ count(*) from demo;
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of sql statement command at 03/23/2014 16:05:04
ORA-01578: ORACLE data block corrupted (file # 5, block # 140)
ORA-01110: data file 5: '/tmp/demo.dbf'
ORA-26040: Data block was loaded using the NOLOGGING option

 

So I can't read from the table because of block (file # 5, block # 131) which is corrupted and I can't read from the index because of block (file # 5, block # 140) which is corrupted. The reason is that recovery was not possible on them as there was no redo to protect them from the time they were formatted (by the nologging operation).
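
Note that you don't have to wait for a query to hit the problem: RMAN can flag these blocks proactively, along these lines (a sketch; blocks invalidated by nologging should then show up with CORRUPTION_TYPE = 'NOLOGGING'):

RMAN> validate check logical tablespace demo;
RMAN> select file#, block#, blocks, corruption_type from v$database_block_corruption;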

Let's see which blocks were reported:


RMAN> select segment_type,header_file,header_block , dbms_utility.make_data_block_address(header_file,header_block) from dba_segments where owner=user and segment_name='DEMO';
SEGMENT_TYPE       HEADER_FILE HEADER_BLOCK
------------------ ----------- ------------
DBMS_UTILITY.MAKE_DATA_BLOCK_ADDRESS(HEADER_FILE,HEADER_BLOCK)
--------------------------------------------------------------
INDEX                        5          138
                                                      20971658
 
TABLE                        5          130
                                                      20971650
 
RMAN> select dbms_utility.make_data_block_address(5, 140) from dual;

DBMS_UTILITY.MAKE_DATA_BLOCK_ADDRESS(5,140)
-------------------------------------------
                                   20971660

 

The full scan failed as soon as it read block 131, which is the first one that contains data. The segment header block itself was protected by redo.

For the index, the query failed on block 140, which is the first leaf (this is why I did a treedump above). The root branch (which is always the block right after the segment header) seems to be protected by redo even for nologging operations. The reason why I checked that is that in the first testcase I posted in the forum, I had a very small table for which the index was so small that it had only one leaf - which is the root branch as well - so the index was still recoverable.
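
To map a reported block back to its segment without guessing, dba_extents can also be queried directly (a sketch, using the block reported by the full scan):

select owner, segment_name, segment_type
from   dba_extents
where  file_id = 5
and    131 between block_id and block_id + blocks - 1;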

The important point to know is that the index is still valid:


RMAN> select status from all_indexes where index_name='DEMO';
STATUS  
--------
VALID   

 

And the only solution is to truncate the table:


RMAN> truncate table demo;
Statement processed


RMAN> select /*+ full(demo) */ count(*) from demo;
  COUNT(*)
----------
         0

RMAN> select /*+ index(demo) */ count(*) from demo;
  COUNT(*)
----------
         0

 

no corruption anymore, but no data either...

Last point: if only the indexes are unrecoverable, you can rebuild them. But because the index is valid, Oracle will try to read it in order to rebuild it - and fail with ORA-26040. You have to make them unusable first.
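
A minimal sketch of that sequence, for the index used here:

alter index demo unusable;  -- stop the rebuild from reading the corrupt index blocks
alter index demo rebuild;   -- an unusable index is rebuilt from the table instead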

The core message is:

  • Use nologging only when you can accept losing data and accept having some manual operations to do after recovery (so document them): truncate the table, make indexes unusable and rebuild them.
  • Back up the unrecoverable tablespaces as soon as you can after your nologging operations.
  • If you need redo for other purposes (such as a standby database), use force logging - see the sketch below.
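
Force logging can be set at the database or tablespace level; a minimal sketch:

alter database force logging;
-- or, for a single tablespace:
alter tablespace demo force logging;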

DBMS2 revisited

DBMS2 - Sun, 2014-03-23 05:52

The name of this blog comes from an August, 2005 column. 8 1/2 years later, that analysis holds up pretty well. Indeed, I’d keep the first two precepts exactly as I proposed back then:

  • Task-appropriate data managers. Much of this blog is about task-appropriate data stores, so I won’t say more about them in this post.
  • Drastic limitations on relational schema complexity. I think I’ve been vindicated on that one by, for example:
    • NoSQL and dynamic schemas.
    • Schema-on-read, and its smarter younger brother schema-on-need.
    • Limitations on the performance and/or allowed functionality of joins in scale-out short-request RDBMS, and the relative lack of complaints about same.
    • Funky database design from major Software as a Service (SaaS) vendors such as Workday and Salesforce.com.
    • A whole lot of logs.

I’d also keep the general sense of the third precept, namely appropriately-capable data integration, but for that one the specifics do need some serious rework.

For starters, let me say:

  • I’ve mocked the concept of “logical data warehouse” in the past, for its implausible grandiosity, but Gartner’s thoughts on the subject are worth reviewing even so.
  • I generally hear that internet businesses have SOAs (Service-Oriented Architectures) loosely coupling various aspects of their systems, and this is going well. Indeed, it seems to be going so well that it’s not worth talking about, and so I’m unclear on the details; evidently it just works. However …
  • … evidently these SOAs are not set up for human real-time levels of data freshness.
  • ETL (Extract/Transform/Load) is criticized for two reasons:
    • People associate it with the kind of schema-heavy relational database design that’s now widely hated, and the long project cycles it is believed to bring with it.
    • Both analytic RDBMS and now Hadoop offer the alternative of ELT, in which the loading comes before the transformation.
    • There are some welcome attempts to automate aspects of ETL/ELT schema design. I’ve written about this at greatest length in the context of ClearStory’s “Data Intelligence” pitch.
    • Schema-on-need defangs other parts of the ETL/ELT schema beast.
    • If you have a speed-insensitive problem with the cost or complexity of your high-volume data transformation needs, there’s a good chance that Hadoop offers the solution. Much of Hadoop’s adoption is tied to data transformation.

Next, I’d like to call out what is generally a non-problem — when a query can go to two or more systems for the same information, which one should it tap? In theory, that’s a much harder problem than ordinary DBMS optimization. But in practice, only the simplest forms of the challenge tend to arise, because when data is stored in more than one system, they tend to have wildly different use cases, performance profiles and/or permissions.

So what I’m saying is that most traditional kinds of data integration problems are well understood and on their way to being solved in practice. We have our silos; data is replicated as needed between silos; and everything is more or less cool. But of course, as traditional problems get solved, new ones arise, and those turn out to be concentrated among real-time requirements.

“Real-time” of course means different things in different contexts, but for now I think we can safely partition it into two buckets:

  • Human real-time — fast enough so that it doesn’t make a human wait.
  • Machine real-time — as fast as ever possible, because machines are racing other machines.

The latter category arises in the case of automated bidding, famously in high-frequency securities trading, but now in real-time advertising auctions as well. But those vertical markets aside, human real-time integration generally is fast enough.

Narrowing the scope further, I’d say that real-time transactional integration has worked for a while. I date it back to the initially clunky EAI (Enterprise Application Integration) vendors of the latter 1990s. The market didn’t turn out to be that big, but neither did the ETL market, so it’s all good. SOAs, as previously noted, are doing pretty well.

Where things still seem to be dicier is in the area of real-time analytic integration. How can analytic processing be tougher in this regard than transactional? Two ways. One, of course, is data volume. The second is that it’s more likely to involve machine-generated data streams. That said, while I hear a lot about a BI need-for-speed, I often suspect it of being a want-for-speed instead. So while I’m interested in writing a more focused future post on real-time data integration, there may be a bit of latency before it comes out.

Categories: Other

Wants vs. needs

DBMS2 - Sun, 2014-03-23 05:51

In 1981, Gerry Chichester and Vaughan Merlyn did a user-survey-based report about transaction-oriented fourth-generation languages, the leading application development technology of their day. The report included top-ten lists of important features during the buying cycle and after implementation. The items on each list were very similar — but the order of the items was completely different. And so the report highlighted what I regard as an eternal truth of the enterprise software industry:

What users value in the product-buying process is quite different from what they value once a product is (being) put into use.

Here are some thoughts about how that comes into play today.

Wants outrunning needs

1. For decades, BI tools have been sold in large part via demos of snazzy features the CEO would like to have on his desk. First it was pretty colors; then it was maps; now sometimes it’s “real-time” changing displays. Other BI features, however, are likely to be more important in practice.

2. In general, the need for “real-time” BI data freshness is often exaggerated. If you’re a human being doing a job that’s also often automated at high speed — for example network monitoring or stock trading — there’s a good chance you need fully human real-time BI. Otherwise, how much does a 5-15 minute delay hurt? Even if you’re monitoring website sell-through — are your business volumes really high enough that 5 minutes matters much? eBay answered “yes” to that question many years ago, but few of us work for businesses anywhere near eBay’s scale.

Even so, the want for speed keeps growing stronger. :)

3. Similarly, some desires for elastic scale-out are excessive. Your website selling koi pond accessories should always run well on a single server. If you diversify your business to the point that that’s not true, you’ll probably rewrite your app by then as well.

4. Some developers want to play with cool new tools. That doesn’t mean those tools are the best choice for the job. In particular, boring old SQL has merits — such as joins! — that shiny NoSQL hasn’t yet replicated.

5. Some developers, on the other hand, want to keep using their old tools, on which they are their employers’ greatest experts. That doesn’t mean those tools are the best choice for the job either.

6. More generally, some enterprises insist on brand labels that add little value but lots of expense. Yes, there are many benefits to vendor consolidation, and you may avoid many headaches if you stick with not-so-cutting-edge technology. But “enterprise-grade” hardware failure rates may not differ enough from “consumer-grade” ones to be worth paying for.

7. Some enterprises still insist on keeping their IT operations on-premises. In a number of cases, that perceived need is hard to justify.

8. Conversely, I’ve steered clients away from data warehouse appliances and toward, say, Vertica, because they had a clear desire to be cloud-ready. However, I’m not aware that any of those companies ever actually deployed Vertica in the cloud.

Needs ahead of wants

1. Enterprises often don’t realize how much their lives can be improved via a technology upgrade. Those queries that take 6 hours on your current systems, but only 6 minutes on the gear you’re testing? They’d probably take 15 minutes or less on any competitive product as well. Just get something reasonably modern, please!

2. Every application SaaS vendor should offer decent BI. Despite their limited scope, dashboards specific to the SaaS application will likely provide customer value. As a bonus, they’re also apt to demo well.

3. If your customer personal-identity data that resides on internet-facing systems isn’t encrypted – why not? And please don’t get me started on passwords that are stored and mailed around in plain text.

4. Notwithstanding what I said above about elasticity being overrated, buyers often either underrate their needs for concurrent usage, or else don’t do a good job of testing concurrency. A lot of performance disappointments are really problems with concurrency.

5. As noted above, it’s possible to underrate one’s need for boring old SQL goodness.

Wants and needs in balance

1. Twenty years ago, I thought security concerns were overwrought. But in an internet-connected world, with customer data privacy and various forms of regulatory compliance in play, wants and needs for security seem pretty well aligned.

2. There also was a time when ease of set-up and installation tended to be underrated. Not any more, however; people generally understand their great importance.

Categories: Other

Alert for ADF Security - JSF 2.0 Vulnerability in ADF 11g R2

Andrejus Baranovski - Sun, 2014-03-23 05:06
You must be concerned about your system security if you are running an ADF runtime based on ADF versions 11.1.2.1.0 - 11.1.2.4.0. These versions use JSF 2.0, with a known security vulnerability - Two Path Traversal Defects in Oracle's JSF2 Implementation. This vulnerability allows the full content of WEB-INF to be downloaded through any browser URL. There is a fix, but it is not applied automatically by the JDeveloper IDE when creating a new ADF application. To prevent WEB-INF content from being downloaded, you must set the javax.faces.RESOURCE_EXCLUDES parameter in web.xml - make sure to list all file extensions you want to prevent from being accessible through a URL.

By default, when the vulnerability fix is not applied, we can access WEB-INF content using a path similar to: http://host:port/appname/faces/javax.faces.resource.../WEB-INF/web.xml. Unless you want to allow your users to download the source code, make sure to apply the fix in web.xml manually:
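A minimal sketch of the kind of web.xml entry involved (the extension list below is only an assumption for illustration - adjust it to cover every file type you want to block, for example the WEB-INF artifacts shown in the steps below):

<context-param>
  <param-name>javax.faces.RESOURCE_EXCLUDES</param-name>
  <param-value>.class .jsp .jspx .properties .xhtml .groovy .xml .cpx .jpx .xcfg .jar</param-value>
</context-param>

With an entry like this in place, requests such as the javax.faces.resource.../WEB-INF/web.xml path above should be rejected instead of returning file content.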


My test case - VulnerabilityTestCase.zip (this sample comes with the vulnerability fix disabled - the default) - is implemented with JDeveloper 11.1.2.4.0. I will demonstrate how to reproduce the JSF 2.0 vulnerability with this version:


The test case consists of two basic applications, one of which is packaged as an ADF library:


The ADF library is imported into the main sample application:


Reproducing the vulnerability is pretty easy - run the main sample application and log in with the redsam/welcome1 user:


The default URL is generated in the browser and the first page is rendered - all good so far:


1. web.xml vulnerability

Remove the main page name and, after faces/, type javax.faces.resource.../WEB-INF/web.xml; you will access the web.xml content:


2. weblogic.xml vulnerability

Access content with javax.faces.resource.../WEB-INF/weblogic.xml:


3. Local ADF Task Flows vulnerability

Access files in WEB-INF using the Task Flow path and name:


4. ADF Library JAR vulnerability

All ADF Library JARs are packaged into the WEB-INF folder by default, which means we could download these JARs and get the entire code. You only need to type the JAR file name. It is possible to get JAR file names from the ADF BC configuration file for JARs imported into the ADF Model:


5. adfm.xml configuration file vulnerability

Here we can get a list of DataBinding files:


6. DataBindings.cpx file vulnerability

We have a list of DataBindings files from the previous step. Now we could open each DataBindings file and get a list of pages/fragments together with Page Definition mappings. We can read path information for ADF BC:


7. ADF BC vulnerability

Based on the ADF BC path information from the previous step, we could access the Model.jpx file and read information about ADF BC packages:


8. ADF BC configuration vulnerability

We could go further and download every ADF BC component - EO/VO/AM. From bc4j.xcfg we can read information about each AM configuration, the data source name, etc.:

Plan HASH_VALUE remains the same for the same Execution Plan, even if ROWS and COST change

Hemant K Chitale - Sun, 2014-03-23 02:41
Here is a simple demo that shows that the Plan Hash Value does not consider ROWS and COST but only the Execution Plan.  Thus, even with more rows added to a table, if the Execution Plan for a query remains the same, the Plan Hash Value is independent of the changing ROWS and COST.

SQL> -- create the table
SQL> create table branch_list
2 (country_code varchar2(3), branch_code number, branch_city varchar2(50));

Table created.

SQL>
SQL> -- create an index
SQL> create index branch_list_cntry_ndx
2 on branch_list(country_code);

Index created.

SQL>
SQL>
SQL>
SQL> -- populate it with 100 rows, one third being 'IN'
SQL> insert into branch_list
2 select decode(mod(rownum,3),0,'IN',1,'US',2,'US'), rownum, dbms_random.string('X',32)
3 from dual
4 connect by level < 101
5 /

100 rows created.

SQL>
SQL> -- gather statistics
SQL> exec dbms_stats.gather_table_stats('','BRANCH_LIST');

PL/SQL procedure successfully completed.

SQL>
SQL> -- get an execution plan
SQL> explain plan for
2 select branch_code, branch_city
3 from branch_list
4 where country_code = 'IN'
5 /

Explained.

SQL>
SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 513528032

-----------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-----------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 50 | 1950 | 2 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| BRANCH_LIST | 50 | 1950 | 2 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | BRANCH_LIST_CNTRY_NDX | 50 | | 1 (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("COUNTRY_CODE"='IN')

14 rows selected.

SQL>
SQL>
SQL> -- add another 400 rows, none of them being 'IN'
SQL> insert into branch_list
2 select decode(mod(rownum,6),0,'SG',1,'US',2,'US',3,'US',4,'AU',5,'UK'), rownum+100, dbms_random.string('X',32)
3 from dual
4 connect by level < 401
5 /

400 rows created.

SQL>
SQL> -- update statistics
SQL> exec dbms_stats.gather_table_stats('','BRANCH_LIST');

PL/SQL procedure successfully completed.

SQL>
SQL> -- get the execution plan again
SQL> explain plan for
2 select branch_code, branch_city
3 from branch_list
4 where country_code = 'IN'
5 /

Explained.

SQL>
SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 513528032

-----------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-----------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 33 | 1320 | 3 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| BRANCH_LIST | 33 | 1320 | 3 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | BRANCH_LIST_CNTRY_NDX | 33 | | 1 (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("COUNTRY_CODE"='IN')

14 rows selected.

SQL>
SQL> select column_name, histogram
2 from user_tab_columns
3 where table_name = 'BRANCH_LIST';

COLUMN_NAME HISTOGRAM
------------------------------ ---------------
COUNTRY_CODE FREQUENCY
BRANCH_CODE NONE
BRANCH_CITY NONE

SQL> select count(*) from user_tab_histograms
2 where table_name = 'BRANCH_LIST'
3 and column_name = 'COUNTRY_CODE'
4 /

COUNT(*)
----------
5

SQL>


After the addition of 400 rows to a 100-row table, the distribution of rows has changed. At the second Gather_Table_Stats call, Oracle has properly computed a Frequency Histogram on the COUNTRY_CODE column for the 5 countries ('IN','US','SG','AU','UK').  The estimate for COUNTRY_CODE='IN' is now more accurate.
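As a side note, here is a sketch of how the individual histogram buckets could be inspected (ENDPOINT_ACTUAL_VALUE may be NULL in some versions, in which case ENDPOINT_VALUE holds an encoded form of the value):

-- inspect the frequency histogram buckets on COUNTRY_CODE
select endpoint_number, endpoint_actual_value
from   user_tab_histograms
where  table_name  = 'BRANCH_LIST'
and    column_name = 'COUNTRY_CODE'
order  by endpoint_number
/

With a FREQUENCY histogram, the difference between consecutive ENDPOINT_NUMBER values reflects the number of sampled rows for each COUNTRY_CODE.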

However, what I want to demonstrate here is that although "ROWS" (and "BYTES" for that many ROWS) and "COST" have changed in the new Execution Plan, the PLAN HASH VALUE ("513528032") remains the same.  Thus, the PLAN HASH VALUE is independent of changes to the ROWS/BYTES and COST.  The Execution Plan, per se, hasn't changed.
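As a further cross-check (only a sketch, assuming the query has actually been executed and its cursor is still cached in the shared pool), the same hash value can be read from V$SQL before and after the statistics are re-gathered:

-- compare the cached cursor's plan hash value with the EXPLAIN PLAN output
select sql_id, child_number, plan_hash_value
from   v$sql
where  sql_text like 'select branch_code, branch_city%'
order  by child_number
/

If the plan itself has not changed, PLAN_HASH_VALUE stays at 513528032 even though the statistics reload may have triggered a new hard parse with the new ROWS and COST estimates.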
.
.
.



Categories: DBA Blogs

Why VCs Usually Get Ed Tech Wrong

Michael Feldstein - Sat, 2014-03-22 10:26

I don’t often get to write these words, but there is a new must-read blog post on educational technology by a venture capitalist. Rethink Education’s Matt Greenfield argues that there is no generalized bubble in ed tech investment; rather, the problem is that the venture community has a habit of systematically betting on the wrong horses.

It’s worth noting that Matt is not your typical VC. For starters, he doesn’t live in the Valley echo chamber. Perhaps more importantly, he has a background as an academic. He has a PhD in English from Yale, taught at Bowdoin and CUNY, and taught graduate classes in literature to teachers from the New York City public schools. As such, he has an unusual perspective for an ed tech venture capitalist.

Matt uses digital textbook platforms as his example of the problem he wants to highlight:

What type of ed tech have venture capitalists approached with the greatest enthusiasm and the largest piles of cash? The answer is new textbook solutions, including digital textbook platforms like Kno and renters of physical textbooks like Chegg, which just went public. Venture capitalists have put over $500 million into just the top ten companies in this sector….

I talked to the CEO of an academic bookstore company recently. How many digital textbook platforms would you guess that his stores handle? Five? No, more than that. Ten? Nope, guess again. Twenty? Still too low. The answer is forty-two different digital textbook platforms. Forty-two. Now try to imagine each of those textbook platform companies pitching a book store. Or an author. Or a publisher. Or a venture capitalist. “Choose my platform, choose me! Our platform is totally different!” How many of those platforms does the world really need? How many of those platforms can make money? What do you think the meaningful differences between those forty-two platforms might be?…Meanwhile, even the century-old publishing incumbents are moving away from book-like things to adaptive courseware: learning objects that simply will not fit into the wrappers being built by companies like Kno.

So there is a bubble in venture funding for education ventures that are obsolete at birth. Meanwhile, there are large opportunities in areas where few venture capitalists will invest.

This is a fascinating case study. Why would VCs, with their much vaunted drive for innovation, be so taken with the idea of rebuilding an aging instructional modality (i.e., the textbook) on a digital platform—particularly when, as Matt spells out in detail in his blog post, it’s clearly a bad bet for a lot of reasons? It’s worth unpacking this a bit to get at the underlying pathology.

Disrupting Innovation

A big part of the problem is the Valley’s obsession with disruption. These days, “disruptive” and “innovation” seem to always come together in the same sentence. It’s a bit like “big galoot.” Theoretically, “big” is a modifier for “galoot.” But you never hear people talking about small galoots, average sized galoots, or galoots of indeterminate size. In modern common usage, galoots are always big. “Big galoot” has pretty much become an open form compound word, like “post office” or “real estate.” But “disruptive innovation” is not a compound word. Disruptive innovation is a particular kind of innovation, and a fairly narrow kind at that. Specifically, disruptive innovation is a phenomenon in which a new market entrant can overtake a leader in an established market by offering cheaper and simpler solutions. It’s important to remember that some of Clayton Christensen’s seminal examples of disruptive innovations were steam shovels and disk drives. This is not the same kind of innovation that produced the iPhone. It’s essentially about identifying the slow fat rich kid and taking his lunch money. To be fair, it’s not that inherently mean-spirited, because presumably one takes the rich kid’s lunch money (or market share) by providing solutions that consumers prefer. But the point is that disruptive innovation is generally not about solving new problems with brilliant out-of-the-box ideas. It’s primarily about solving old problems better because the old solutions have gotten overbuilt.

Who are the slow fat rich kids in the education markets? The textbook vendors. They make tons of money, they are clearly dysfunctional, and they are having trouble adjusting to change. It should be easy to take their lunch money, the theory goes. And since nobody really likes textbook vendors, you get to feel like a hero. Plus, it shouldn’t be that hard because disruption. Disruptive innovation, valid though it may be as a theory for explaining how established market leaders get upended, also encourages a certain intellectual laziness if you start to think that a disruptive innovation is like a big galoot. In that worldview, all you have to do, in any case, ever, is deliver a simpler or cheaper solution, and you win. That’s what Chegg was all about. The used book market was eating into the textbook publishers’ market; what if we could make buying and selling used textbooks easier? Disruptive innovation!

Ron Paul and Mrs. Paul Capitalism

Another reason that Chegg was attractive to VCs is that the used textbook market is targeted directly at students and doesn’t require any involvement with faculty unions, departmental committees, or (heaven forbid) governmental regulation. There tends to be a lot of libertarian chest thumping around this approach in the Valley. In Matt’s post, he quotes noted investor Marc Andreessen as saying,

I wouldn’t want to back a business that’s selling to public schools or characterized by public financing, unions, or government-run institutions. Those institutions are incredibly hostile to change.

The narrative here is that change equals innovation and therefore no self-respecting change agent (like a VC or a startup) would want to be held back by any institution that makes change slower or more difficult. But the truth is more mundane and less idealistic. The truth is that it’s just harder to run a company that sells to institutions with complex purchasing processes than it is to run a company that sells to individual consumers. Famed investor Peter Lynch once advised, “Invest in businesses any idiot could run, because someday one will.” Under this theory, it is safer to invest in a company that sells fish sticks than it is to put your money in one that sells airplane navigation and safety devices, which requires more skill to run well because the product has to be shepherded through an FAA approval process. This is a particularly useful maxim when you invest in startups. While I have met very few ed tech entrepreneurs who are idiots in the shouldn’t-handle-sharp-objects sense, I have met many who are idiots in the I-just-hit-puberty sense. They tend to be extremely smart young people who nevertheless lack critical experience for certain kinds of business-building. Finding a 21-year-old who understands how to sell to a university system with a state-mandated procurement process and no single clear decision-maker is hard. Finding a 21-year-old who understands how to sell to 19-year-olds seems easier.

A Toxic Combination

The combination of the obsession with disruption and the phobia of institutions is a particularly bad one for the education markets. When I say that, I don’t mean that it is harmful to education (although that is probably also true). What I mean is that it leads to consistently unprofitable investment decisions. For Exhibit A, let’s return to the case of Boundless, which I wrote several posts about a while ago. Now, despite what some in the industry think, I do not particularly enjoy overtly bashing companies, even when I think they deserve it. But since my diplomacy in those posts appears to have been lost on at least some who read them, I shall abandon all subtlety here:

I think that Boundless’ entire business is specifically designed to attract investment by appealing to investors’ prejudices. (Who’s the slow fat rich kid now?)

Update: Upon further consideration, I softened the language of the above paragraph a bit. My point is not that I think Boundless deliberately deceived anyone but rather that I think they designed their company around ideas they thought investors would like, instead of around a sound product concept.

This is a company whose pitch was that they are Cheggier than Chegg. “It’s like Chegg, but with no warehouses! We’re disrupting the disrupters!” They came up with a strategy that makes a show of punching the slow fat rich kid—the textbook publisher, not the VC—in the face. Yay disruption! They also made as conspicuous an end run around teachers and institutions as possible. They didn’t just sell a used version of the textbook that the teacher required (which the teacher probably wouldn’t care about). Nor did they attempt to persuade teachers that they offer less expensive but high quality alternatives the way, say, Lumen Learning does. Instead, they marketed what amounts to the CliffsNotes of the textbook directly to the students. Take that, you change-hostile, union-joining classroom bureaucrats! I don’t think it would be possible to come up with a business plan that is less subtle about playing to VC prejudices. It’s like the Sand Hill Road equivalent of an email from a Nigerian prince.

But what is their business, really? Boundless basically sells Rolex knockoffs in Times Square. OK, I’m not being fair. Most textbooks are hardly Rolexes. Really, Boundless is selling Timex knockoffs in Times Square. There is no innovation in this disruption. In a market that is overrun by e-book platforms and in which downward price pressures are causing low-cost options to proliferate, why in the world would any rational investor think that Boundless is a good bet? And yet, the company has received $9.7 million in venture funding.

A Better Idea

A while back, I made the following observation in the context of Udacity’s fall from grace:

Silicon Valley can’t disrupt education because, for the most part, education is not a product category. “Education” is the term we apply to a loosely defined and poorly differentiated set of public and private goods (where “goods” is meant in the broadest sense, and not just something you can put into your Amazon shopping cart). Consider the fact that John Adams included the right to an education in the constitution for the Commonwealth of Massachusetts. The shallow lesson to be learned from this is that education is something so integral to the idea of democracy that it never will and never should be treated exclusively as a product to be sold on the private markets. The deeper lesson is that the idea of education—its value, even its very definition—is inextricably tangled up in deeper cultural notions and values that will be impossible to tease out with A/B testing and other engineering tools. This is why education systems in different countries are so different from each other. “Oh yes,” you may reply, “Of course I’m aware that education in India and China are very different from how it is here.” But I’m not talking about India and China. I’m talking about Germany. I’m talking about Italy. I’m talking about the UK. All these countries have educational systems that are very substantially different from the U.S., and different from each other as well. These are often not differences that a product team can get around through “localization.” They are fundamental differences that require substantially different solutions. There is no “education.” There are only educations.

I just don’t see disruptive innovation as a good guide for investment in education. And for similar reasons, I think the notion of avoiding institutional entanglements is pretty much hopeless, since the very idea of education is inextricably bound up in those institutions. Disruptive innovation and direct-to-consumer are both investment strategies that are designed to avoid complexities that lead to investment risk. But in education, complexity is unavoidable, which means strategies that attempt to avoid it usually result in risk-increasing ignorance rather than risk-avoiding safety. And as Warren Buffett said, “When you combine ignorance and leverage, you get some pretty interesting results.”

Buffett also said, “I am a better investor because I am a businessman, and a better businessman because I am an investor.” Call me old-fashioned, but I believe that if you want to find a good ed tech investment, you have to understand what the company does. In the real world, not in some Ayn Randian fantasy where technology unleashes the power of the individual. How will this product or service impact real students? Who would want to buy it and how would it help them? Very, very often, that will mean dealing with companies that sell to institutions or deal with institutional politics, because that’s how education works today in America and around the globe. If you want to find a good business to invest in, then think like a consumer. Better yet, think like a parent. Ask yourself, “Would my kid benefit from this? Would I like to see her have this? Would I, as a parent, take steps to make sure she can have this?” These are often businesses that can’t be run by any idiot, which makes them risky. But they are less risky than giving your money to a Nigerian prince.

The post Why VCs Usually Get Ed Tech Wrong appeared first on e-Literate.

Moving Forward

Floyd Teter - Sat, 2014-03-22 09:55
Seems to be quite a bit of buzz in the enterprise software user community these days about moving forward.  Budgets have loosened up, users want better experiences, in-house IT providers want to reduce maintenance and infrastructure investments, C-level officers want better and more timely information on strategic initiatives, and everybody wants to be agile (even though there are multiple visions of agile, we all want it).  So it seems the big question lately is "how do we move forward"?

Most of my posts lately sound like "blasts from the past"...you can probably add this post to that category.  I'd recommend four things you can begin with right now in preparing to move forward:

1.  Move To The Latest Applications Release
If you're not on the latest release of PeopleSoft, Campus Solutions, E-Business Suite, or whatever packaged products you're using, get there.  Doing so will ensure that you have the best platform to move forward from, in addition to making most transitions substantially less complicated.

2.  Prepare A Business Roadmap For Moving Forward
Another way to state this is to develop a description, in well-defined behavioral terms, of where you want your enterprise to be.  Note that this is not a technical roadmap, but more of a business-oriented roadmap.  Some considerations for that business roadmap may include:

3.  Inventory Your Enterprise Assets
Understand what assets you have on-hand that may help or hinder your way forward.  Were it me, I'd want five categories of existing enterprise assets:
  • Business processes
  • Applications (custom and packaged)
  • Information (including both what we have and what we share with whom - they're different!)
  • Projects
  • Customizations
4.  Reconsider your customizations
Customizations increase the cost of moving forward and extend the time required. That customizations list we built in step 3?  Why do we have those customizations?  Could we replace any of them with out-of-the-box functionality from shrink-wrapped applications?  What about an extension to a packaged application?  Do we still need the customization at all?  Should we rebuild the customization on a new technology platform?

So, there ya go.  Four things you can do today.  No consulting services or special tools required.  Just serious commitment on your part: get to the latest release of whatever you're using, describe your desired business end-state, catalog your enterprise assets and reconsider your customizations.  The discussion doesn't change, regardless of the tech platform you're currently using.

We'll talk soon about what comes next.  In the meantime, share your thoughts in the comments...and get busy!


The Wolves of Midwinter (The Wolf Gift Chronicles)

Tim Hall - Sat, 2014-03-22 07:20

The Wolves of Midwinter is the second book in The Wolf Gift Chronicles by Anne Rice.

After my enthusiasm for The Wolf Gift, I jumped straight into The Wolves of Midwinter, then kind-of got distracted and took about 3 months to finish it. The long breaks during reading this book made it feel more disjointed than it probably would have done if I had read it in a shorter time frame. The book was divided into several distinct story lines, which in some ways made it easier to take breaks. With the exception of a few scenes of werewolf-on-werewolf love action, which I could have lived without, it was a pretty cool book.

I’m looking forward to the next one!

Cheers

Tim…

The Wolves of Midwinter (The Wolf Gift Chronicles) was first posted on March 22, 2014 at 2:20 pm.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

BOGOF from Packt

Hemant K Chitale - Sat, 2014-03-22 07:14
Packt Publishing is celebrating the publication of its 2000th book with a Buy One Get One Free offer.
.
.
.
Categories: DBA Blogs

Facebook Groups and Lists

Tim Hall - Sat, 2014-03-22 06:54

For quite some time I’ve had a specific policy on how I use social networks.

  • Google+ : I have a regular G+ profile, which is public. I post whatever takes my fancy here, including Oracle and technology stuff. Anything posted on this profile is bounced across to Twitter using ManageFlitter.
  • Google+ (ORACLE-BASE.com) : I have a G+ page that is specific for Oracle and technology related links. I don’t post so much random stuff here.
  • Twitter (@oraclebase) : The usual junk you get on Twitter.
  • Facebook (ORACLE-BASE.com) : I have a Facebook page for those people who prefer to follow me on Facebook. All my tweets get forwarded to this Facebook page.

In addition to those I’ve had a regular Facebook profile for a long time, but I’ve been very specific about its use. I only accept first-life friends and family. With all the other ways of connecting to me, keeping one for myself didn’t seem selfish. Recently, I’ve been playing around with Facebook Groups and Facebook Lists in an attempt to allow connections to more people, but keep groups of people separated from each other. I don’t want to bore my friends with Oracle stuff and I don’t want to bore the Oracle community with tales of my crocodile wrestling.

I created some Facebook Groups and started accepting some Oracle-related people as friends and assigned them to a group called “Oracle”. I figured this was like a Google+ Circle; it’s not. For a start, everyone in the group can see everyone else in the group and they can see what the group is called, so don’t call it “People I Hate!”. :) There are a variety of security options, but none of them really did what I was looking for. I pretty quickly removed the groups and wrote to everyone saying it was not a snub. I just didn’t want to be the leader of some new communities. :) If you are into building communities in Facebook, groups seem like a pretty good idea. You can be a dictator, or let other people in the group join in the administration.

The next thing I tried was Facebook Lists. This is a lot more like Google+ Circles. Hover over the “Friends” section on the left hand side of the page and a “More” link appears. Click on the link and you can see all the lists you’ve already got, which include smart lists created automatically by Facebook. You can create new lists and manage existing lists from here. When you accept a friend request, you can select the relevant list for the contact. There are some standard lists that come in handy, like “Restricted” and “Limited Profile”. If I’ve not actually met someone before, they tend to get put on one of these lists. This is not so much to hide stuff I post, but it is to provide some layer of protection to my other contacts. I don’t see why something one of my non-Oracle friends posts should be visible to someone I’ve never met. OK, that’s the price you pay for getting involved in social networks, but I don’t want it to be my fault someone else’s posts become public. When you write a status update, you can select which list it is visible to. Alternatively, you can click on the list of interest, then post the status update.

I’m still not sure if altering my policy on Facebook usage was the correct thing to do. I also reserve the right to unfriend everyone and revert to my previous policy at any time. :)

Cheers

Tim…


Facebook Groups and Lists was first posted on March 22, 2014 at 1:54 pm.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

Oracle APEX Cookbook : Second Edition

Surachart Opun - Sat, 2014-03-22 05:18
Oracle Application Express is a great tool for developing web applications with SQL and PL/SQL. Users can develop and deploy web-based applications for desktops and mobile devices. Speaking of books about APEX, I'd mention a book titled Oracle APEX Cookbook - Second Edition by Michel Van Zoest and Marcel Van Der Plas. I had the chance to be a technical reviewer for this book, and I found it to be an interesting book about Oracle APEX. It was written to cover Oracle APEX 4.0 - 4.2 across 14 chapters.
  • Explore APEX to build applications with the latest techniques in AJAX and Javascript using features such as plugins and dynamic actions
  • With HTML5 and CSS3 support, make the most out of the possibilities that APEX has to offer
  • Part of Packt's Cookbook series: Each recipe is a carefully organized sequence of instructions to complete the task as efficiently as possible

The examples in the book are very useful: readers can follow each topic (example) and practice it. The book suits people who are new to Oracle APEX and want to get started, while people who already know and work with Oracle APEX can use it as a reference and learn something new in APEX. So, I think it's a good idea to keep this book on your APEX shelf. I still believe a book is an easy way of reading about... and learning... APEX.
Written By: Surachart Opun http://surachartopun.com
Categories: DBA Blogs