
Feed aggregator

Delivering the Moments of Engagement Across the Enterprise

WebCenter Team - Tue, 2014-04-15 07:00


A Five-Step Roadmap for Mobilizing a Digital Business

Geoffrey Bock, Principal, Bock & Company
Michael Snow, Principal Product Marketing Director, Oracle WebCenter

Over the past few years, we have been fascinated by the impact of mobility on business. As employees, partners, and customers, we now carry powerful devices in our pockets and handbags. Our smartphones and tablets are always with us, always on, and always collecting information. We are no longer tethered to fixed workplaces; we can frequently find essential information with just a few taps and swipes. More and more, this content is keyed to our current context. Moreover, we are often immersed in an array of sensors that track our actions, personalize the results, and assist us in innumerable ways. Our business and social worlds are in transition. This is not the enterprise computing environment of the 1990s or even the last decade.

Yet while tracking trends in the mobile industry, we have encountered a repeated refrain from many technology and business leaders. Sure, mobile apps are neat, they say. But how do you justify the investments required? What are the business benefits of enterprise mobility? When should companies harness the incredible opportunities of the mobile revolution?

To answer these questions, we think that it is important to recognize the steps along the mobile journey. Certainly companies have been investing in their enterprise infrastructure for many years. In fact, enterprise-wide mobility is just the latest stage in the development of digital business initiatives.

What is at stake is not simply introducing nifty mobile apps as access points to existing enterprise applications. The challenge is weaving innovative digital technologies (including mobile) into the fabric (and daily operations) of an organization. Companies become digital businesses by adapting and transforming essential enterprise activities. As they mobilize key business experiences, they drive digital capabilities deeply into their application infrastructure.

Please join us for a conversation about how Oracle customers are making this mobile journey, and about our five-step roadmap for delivering moments of engagement across the enterprise.

Editor's note: This webcast is now available On-Demand.



Creating some Pivotal Cloud Foundry (PCF) PHD services

Pas Apicella - Tue, 2014-04-15 06:52
After installing the PHD add-on for Pivotal Cloud Foundry 1.1, I quickly created some development services for PHD using the CLI, as shown below.

[Tue Apr 15 22:40:08 papicella@:~/vmware/pivotal/products/cloud-foundry ] $ cf create-service p-hd-hawq-cf free dev-hawq
Creating service dev-hawq in org pivotal / space development as pas...
OK
[Tue Apr 15 22:42:31 papicella@:~/vmware/pivotal/products/cloud-foundry ] $ cf create-service p-hd-hbase-cf free dev-hbase
Creating service dev-hbase in org pivotal / space development as pas...
OK
[Tue Apr 15 22:44:10 papicella@:~/vmware/pivotal/products/cloud-foundry ] $ cf create-service p-hd-hive-cf free dev-hive
Creating service dev-hive in org pivotal / space development as pas...
OK
[Tue Apr 15 22:44:22 papicella@:~/vmware/pivotal/products/cloud-foundry ] $ cf create-service p-hd-yarn-cf free dev-yarn
Creating service dev-yarn in org pivotal / space development as pas...
OK

Finally, using the web console to browse the services in the "Development" space:
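Once created, a service would typically be bound to an application. As a minimal sketch (the app name my-app is hypothetical), the CLI equivalent of the web console check plus a bind looks like this:

$ cf services                       # list the services in the current space
$ cf bind-service my-app dev-hive   # bind the PHD Hive service to an app
$ cf restage my-app                 # restage so the app picks up the new VCAP_SERVICES credentials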


Categories: Fusion Middleware

OBIEE Security: Usage Tracking, Logging and Auditing for SYSLOG or Splunk

Enabling OBIEE Usage Tracking and logging is a key part of almost any security strategy. More information on these topics can be found in the whitepaper references below. It is very easy to set up logging such that a centralized logging solution such as SYSLOG or Splunk can receive OBIEE activity.

Usage Tracking

Knowing who ran what report, when and with what parameters is helpful not only for performance tuning but also for security. OBIEE 11g provides a sample RPD with a Usage Tracking subject area. The subject area will report on configuration and changes to the RPD as well as configuration changes to Enterprise Manager.  To start using the functionality, one of the first steps is to copy the components from the sample RPD to the production RPD.

Usage tracking can also be redirected to log files. The STORAGE_DIRECTORY setting is in the NQSConfig.INI file. This can be set if OBIEE usage logs are being sent, for example, to a centralized SYSLOG database.
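As a reference point, here is a minimal sketch of the Usage Tracking section of NQSConfig.INI (the storage directory path is hypothetical; parameter lines in this file are terminated with semicolons):

[ USAGE_TRACKING ]
ENABLE = YES;
# NO = write usage tracking to flat files rather than direct-inserting into a database table
DIRECT_INSERT = NO;
# Hypothetical path; point this at a directory your log collector watches
STORAGE_DIRECTORY = "/u01/obiee/usage_logs";
CHECKPOINT_INTERVAL_MINUTES = 5;
FILE_ROLLOVER_INTERVAL_MINUTES = 30;
CODE_PAGE = "ANSI";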

The Usage Tracking sample RPD can be found here:

{OBIEE_11G_Instance}/bifoundation/OracleBIServerComponent/coreapplication_obis1/sample/usagetracking

Logging

OBIEE offers standard functionality for application level logging.  This logging should be considered as one component of the overall logging approach and strategy. The operating system and database(s) supporting OBIEE should be using a centralized logging solution (most likely syslog) and it is also possible to parse the OBIEE logs for syslog consolidation.
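As one way to do that consolidation – a sketch using rsyslog's imfile module (v7-style syntax), with a hypothetical log path and collector host – the BI Server log can be tailed and forwarded to a central syslog server:

# /etc/rsyslog.d/obiee.conf -- tail nqserver.log and forward it to a central collector
module(load="imfile")
input(type="imfile"
      File="/u01/obiee/instances/instance1/diagnostics/logs/OracleBIServerComponent/coreapplication_obis1/nqserver.log"
      Tag="obiee-biserver:"
      Severity="info"
      Facility="local6")
local6.* @@syslog.example.com:514    # @@ = TCP; a single @ would use UDP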

For further information on OBIEE logging refer to the Oracle Fusion Middleware System Administrator’s Guide for OBIEE 11g (part number E10541-02), chapter eight.

To configure OBIEE logging, the BI Admin client tool is used to set the overall default log level for the RPD as well as identify specific users to be logged. The log level can differ among users. No logging is possible for a role.

Logging levels are set between zero and seven:

Level 0 - No logging

Level 1 - Logs the SQL statement issued from the client application

Level 2 - All of Level 1, plus OBIEE infrastructure information and query statistics

Level 3 - All of Level 2, plus cache information

Level 4 - All of Level 3, plus query plan execution

Level 5 - All of Level 4, plus intermediate row counts

Levels 6 and 7 - Not used
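Log levels are normally assigned per user in the repository with the BI Admin client tool, but for ad-hoc diagnostics the session variable LOGLEVEL can also be set for a single analysis, as a request-variable prefix in the Advanced tab. A sketch, with a hypothetical subject area and columns:

SET VARIABLE LOGLEVEL = 2;
SELECT "Time"."Year", "Sales"."Revenue" FROM "Sales Subject Area";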

 

OBIEE log files

BI Component | Log File | Log File Directory
OPMN | debug.log | ORACLE_INSTANCE/diagnostics/logs/OPMN/opmn
OPMN | opmn.log | ORACLE_INSTANCE/diagnostics/logs/OPMN/opmn
BI Server | nqserver.log | ORACLE_INSTANCE/diagnostics/logs/OracleBIServerComponent/coreapplication_obis1
BI Server Query (Oracle BI Server query log) | nqquery-<n>.log, where <n> is the date and timestamp, for example nqquery-20140109-2135.log | ORACLE_INSTANCE/diagnostics/logs/OracleBIServerComponent/coreapplication
BI Cluster Controller | nqcluster.log | ORACLE_INSTANCE/diagnostics/logs/OracleBIClusterControllerComponent/coreapplication_obiccs1
Oracle BI Scheduler | nqscheduler.log | ORACLE_INSTANCE/diagnostics/logs/OracleBISchedulerComponent/coreapplication_obisch1
Usage Tracking | NQAcct.yyyymmdd.hhmmss.log | Determined by the STORAGE_DIRECTORY parameter in the Usage Tracking section of the NQSConfig.INI file
Presentation Services | sawlog*.log (for example, sawlog0.log) | ORACLE_INSTANCE/diagnostics/logs/OracleBIPresentationServicesComponent/coreapplication_obips1
BI JavaHost | jh.log | ORACLE_INSTANCE/diagnostics/logs/OracleBIJavaHostComponent/coreapplication_objh1

The configuration of the Presentation Services log (e.g. the writer setting to output to syslog or the Windows event log) is set in instanceconfig.xml.

 

If you have questions, please contact us at info@integrigy.com.

 -Michael Miller, CISSP-ISSMP

References

 

Tags: Oracle Business Intelligence (OBIEE), Auditor, IT Security
Categories: APPS Blogs, Security Blogs

WordPress 3.8.3 – Auto Update

Tim Hall - Tue, 2014-04-15 01:53

WordPress 3.8.3 came out yesterday. It’s a small maintenance release, with the downloads and changelog in the usual places. For many people, this update will happen automatically and they’ll just receive an email to say it has been applied.

I’m still not sure what to make of the auto-update feature of WordPress. Part of me likes it and part of me is a bit irritated by it. For the lazy folks out there, I think it is a really good idea, but for those who are on their blog admin screens regularly it might seem like a source of confusion. I currently self-host 5 WordPress blogs and the auto-update feature seems a little erratic. One blog always auto-updates as soon as a new release comes out. A couple sometimes do. I don’t think this blog has ever auto-updated…
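For those who want the behavior to be deterministic rather than erratic, core auto-updates can be pinned down explicitly in wp-config.php – a sketch using the constant WordPress has supported since 3.7:

// In wp-config.php: control core auto-updates explicitly.
// true = all core updates, false = none, 'minor' = maintenance/security releases only (the default).
define( 'WP_AUTO_UPDATE_CORE', 'minor' );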

I’d be interested to hear if other self-hosting WordPress bloggers have had a similar experience…

Cheers

Tim…


Database experts try to mitigate the effects of the Heartbleed bug

Chris Foot - Tue, 2014-04-15 01:44

Recently, the Heartbleed bug has sent shockwaves through the global online economy. The personal information of online shoppers, social media users and business professionals is at risk, and database administration providers are doing all they can to either prevent damage from occurring or mitigate the detrimental effects of what has already occurred.

What it does and the dangers involved
According to Heartbleed.com, the vulnerability poses a serious threat to confidential information, as it compromises the protection that OpenSSL's implementation of Secure Sockets Layer/Transport Layer Security (SSL/TLS) provides for Internet-based communications. The bug allows anyone on the Web – particularly cybercriminals – to read the memory of systems protected by affected versions of the OpenSSL software, allowing attackers to monitor a wide array of transactions between individuals, governments and enterprises, among numerous other connections.

Jeremy Kirk, a contributor to PCWorld, noted that researchers at CloudFlare, a San Francisco-based security company, found that hackers could steal a server's SSL/TLS private key and use it to create an encrypted avenue between users and websites, essentially posing as legitimate webpages in order to decrypt traffic passing between a computer and a server. For online retailers lacking adequate database support services, it could mean the divulgence of consumer credit card numbers. If customers no longer feel safe purchasing products online, it could potentially result in the bankruptcy of a merchandiser.

Think mobile devices are safe? Think again 
Now more than ever, database experts are making concentrated efforts to effectively monitor communications between mobile devices and business information. As the Heartbleed bug can compromise connections between PCs and websites, the same risk applies to mobile applications bridging the distance between smartphones and Facebook pages. CNN reported that technology industry leaders Cisco and Juniper claimed that someone can potentially hack into a person's phone and log the details of his or her conversations. Sam Bowling, a senior infrastructure engineer at web hosting service SingleHop, outlined several devices that could be compromised:

  • Cisco revealed that select versions of the company's WebEx service are vulnerable, posing a threat to corporate leaders in a video conference. 
  • If work phones aren't operating behind a defensive firewall, a malicious entity could use Heartbleed to access the devices' memory logs. 
  • Smartphone users accessing business files from iPhones and Android devices may be exposed, as hackers can view whatever information a person obtained through select applications. 

Upgraded deployments of OpenSSL are patching vulnerable avenues, but remote database services are still exercising assiduous surveillance to ensure that client information remains confidential.
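As a first-pass check – a sketch, to be adapted per system – administrators can confirm which OpenSSL build a server is running, since Heartbleed affects OpenSSL 1.0.1 through 1.0.1f and was fixed in 1.0.1g:

$ openssl version -a    # affected: 1.0.1 through 1.0.1f; fixed in 1.0.1g
                        # note: distributions may backport the fix without changing the version string,
                        # so also check the package changelog (e.g. rpm -q --changelog openssl | grep -i CVE-2014-0160)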

Oracle TNS_ADMIN issues due to bad environment settings

Yann Neuhaus - Mon, 2014-04-14 18:11

Recently, I faced a TNS resolution problem at a customer. The cause was a bad environment setting: the customer called the service desk because of a DBLINK pointing to the wrong database.

The users were supposed to be redirected to a development database, but the DBLINK was redirecting to a validation database instead. The particularity of the environment is that the development and validation databases run on the same server, but in different Oracle homes, each home having its own tnsnames.ora. Both tnsnames.ora files contain common alias names, but pointing to different databases. Not exactly best practice, but this is not the topic here.

The problem started with some difficulty reproducing the case: our service desk was not able to reproduce the situation until it understood that the customer was accessing the database remotely via a development tool (through the listener), while we were connecting locally on the server.

Let me present the case with my environment.

First, this is the database link concerned by the issue:

 

SQL> select * from dba_db_links;
OWNER      DB_LINK              USERNAME                       HOST       CREATED
---------- -------------------- ------------------------------ ---------- ---------
PUBLIC     DBLINK               DBLINK                         MYDB       21-MAR-14

 

And this is the output when we try to display the instance name through the DBLINK, when connected locally:

 

SQL> select instance_name from v$instance@DBLINK;
INSTANCE_NAME
----------------
DB2

 

The user is redirected to the remote database, as expected. Now, let's see what happens when connecting through the SQL*Net layer:

 

[oracle@srvora01 ~]$ sqlplus system@DB1
SQL*Plus: Release 11.2.0.3.0 Production on Mon Mar 24 10:07:45 2014
Copyright (c) 1982, 2011, Oracle.  All rights reserved.
 
Enter password:
 
Connected to:
 
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
 
SQL> select instance_name from v$instance@DBLINK;
INSTANCE_NAME
----------------
DB1

 

Here we can see that the user is not redirected to the same database (here, for demonstration purposes, the link resolves to the database itself).

The first thing to check is the TNS_ADMIN variable, if it exists:

 

[oracle@srvora01 ~]$ echo $TNS_ADMIN
/u00/app/oracle/product/11.2.0/db_3_0/network/admin

 

There is the content of the tnsnames.ora file on that location:

 

[oracle@srvora01 ~]$ cat /u00/app/oracle/product/11.2.0/db_3_0/network/admin/tnsnames.ora
DB1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = tcp)(HOST = srvora01)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = DB1)
    )
  )
MYDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = tcp)(HOST = srvora01)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = DB2)
    )
  )

 

Clearly, we have a problem with the TNS resolution. The local connection resolves the MYDB alias correctly, while the remote connection resolves a different database with the same alias. In this case, there are two possible explanations:

  • The tnsnames.ora is not well configured: this is not the case, as you can see above
  • Another tnsnames.ora file exists somewhere on the server and is used by remote connections

To confirm that the second hypothesis is the correct one, we can use the strace tool:

 

SQL> set define #
SQL> select spid from v$process p join v$session s on p.addr=s.paddr and s.sid=sys_context('userenv','sid');
SPID
------------------------
5578

 

SQL>  host strace -e trace=open -p #unix_pid & echo $! > .tmp.pid
Enter value for unix_pid: 5578
SQL> Process 5578 attached - interrupt to quit

 

SQL> select instance_name from v$instance @ DBLINK;
open("/u00/app/oracle/product/11.2.0/db_3_0/network/admin/tnsnames.ora", O_RDONLY) = 8
open("/etc/host.conf", O_RDONLY)        = 8
open("/etc/resolv.conf", O_RDONLY)      = 8
open("/etc/hosts", O_RDONLY|O_CLOEXEC)  = 8
open("/etc/hosts", O_RDONLY|O_CLOEXEC)  = 8
open("/etc/hosts", O_RDONLY|O_CLOEXEC)  = 8
open("/etc/passwd", O_RDONLY|O_CLOEXEC) = 10
open("/etc/hosts", O_RDONLY|O_CLOEXEC)  = 10
open("/etc/hostid", O_RDONLY)           = -1 ENOENT (No such file or directory)
open("/etc/hosts", O_RDONLY|O_CLOEXEC)  = 10
INSTANCE_NAME
----------------
DB2

 

The DBLINK is resolved using the file /u00/app/oracle/product/11.2.0/db_3_0/network/admin/tnsnames.ora.

Now, when connected remotely:

 

SQL> set define #
SQL> select spid from v$process p join v$session s on p.addr=s.paddr and s.sid=sys_context('userenv','sid');
SPID
------------------------
6838

 

SQL> host strace -e trace=open -p #unix_pid & echo $! > .tmp.pid
Enter value for unix_pid: 6838
SQL> Process 6838 attached - interrupt to quit

 

SQL> select instance_name from v$instance@DBLINK;
open("/u00/app/oracle/network/admin/tnsnames.ora", O_RDONLY) = 8
open("/etc/host.conf", O_RDONLY)        = 8
open("/etc/resolv.conf", O_RDONLY)      = 8
open("/etc/hosts", O_RDONLY|O_CLOEXEC)  = 8
open("/etc/hosts", O_RDONLY|O_CLOEXEC)  = 8
open("/etc/hosts", O_RDONLY|O_CLOEXEC)  = 8
open("/etc/passwd", O_RDONLY|O_CLOEXEC) = 9
open("/etc/hosts", O_RDONLY|O_CLOEXEC)  = 9
open("/etc/hostid", O_RDONLY)           = -1 ENOENT (No such file or directory)
open("/etc/hosts", O_RDONLY|O_CLOEXEC)  = 9
INSTANCE_NAME
----------------
DB1

 

Here the DBLINK is resolved with the file /u00/app/oracle/network/admin/tnsnames.ora.

 

Two different tnsnames.ora files are used according to the connection method! If we query the content of the second tnsnames.ora, we have an explanation for our problem:

 

[oracle@srvora01 ~]$ cat /u00/app/oracle/network/admin/tnsnames.ora
MYDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = tcp)(HOST = srvora01)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = DB1)
    )
  )

 

It is not clearly documented by Oracle, but the database session can inherit the environment variables in three different ways:

  • When you connect locally to the server (no SQL*Net, no listener), the Oracle session inherits the client environment
  • When you connect remotely to a service statically registered on the listener, the Oracle session inherits the environment which started the listener
  • When you connect remotely to a service dynamically registered on the listener, the Oracle session inherits the environment which started the database

In our case, the database was restarted with the wrong TNS_ADMIN value set. The database then registered this value for remote connections. We can check this with the following method:

 

[oracle@srvora01 ~]$ ps -ef | grep pmon
oracle    3660     1  0 09:02 ?        00:00:00 ora_pmon_DB1
oracle    4006     1  0 09:05 ?        00:00:00 ora_pmon_DB2
oracle    6965  3431  0 10:44 pts/1    00:00:00 grep pmon

 

[oracle@srvora01 ~]$ strings /proc/3660/environ | grep TNS_ADMIN
TNS_ADMIN=/u00/app/oracle/network/admin

 

Note that we can also retrieve the value of TNS_ADMIN from within a session using dbms_system.get_env.
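For example, a quick sketch from a SQL*Plus session (dbms_system is an undocumented package, so access may need to be granted by SYS):

SQL> set serveroutput on
SQL> declare
  2    l_value varchar2(512);
  3  begin
  4    sys.dbms_system.get_env('TNS_ADMIN', l_value);
  5    dbms_output.put_line('TNS_ADMIN=' || l_value);
  6  end;
  7  /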

The solution was to restart the database with the correct TNS_ADMIN value:

 

[oracle@srvora01 ~]$ echo $TNS_ADMIN
/u00/app/oracle/product/11.2.0/db_3_0/network/admin

 

[oracle@srvora01 ~]$ sqlplus / as sysdba
 
SQL*Plus: Release 11.2.0.3.0 Production on Mon Mar 24 10:46:03 2014
 
Copyright (c) 1982, 2011, Oracle.  All rights reserved.
 
 
Connected to:
 
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

 

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

 

SQL> startup
ORACLE instance started.
Total System Global Area 1570009088 bytes
Fixed Size                  2228704 bytes
Variable Size            1023413792 bytes
Database Buffers          536870912 bytes
Redo Buffers                7495680 bytes
Database mounted.
Database opened.

 

[oracle@srvora01 ~]$ ps -ef | grep pmon
oracle    4006     1  0 09:05 ?        00:00:00 ora_pmon_DB2
oracle    7036     1  0 10:46 ?        00:00:00 ora_pmon_DB1
oracle    7116  3431  0 10:46 pts/1    00:00:00 grep pmon

 

[oracle@srvora01 ~]$ strings /proc/7036/environ | grep TNS_ADMIN
TNS_ADMIN=/u00/app/oracle/product/11.2.0/db_3_0/network/admin

 

The value for TNS_ADMIN is now correct.

 

[oracle@srvora01 ~]$ sqlplus system@DB1
 
SQL*Plus: Release 11.2.0.3.0 Production on Mon Mar 24 10:47:21 2014
 
Copyright (c) 1982, 2011, Oracle.  All rights reserved.
 
Enter password:
 
Connected to:
 
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

 

SQL> select instance_name from v$instance @ DBLINK;
INSTANCE_NAME
----------------
DB2

 

Remote connections are now using the right tnsnames.ora.

I hope this will help you with your TNS resolution problems.

JasperReportsIntegration - Full path requirement and workaround

Dietmar Aust - Mon, 2014-04-14 16:38
I have just posted an answer to a question about what seems like a bug in the JasperReportsIntegration toolkit: you have to use absolute paths for referencing images or subreports, which is typically a bad thing.

Don't know exactly why it doesn't work, but there is a workaround for that: http://www.opal-consulting.de/site/jasperreportsintegration-full-path-requirement-and-workaround/

Hope that helps,
~Dietmar.

Announcement: Oracle Database Backup & Oracle Storage Cloud

Jean-Philippe Pinte - Mon, 2014-04-14 16:01
Oracle announces the availability of the Oracle Database Backup and Oracle Storage Cloud services.
More information:

First blogpost at my own hosted wordpress instance

Dietmar Aust - Mon, 2014-04-14 15:50
I have been blogging at daust.blogspot.com for quite some years now ... many people have preferred WordPress to Blogspot, and I can now understand why :).

It is quite flexible and easy to use and there are tons of themes available ... really cool ones.

The decision to host my own WordPress instance was ultimately motivated by something else: I have created two products, and they needed a platform where they could be presented.
First I wanted to buy a new theme from ThemeForest and build an APEX theme from it ... but that is a lot of work.

I then decided to host my content on WordPress, since I had already bought a new theme: http://www.kriesi.at/themedemo/?theme=enfold

And this one has a really easy setup procedure for WordPress and comes with a ton of effects and wizards, a cool page designer, etc.

Hopefully this will get me motivated to post more frequently ... we will see ;).

Cheers,
~Dietmar.

OpenSSL Heartbleed (CVE-2014-0160) and Oracle E-Business Suite Impact

Integrigy has completed an in-depth security analysis of the "Heartbleed" vulnerability in OpenSSL (CVE-2014-0160) and its impact on Oracle E-Business Suite 11i (11.5) and R12 (12.0, 12.1, and 12.2) environments. The key issue is where in the environment the SSL termination point is, both for internal and external communication between the client browser and the application servers.

1.  If the SSL termination point is the Oracle E-Business Suite application servers, then the environment is not vulnerable as stated in Oracle's guidance (Oracle Support Note ID 1645479.1 “OpenSSL Security Bug-Heartbleed” [support login required]).

2.  If the SSL termination point is a load balancer or reverse proxy, then the Oracle E-Business Suite environment MAY BE VULNERABLE to the Heartbleed vulnerability.  Environments using load balancers, like F5 Big-IP, or reverse proxies, such as Apache mod_proxy or BlueCoat, may be vulnerable depending on software versions.
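One quick way to triage such intermediate SSL termination points – a sketch, with a hypothetical load balancer host – is to check from a client whether the TLS heartbeat extension is enabled at all, since Heartbleed is only exploitable where the heartbeat extension is present and the terminating software uses an affected OpenSSL build:

$ openssl s_client -connect lb.example.com:443 -tlsextdebug </dev/null 2>&1 | grep -i heartbeat
# Output mentioning 'TLS server extension "heartbeat"' means the extension is enabled --
# a prerequisite for Heartbleed, though not by itself proof of vulnerability.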

Integrigy's detailed analysis of the use of OpenSSL in Oracle E-Business Suite environments is available here -

OpenSSL Heartbleed (CVE-2014-0160) and the Oracle E-Business Suite Impact Analysis

Please let us know if you have any questions or need additional information at info@integrigy.com.

Tags: Vulnerability, Oracle E-Business Suite
Categories: APPS Blogs, Security Blogs

Integrigy Collaborate 2014 Presentations

Integrigy had a great time at Collaborate 2014 last week in Las Vegas.  What did not stay in Las Vegas were many great sessions and a lot of good information on Oracle E-Business Suite 12.2, Oracle Security, and OBIEE.  Posted below are the links to the three papers that Integrigy presented.

If you have questions about our presentations, or any questions about OBIEE and E-Business Suite security, please contact us at info@integrigy.com

Tags: Oracle Database, Oracle E-Business Suite, Oracle Business Intelligence (OBIEE)
Categories: APPS Blogs, Security Blogs

Parallel Execution Skew – Demonstrating Skew

Randolf Geist - Mon, 2014-04-14 12:42
This is just a short notice that the next part of the mini-series “Parallel Execution Skew” has been published at AllThingsOracle.com.

Final Timetable and Agenda for the Brighton and Atlanta BI Forums, May 2014

Rittman Mead Consulting - Mon, 2014-04-14 07:00

It’s just a few weeks now until the Rittman Mead BI Forum 2014 events in Brighton and Atlanta, and there’s still a few spaces left at both events if you’d still like to come – check out the main BI Forum 2014 event page, and the booking links for Brighton (May 7th – 9th 2014) and Atlanta (May 14th – 16th 2014).

We’re also now able to publish the timetable and running order for the two events – session order can still change between now and the events, but this is what we’re planning to run, first of all in Brighton, with the photos below from last year’s BI Forum.

Brighton

Brighton BI Forum 2014, Hotel Seattle, Brighton

Wednesday 7th May 2014 – Optional 1-Day Masterclass, and Opening Drinks, Keynote and Dinner

  • 9.00 – 10.00 – Registration
  • 10.00 – 11.00 : Lars George Hadoop Masterclass Part 1
  • 11.00 – 11.15 : Morning Coffee 
  • 11.15 – 12.15 : Lars George Hadoop Masterclass Part 2
  • 12.15 – 13.15 : Lunch
  • 13.15 – 14.15 : Lars George Hadoop Masterclass Part 3
  • 14.15 – 14.30 : Afternoon Tea/Coffee/Beers
  • 14.30 – 15.30 : Lars George Hadoop Masterclass Part 4
  • 17.00 – 19.00 : Registration and Drinks Reception
  • 19.00 – Late :  Oracle Keynote and Dinner at Hotel

Thursday 8th May 2014

  • 08.45 – 09.00 : Opening Remarks Mark Rittman, Rittman Mead
  • 09.00 – 10.00 : Emiel van Bockel : Extreme Intelligence, made possible by …
  • 10.00 – 10.30 : Morning Coffee 
  • 10.30 – 11.30 : Chris Jenkins : TimesTen for Exalytics: Best Practices and Optimisation
  • 11.30 – 12.30 : Robin Moffatt : No Silver Bullets : OBIEE Performance in the Real World
  • 12.30 – 13.30 : Lunch
  • 13.30 – 14.30 : Adam Bloom : Building a BI Cloud
  • 14.30 – 14.45 : TED: Paul Oprea : “Extreme Data Warehousing”
  • 14.45 – 15.00 : TED : Michael Rainey :  “A Picture Can Replace A Thousand Words”
  • 15.00 – 15.30 : Afternoon Tea/Coffee/Beers
  • 15.30 – 15.45 : Reiner Zimmerman : About the Oracle DW Global Leaders Program
  • 15.45 – 16.45 : Andrew Bond & Stewart Bryson : Enterprise Big Data Architecture
  • 19.00 – Late: Depart for Gala Dinner, St Georges Church, Brighton

Friday 9th May 2014

  • 9.00 – 10.00 : Truls Bergensen – Drawing a New Rock on the Map – How Will Endeca Fit into Your Oracle BI Topography
  • 10.00 – 10.30 : Morning Coffee 
  • 10.30 – 11.30 : Nicholas Hurt & Michael Rainey : Real-time Data Warehouse Upgrade – Success Stories
  • 11.30 – 12.30 : Matt Bedin & Adam Bloom : Analytics and the Cloud
  • 12.30 – 13.30 : Lunch
  • 13.30 – 14.30 : Gianni Ceresa : Essbase within/without OBIEE – not just an aggregation engine
  • 14.30 – 14.45 : TED : Marco Klaasen : “Speed up RPD Development”
  • 14.45 – 15.00 : TED : Christian Berg : “Neo’s Voyage in OBIEE”
  • 15.00 – 15.30 : Afternoon Tea/Coffee/Beers
  • 15.30 – 16.30 : Alistair Burgess : “Tuning TimesTen with Aggregate Persistence”
  • 16.30 – 16.45 : Closing Remarks (Mark Rittman)

Then directly after Brighton we’ve got the US Atlanta event, running the week after, Wednesday – Friday, with last year’s photos below:

Atlanta BI Forum 2014, Renaissance Mid-Town Hotel, Atlanta

Wednesday 14th May 2014 – Optional 1-Day Masterclass, and Opening Drinks, Keynote and Dinner

  • 9.00-10.00 – Registration
  • 10.00 – 11.00 : Lars George Hadoop Masterclass Part 1
  • 11.00 – 11.15 : Morning Coffee 
  • 11.15 – 12.15 : Lars George Hadoop Masterclass Part 2
  • 12.15 – 13.15 : Lunch
  • 13.15 – 14.15 : Lars George Hadoop Masterclass Part 3
  • 14.15 – 14.30 : Afternoon Tea/Coffee/Beers
  • 14.30 – 15.30 : Lars George Hadoop Masterclass Part 4
  • 16.00 – 18.00 : Registration and Drinks Reception
  • 18.00 – 19.00 : Oracle Keynote & Dinner

Thursday 15th May 2014

  • 08.45 – 09.00 : Opening Remarks Mark Rittman, Rittman Mead
  • 09.00 – 10.00 : Kevin McGinley : Adding 3rd Party Visualization to OBIEE
  • 10.00 – 10.30 : Morning Coffee 
  • 10.30 – 11.30 : Chris Linskey : Endeca Information Discovery for Self-Service and Big Data
  • 11.30 – 12.30 : Omri Traub : Endeca and Big Data: A Vision for the Future
  • 12.30 – 13.30 : Lunch
  • 13.30 – 14.30 : Dan Vlamis : Capitalizing on Analytics in the Oracle Database in BI Applications
  • 14.30 – 15.30 : Susan Cheung : TimesTen In-Memory Database for Analytics – Best Practices and Use Cases
  • 15.30 – 15.45 : Afternoon Tea/Coffee/Beers
  • 15.45 – 16.45 : Christian Screen : Oracle BI Got MAD and You Should Be Happy
  • 18.00 – 19.00 : Special Guest Keynote : Maria Colgan : An introduction to the new Oracle Database In-Memory option
  • 19.00 – leave for dinner

Friday 16th May 2014

  • 09.00 – 10.00 : Patrick Rafferty : More Than Mashups – Advanced Visualizations and Data Discovery
  • 10.00 – 10.30 : Morning Coffee 
  • 10.30 – 11.30 : Matt Bedin : Analytics and the Cloud
  • 11.30 – 12.30 : Jack Berkowitz : Analytic Applications and the Cloud
  • 12.30 – 13.30 : Lunch
  • 13.30 – 14.30 : Philippe Lions : What’s new on 2014 HY1 OBIEE SampleApp
  • 14.30 – 15.30 : Stewart Bryson : ExtremeBI: Agile, Real-Time BI with Oracle Business Intelligence, Oracle Data Integrator and Oracle GoldenGate
  • 15.30 – 16.00 : Afternoon Tea/Coffee/Beers
  • 16.00 – 17.00 : Wayne Van Sluys : Everything You Know about Oracle Essbase Tuning is Wrong or Outdated!
  • 17.00 – 17.15 : Closing Remarks (Mark Rittman)
Full details of the two events, including more on the Hadoop Masterclass with Cloudera’s Lars George, can be found on the BI Forum 2014 home page.

Categories: BI & Warehousing

Head in the Oven, Feet in the Freezer

Michael Feldstein - Mon, 2014-04-14 05:19

Some days, the internet gods are kind. On April 9th, I wrote,

We want talking about educational efficacy to be like talking about the efficacy of Advil for treating arthritis. But it’s closer to talking about the efficacy of various chemotherapy drugs for treating a particular cancer. And we’re really really bad at talking about that kind of efficacy. I think we have our work cut out for us if we really want to be able to talk intelligently and intelligibly about the effectiveness of any particular educational intervention.

On the very same day, the estimable Larry Cuban blogged,

So it is hardly surprising, then, that many others, including myself, have been skeptical of the popular idea that evidence-based policymaking and evidence-based instruction can drive teaching practice. Those doubts have grown larger when one notes what has occurred in clinical medicine with its frequent U-turns in evidence-based “best practices.” Consider, for example, how new studies have often reversed prior “evidence-based” medical procedures.

  • Hormone therapy for post-menopausal women to reduce heart attacks was found to be more harmful than no intervention at all.
  • Getting a PSA test to determine whether the prostate gland showed signs of cancer for men over the age of 50 was “best practice” until 2012, when advisory panels of doctors recommended that no one under 55 should be tested and those older might be tested if they had family histories of prostate cancer.

And then there are new studies that recommend women to have annual mammograms, not at age 50 as recommended for decades, but at age 40. Or research syntheses (sometimes called “meta-analyses”) that showed anti-depressant pills worked no better than placebos. These large studies done with randomized clinical trials–the current gold standard for producing evidence-based medical practice–have, over time, produced reversals in practice. Such turnarounds, when popularized in the press (although media attention does not mean that practitioners actually change what they do with patients) often diminished faith in medical research, leaving most of us–and I include myself–stuck as to which healthy practices we should continue and which we should drop.

Should I, for example, eat butter or margarine to prevent a heart attack? In the 1980s, the answer was: Don’t eat butter, cheese, beef, and similar high-saturated fat products. Yet a recent meta-analysis of those and subsequent studies reached an opposite conclusion. Figuring out what to do is hard because I, as a researcher, teacher, and person who wants to maintain good health, has to sort out what studies say and how those studies were done from what the media report, and then how all of that applies to me. Should I take a PSA test? Should I switch from margarine to butter?

He put it much better than I did. While the gains in overall modern medicine have been amazing, anybody who has had even a moderately complex health issue (like back pain, for example) has had the frustrating experience of having a billion tests, being passed from specialist to specialist, and getting no clear answers.1 More on this point later.

Larry’s next post—actually a guest post by Francis Schrag—is an imaginary argument between an evidence-based education proponent and a skeptic. I won’t quote it here, but it is well worth reading in full. My own position is somewhere between the proponent and the skeptic, though leaning more in the direction of the proponent. I don’t think we can measure everything that’s important about education, and it’s very clear that pretending that we can has caused serious damage to our educational system. But that doesn’t mean I think we should abandon all attempts to formulate a science of education.

For me, it’s all about literacy. I want to give teachers and students skills to interpret the evidence for themselves and then empower them to use their own judgment. To that end, let’s look at the other half of Larry’s April 9 post, the title of which is “What’s The Evidence on School Devices and Software Improving Student Learning?”

Lies, Damned Lies, and…

The heart of the post is a study by John Hattie, a Professor at the University of Auckland (NZ). He’s done meta-analysis on an enormous number of education studies, looking at effect sizes, measured on a scale from 0.1, which is negligible, to 1.0, which is a full standard deviation.

He found that the “typical” effect size of an innovation was 0.4. To compare how different classroom approaches shaped student learning, Hattie used the “typical” effect size (0.4) to mean that a practice reached the threshold of influence on student learning (p. 5). From his meta-analyses, he then found that class size had a .20 effect (slide 15) while direct instruction had a .59 effect (slide 21). Again and again, he found that teacher feedback had an effect size of .72 (slide 32). Moreover, teacher-directed strategies of increasing student verbalization (.67) and teaching meta-cognition strategies (.67) had substantial effects (slide 32). What about student use of computers (p. 7)? Hattie included many “effect sizes” of computer use from distance education (.09), multimedia methods (.15), programmed instruction (.24), and computer-assisted instruction (.37). Except for “hypermedia instruction” (.41), all fell below the “typical” effect size (.40) of innovations improving student learning (slides 14-18). Across all studies of computers, then, Hattie found an overall effect size of .31 (p. 4).
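For readers unfamiliar with the statistic: an effect size here is a standardized mean difference (Cohen's d), i.e., assuming Hattie's usual definition,

d = (mean of the intervention group − mean of the comparison group) / pooled standard deviation

so an effect size of 0.4 means the average student receiving the intervention scores 0.4 standard deviations above the average student who did not.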

The conclusion is that changing a classroom practice can often produce a significant effect size while adding a technology rarely does. But as my father likes to say, if you stick your head in the oven and your feet in the freezer, on average you’ll be comfortable.

Let’s think about introducing clickers to a classroom, for example. What class are you using them in? How often do you use them? When do you use them? What do you use them for? Clickers in and of themselves change nothing. No intervention is going to be educationally effective unless it gets students to perceive, act, and think differently. There are lots of ways to use clickers in the classroom that have no such effect. My guess is that, most of the time, they are used for formative assessments. Those can be helpful or not, but generally when done in this way are more about informing the teacher than they are directly about helping the student.

But there are other uses of clicker technologies. For example, University of Michigan professor Perry Samson recently blogged about using clickers to compare students’ sense of their physical and emotional well-being with their test performance:

Figure 2. Example of results from a student wellness question for a specific class day. Note the general collinearity of physical and emotional wellness.

I have observed over the last few years that a majority of the students who were withdrawing from my course in mid-semester commented on a crisis in health or emotion in their lives.  On a lark this semester I created an image-based question to ask students in LectureTools at the beginning of each class (example, Figure 2) that requested their self assessment of their current physical and emotional state. Clearly there is a wide variation in students’ perceptions of their physical and emotional state.  To analyze these data I performed cluster analysis on students’ reported emotional state prior to the first exam and found that temporal trends in this measure of emotional state could be clustered into six categories.

Figure 3. Trends in students' self-reported emotional state prior to the first exam in class are clustered into six categories. The average emotional state for each cluster appears to be predictive of median first exam scores.

Perhaps not surprisingly Figure 3 shows that student outcomes on the first exam were very much related to the students’ self assessment of their emotional state prior to the exam.  This result is hard evidence for the intuitive, that students perform better when they are in a better emotional state.

I don’t know what Perry will end up doing with this information in terms of a classroom intervention. Nor do I know whether any such intervention will be effective. But it seems common sense not to lump it in with a million billion professors asking quiz questions on their clickers to aggregate it into an average of how effective clickers are.

To be fair, that’s not Larry’s point for quoting the Hattie study. He’s arguing against the reductionist argument that technology fixes everything—an argument which seems obviously absurd to everybody except, sadly, the people who seem to have the power to make decisions. But my point is that it is equally absurd to use this study as evidence that technology is generally not helpful. What I think it suggests is that it makes little sense to study the efficacy of educational technologies or products outside the context of the efficacy of the practices that they enable. More importantly, it’s a good example of how we all need to get much more sophisticated about reading the studies so we can judge for ourselves what they do and do not prove.

Of Back Mice and Men

I have had moderate to severe back pain for the past seven years. I have been to see orthopedists, pain specialists, rheumatologists, urologists, chiropractors, physical therapists, acupuncturists, and massage therapists. In many cases, I have seen more than one in any given category. I had X-rays, CAT scans, MRIs, and electrical probes inserted into my abdomen and legs. I had many needles of widely varying gauges stuck in me, grown humans walking on my back, gallons of steroids injected into me. I had the protective sheaths of my nerves fried with electricity. If you’ve ever had chronic pain, you know that you would probably go to a voodoo priest and drink goat urine if you thought it might help. (Sadly, there are apparently no voodoo priests in my area of Massachusetts—or at least none who have a web page.) Nobody I went to could help me.

Not too long ago, I had cause to visit my primary care physician, who is a good old country doctor. No specialist certificates, no Ivy League medical school degrees. Just a solid GP with some horse sense. In a state of despair, I explained my situation to him. He said, “Can I try something? Does it hurt when I touch you here?” OUCH!!!!

It turns out that I have a condition called “back mice,” also called “episacral lipomas” when it is referred to in the medical literature, which, it turns out, happens rarely. I won’t go into the details of what they are, because that’s not important to the story. What’s important is what the doctor said next. “There’s hardly anything on them in the literature,” he said. “The thing is, they don’t show up on any scans. They’re impossible to diagnose unless you actually touch the patient’s back.”

I thought back to all the specialists I had seen over the years. None of the doctors ever once touched my back. Not one. My massage therapist actually found the back mice, but she didn’t know what they were, and neither of us knew that they were significant.

It turns out that once my GP discovered that these things exist, he started finding them everywhere. He told me a story of an eighty-year-old woman who had been hospitalized for “non-specific back pain.” They doped her up with opiates and the poor thing couldn’t stand up without falling over. He gave her a couple of shots in the right place, and a week later she was fine. He has changed my life as well. I am not yet all better—we just started treatment two weeks ago—but I am already dramatically better.

The thing is, my doctor is an empiricist. In fact, he is one of the best diagnosticians I know. (And I have now met many.) He knew about back mice in the first place because he reads the literature avidly. But believing in the value of evidence and research is not the same thing as believing that only that which has been tested, measured, and statistically verified has value. Evidence should be a tool in the service of judgment, not a substitute for it. Isn’t that what we try to teach our students?

  1. But I’m not bitter.

The post Head in the Oven, Feet in the Freezer appeared first on e-Literate.

Big Data Oracle NoSQL in No Time - It is time to Upgrade

Senthil Rajendran - Mon, 2014-04-14 03:54
Oracle NoSQL upgrade from 11gR2 to 12cR1 (2.0 to 3.0)

Index
Big Data Oracle NoSQL in No Time - Getting Started Part 1
Big Data Oracle NoSQL in No Time - Startup & Shutdown Part 2
Big Data Oracle NoSQL in No Time - Setting up 1x1 Topology Part 3
Big Data Oracle NoSQL in No Time - Expanding 1x1 to 3x1 Topology Part 4
Big Data Oracle NoSQL in No Time - From 3x1 to 3x3 Topology Part 5
Big Data Oracle NoSQL in No Time - Smoke Testing Part 6
Big Data Oracle NoSQL in No Time - Increasing Throughput Read/Write Part 7

The upgrade is simple; NoSQL is brilliant in its simplicity.

Below are the steps:

  • verify prerequisite - verify that the storage nodes meet the prerequisites for upgrading
  • show upgrade-order - get the ordered list of storage nodes to upgrade
  • replace the software - unzip the new software
  • verify upgrade - verify that the storage nodes have been upgraded to the downloaded version
In our scenario, we have a 4x4 deployment topology with one admin node, and we will upgrade from 11gR2 to 12cR1. First, let us upgrade the admin node.

$ export KVHOME=$KVBASE/server1/oraclesoftware/kv-2.0.39
$ java -jar $KVHOME/lib/kvstore.jar stop -root $KVBASE/server1/storage
$ cd $KVBASE/server1/oraclesoftware
$ cp -Rf $KVBASE/stage/kv-3.0.5 .
$ export KVHOME=$KVBASE/server1/oraclesoftware/kv-3.0.5
$ nohup java -jar $KVHOME/lib/kvstore.jar start -root $KVBASE/server1/storage &
$ nohup: appending output to `nohup.out'
$ java -jar $KVHOME/lib/kvstore.jar runadmin -port 5000 -host server1
kv-> verify prerequisite
Verify: starting verification of mystore based upon topology sequence #8430 partitions and 4 storage nodes. Version: 12.1.3.0.5 Time: 2014-04-14 08:33:50 UTC
See server1:$KVBASE/server1/storage/mystore/log/mystore_{0..N}.log for progress messages
Verify prerequisite: Storage Node [sn3] on server3:5200    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 11gR2.2.0.39 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verify prerequisite: Storage Node [sn4] on server4:5300    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 11gR2.2.0.39 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verify prerequisite: Storage Node [sn1] on server1:5000    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.0.5 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verify prerequisite: Storage Node [sn2] on server2:5100    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 11gR2.2.0.39 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verification complete, no violations.
kv-> show upgrade-order
Calculating upgrade order, target version: 12.1.3.0.5, prerequisite: 11.2.2.0.23
sn3
sn4
sn2
kv->
In our case the upgrade order is determined to be sn3, sn4 and then sn2. We can verify the upgrade order at each stage.
Now let us upgrade SN3
$ export KVHOME=$KVBASE/server3/oraclesoftware/kv-2.0.39
$ java -jar $KVHOME/lib/kvstore.jar stop -root $KVBASE/server3/storage
$ cd $KVBASE/server3/oraclesoftware
$ cp -Rf $KVBASE/stage/kv-3.0.5 .
$ export KVHOME=$KVBASE/server3/oraclesoftware/kv-3.0.5
$ nohup java -jar $KVHOME/lib/kvstore.jar start -root $KVBASE/server3/storage &

kv-> verify prerequisite
Verify: starting verification of mystore based upon topology sequence #8430 partitions and 4 storage nodes. Version: 12.1.3.0.5 Time: 2014-04-14 08:40:31 UTC
See server1:$KVBASE/server1/storage/mystore/log/mystore_{0..N}.log for progress messages
Verify prerequisite: Storage Node [sn3] on server3:5200    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.0.5 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verify prerequisite: Storage Node [sn4] on server4:5300    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 11gR2.2.0.39 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verify prerequisite: Storage Node [sn1] on server1:5000    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.0.5 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verify prerequisite: Storage Node [sn2] on server2:5100    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 11gR2.2.0.39 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verification complete, no violations.
kv-> show upgrade-order
Calculating upgrade order, target version: 12.1.3.0.5, prerequisite: 11.2.2.0.23
sn4
sn2
kv->

Now let us upgrade SN4
$ export KVHOME=$KVBASE/server4/oraclesoftware/kv-2.0.39
$ java -jar $KVHOME/lib/kvstore.jar stop -root $KVBASE/server4/storage
$ cd $KVBASE/server4/oraclesoftware
$ cp -Rf $KVBASE/stage/kv-3.0.5 .
$ export KVHOME=$KVBASE/server4/oraclesoftware/kv-3.0.5
$ nohup java -jar $KVHOME/lib/kvstore.jar start -root $KVBASE/server4/storage &

kv-> verify prerequisite
Verify: starting verification of mystore based upon topology sequence #8430 partitions and 4 storage nodes. Version: 12.1.3.0.5 Time: 2014-04-14 08:42:30 UTC
See server1:$KVBASE/server1/storage/mystore/log/mystore_{0..N}.log for progress messages
Verify prerequisite: Storage Node [sn3] on server3:5200    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.0.5 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verify prerequisite: Storage Node [sn4] on server4:5300    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.0.5 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verify prerequisite: Storage Node [sn1] on server1:5000    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.0.5 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verify prerequisite: Storage Node [sn2] on server2:5100    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 11gR2.2.0.39 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verification complete, no violations.
kv-> show upgrade-order
Calculating upgrade order, target version: 12.1.3.0.5, prerequisite: 11.2.2.0.23
sn2
kv->
Now let us upgrade the last pending storage node SN2
$ export KVHOME=$KVBASE/server2/oraclesoftware/kv-2.0.39
$ java -jar $KVHOME/lib/kvstore.jar stop -root $KVBASE/server2/storage
$ cd $KVBASE/server2/oraclesoftware
$ cp -Rf $KVBASE/stage/kv-3.0.5 .
$ export KVHOME=$KVBASE/server2/oraclesoftware/kv-3.0.5
$ nohup java -jar $KVHOME/lib/kvstore.jar start -root $KVBASE/server2/storage &

kv-> verify prerequisite
Verify: starting verification of mystore based upon topology sequence #8430 partitions and 4 storage nodes. Version: 12.1.3.0.5 Time: 2014-04-14 08:44:12 UTC
See server1:$KVBASE/server1/storage/mystore/log/mystore_{0..N}.log for progress messages
Verify prerequisite: Storage Node [sn3] on server3:5200    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.0.5 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verify prerequisite: Storage Node [sn4] on server4:5300    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.0.5 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verify prerequisite: Storage Node [sn1] on server1:5000    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.0.5 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verify prerequisite: Storage Node [sn2] on server2:5100    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.0.5 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verification complete, no violations.
kv-> show upgrade-order
Calculating upgrade order, target version: 12.1.3.0.5, prerequisite: 11.2.2.0.23
There are no nodes that need to be upgraded
kv->
Let us quickly verify the upgrade process
kv-> verify upgrade
Verify: starting verification of mystore based upon topology sequence #8430 partitions and 4 storage nodes. Version: 12.1.3.0.5 Time: 2014-04-14 08:44:27 UTC
See server1:$KVBASE/server1/storage/mystore/log/mystore_{0..N}.log for progress messages
Verify upgrade: Storage Node [sn3] on server3:5200    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.0.5 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verify upgrade: Storage Node [sn4] on server4:5300    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.0.5 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verify upgrade: Storage Node [sn1] on server1:5000    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.0.5 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verify upgrade: Storage Node [sn2] on server2:5100    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.0.5 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verification complete, no violations.
kv->

As an Oracle DBA I know the complexity of upgrades, but upgrading NoSQL is different.

Mobile device management is a two-sided battle

Chris Foot - Mon, 2014-04-14 01:39

The rise of the Internet of Things and the bring-your-own-device phenomenon have shaped the way database administration specialists conduct mobile device management. Many of these professionals are employed by retailers using customer relationship management applications that collect and analyze data from smartphones, tablets and numerous other devices. This level of activity creates a web of connectivity that's difficult to manage and often necessitates expert surveillance. 

Managing the remote workplace 
Merchandisers are challenged with the task of effectively securing all mobile assets used by their employees. Many of these workers have access to sensitive corporate information, whether it be product development files, customer loyalty account numbers or consumer payment data. According to CIO, some organizations lack the in-house IT resources to effectively manage the avenues through which intelligence flows from smartphones to servers.

As a result, small and midsize businesses often outsource to remote database support services to gain a comprehensive overview of their BYOD operations. David Lingenfelter, an information security officer at Fiberlink, told the news source that the problem many SMBs face is that their employees are using their own mobile devices to access company information. Large enterprises often provide their workers with such machines, so there's inherent surveillance over the connections they're making.

Moving to the home front 
Small, medium and large retailers alike are continuing to use CRM, which provides these commodity-based businesses with specific information about individuals. IoT has expanded the capabilities of these programs, delivering data from a wide variety of smart mechanisms such as cars, watches and even refrigerators. Information being funneled into company servers comes from remote devices, creating a unique kind of mobile device management for database administration services to employ.

Frank Gillett, a contributor to InformationWeek, noted that many consumers are connecting numerous devices to a singular home-based network, providing merchandisers with a view of how a family or group of apartment mates interacts with the Web. In addition, routers and gateways are acting as defaults for making network-connected homes ubiquitous. 

"These devices bring the Internet to every room of the house, allowing smart gadgets with communications to replace their dumb processors," noted Gillett.

However, it's not as if the incoming information submitted by these networks can be thrown into a massive jumble. In order to provide security and organize the intelligence appropriately, remote DBA providers monitor the connections and organize the results into identifiable, actionable data. 

OOW : Call4Proposals ... D-2

Jean-Philippe Pinte - Mon, 2014-04-14 01:09
If you would like to present at the next edition of Oracle Open World, only 2 days remain to submit your topic:
http://www.oracle.com/openworld/call-for-papers/index.html 

Don’t use %NOTFOUND with BULK COLLECT

Michael Dinh - Sun, 2014-04-13 16:54

I was working on a script for the ultimate RMAN backup validation and hoping to submit the article for an Oracle conference.

To my chagrin, one version of the script was failing for one condition and the other version would fail for another condition.

Basically, the script was very buggy.

The objective is to create an RMAN script to validate 8 backupsets at a time.

I decided to use BULK COLLECT with the LIMIT clause.

Currently, there are only 6 backupsets.

ARROW:(MDINH@db01):PRIMARY> create table t as SELECT * FROM V$BACKUP_SET WHERE incremental_level > 0;

Table created.

ARROW:(MDINH@db01):PRIMARY> select recid from t;

     RECID
----------
       609
       610
       611
       612
       613
       614

6 rows selected.

ARROW:(MDINH@db01):PRIMARY>

Using LIMIT 8 with only 6 records results in ZERO records being printed:

ARROW:(MDINH@db01):PRIMARY> r
  1  DECLARE
  2    CURSOR c_level1 IS
  3      SELECT recid FROM t WHERE incremental_level > 0;
  4    TYPE t_level1 IS
  5      TABLE OF c_level1%ROWTYPE INDEX BY PLS_INTEGER;
  6    l_level1 t_level1;
  7    l_str VARCHAR2(1000);
  8  BEGIN
  9    OPEN c_level1;
 10    LOOP
 11      FETCH c_level1 BULK COLLECT INTO l_level1 LIMIT 8;
 12      FOR i IN 1..l_level1.COUNT
 13      LOOP
 14        l_str := l_str||l_level1(i).recid||',';
 15      END LOOP;
 16      EXIT WHEN c_level1%NOTFOUND;
 17      l_str := 'validate backupset '||RTRIM(l_str,',')||';';
 18      dbms_output.put_line(l_str);
 19    END LOOP;
 20    CLOSE c_level1;
 21  EXCEPTION
 22    WHEN others THEN RAISE;
 23* END;

PL/SQL procedure successfully completed.

Why not output the results before the EXIT WHEN clause? That works just fine.

ARROW:(MDINH@db01):PRIMARY> r
  1  DECLARE
  2    CURSOR c_level1 IS
  3      SELECT recid FROM t WHERE incremental_level > 0;
  4    TYPE t_level1 IS
  5      TABLE OF c_level1%ROWTYPE INDEX BY PLS_INTEGER;
  6    l_level1 t_level1;
  7    l_str VARCHAR2(1000);
  8  BEGIN
  9    OPEN c_level1;
 10    LOOP
 11      FETCH c_level1 BULK COLLECT INTO l_level1 LIMIT 8;
 12      FOR i IN 1..l_level1.COUNT
 13      LOOP
 14        l_str := l_str||l_level1(i).recid||',';
 15      END LOOP;
 16      l_str := 'validate backupset '||RTRIM(l_str,',')||';';
 17      dbms_output.put_line(l_str);
 18      EXIT WHEN c_level1%NOTFOUND;
 19    END LOOP;
 20    CLOSE c_level1;
 21  EXCEPTION
 22    WHEN others THEN RAISE;
 23* END;
validate backupset 609,610,611,612,613,614;

PL/SQL procedure successfully completed.

But what happens when there are ZERO rows in the table?

ARROW:(MDINH@db01):PRIMARY> delete from t;

6 rows deleted.

ARROW:(MDINH@db01):PRIMARY> 
ARROW:(MDINH@db01):PRIMARY> r
  1  DECLARE
  2    CURSOR c_level1 IS
  3      SELECT recid FROM t WHERE incremental_level > 0;
  4    TYPE t_level1 IS
  5      TABLE OF c_level1%ROWTYPE INDEX BY PLS_INTEGER;
  6    l_level1 t_level1;
  7    l_str VARCHAR2(1000);
  8  BEGIN
  9    OPEN c_level1;
 10    LOOP
 11      FETCH c_level1 BULK COLLECT INTO l_level1 LIMIT 8;
 12      FOR i IN 1..l_level1.COUNT
 13      LOOP
 14        l_str := l_str||l_level1(i).recid||',';
 15      END LOOP;
 16      l_str := 'validate backupset '||RTRIM(l_str,',')||';';
 17      dbms_output.put_line(l_str);
 18      EXIT WHEN c_level1%NOTFOUND;
 19    END LOOP;
 20    CLOSE c_level1;
 21  EXCEPTION
 22    WHEN others THEN RAISE;
 23* END;
validate backupset ;

PL/SQL procedure successfully completed.

ARROW:(MDINH@db01):PRIMARY>

Error! With zero rows fetched, the output statement still ran once before the EXIT check, producing an empty "validate backupset ;" command. I was doing something fundamentally wrong.

Finally, I figured it out: use l_level1.COUNT = 0 instead of %NOTFOUND.

ARROW:(MDINH@db01):PRIMARY> r
  1  DECLARE
  2    CURSOR c_level1 IS
  3      SELECT recid FROM t WHERE incremental_level > 0;
  4    TYPE t_level1 IS
  5      TABLE OF c_level1%ROWTYPE INDEX BY PLS_INTEGER;
  6    l_level1 t_level1;
  7    l_str VARCHAR2(1000);
  8  BEGIN
  9    OPEN c_level1;
 10    LOOP
 11      FETCH c_level1 BULK COLLECT INTO l_level1 LIMIT 8;
 12      FOR i IN 1..l_level1.COUNT
 13      LOOP
 14        l_str := l_str||l_level1(i).recid||',';
 15      END LOOP;
 16      EXIT WHEN l_level1.COUNT=0;
  17      l_str := 'validate backupset '||RTRIM(l_str,',')||';';
 18      dbms_output.put_line(l_str);
 19    END LOOP;
 20    CLOSE c_level1;
 21  EXCEPTION
 22    WHEN others THEN RAISE;
 23* END;
validate backupset 609,610,611,612,613,614;

PL/SQL procedure successfully completed.

ARROW:(MDINH@db01):PRIMARY> delete from t;

6 rows deleted.

ARROW:(MDINH@db01):PRIMARY> 
ARROW:(MDINH@db01):PRIMARY> r
  1  DECLARE
  2    CURSOR c_level1 IS
  3      SELECT recid FROM t WHERE incremental_level > 0;
  4    TYPE t_level1 IS
  5      TABLE OF c_level1%ROWTYPE INDEX BY PLS_INTEGER;
  6    l_level1 t_level1;
  7    l_str VARCHAR2(1000);
  8  BEGIN
  9    OPEN c_level1;
 10    LOOP
 11      FETCH c_level1 BULK COLLECT INTO l_level1 LIMIT 8;
 12      FOR i IN 1..l_level1.COUNT
 13      LOOP
 14        l_str := l_str||l_level1(i).recid||',';
 15      END LOOP;
 16      EXIT WHEN l_level1.COUNT=0;
 17      l_str := 'validate backupset '||RTRIM(l_str,',')||';';
 18      dbms_output.put_line(l_str);
 19      END LOOP;
 20    CLOSE c_level1;
 21  EXCEPTION
 22    WHEN others THEN RAISE;
 23* END;

PL/SQL procedure successfully completed.

I knew Steven Feuerstein's article "Best practices for knowing your LIMIT and kicking %NOTFOUND" existed, but I was not able to find it at the time.

One more thing to leave you with before I go: BULK COLLECT will NEVER raise a NO_DATA_FOUND exception.
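
To see that last point in action, here is a minimal sketch (not from the original post) that assumes the same table t used above, now empty: a BULK COLLECT fetch from an empty table completes silently and simply leaves the collection empty, so the collection's COUNT is the only reliable end-of-data test.

DECLARE
  -- Associative array to receive the fetched recids.
  TYPE t_recid IS TABLE OF t.recid%TYPE INDEX BY PLS_INTEGER;
  l_recids t_recid;
BEGIN
  -- A scalar SELECT INTO from an empty table would raise NO_DATA_FOUND;
  -- BULK COLLECT never does. It just returns an empty collection.
  SELECT recid BULK COLLECT INTO l_recids FROM t;
  dbms_output.put_line('Rows fetched: '||l_recids.COUNT);  -- prints 0
EXCEPTION
  WHEN NO_DATA_FOUND THEN
    dbms_output.put_line('This handler never fires for a BULK COLLECT fetch.');
END;
/

That is exactly why the COUNT = 0 test above is the safe exit condition: it checks what actually arrived in the collection rather than the cursor's fetch state.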


From Las Vegas to Ottawa

Pakistan's First Oracle Blog - Sun, 2014-04-13 05:27
After a very engaging session at Collaborate14 in sunny Las Vegas amidst the Nevada desert, I just arrived in not-so-bitterly-cold Ottawa, the capital of Canada. I'm looking forward to meeting various Pythian colleagues and hanging out with the friends I cherish most.

My Exadata IORM session went well. There has been lots of follow-up discussion, and questions are still pouring in. I promise I will answer them as soon as I return to Australia in a couple of weeks. That reminds me of my flight from one corner of the globe to the other; I really need to learn how to sleep like a baby on flights. Any ideas?

Ottawa reminds me of the Australian capital, Canberra. It's quite a change after neon-lit Vegas. Where Vegas was bathed in lights, simmering with shows, bubbling with bars, swarming with party-goers, and rattling with casinos, Ottawa is laid-back, quiet, peaceful, and small. The restaurants and cafes look cool. The Ottawa River is still mostly frozen, and mounds of snow line the roadsides under leafless trees.

But spring is here, and things look all set to rock.
Categories: DBA Blogs