
Feed aggregator

Heuristic Temp Table Transformation - 2

Randolf Geist - Thu, 2015-05-07 15:41
Some time ago I demonstrated the non-cost-based decision for applying the temp table transformation when using CTEs (Common Table/Subquery Expressions). In this note I want to highlight another aspect of this behaviour.

Consider the following data creating a table with deliberately wide columns:

create table a
as
select
rownum as id
, rownum as id2
, rpad('x', 4000) as large_vc1
, rpad('x', 4000) as large_vc2
, rpad('x', 4000) as large_vc3
from
dual
connect by
level <= 1000
;

exec dbms_stats.gather_table_stats(null, 'a')
and consider this query and its plans with and without the temp table transformation:

with cte
as
(
select /* inline */
id
, id2
, large_vc1
, large_vc2
, large_vc3
from
a
where
1 = 1

)
select
*
from
(
select id, count(*) from cte group by id
) a,
(
select id2, count(*) from cte group by id2
) b
where
a.id = b.id2
;

-- Plan with TEMP TABLE transformation
--------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1000 | 52000 | 1341 (1)| 00:00:01 |
| 1 | TEMP TABLE TRANSFORMATION | | | | | |
| 2 | LOAD AS SELECT | SYS_TEMP_0FD9D6609_26FA32 | | | | |
| 3 | TABLE ACCESS FULL | A | 1000 | 11M| 452 (0)| 00:00:01 |
|* 4 | HASH JOIN | | 1000 | 52000 | 889 (1)| 00:00:01 |
| 5 | VIEW | | 1000 | 26000 | 444 (1)| 00:00:01 |
| 6 | HASH GROUP BY | | 1000 | 4000 | 444 (1)| 00:00:01 |
| 7 | VIEW | | 1000 | 4000 | 443 (0)| 00:00:01 |
| 8 | TABLE ACCESS FULL | SYS_TEMP_0FD9D6609_26FA32 | 1000 | 11M| 443 (0)| 00:00:01 |
| 9 | VIEW | | 1000 | 26000 | 444 (1)| 00:00:01 |
| 10 | HASH GROUP BY | | 1000 | 4000 | 444 (1)| 00:00:01 |
| 11 | VIEW | | 1000 | 4000 | 443 (0)| 00:00:01 |
| 12 | TABLE ACCESS FULL | SYS_TEMP_0FD9D6609_26FA32 | 1000 | 11M| 443 (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------------

-- Plan with CTE inlined (the /* inline */ comment turned into an INLINE hint)
-----------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1000 | 52000 | 907 (1)| 00:00:01 |
|* 1 | HASH JOIN | | 1000 | 52000 | 907 (1)| 00:00:01 |
| 2 | VIEW | | 1000 | 26000 | 453 (1)| 00:00:01 |
| 3 | HASH GROUP BY | | 1000 | 4000 | 453 (1)| 00:00:01 |
| 4 | TABLE ACCESS FULL| A | 1000 | 4000 | 452 (0)| 00:00:01 |
| 5 | VIEW | | 1000 | 26000 | 453 (1)| 00:00:01 |
| 6 | HASH GROUP BY | | 1000 | 4000 | 453 (1)| 00:00:01 |
| 7 | TABLE ACCESS FULL| A | 1000 | 4000 | 452 (0)| 00:00:01 |
-----------------------------------------------------------------------------
Looking at the query and plan output the following becomes obvious:

- The mere existence of a WHERE clause, even if it is just "WHERE 1 = 1", combined with referencing the CTE more than once, triggers the transformation (nothing new, already demonstrated in the previous note mentioned above, as is the fact that the inlined CTE variant is cheaper in cost)

- There is a huge difference between the estimated size of the TEMP TABLE and the size of the row sources when using the CTE inline

The latter is particularly noteworthy: Usually Oracle is pretty clever in optimizing the projection and uses only the columns required (this doesn't apply to the target expression of MERGE statements, by the way), which is reflected in the plan output for the inlined CTEs - the wide columns don't matter here because they aren't referenced, even though they are mentioned in the CTE. But in case of the temp table transformation obviously all columns / expressions mentioned in the CTE get materialized, even if they are never referenced when the CTE is used.

So it would be nice if Oracle only materialized those columns / expressions actually used.

Now you might raise the question why mention columns and expressions in the CTE that don't get used afterwards: Well, generic approaches sometimes lead to such constructs - imagine the CTE part is static, including all possible attributes, but the actual usage of the CTE can be customized by a client. In such cases, where only a small part of the available attributes actually gets used, the temp table transformation can lead to a huge overhead in the size of the generated temp table. Preventing the transformation addresses this issue, but then the inlined CTE will have to be evaluated as many times as it is referenced - which might not be desirable either. One possible mitigation is sketched below.
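A possible workaround for such generic setups - this is a sketch, not from the original note, so verify the resulting plan on your version - is to keep the generic CTE but reference it only once from a second, narrow CTE that projects just the columns actually needed. If the transformation still kicks in, it should then only materialize the narrow column set (if the heuristics don't cooperate, a MATERIALIZE hint on the narrow CTE can be used to force the desired shape):

with cte
as
(
select /* generic part, all possible attributes */
id
, id2
, large_vc1
, large_vc2
, large_vc3
from
a
where
1 = 1
),
cte_slim
as
(
select /* only the attributes actually used */
id
, id2
from
cte
)
select
*
from
(
select id, count(*) from cte_slim group by id
) a,
(
select id2, count(*) from cte_slim group by id2
) b
where
a.id = b.id2
;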

Oracle Enterprise Pack for Eclipse (OEPE) in an existing Eclipse installation

Oracle's developer tools strategy is to offer the best possible developer tools choices to support diverse needs. When it comes to Java IDEs, while JDeveloper is Oracle's own Java IDE,...

Categories: DBA Blogs

Spring, Tulips but not Amsterdam

Pete Scott - Thu, 2015-05-07 15:12
Scott Towers is located in a village about 5 minutes from the beach. It is also just 35 minutes from the English end of the Channel Tunnel (even less from Dover), so short breaks to mainland Europe are just a short drive away. Last weekend we set off to see the tulips at Keukenhof and on the […]

the fastest way to load 1m rows in postgresql

Yann Neuhaus - Thu, 2015-05-07 13:00

There have been several posts on how to load 1m rows into a database in the last few days:

Variations on 1M rows insert (1): bulk insert
Variations on 1M rows insert(2): commit write
Variations on 1M rows insert (1): bulk insert - PostgreSQL
Variations on 1M rows insert(2): commit write - PostgreSQL
Variations on 1M rows insert (3): TimesTen

In this post I'll focus on how to prepare a PostgreSQL database for bulk loading in more detail.
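As a generic illustration (a sketch, not necessarily the approach taken in the post; the table name and file path are hypothetical), COPY is usually the fastest way to bulk load rows into PostgreSQL, particularly when indexes and triggers are added only after the load:

-- create a minimal target table
create table t1 ( id integer, val text );

-- server-side bulk load from a CSV file (the path is just an example)
copy t1 from '/tmp/onemillion.csv' with (format csv);

-- or, from a psql client session:
-- \copy t1 from 'onemillion.csv' with (format csv)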

Just Under a Week to go Until the Atlanta BI Forum 2015 – Places Still Available!

Rittman Mead Consulting - Thu, 2015-05-07 09:38

The Rittman Mead Brighton BI Forum 2015 is now underway, with presentations from Oracle, Rittman Mead, partners and customers on a range of topics around Oracle BI, DW and Big Data. So far this week we’ve had a one-day masterclass from myself and Jordan Meyer on Delivering the Oracle Information Management & Big Data Reference Architecture, sessions from Oracle on OBIEE12c, the new SampleApp for OBIEE 11.1.1.9, Big Data Discovery, BI Cloud Service and Visual Analyzer. We’ve also had sessions from the likes of Emiel van Bockel, Steve Devine, Christian Screen and others on Exalytics, data visualization, Oracle BI Apps and other topics – and a very entertaining debate on self-service BI.


… and we’re doing it all again in Atlanta, GA next week! If you’re interested in coming along to the Rittman Mead BI Forum 2015 in Atlanta, GA, there are still spaces available, with details of the event here and the registration form here. We’re running BI Forum 2015 in the Renaissance Hotel Midtown Atlanta; the masterclass with myself and Jordan Meyer runs on the Wednesday, with the event kicking off with a reception, meal and keynote from Oracle on Wednesday evening, followed by the main event itself starting Thursday morning. Hopefully we’ll see some of you there…!

 

Categories: BI & Warehousing

LMS Is The Minivan of Education (and other thoughts from #LILI15)

Michael Feldstein - Thu, 2015-05-07 07:38

By Phil Hill

During yesterday’s K-20 learning platform panel at IMS Global’s Learning Impact Leadership Institute (the panel that replaced the LMS Smackdown of years past), Scott Jaschik started the discussion off by asking “what is the LMS?”. As I have recently complained about our Saturn Vue that replaced a Chrysler Town & Country, the answer I provided was that the LMS is the minivan of education. Everyone has them and needs them, but there’s a certain shame in having one in the driveway.

The Car Committee

It’s popular to gripe about minivans, but in reality they reflect what we (the family set with kids still at home) actually are and what we do. Sure, the minivan encourages us to throw everything in the car and continue soccer mom lives, but they do offer great seating, storage, smooth rides (on boring roads at least). Likewise, the typical LMS is in actuality still a Course Management System (CMS), which reflects how courses are organized and managed in large part.

We’re done with the boring minivan and have moved on to SUVs, but the SUV has morphed into a minivan with bad gas mileage and poor seating. It feels so nice to call it a different name, but it’s still a CMS minivan at its core.

There are new innovations in the car market, like the Tesla. The risk we face in education is falling back on our RFP-driven habits. Great car demo, but the committee is using a family-driven process. Item #142 includes having more than 5 seats, with a place for little Kenny’s sippy cup in each. You know what, let’s just make it taller and add a hatch in the back. Item #275 requires ethanol percentages (and we read an article that batteries are risky), so could you add in a standard engine? Two years later . . . “dammit, the LMS”.

Put it together, and the LMS is important and ubiquitous, but we all know we need better options. Despite this, take away the LMS and see if students like a different method to submit assignments or check grades for every class.

Pork Belly Futures?

The metaphor has limitations, of course, as the LMS market has matured over the past few years with new options, better usability and reliability, and the beginnings of true interoperability (largely thanks to LTI).

I also do not think that the LMS is a commodity.

Is the LMS a commodity? Do you have NO opinion which you use and is price your ONLY decision criteria? That’s defines a commodity. #LILI15

— Jeremy Auger (@JeremyAuger) May 6, 2015

My reaction to the observation of the 80/20 rule (LMS has too many features, with most getting little usage) is that we need a system that does fewer things but does them very well. Then take advantage of LTI and Caliper (more on that later) to allow multiple learning tools to be used but with a way to still offer consistent user experience in system access, navigation, and provision of course administration.

I answered another question by saying that the LMS, with multiple billions invested over 17+ years, has not “moved the needle” on improving educational results. I see the value in providing a necessary academic infrastructure that can enable real gains in select programs or with new tools (e.g. adaptive software for remedial math, competency-based education for working adults), but the best the LMS itself can do is get out of the way – do its job quietly, freeing up faculty time, giving students anytime access to course materials and feedback. In aggregate, I have not seen real academic improvements directly tied to the LMS.

Two caveats:

  • The LMS has enabled blended and fully online courses, where you can see real improvements in access, etc.
  • John Baker from D2L disagreed on this subject, and he listed off internal data showing 25% or better improved retention (I can’t remember the detail) when clients “pick the right LMS”. John clarified the whole correlation / causation issue after the panel, but I’d love to see the data backing up this and other claims.
Caliper Update

The biggest news out of the conference is the surprisingly fast movement on Caliper. From the press release:

Caliper has progressed through successful alpha and beta specification and software releases, providing code to enable data collection, known as Sensors (or the Sensor API) and data models (known as metric profiles). A developer community web site has been set up for IMS Members while the Caliper v1 work is offered as a candidate final release.

Michael has written about the importance of Caliper here.

We live in an appy world now. The LMS is not going away, but neither is it going to be the whole of the online learning experience anymore. It is one learning space among many now. What we need is a way to tie those spaces together into a coherent learning experience. Just because you have your Tuesday class session in the lecture hall and your Friday class session in the lab doesn’t mean that what happens in one is disjointed from what happens in the other. However diverse our learning spaces may be, we need a more unified learning experience. Caliper has the potential to provide that.

The agile approach that the Caliper team, led by Intellify Learning, is using involves creating code first, multiple iterations, and documentation in parallel. There were several proofs of concept shown at the conference of companies implementing Caliper sensors and applications.

For now, Caliper appeals to the engineer in me, where I see the novel architecture and possibilities. But that will need to change, as the community needs to see real-world applications and descriptions in educational terms. But this should not diminish the real progress being made, including proofs of concept by vendors and institutions.

And Finally

Can someone tell me why Freeman Hrabowski is not running for state or national office? Great work as president of UMBC, but he would make a great politician with national impact.

The post LMS Is The Minivan of Education (and other thoughts from #LILI15) appeared first on e-Literate.

Access Denied - Access to administration console is restricted

Frank van Bortel - Thu, 2015-05-07 05:56
Access Denied - Access to administration console is restricted. Ran into it today. Again. This time, I'll make a proper blog entry, not like this one... This time, I actually did follow my own advice, but for the fact that I am now working in a multi-homed WebLogic environment - I simply pasted the wrong WLS home...

Concrete5 CMS

Yann Neuhaus - Thu, 2015-05-07 03:30

Today a lot of CMSs exist - WordPress, Joomla, Magento, and others. In this blog I will share my experience with Concrete5, gained through a specialized web agency based in Geneva: 8 Ways Media


What is Concrete5?



Concrete5 is not only an open-source CMS (Content Management System) running on a web server, coded in PHP and using a MySQL database, but also a great framework for developers. Optional URL rewriting is available to improve how search engines index a site.

C5 can also be used in the development of Web applications.


C5 also makes it possible, through its content management and user rights, to create intranets for companies (on a small scale in my opinion; for a larger intranet it is better to stay on SharePoint or Alfresco).

This CMS is designed to make life easier for the end user; the handling is simple and intuitive.

It offers advanced management of dynamic websites with a modern design; editing is done directly via the front end, and the possibilities are numerous: drag and drop, templates, etc.

INSTALLATION


Verifying Prerequisites


The following components are required for concrete5 to run correctly: http://www.concrete5.org/documentation/developers/5.7/installation/system-requirements/


For the installation tutorial, please refer to the main site: http://www.concrete5.org/documentation/developers/5.7/installation/installation/


FEATURES


The Dashboard manages the properties related to:
 


  • Rights management


 

  • Management of imported content



  • Templates


  • Block, pages


  • Features: video, forms, presentations, blog, guestbook, etc ...


The GUI is configurable; C5 gives you access to very fine-grained customization.
Pages, texts and other content are edited via the front end as soon as you are logged on as an administrator or as a user with write permissions on the site. Editing offers two modes: HTML or "Composer".
      
Versioning is an asset: in case of error, the old version from before the changes is easily restored.
The updates are performed through a simple upload, followed by a single click.

CONCLUSION


Despite the dominance of the "WordPress - Joomla - Drupal" trio according to studies, having discovered Concrete5 I recommend it for its very intuitive look and ease of use. The developer community also seems to be active and growing, which makes it easier to resolve small issues. In addition, all site content can be exported as HTML, which could help if you have to change the web server. However, most of the useful base plugins come at a significant cost, the most polished themes are not free (a matter of personal taste), and the support service is paid for unless you host through them.
I highly recommend this CMS! Its simplicity and its ability to adapt to different needs allow you to build a site with confidence and ease.

APEX 5.0: Universal Theme Migration Guide available!

Patrick Wolf - Thu, 2015-05-07 02:59
Do you want to use the new Universal Theme for your existing applications? Then you should definitely have a look at the Universal Theme Migration Guide my colleagues Shakeeb Rahman and Tim Chambers have just published.
Categories: Development

Oracle MAF and WebSockets Integration - Live Twitter Stream

Andrejus Baranovski - Wed, 2015-05-06 23:26
Oracle MAF and WebSockets - I will describe how they work together. WebSockets is a protocol providing a full-duplex communication channel over a TCP connection. This channel is interactive (communication goes both ways) and we can send messages from the server to the client (the MAF application running on the device). There is no need to use push notifications; WebSockets provides JSON support and allows sending complex payload data. In a way it competes with REST, however REST is different in that the request is initiated by the client. WebSockets data is received automatically - there is no need for the client to trigger any event.

I have implemented sample MAF application with WebSockets integration, you can download it here - AltaMobileApp_v1.zip. Finance screen in the application contains MAF output text component. This component displays latest data received through WebSockets channel. Server side is listening for live Twitter Stream updates and sends each tweet location over WebSockets to the MAF application. See how it works in this screen recording:


WebSockets communication does not block MAF application functionality, it runs in a separate thread. The user can navigate between MAF screens and run different actions while WebSockets delivers data in parallel.

The sample application is based on two parts - a server side WebSockets implementation with a Twitter Stream listener and a client side MAF application with a WebSockets client.

The Twitter Stream is handled with the Twitter4J API; the JARs are included with the sample. You would need to provide your own Twitter account details; access keys can be retrieved for your account from the Twitter developer portal. Make sure to obtain these keys before running the sample application:


A new message from the Twitter Stream is received by the listener method - onStatus. I'm listening for all the tweets around the world related to the corporate keyword. Once there is a new tweet related to this topic, the onStatus listener will be notified. In the listener method, I'm retrieving the tweet location and initiating WebSockets communication:


The method in the WebSockets implementation - notifyClient - sends a text message to the client (JSON messages are supported as well):


The listener for the Twitter Stream is started automatically when the application is deployed. This is done through servlet initialisation:


On the client side, the MAF application is configured to receive automatic notifications through the WebSockets channel. The implementation of the WebSockets listener is very similar to regular ADF Faces - it is done through JavaScript. The MAF feature is registered with a JavaScript file:


The JavaScript contains all the required methods for WebSockets communication. Here we open the WebSockets channel with the connectSocket function and then listen for new messages/notifications with the onMessage method. My goal is to update MAF components with the new data received through WebSockets. For this reason, I'm invoking a Java method from the onMessage JavaScript function and passing the payload data (the recently received tweet location):


The invoked Java method - processWebSocketMessage - is responsible for updating MAF bindings with the new data. The Data Bindings class contains a standard MAF propertyListener implementation, which ensures data refresh on the MAF UI:


MAF output text on the UI displays recent data received through WebSockets channel:

EMC World 2015 - Day 3 at Momentum

Yann Neuhaus - Wed, 2015-05-06 18:53

In this post I would like to relay some advice around the upgrade to Documentum 7.2 that I got in the session from Patrick Walsh about "What's New, What's Next: EMC Documentum Platform".

With Documentum 7.2 the upgrade process is cleaner and there are fewer restrictions. For instance, more upgrade scenarios are documented, which can help us define the best upgrade path to use.

There was also a slide which listed the following points to take into account when we have to define the right time to upgrade the current installation.

- Features
- Costs
- Dates

Isn't there something missing?
From my point of view, there is at least one additional point to consider. When I do the impact assessment to decide whether and when to upgrade, I study the list of issues that have been fixed with the new version - whether or not we are impacted by them - but also which open issues (who knows an application without bugs?) come with it and whether they are acceptable.

Another helpful piece of information - which can give customers some insight - is the time it takes to complete a typical upgrade project.
Documentum reckons with 6 to 12 months for the planning (evaluation of the release, gathering business requirements, budget approval and so on) and 6 to 24 months for the implementation and testing.
Based on that, customers still using version 6.7 (End of Support is April 30, 2018) should think about upgrading to version 7.x.

To facilitate the upgrade, the client and platform do not have to be upgraded in one big bunch. The D6.x clients can be upgraded first and then the 6.7 platform.

Documentum also introduced "Phased Service Upgrades". For instance, we have the possibility to upgrade xPlore from version 1.3 to 1.5 in phase one and, a couple of months later, the platform from 6.7 to 7.2 in phase two.
Or vice versa, we start with the platform and later on we upgrade xPlore.
With this approach of de-coupled services, we have more flexibility and less downtime.

And now, last but not least, the aim for the future is to have no downtime at all during the upgrade. THIS would be wonderful!

 

Find Contents of RMAN backuppiece

Michael Dinh - Wed, 2015-05-06 18:14

RMAN backuppiece listings from OS

oracle@arrow:hawklas:/home/oracle
$ ll /oradata/backup/
total 216088
-rw-r-----. 1 oracle oinstall  1212416 May  5 11:06 DBF_HAWK_3130551611_20150505_hjq65thu_1_1_KEEP
-rw-r-----. 1 oracle oinstall 50536448 May  5 11:07 DBF_HAWK_3130551611_20150505_hkq65thu_1_1_KEEP
-rw-r-----. 1 oracle oinstall 39059456 May  5 11:07 DBF_HAWK_3130551611_20150505_hlq65thv_1_1_KEEP
-rw-r-----. 1 oracle oinstall  5529600 May  5 11:07 DBF_HAWK_3130551611_20150505_hmq65tie_1_1_KEEP
-rw-r-----. 1 oracle oinstall  1785856 May  5 11:07 DBF_HAWK_3130551611_20150505_hnq65tit_1_1_KEEP
-rw-r-----. 1 oracle oinstall    98304 May  5 11:07 DBF_HAWK_3130551611_20150505_hoq65tjd_1_1_KEEP
-rw-r-----. 1 oracle oinstall     2560 May  5 11:07 DBF_HAWK_3130551611_20150505_hpq65tjf_1_1_KEEP
-rw-r-----. 1 oracle oinstall  1343488 May  5 11:07 DBF_HAWK_3130551611_20150505_hqq65tjh_1_1_KEEP
-rw-r-----. 1 oracle oinstall  1212416 May  4 19:43 HAWK_3130551611_20150504_h9q647ee_1_1
-rw-r-----. 1 oracle oinstall 39051264 May  4 19:43 HAWK_3130551611_20150504_haq647ee_1_1
-rw-r-----. 1 oracle oinstall 50315264 May  4 19:43 HAWK_3130551611_20150504_hbq647ef_1_1
-rw-r-----. 1 oracle oinstall  5529600 May  4 19:43 HAWK_3130551611_20150504_hcq647em_1_1
-rw-r-----. 1 oracle oinstall  1785856 May  4 19:43 HAWK_3130551611_20150504_hdq647ep_1_1
-rw-r-----. 1 oracle oinstall   285184 May  4 19:43 HAWK_3130551611_20150504_hfq647ev_1_1
-rw-r-----. 1 oracle oinstall  1088000 May  4 19:43 HAWK_3130551611_20150504_hgq647ev_1_1
-rw-r-----. 1 oracle oinstall   280064 May  4 19:43 HAWK_3130551611_20150504_hhq647f0_1_1
-rw-r-----. 1 oracle oinstall 11075584 May  4 19:43 HAWK_c-3130551611-20150504-0e
-rw-r-----. 1 oracle oinstall 11075584 May  4 19:43 HAWK_c-3130551611-20150504-0f

Let’s find the backupset, and the contents of that backupset, for one of these backuppieces.

oracle@arrow:hawklas:/home/oracle
$ rman target /

Recovery Manager: Release 11.2.0.4.0 - Production on Wed May 6 17:02:57 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to target database: HAWK (DBID=3130551611)

RMAN> list backuppiece '/oradata/backup/HAWK_3130551611_20150504_hbq647ef_1_1';

using target database control file instead of recovery catalog

List of Backup Pieces
BP Key  BS Key  Pc# Cp# Status      Device Type Piece Name
------- ------- --- --- ----------- ----------- ----------
555     554     1   1   AVAILABLE   DISK        /oradata/backup/HAWK_3130551611_20150504_hbq647ef_1_1

RMAN> list backupset 554;


List of Backup Sets
===================


BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ --------------------
554     Full    47.98M     DISK        00:00:09     2015-MAY-04 19:43:20
        BP Key: 555   Status: AVAILABLE  Compressed: YES  Tag: TAG20150504T194309
        Piece Name: /oradata/backup/HAWK_3130551611_20150504_hbq647ef_1_1
  List of Datafiles in backup set 554
  File LV Type Ckp SCN    Ckp Time             Name
  ---- -- ---- ---------- -------------------- ----
  2       Full 1946389    2015-MAY-04 19:43:11 /oradata/HAWKLAS/datafile/o1_mf_sysaux_bg5n9c44_.dbf

RMAN>

The backuppiece is from a FULL database backup and contains the datafile for tablespace SYSAUX.
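The reverse lookup works as well. If you want to know which backupsets and backuppieces contain a given datafile or tablespace, generic RMAN LIST commands cover that (a quick sketch using the datafile from the listing above):

RMAN> list backup of datafile 2;
RMAN> list backup of tablespace SYSAUX;
RMAN> list backup summary;

The first two report every backupset holding a copy of datafile 2 (respectively the SYSAUX tablespace) together with the backuppiece names, while the summary variant gives a one-line-per-backupset overview.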


EMC World Las Vegas – Momentum 2015 third day D2 news

Yann Neuhaus - Wed, 2015-05-06 17:36

This was more a day of networking with EMC partner contacts and third-party software vendors. On the other side, I attended a session about D2 news and what is coming next.

 

EMC divided the D2 enhancements into 3 major themes.

 

The first was about productivity and a modern look and feel, with:

 

    • graphical workflow widget

    • drag and drop from D2 to Desktop

    • better browsing with enhanced facet navigation

    • multi-document support in workflows

    • faster content transfer with new D2-BOCS for distributed environments

 

The second was about information integrity, with:

    • more SSO implementation support, like Tivoli

    • folder import with inner documents as virtual document

 

And finally, about software agility, with:

 

    • ease of new user on-boarding with default configuration settings

    • PDF export of D2 configuration for multi environment comparison

 

I hope you enjoyed reading this summary of today at EMC world – Momentum 2015. Thanks for your attention.

Overcome User Adoption to Drive Sales

Linda Fishman Hoyle - Wed, 2015-05-06 09:36

A Guest Post by Neil Pridham (pictured left), Senior Director, CX Sales Applications, Global Sales Support, Oracle

Driving Sales Systems

The use of technology to drive sales organizations has been a focus for many sales leaders over the past twenty years. In that time, software vendors have struggled to balance business complexity with speed of implementation and change. Sales leaders have struggled to balance user adoption with the burden of data entry. We have seen, as a result, large numbers of CRM projects fail to deliver the promise. What many companies have ended up with is a glorified address book, diary, and list of opportunities.

So what can we do to address this?

A key inhibitor to successful technology use is user adoption. Most companies have spent time improving sales processes, driving sales performance, and increasing efficiency, but they have not really tackled the issue of user adoption. Without good user adoption of sales systems the real value from them is merely a dream. Good user adoption drives the data upon which the remainder (marketing, analytics, workflow, decision making, forecasting, win/loss, quoting, ordering etc) rely.

So is user adoption really that difficult?

You could argue that in the early days of Sales Force Automation (SFA) it probably was. In the early days of SFA there were no mobile devices, analytics was crude, and at that stage, having a single address book and diary was probably a major step forward for many sales organizations that still used paper-based call reporting. In those early days, many sales reps were simply not used to using technology to sell.

But we have moved on, and today, recording those things is simply commodity SFA. It is the nice bed in your hotel room. It is the three-year paint warranty on your car. We just take those things for granted. Solutions that allow you to simply record basic information are not delivering what a modern sales organization needs.

What drives user adoption?

Well, I would suggest that the following elements drive user adoption in the SFA world:

  • Simplicity
  • Mobility
  • Compliance and Gamification
  • Good Sales Management

Simplicity: In order to compel a sales team to use technology, it has to be simple, fast, and easy to use. We all know that reps want to be out selling and not keying in information. Let’s give them the software to help them do this. Let’s ensure they have access to all the information they need, when they need it, and ensure they feel that others are feeding the solution to make their life easier, not the other way around.

Mobility: Today there is no reason to stop reps from being almost entirely field based. From core SFA to quoting, pricing, contracting, forecasting, and communications; empower your reps to operate remotely, at speed, and successfully.

Compliance and Gamification: Increasingly sales organizations are under pressure, both internally and externally, to comply with procedure and/or legislation. Compliance can be mandated through software solutions using workflow, procedure, and gamification. Ensuring that a rep complies with lagging measures such as quota attainment, revenue, and invoices paid is key to hitting your sales numbers. Do this using Sales Performance (SPM) tools. Ensuring your reps comply with softer leading measures such as forecasts, quote quality, and discount management are key to your profitability and growth. Do this using Configure, Price and Quote (CPQ) tools.

Good Sales Management: This is the hard part. Good sales management is key to the success of sales solutions. A manager that can explain, motivate and continually drive the use and benefits of the solution will ensure success. Back away from this and the sales reps will happily return to their ways of working.

If you are looking to improve your sales organization, then look for a software vendor that can help you drive the user adoption of your systems. A vendor that can deliver the basic requirements (SFA1.0), but also the other key areas of Simplicity, Mobility, Compliance and Gamification. This will ensure your teams exhibit the behaviors you need to get the most from your SFA investment and hit your targets. Those elements, aligned with your Good Sales Management, will be the drivers to your sales success.

No space left on device...

Darwin IT - Wed, 2015-05-06 09:20
Today I ran into something curious that I saw a few weeks ago in a training I gave: the root filesystem ran full (Oracle Linux 6). At first I did not find anything that caused the problem, but the command 'df -k' indeed suggested a full root filesystem. Using 'du -sh /home/oracle' we found that that folder consumed an unreasonable amount of space. In my case today I found the same. It turns out that two hidden files were the problem:

[oracle@darlin-vce-soa ~]$ ls -al
...
-rw-------. 1 oracle oinstall 4488290304 May 6 16:09 .xsession-errors
-rw-------. 1 oracle oinstall 19752 May 4 16:47 .xsession-errors.old
[oracle@darlin-vce-soa ~]$ rm -rf .xsession-errors
[oracle@darlin-vce-soa ~]$ rm -rf .xsession-errors.old 
 
As you can see the .xsession-errors file is terribly large; in the training we found that it was the .old file. It turns out that these files are rolling error logs of the output of applications that use a graphical interface. In this case it logs, amongst others, the output of JDeveloper, and it may grow very large due to Java exceptions. So if JDeveloper regularly raises exceptions, you might want to keep an eye on these files.

You can safely remove those files to free up space. But if they have grown that big, you might first want to tail them to see what causes the problem, for example as sketched below.
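A generic sketch (not from the original session; the file names are the standard ones): inspect the tail of the log, check its size, and then truncate it in place rather than deleting it, so the running X session keeps a valid file handle:

[oracle@darlin-vce-soa ~]$ tail -n 50 ~/.xsession-errors   # look at the most recent messages
[oracle@darlin-vce-soa ~]$ du -sh ~/.xsession-errors       # confirm how much space it consumes
[oracle@darlin-vce-soa ~]$ : > ~/.xsession-errors          # truncate in place, keeping the file open for the session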

Another tip might be to remove old kernels: when upgrading to a new kernel, Oracle Linux keeps the old kernel files. You can find a description to remove those here.

Finding the Oracle Database Appliance Plug-in within #em12c

DBASolved - Wed, 2015-05-06 08:26

The Oracle Database Appliance (ODA) has been around for a few years now. It is a great, compact, and powerful machine for running a two-node Oracle Real Application Cluster (RAC). The adoption of the ODA has mostly been seen in medium-sized organizations that need a work horse but cannot afford the sticker price of an Oracle Exadata.

Just like all the appliances that Oracle puts out, there is a need to monitor these appliances from top to bottom. This is achieved by using Oracle Enterprise Manager 12c Plug-ins. Recently, Oracle let it be known that the ODA Plug-in has been released; however, it is not easily found by searching online. Hence the reason for this blog post…. :)

To find the ODA Plug-in, you basically need to download it from within the Self-Update area inside of Oracle Enterprise Manager 12c. In order to do this, you need to set your MOS credentials to access MOS.

Using Setup -> My Oracle Support -> Set Credentials

Once your MOS credentials are set, you can go to the Self-Update page and update the plug-ins for your Oracle Enterprise Manager (Setup -> Extensibility -> Self-Update).

From the Self-Update page, click the Check Update button.

After clicking the Check Update button, Oracle Enterprise Manager will kick off a job to update all the plug-ins in the software library. Once the job completes, you can look at the status of the job and see that the Oracle Database Appliance plug-in was downloaded successfully.

Now that the plug-in has been downloaded, you can go back to the Plug-in Page and deploy the plug-in to the agents that are running on the ODA targets (Setup -> Extensibility -> Plug-ins).

Listed under the Engineered Systems plug-ins, you will now see version 12.1.0.1.0 of the Oracle Database Appliance plug-in.

Now that the plug-in has been downloaded, it can be deployed to the required targets and configured (more on this later, hopefully).

Enjoy!

about.me: http://about.me/dbasolved


Filed under: OEM
Categories: DBA Blogs

Supporting a Day Against DRM with PacktPub

Senthil Rajendran - Wed, 2015-05-06 07:47
What is DRM?

DRM stands for “digital rights management,” a bit of technology that hardware and software manufacturers, publishers, and copyright holders use to control the way we use the devices and media that we own. The idea is to limit users’ ability to copy the content without permission, but DRM does much more: it shapes how people tinker with and share devices, software, music, movies, etc. they legally paid for. Have you ever unsuccessfully tried to copy music you “bought” from your computer to your iPhone? Attempted to download an ebook from Amazon only to discover it isn't “compatible” with your device? That’s DRM at work.

To celebrate the Day Against DRM, all eBooks and videos are available at a lower price with PacktPub. Please read here: http://bit.ly/1KgYlv6

Live Webcast with Oracle CIO, More: Introducing Documents Cloud Service

WebCenter Team - Wed, 2015-05-06 05:00

In case you missed it last week, we are gearing up for a live video webcast that brings together a powerhouse of executives offering a 360-degree perspective on Oracle Documents Cloud Service - an enterprise-grade cloud file sharing and collaboration service. Find out about the current state of the industry, the gaps this solution addresses, and what that means to both LoB users and IT within your enterprise. Hear directly from IDC's Program Vice President, Content and Digital Media Technologies, Melissa Webster on the current state of EFSS and why there is a need to broaden the requirements of a solution. Oracle Vice President Scott Howley will host the discussion with Oracle senior product management executives on product strategy, vision and the value it provides to LoBs like marketing. And don't miss Scott's discussion with Oracle CIO Mark Sunday on Oracle's strictest requirements for cloud solutions and its in-house use of Documents Cloud Service. Our customer, TekStream Solutions, will also share their take on the real-world requirements of an enterprise-grade cloud content sharing solution.

Register today for the live webcast and get your questions answered live by the executives. And if you are on social media, connect with us using #OracleDOCS

Webcast: Introducing Documents Cloud Service
Date/Time: Wednesday, May 13 at 10 am PT/1 pm ET
Your evite awaits.

See you online on May 13 at 10 am PT/ 1 pm ET.




getting started with postgres plus advanced server (4) - setting up the monitoring server

Yann Neuhaus - Wed, 2015-05-06 04:26

If you followed the first, second and third posts, the current ppas infrastructure consists of a primary database, a hot standby database and a backup and recovery server.

OpenTSDB and Google Cloud Bigtable

Pythian Group - Wed, 2015-05-06 02:15

Data comes in different shapes. One of these shapes is called a time series. A time series is basically a sequence of data points recorded over time. If, for example, you measure the height of the tide every hour for 24 hours, then you will end up with a time series of 24 data points. Each data point will consist of the tide height in meters and the hour it was recorded at.

Time series are very powerful data abstractions. There are a lot of processes around us that can be described by a simple measurement and the point in time the measurement was taken at. You can discover patterns in your website users’ behavior by measuring the number of unique visitors every couple of minutes. This time series will help you discover trends that depend on the time of day, day of the week, seasonal trends, etc. Monitoring a server’s health by recording metrics like CPU utilization, memory usage and active transactions in a database at a frequent interval is an approach that all DBAs and sysadmins are very familiar with. The real power of time series is in providing a simple mechanism for different types of aggregations and analytics. It is easy to find, for example, minimum and maximum values over a given period of time, or calculate averages, sums and other statistics.

Building a scalable and reliable database for time series data has been a goal of companies and engineers out there for quite some time. With ever increasing volumes of both human and machine generated data the need for such systems is becoming more and more apparent.

OpenTSDB and HBase

There are different database systems that support time series data. Some of them (like Oracle) provide functionality to work with time series that is built on top of their existing relational storage. There are also some specialized solutions like InfluxDB.

OpenTSDB is somewhere in between these two approaches: it relies on HBase to provide scalable and reliable storage, but implements its own logic layer for storing and retrieving data on top of it.

OpenTSDB consists of a tsd process that handles all read/write requests to HBase and several protocols to interact with tsd. OpenTSDB can accept requests over Telnet or HTTP APIs, or you can use existing tools like tcollector to publish metrics to OpenTSDB.
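As a quick illustration of what those interfaces look like (a generic sketch; the metric name, tag, timestamp and host are made up), a single data point can be pushed either over the Telnet-style interface on the default port 4242 or via the HTTP API of OpenTSDB 2.x:

# Telnet-style interface: put <metric> <unix_timestamp> <value> <tag=value> ...
echo "put sys.cpu.user 1436333416 42.5 host=web01" | nc -w 1 tsd-host.example.com 4242

# HTTP API: POST a JSON data point to /api/put
curl -s -X POST http://tsd-host.example.com:4242/api/put \
  -H 'Content-Type: application/json' \
  -d '{"metric":"sys.cpu.user","timestamp":1436333416,"value":42.5,"tags":{"host":"web01"}}'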

OpenTSDB relies on scalability and performance properties of HBase to be able to handle high volumes of incoming metrics. Some of the largest OpenTSDB/HBase installations span over dozens of servers and process ~280k writes per second (numbers from http://www.slideshare.net/HBaseCon/ecosystem-session-6)

There exist a lot of different tools that complete the OpenTSDB ecosystem, from various metrics collectors to GUIs. This makes OpenTSDB one of the most popular ways to handle large volumes of time series information and one of the major HBase use cases as well. The main challenge with this configuration is that you will need to host your own (potentially very large) HBase cluster and deal with all the related issues, from hardware procurement to resource management, dealing with Java garbage collection, etc.

OpenTSDB and Google Cloud Bigtable

If you trace HBase ancestry you will soon find out that it all started when Google published a paper on a scalable data storage called Bigtable. Google has been using Bigtable internally for more than a decade as a back end for web index, Google Earth and other projects. The publication of the paper initiated creation of Apache HBase and Apache Cassandra, both very successful open source projects.

The latest release of Bigtable as a publicly available Google Cloud service gives you instant access to all the engineering effort that was put into Bigtable at Google over the years. Essentially, you are getting a flexible, robust HBase-like database that lacks some of the inherited HBase issues, like Java GC stalls. And it’s completely managed, meaning you don’t have to worry about provisioning hardware, handling failures, software installs, etc.

What does it mean for OpenTSDB and time series databases in general? Well, since HBase is built on Bigtable foundation it is actually API compatible with Google Cloud Bigtable. This means that your applications that work with HBase could be switched to work with Bigtable with minimal effort. Be aware of some of the existing limitations though. Pythian engineers are working on integrating OpenTSDB to work with Google Cloud Bigtable instead of HBase and we hope to be able to share results with the community shortly. Having Bigtable as a back end for OpenTSDB opens a lot of opportunities. It will provide you with a managed cloud-based time-series database, which can be scaled on demand and doesn’t require much maintenance effort.

There are some challenges that we have to deal with, especially around the client that OpenTSDB uses to connect to HBase. OpenTSDB uses its own implementation of the HBase client called AsyncHBase. It is compatible on the wire protocol level with HBase 0.98, but uses a custom async Java library to allow for asynchronous interaction with HBase. This custom implementation allows OpenTSDB to perform HBase operations much faster than using the standard HBase client.

While the HBase API 1.0.0 introduced some asynchronous behavior using BufferedMutator, it is not a trivial task to replace AsyncHBase with a standard HBase client, because it is tightly coupled with the rest of the OpenTSDB code. Pythian engineers are working on trying out several ideas on how to make the transition to the standard client look seamless from an OpenTSDB perspective. Once we have a standard HBase client working, connecting OpenTSDB to Bigtable should be simple.

Stay tuned.

Categories: DBA Blogs