Feed aggregator

Webcast: Unleash the Power of Oracle Enterprise Manager to Build Private Clouds

Pankaj Chandiramani - Wed, 2012-04-25 02:05

There is an interesting webcast happening on 25th April on building a private cloud using EM, which covers everything from setup to metering and chargeback. Below are the details:



Unleash the Power of Oracle Enterprise Manager to Build Private Clouds


Cloud is undergoing rapid adoption. As IT organizations get flooded with requests for new resources, they are looking for large-scale automation for provisioning, monitoring and administering resources. In this highly technical session, you will learn from architects how the Oracle Enterprise Manager 12c Cloud Management Pack can enable IaaS (Infrastructure as a Service) and DBaaS (Database as a Service) involving self-service provisioning, monitoring and chargeback in your enterprise, and help you develop an enterprise private cloud quickly.

In this live session, we will demonstrate and review the following cloud-related topics:



  • Plan and Set Up the Cloud Environment

  • Enable Consolidation Cost Savings

  • Enable Self-Service Access

  • Monitor and Manage Cloud Operations

  • Meter, Chargeback, and Optimize Cloud Services


Click here to register for a live webcast.


Categories: DBA Blogs

Why Oracle President Mark Hurd won’t be at Collaborate

Andrews Consulting - Mon, 2012-04-23 11:41
At the first few Collaborate conferences after Oracle acquired PeopleSoft and JD Edwards, then-President Charles Phillips gave the opening keynote speech. A key message was that Oracle cared deeply about all of its application customers, especially those new to Oracle. I was fortunate enough to interview him on two of those occasions. His […]
Categories: APPS Blogs

Teradata Universe - Dublin

Donal Daly - Sun, 2012-04-22 13:29
Back in Dublin, for the Teradata Universe Conference. Looking forward to talking with customers and colleagues from around Europe.

We have a booth for Aster as part of the main Teradata booth on the Expo floor. I went to check it out today and get it ready. Looks good. You'll find me, Vic Winch or Mike Whelan there. Drop by to say hello. This is how it looks post-setup:





And from the outside, all quiet at the moment as it is Sunday afternoon.

On Wednesday, Duncan Ross and I will host the Analytics for Big Data SIG.

It is going to be a very busy week!

Oracle, APEX and Longevity!

Duncan Mein - Fri, 2012-04-20 13:46
I have just read the latest Statement of Direction from Oracle (download here) and there are some comforting statements from Oracle about their long-term commitment to Application Express.

For those like me who have been using APEX since the early days, this is good news, as it represents a very clear statement of intent that Oracle is serious about APEX as a key development framework.

Nice read!
Cheers Oracle.

Navigating the hype around Big Data to yield business value from it

Donal Daly - Fri, 2012-04-20 07:16
As the Big Data phenomenon continues to gather momentum, more and more organizations are starting to recognize the unexploited value in the vast amounts of data they hold. According to IDC, the Big Data technology and services market will grow to about $17 billion by 2015, seven times the growth rate of the overall IT market.

Despite the strong potential commercial advantage for business, developing an effective strategy to cope with existing and previously unexplored information could prove tough for many enterprises.

In many ways, this is because the term ‘Big Data’ itself is somewhat misleading. One definition is in terms of terabytes and petabytes of information that common database software tools cannot capture, manage and process within an acceptable amount of time. In reality, data volume is just one aspect of the discussion and arguably the most straightforward issue that needs to be addressed.

As Gartner points out, ‘the complexity, variety and velocity with which it is delivered combine to amplify the problem substantially beyond the simple issues of volume implied by the popular term Big Data.’ For this reason, ‘big’ really depends on the starting point and the size of the organization.

With so much being written about Big Data these days, it can prove difficult for enterprises to implement strategies that deliver on the promise of Big Data Analytics. For example, I have read many online articles equating "MapReduce" with "Hadoop" and "Hadoop" with "Big Data".

MapReduce is, of course, a programming model that enables complex processing logic expressed in Java and other programming languages to be parallelised efficiently, permitting execution on "shared nothing", scale-out hardware architectures; Hadoop is one implementation of the MapReduce programming model. There are other implementations of the MapReduce model, and there are other approaches to parallel processing which are a better fit for many classes of analytic problems. However, we rarely see these alternatives discussed.

Another interesting assertion I read, and am sometimes confronted with by customers new to Hadoop, is the positioning of Hadoop as an alternative to existing SQL-based technologies, one likely to displace – or even entirely replace – them. This can often lead to an interesting discussion, but you could summarize it by saying that Hadoop lacks important capabilities found in a mature and sophisticated data warehouse RDBMS, for example: query re-write and cost-based query optimization; mixed-workload management; security, availability and recoverability features; support for transactions; and so on.

There is, of course, a whole ecosystem springing up around Hadoop – including HBase, Hive, Mahout and ZooKeeper, to name just four – and some commentators argue that in time these technologies may extend Hadoop to the point where this ecosystem could provide an alternative to existing Data Warehouse DBMS technology. Possibly, but I would suggest that they have a long and arduous path to reach such a goal.

None of which is to say that Hadoop is not an extremely interesting and promising new technology – because clearly it is, and it has a role to play as enterprises embrace Big Data Analytics. There is evidence today, from leading e-business companies, that Hadoop scales well – and has a unit cost of storage that will increasingly make it possible for organizations to "remember everything", by enabling them to retain data whose value for analytics is as yet unproven.

Hadoop may become the processing infrastructure that enables us to process raw, multi-structured data and move it into a "Big Analytic" environment – like Teradata-Aster – that can more efficiently support high-performance, high-concurrency manipulation of the data, whilst also providing improved usability and manageability, so that we can bring this data to a wider audience. The final stage in this "Big Data value chain" will then see us move the insights derived from processing the raw multi-structured data in these "upstream" environments into the Data Warehouse, where they can most easily and most efficiently be combined with other data – and shared with the entire organization – in order to maximize business value.

Teradata continues to invest in partnerships with leading Hadoop distributors Cloudera and Hortonworks - and to develop and enhance integration technology between these environments and the Teradata and Teradata-Aster platforms.

Big Data's discovery-oriented nature, and its relative immaturity compared with traditional analytics, arguably mean that it doesn't sit well within the IT department, because requirements can never be fully defined in advance. Nor does it logically fall to business analysts used to traditional BI tools.

As a result, a new role has emerged: the data scientist, who is not a technologist but is also not afraid of leveraging technology. Rather than seeking an answer to a business question, this new professional is more concerned with what the question should be. The data scientist will look for new insights in the data, treating it as a visualization tool rather than a reporting tool.

In future, many believe that having this type of individual on staff will be key to generating maximum value from Big Data. In the meantime, the onus will invariably fall on the CIO to prepare for a changing Big Data landscape.

Customers can be assured that Teradata will continue to be their #1 strategic advisor for their data management and analytics. We continue to provide compelling and innovative solutions with Teradata Aster and Teradata IDW appliances. We will also work with best-in-class partners to provide choices in integrated solutions and reference architectures to help customers maintain competitive advantage with their data.

What is certain is that interesting times lie ahead, and that those enterprises that can successfully execute a Big Data strategy will gain competitive advantage from the valuable insights delivered by Big Data Analytics.

Signing back on...

Donal Daly - Fri, 2012-04-20 06:27
Wow... Has it been nearly 5 years since my last post...

Well, it has been an incredibly interesting and challenging time. I spent time with Informatica as VP of R&D for their Data Quality business; met some great people and learned a lot. I then moved to Pocket Kings (the company behind Full Tilt Poker) as Director of Database Systems – my first time in my career on the customer/IT side of the fence. It was challenging, but working with very bright and committed people you learn a lot. It is also where I got the Big Data bug, running a 30+ TB, 40+ node Aster Data nCluster. After Teradata acquired Aster Data about a year ago, I was lucky to be offered a position spearheading the adoption of Aster Data in EMEA, both internally within Teradata and with the Teradata customer base.

I am now living and working in the UK. I am still an Apple nut, still a Porsche nut, maybe still a nut period :-)

So what you can expect from this blog is my musings on Big Data and insight gleaned from working with customers to deliver business value from it. I will probably still post about my passion for Porsche too – I'm lucky to have a tweaked (540 bhp) 996 Turbo as my daily driver.

Enjoy and I welcome your feedback...

Announcing Oracle Optimized Solution for Oracle Unified Directory

Mark Wilcox - Fri, 2012-04-20 02:03
I'm happy today to be able to share that we released an optimized solution for Oracle Unified Directory. It's one of the first public announcements we can make of several cool & useful things we've been working on, and we have more coming from the identity & access team. Which reminds me – for my loyal readers here – since December 2011, besides covering directory, I am also on the Oracle Access Manager Suite team. My colleague Sylvain's post summed up nicely what it is:

Oracle Optimized Solution for Oracle Unified Directory is a complete solution – software and hardware engineered to work together. It implements Oracle Unified Directory software on Oracle's SPARC T4 servers to provide highly available and extremely high-performance directory services for the entire enterprise infrastructure and applications. The solution is architected, optimized, and tested to deliver simplicity, performance, security, and savings in enterprise environments. More details are available at http://www.oracle.com/us/solutions/1571310

While that post is short, it is dense with information. To explain it more simply: within Oracle we have a team (Optimized Solutions) who work with our product teams to show how our customers can get the best performance out of our hardware when running a specific software package. Instead of just giving you a generic tuning guide for our product, we've gone through the tuning steps and tested the configuration(s) for you. Besides giving you great performance, this means faster and simpler deployment, because you can reduce the time it takes to run a tuning exercise from scratch – we've already done most (if not all) of the work for you. Click here to learn more about our Optimized Solution for Oracle Unified Directory.

SQL Tuning Advisor - 101

Barry McGillin - Thu, 2012-04-19 02:53
The DBMS_SQLTUNE package is the interface for tuning SQL on demand. Its doc pages are here – have a look. There is a lot of work involved in setting up a tuning task, running it, reporting on it and then getting it to do something useful. We've wrapped all that into our SQL Tuning Advisor function, which means you don't need to start writing PL/SQL API calls to make this work. Stick in your dodgy query, click the advisor button and visualize the results.

Here's a look at how to do this.  Firstly, we need to grant a few permissions to our user. I'm doing this as sys.
 Then, for this demo, I want to clean out all the statistics on the tables I want to look at.
 Now, here's my initial query, getting total and mean salary grouped by departments.
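The screenshots from the original post don't survive in this feed, but those three steps can be sketched in plain SQL. The grants and statistics cleanup below are assumptions (the exact privileges aren't shown in the post – ADVISOR is the privilege the tuning API requires); the query itself is the one the advisor reports on later:

```sql
-- As SYS: allow the demo user to run the tuning advisor (assumed grant)
GRANT ADVISOR TO hrdemo;

-- Clean out the statistics on the tables we want to look at
BEGIN
  DBMS_STATS.DELETE_TABLE_STATS(ownname => 'HRDEMO', tabname => 'EMPLOYEES');
  DBMS_STATS.DELETE_TABLE_STATS(ownname => 'HRDEMO', tabname => 'DEPARTMENTS');
END;
/

-- The initial query: total and mean salary grouped by department
SELECT SUM(e.salary), AVG(e.salary), COUNT(1), e.department_id
FROM   departments d, employees e
GROUP  BY e.department_id
ORDER  BY e.department_id;
```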
When we then run the tuning advisor, a new tab appears on the worksheet with the main results from the tuning session. This tab has four main sections: the statistics which the advisor found on the objects in the query; changes to the profile in use; any indexes which need to be added; and finally, if needed, a restructuring section which may contain some SQL to help you restructure your query.


Finally, on the right-hand side, we can see the details of the tuning job. The DBMS_SQLTUNE package generates text which we organise into the tabs.




We can see from the output above that the statistics are not available and the tool is recommending refreshing statistics on the objects in the original query.

We can then go and analyze the tables to see if that helps.


We can then check that the stats are fresh; at the time of posting, they are current.



Now, going back to the tuning advisor and running it again shows some different stats.

Here's the final look at what the Tuning Advisor tells us at the end of the second run. This is the standard text output that comes from the tuning package:

GENERAL INFORMATION SECTION
-------------------------------------------------------------------------------
Tuning Task Name : staName14054
Tuning Task Owner : HRDEMO
Tuning Task ID : 9295
Workload Type : Single SQL Statement
Execution Count : 1
Current Execution : EXEC_9255
Execution Type : TUNE SQL
Scope : COMPREHENSIVE
Time Limit(seconds): 1800
Completion Status : COMPLETED
Started at : 04/19/2012 07:33:50
Completed at : 04/19/2012 07:33:50

-------------------------------------------------------------------------------
Schema Name: HRDEMO
SQL ID : 028hrurkuc6ah
SQL Text : SELECT SUM(E.Salary),
AVG(E.Salary),
COUNT(1),
E.Department_Id
FROM Departments D,
Employees E
GROUP BY E.Department_Id
ORDER BY E.Department_Id

-------------------------------------------------------------------------------
FINDINGS SECTION (1 finding)
-------------------------------------------------------------------------------

1- Restructure SQL finding (see plan 1 in explain plans section)
----------------------------------------------------------------
An expensive cartesian product operation was found at line ID 2 of the
execution plan.

Recommendation
--------------
- Consider removing the disconnected table or view from this statement or
add a join condition which refers to it.

-------------------------------------------------------------------------------
EXPLAIN PLANS SECTION
-------------------------------------------------------------------------------

1- Original
-----------
Plan hash value: 2187233893


---------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 11 | 77 | 35 (3)| 00:00:01 |
| 1 | SORT GROUP BY | | 11 | 77 | 35 (3)| 00:00:01 |
| 2 | MERGE JOIN CARTESIAN | | 2889 | 20223 | 34 (0)| 00:00:01 |
| 3 | TABLE ACCESS FULL | EMPLOYEES | 107 | 749 | 3 (0)| 00:00:01 |
| 4 | BUFFER SORT | | 27 | | 32 (4)| 00:00:01 |
| 5 | INDEX FAST FULL SCAN| DEPT_ID_PKX | 27 | | 0 (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

1 - SEL$1
3 - SEL$1 / E@SEL$1
5 - SEL$1 / D@SEL$1

Column Projection Information (identified by operation id):
-----------------------------------------------------------

1 - (#keys=1) "E"."DEPARTMENT_ID"[NUMBER,22], COUNT(*)[22],
COUNT("E"."SALARY")[22], SUM("E"."SALARY")[22]
2 - (#keys=0) "E"."SALARY"[NUMBER,22], "E"."DEPARTMENT_ID"[NUMBER,22]
3 - "E"."SALARY"[NUMBER,22], "E"."DEPARTMENT_ID"[NUMBER,22]
4 - (#keys=0)

-------------------------------------------------------------------------------

For doing this without SQL Developer, there are several things you need to do. I have a little graphic which looks at each of the steps needed to create a tuning job in the normal SQL*Plus interface. The main steps are creating the tuning task, executing it, and then interpreting the output.
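As a rough sketch of those manual steps – the task name here is illustrative, and the query is the one from the report above – the PL/SQL API calls look something like this:

```sql
-- Create and execute a tuning task for the problem query
DECLARE
  l_task VARCHAR2(64);
BEGIN
  l_task := DBMS_SQLTUNE.CREATE_TUNING_TASK(
              sql_text   => 'SELECT SUM(e.salary), AVG(e.salary), COUNT(1),
                                    e.department_id
                             FROM   departments d, employees e
                             GROUP  BY e.department_id
                             ORDER  BY e.department_id',
              scope      => DBMS_SQLTUNE.SCOPE_COMPREHENSIVE,
              time_limit => 1800,
              task_name  => 'demo_tuning_task');
  DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => 'demo_tuning_task');
END;
/

-- Fetch the same text report that SQL Developer organises into tabs
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('demo_tuning_task') FROM dual;
```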
Finally, this functionality is part of the SQL Worksheet in SQL Developer, which together with trace file editing, explain plan and autotrace, the hierarchical profiler and monitoring sessions adds more tools to the toolbox for finding issues with your code.

Using Hierarchical Profiler in SQL Developer

Barry McGillin - Wed, 2012-04-18 08:30
One of the features exposed since SQL Developer 1.5 is the hierarchical profiler.  There have been several blog posts about this, but none, I think, that really get into the detail of what you are seeing and how to do it.

The hierarchical profiler allows you to see what happens when your piece of PL/SQL is running.  More specifically, it allows you to see where it is spending most of its time, which means you can concentrate on hammering that down, rather than wondering where to start.

For today, I'm using a really basic reference table with a few rows in it to allow us to do something.  I have also created two procedures, one of which calls the other so we have some nested dependencies.

drop table hier_demo;
create table hier_demo (id number, name varchar2(200));
insert into hier_demo values (1, 'Barry');
insert into hier_demo values (2, 'Lisa');
insert into hier_demo values (3, 'Rebecca');
insert into hier_demo values (4, 'Katie-Ellen');

CREATE OR REPLACE
PROCEDURE PRINTER(
  NAME IN VARCHAR2 )
AS
BEGIN
  dbms_output.put_line(NAME);
END PRINTER;
/
CREATE OR REPLACE
PROCEDURE SHOW_PEEPS
AS
  CURSOR hiercur
  IS
    SELECT * FROM hier_demo;
  -- hierrec hiercur%type;
  -- type hiertab is table of hierrec%TYPE;
BEGIN
  FOR myrec IN hiercur
  LOOP
    dbms_output.put_line(myrec.name);
  END LOOP;
END;
/

Running the main procedure normally gives us some nice and simple output.


When we click on the profile button in the plsql editor, SQL Developer will check that you have the proper permissions and the associated table to use the profiler.

When you hit the profiler button, it first comes up with the run dialog to set the parameters for the stored procedure. Hitting OK on this dialog will run the stored procedure, and any issues it has will also pop up while you are profiling.


As this happens, the profiler first checks to see if there is a profiler log directory. If there is not one, it will prompt you to create one and grant the appropriate permissions.






Hitting OK on this makes the tool set up the directory for the profiler. To do this, it needs to run some SQL as SYS.


If you agree with all this, you are then prompted for the SYS password to actually create the tables for the profiler statistics in the local user, in this case HRDEMO.
Finally, the tool asks if it can set up a local set of tables for the profiler. We'll agree to this too and make sure the profile is captured.

Now, when we look at the profile tab of the PL/SQL editor, we should have a new editor with the results of the profile.


This shows us a breakdown of how the procedure actually executed, all the way down to the actual fetch which returned the rows. With a very slight change to the procedure – in this case adding the PRINTER procedure described above as a dependency – we can show the nesting in the profile going further down:
create or replace
PROCEDURE SHOW_PEEPS
AS
  CURSOR hiercur
  IS
    SELECT * FROM hier_demo;
  -- hierrec hiercur%type;
  -- type hiertab is table of hierrec%TYPE;
BEGIN
  FOR myrec IN hiercur
  LOOP
    PRINTER(myrec.name);
  END LOOP;
END;
/

This now shows us that we have another profile in the set, and clicking on it gives us the hierarchy of calls in the stored procedure's execution. The main point here is that we can now see the further level of indirection through the PRINTER procedure.


So that's all on the profiler for now. If you want to try this with your own tables, the easiest thing to do is to download the Oracle Developer Days VM from OTN. This particular post will make an appearance as part of a bigger series later, in which we will discuss tuning in general.
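For reference, the GUI steps above map roughly onto the DBMS_HPROF API that SQL Developer drives for you. This is a sketch under assumptions: the directory name and path are our own, and the profiler analysis tables (created by the dbmshptab.sql script) are already in place for the user:

```sql
-- As SYS: a directory object for the raw profiler output (name and path are illustrative)
CREATE OR REPLACE DIRECTORY hprof_dir AS '/tmp/hprof';
GRANT READ, WRITE ON DIRECTORY hprof_dir TO hrdemo;
GRANT EXECUTE ON DBMS_HPROF TO hrdemo;

-- As HRDEMO: profile one run of SHOW_PEEPS into a raw trace file
BEGIN
  DBMS_HPROF.START_PROFILING(location => 'HPROF_DIR', filename => 'show_peeps.trc');
  SHOW_PEEPS;
  DBMS_HPROF.STOP_PROFILING;
END;
/

-- Load the raw file into the analysis tables; the run id keys the results
SELECT DBMS_HPROF.ANALYZE(location => 'HPROF_DIR', filename => 'show_peeps.trc') AS runid
FROM   dual;
```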

InteliVideo

Bradley Brown - Mon, 2012-04-16 23:50
Over the last few weeks, as I was working on my "next big thing," I started reflecting on the good, bad and ugly of prior start-ups and companies I've been involved with.  I started writing about building a start-up from the ground up - topics from raising money, pivoting, turds on the table, giving up control, keeping the day job and several more posts.

After being in the services business for over 20 years, I've thought for years about making money while I'm sleeping.  When I created my first Android app, I was so excited when I went from making $1 a day to $5 to $10 to $100 and beyond.  I loved how Google (and Apple) had created a marketplace that allowed you to write an application, put it into a market, decide on your pricing, and collect 70% of whatever price you set.  I had fun experimenting with pricing, features, etc.  I loved hearing the feedback and ideas from my customers.  I was making money while I slept!

At TUSC we provided Oracle consulting services, but we also provided training and education.  Early on I did a lot of training for our customers.  In the mid 90s I did intro (2 days) and advanced (3 days) training on SQR (a report writer for Oracle) nearly 2 months in a row.  I've done 100s of presentations at Oracle (and other) users groups.  I've written 100s of articles for everything from user group newsletters to Oracle Magazine.  I consulted for 100s of companies over the years.  But...none of these helped me make money while I slept...I had to be there, usually in person.  Then came the web and I started doing webinars.  I recorded many of those webinars over the years and we published them on our website.  But I didn't monetize these webinars.

After many many hours of dreaming, working on the concepts, etc. I figured out my next big thing.  I decided to combine the concepts of the Android marketplace and webinars.  I decided to help people monetize their training and how to videos online.  In other words, if you create a webinar and you want to sell it, you can put it on the InteliVideo site, indicate how much you want to charge and we do the rest.  We sell the video, manage distribution to any device (mobile or browser), collect the money, deliver the content and you get 70% of the income.  It's a utility model.  No up front costs to you.  No storage costs.  No costs per byte.  No credit fees.  Simply upload a series of videos (i.e. a grouping you want to sell), tell us how much to charge (i.e. for a 1 day rental it's $20, for 3 days it's $50, etc) and sit back and wait for your monthly check to arrive.

When I first came up with the concept, I thought maybe I'd go out and develop a bunch of video content.  So I brainstormed about what I would develop first.  Oracle content?  I know people who know a lot about fishing, duck hunting, snowmobiling, wake surfing, home building, home fix up, and so many more topics...so I thought maybe I'd start there.  Then I realized that we didn't need more content, the world simply needs a video marketplace!

I'm also combining in something that was my first (paid) Internet project ever...developing a testing engine.  When I first learned Oracle Application Express (which I later co-authored a book on), I asked Larry (my co-author) to develop the testing application in ApEx.  Which he did.  I'm going to combine this functionality into the video side of the business as well.  There will be fully event-driven actions that you'll be able to take based on the answers.  For example, you can pause the video 3 minutes and 12 seconds in and ask a question.  If the viewer answers right, you can continue.  If they answer wrong, you can take the action you want to take - i.e. you might force them to start over or you might show them the answer in a small video.  Then at the end of the course, you can decide what you want to do based on their answers.  For example, as a part of continuing education, you might give them a certificate.  If it was a Myers-Briggs test that you were guiding them through, you might score them as an ESTJ and then up-sell them on videos about relationships for ESTJs.

What do I need from you?  Well, before long everything will be self service and I'll need your content.  So if you don't have content, please start working on it now.  If you already have content, I'd like you to be a beta customer!  I'll upload your videos, configure the site, develop the questions with you, etc.  You can either be part of the InteliVideo generic videos or you can have a white labeled site with your videos.  If you have a white labeled site, you can choose to include links for other partners (on which you'll eventually get a sales commission if your site sells them) or to exclude certain companies (i.e. your competitors).

So tell me...what would you like to see on the site?  There is a B to B side of this - i.e. my customers and the B to C side (i.e. selling videos to consumers).  What do you think I should tell each side?

You can view my SUPER alpha version of my "marketing" site, which I need to get all of the links working on...and you can view my generic and white labelled version of the site.

InteliVideo Marketing Site                                     InteliVideo Videos

By the way, please try out the social side of the site - i.e. enter a Facebook comment!  And, you can VOTE on the logo selection if you click on the link soon.


SQLPLUS multi-line quoted text

Mike Moore - Fri, 2012-04-13 14:49
This document demonstrates some of the problems I've encountered when using SQLPlus to insert text that has multiple lines. You can run the INSERT statements below, to demonstrate the specific problem which I state in the actual text being inserted.

-- Note that these problems occur only with SQL*Plus. Using TOAD (and, I assume, SQL Developer), there are no errors with any of the inserts.
I used the SQL*Plus 11g client on Windows 7. Different configurations may produce different results.

I'd be interested in hearing about any other anomalies you find with multi-line text.


CREATE TABLE TXT (str varchar2(500));


set echo on
set sqlblanklines off
set define off
spool multiline_text.log

-------- In these examples I am using Oracle 'Q' strings: aka Quoted Strings
-------- For example, instead of quoting a string like this:
--------   'Joe''s Garage is where the ''action'' is'
-------- I do it like this:
--------   q'~Joe's Garage is where the 'action' is~'
-------- In a quoted string, the character after the initial q'
-------- can be any character that is not in the actual string content.
-------- I've chosen to use the tilde character.

---------- THE BLANK LINE --------
INSERT INTO txt ( str) 
VALUES (q'~This shows it is NOT okay to have a 

blank line unless you 'set sqlblanklines on'.~');

set sqlblanklines on

INSERT INTO txt ( str) 
VALUES (q'~This shows it is okay to have a 

blank line after you 'set sqlblanklines on'.~');


---------- THE SLASH CHARACTER --------

INSERT INTO txt ( str) 
VALUES (q'~This shows it is NOT okay to have a 
/
slash by itself on a line~');


INSERT INTO txt ( str) 
VALUES (q'~This shows it is [NOT] okay to have a 
slash followed by a space character. You can't see it, but it's there.~');


INSERT INTO txt ( str) 
VALUES (q'~This shows it is okay to have a 
/slash as a starting character provided that the
slash is [not the only] character~');


INSERT INTO txt ( str) 
VALUES (q'~This shows it is okay to have a 
/ slash as a starting character
even if it is followed by a blank.~');



-----------THE DOT CHARACTER----------

INSERT INTO txt ( str) 
VALUES (q'~This shows it is [not] okay to have a 
.
dot on a blank line by itself.~');


INSERT INTO txt ( str) 
VALUES (q'~This shows it is [not] okay to have a 
.  
dot plus some trailing blanks on a line by itself.~');


INSERT INTO txt ( str) 
VALUES (q'~This shows it is okay to have a 
. dot plus more text
even when dot is the first character~');


---------THE SEMI-COLON CHARACTER ------

INSERT INTO txt ( str) 
VALUES (q'~This shows it is okay to have a; 

blank line provided you set sqlblanklines on~');

INSERT INTO txt ( str) 
VALUES (q'~This shows it is okay to have a; plus more

blank line provided you set sqlblanklines on~');


---------THE POUND SIGN CHARACTER ------


INSERT INTO txt ( str) 
VALUES (q'~This shows it [IS] okay to have a 
#
pound-sign on a line by itself~');

INSERT INTO txt ( str) 
VALUES (q'~This shows it is [not] okay to have a 
# plus more
text when the pound sign is the first character~');

INSERT INTO txt ( str) 
VALUES (q'~This shows it is  okay to have a 
a pound-sign # so long
as the #is not the first character~');

-- ;

rollback;
spool off
--------------------------------RESULTS----------------------

Microsoft Windows [Version 6.1.7601]
Copyright (c) 2009 Microsoft Corporation.  All rights reserved.

C:\Users\mmoore>sqlplus

SQL*Plus: Release 11.2.0.3.0 Production on Mon Apr 16 11:16:11 2012

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Enter user-name: / @mydb

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> set echo on
SQL> set sqlblanklines off
SQL> set define off
SQL> spool multiline_text.log
SQL>
SQL> -------- In these examples I am using Oracle 'Q' strings: aka Quoted Strings
SQL> -------- For example, instead of quoting a string like this:
SQL> --------   'Joe''s Garage is where the ''action'' is'
SQL> -------- I do it like this:
SQL> --------   q'~Joe's Garage is where the 'action' is~'
SQL> -------- In a quoted string, the character after the initial q'
SQL> -------- can be any character that is not in the actual string content.
SQL> -------- I've chosen to use the tilde character.
SQL>
SQL> ---------- THE BLANK LINE --------
SQL> INSERT INTO txt ( str)
  2  VALUES (q'~This shows it is NOT okay to have a
  3
SQL> blank line unless you 'set sqlblanklines on'.~');
SP2-0734: unknown command beginning "blank line..." - rest of line ignored.
SQL>
SQL> set sqlblanklines on
SQL>
SQL> INSERT INTO txt ( str)
  2  VALUES (q'~This shows it is okay to have a
  3
  4  blank line after you 'set sqlblanklines on'.~');

1 row created.

SQL>
SQL>
SQL> ---------- THE SLASH CHARACTER --------
SQL>
SQL> INSERT INTO txt ( str)
  2  VALUES (q'~This shows it is NOT okay to have a
  3  /
ERROR:
ORA-01756: quoted string not properly terminated


SQL> slash by itself on a line~');
SP2-0734: unknown command beginning "slash by i..." - rest of line ignored.
SQL>
SQL>
SQL> INSERT INTO txt ( str)
  2  VALUES (q'~This shows it is [NOT] okay to have a
  3  /
ERROR:
ORA-01756: quoted string not properly terminated


SQL> slash followed by a space character. You can't see it, but it's there.~');
SP2-0734: unknown command beginning "slash foll..." - rest of line ignored.
SQL>
SQL>
SQL> INSERT INTO txt ( str)
  2  VALUES (q'~This shows it is okay to have a
  3  /slash as a starting character provided that the
  4  slash is [not the only] character~');

1 row created.

SQL>
SQL>
SQL> INSERT INTO txt ( str)
  2  VALUES (q'~This shows it is okay to have a
  3  / slash as a starting character
  4  even if it is followed by a blank.~');

1 row created.

SQL>
SQL>
SQL>
SQL> -----------THE DOT CHARACTER----------
SQL>
SQL> INSERT INTO txt ( str)
  2  VALUES (q'~This shows it is [not] okay to have a
  3  .
SQL> dot on a blank line by itself.~');
SP2-0734: unknown command beginning "dot on a b..." - rest of line ignored.
SQL>
SQL>
SQL> INSERT INTO txt ( str)
  2  VALUES (q'~This shows it is [not] okay to have a
  3  .
SQL> dot plus some trailing blanks on a line by itself.~');
SP2-0734: unknown command beginning "dot plus s..." - rest of line ignored.
SQL>
SQL>
SQL> INSERT INTO txt ( str)
  2  VALUES (q'~This shows it is okay to have a
  3  . dot plus more text
  4  even when dot is the first character~');

1 row created.

SQL>
SQL>
SQL> ---------THE SEMI-COLON CHARACTER ------
SQL>
SQL> INSERT INTO txt ( str)
  2  VALUES (q'~This shows it is okay to have a;
ERROR:
ORA-01756: quoted string not properly terminated


SQL>
SQL> blank line provided you set sqlblanklines on~');
SP2-0734: unknown command beginning "blank line..." - rest of line ignored.
SQL>
SQL> INSERT INTO txt ( str)
  2  VALUES (q'~This shows it is okay to have a; plus more
  3
  4  blank line provided you set sqlblanklines on~');

1 row created.

SQL>
SQL>
SQL> ---------THE POUND SIGN CHARACTER ------
SQL>
SQL>
SQL> INSERT INTO txt ( str)
  2  VALUES (q'~This shows it [IS] okay to have a
  3  #
  3  pound-sign on a line by itself~');

1 row created.

SQL>
SQL> INSERT INTO txt ( str)
  2  VALUES (q'~This shows it is [not] okay to have a
  3  # plus more
SP2-0042: unknown command "plus more" - rest of line ignored.
  3  text when the pound sign is the first character~');

1 row created.

SQL>
SQL> INSERT INTO txt ( str)
  2  VALUES (q'~This shows it is  okay to have a
  3  a pound-sign # so long
  4  as the #is not the first character~');

1 row created.

SQL>
SQL> -- ;
SQL>
SQL> rollback;

Rollback complete.

SQL> spool off

Fine Tuning Incremental Updates using LOAD PRUNE

Keith Laker - Fri, 2012-04-13 10:57
If you are like most people, you probably use the LOAD_AND_AGGREGATE cube script that is automatically created by Analytic Workspace Manager. Fine tuning the update process simply involves filtering the fact table for new or changed rows.

With a little bit of effort, you can improve update times by writing your own cube processing script. You can also use MV log tables to automatically capture changes made to the fact table and use them as the data sources for cube updates.

AWM defines the LOAD_AND_AGGREGATE script and makes it the default script of the cube. If you don’t specify a different script, LOAD_AND_AGGREGATE is automatically used, as shown in the following example (note that the script references OLAPTRAIN.SALES_CUBE but does not include the USING clause).

BEGIN
DBMS_CUBE.BUILD('OLAPTRAIN.SALES_CUBE','C',false,4,true,true,false);
END;
/

This script will run the LOAD PARALLEL and SOLVE PARALLEL commands. What this means is that for each partition, the database will LOAD data from the fact table/view and then SOLVE (aggregate) data. If you have specified a value for parallel that is greater than 1, partitions will be processed in parallel (in the example above, 4 processes). AWM also provides the ability to set the refresh method (C, or complete, in the above example).

LOAD_AND_AGGREGATE is a good choice for a full build, but it might not be the best choice for an incremental update. If you are simply updating the cube with changes within a few recent partitions (e.g., yesterday or this month), the LOAD PRUNE command is probably better than LOAD PARALLEL.

LOAD PRUNE will first query the fact table or view to determine which partitions will have new data, using a SELECT DISTINCT. It will then generate LOAD commands only for those partitions that will have records loaded into them.
Let’s run through an update scenario. Make the following assumptions:

* The time dimension has months for 2008 through 2012 and the cube is partitioned by month. The cube will have 60 partitions.

* You have loaded data into the cube for January 2008 through March 2012.

* It’s now time to load data for April 2012. This data has been inserted into the fact table.

* You have mapped the cube to a view. For the April 2012 update, you have added a filter to the view so that it returns data only for April.

If you use the LOAD_AND_AGGREGATE script and choose the FAST SOLVE refresh method, the database will really do the following:

BEGIN
DBMS_CUBE.BUILD('OLAPTRAIN.SALES_CUBE USING (LOAD PARALLEL, SOLVE PARALLEL)','S',false,4,true,true,false);
END;
/

With LOAD PARALLEL, the database will process the LOAD command for each partition (all 60). Since it’s selecting from a view that filters out all but April 2012, 59 partitions will have no new or changed data. Although it doesn’t take long to load 0 rows and figure out that a SOLVE is not required, it still adds up if there are a lot of partitions.

With LOAD PRUNE, the database will determine that a LOAD is only required for April 2012. The LOAD step is skipped for all other partitions. While you will still see the SOLVE for all partitions, it doesn’t really do any work because no rows were loaded into the partition. An example using LOAD PRUNE follows.

BEGIN
DBMS_CUBE.BUILD('OLAPTRAIN.SALES_CUBE USING (LOAD PRUNE, SOLVE PARALLEL)','S',false,2,true,true,false);
END;
/

If you would like a script that walked through a complete example using the OLAPTRAIN schema, including the use of an MV log table to automatically capture changes to the fact table, send me an email william.endress@oracle.com with a link to this posting.
Categories: BI & Warehousing

Do You Need to Learn to Write SQL Queries?

Sue Harper - Fri, 2012-04-13 05:00
I'm an old timer... when I started in the industry years ago, I learned how to create Data Models and write SQL. Yes, yes, I admit, it wasn't the first thing, and it was after I dropped the punch cards on the floor at university...! Didn't we all? In fact I taught SQL for quite a few years before I moved on to teaching application development using the various tools offered by Oracle. By the time I left Oracle, nearly 20 years on, the developers were all learning Java, but we all still need SQL.

Generally, if we're putting data into a relational, SQL-based database and then querying the results, we probably need to know some SQL.  I know that's a pretty broad statement, but if you're an application developer and you need to get at data by writing ad hoc queries, it's key to be able to write good SQL.

Most of the people I know in the tech industry are old timers too; by that I mean they've been writing SQL for so long they can't remember when they started, and it's easy for them. But what about those new to the technology?  How do you get up to speed?  How do you learn SQL? Do you know the data model of the tables you are querying?  Do you have access to tools that help you look at the data model? What about tools that help you learn to write SQL?

Yup, you know what's coming - if you're starting out, we have a tool that can help you learn SQL.  The graphical interface in DBClarity Developer allows you to create what we call SQL Rules to interact with the database by building a visual structure of the query.  You don't need to know SQL.  The great news is that you can see the SQL created and reuse it once it's created, and the feedback we've had is that new users are learning SQL.

Why not download the product today and see what you can do? There are lots of short videos on the website to help you get started, and on Tuesday the 17th of April we're running a webinar showing you how to create SQL Rules.  Register and learn!

I'd love to hear your thoughts and feedback - I'm on twitter @MCGN_DBClarity or @SueHarps

Whitepaper : Enterprise Manager 12c Cloud Control Metering and Chargeback

Pankaj Chandiramani - Thu, 2012-04-12 22:52

Here is an interesting white paper on Metering and Chargeback using Enterprise Manager 12c. This covers Cloud, Database and Middleware.



http://www.oracle.com/technetwork/oem/cloud-mgmt/wp-em12c-chargeback-final-1585483.pdf

Categories: DBA Blogs

Looking to apply Bundle Patch 1 on Enterprise Manager 12c ? Here is a workbook to help you ....

Pankaj Chandiramani - Thu, 2012-04-12 22:45

Are you planning to apply Bundle Patch 1 for EM 12c?


If yes, check this workbook, which describes the complete flow.


Enterprise Manager Cloud Control Workbook for Applying Bundle Patch 1 (February 2012) and 12.1.0.2 Plugins [ID 1393173.1]


Applies to:
Enterprise Manager Base Platform - Version: 12.1.0.1.0 to 12.1.0.1.0 - Release: 12.1 to 12.1


Purpose
This document provides an overview of the installation steps needed to apply Bundle Patch 1 on the EM Cloud Control 12c Oracle Management Service (OMS) and Management Agent.

Categories: DBA Blogs

Hello to the world of EM

Pankaj Chandiramani - Thu, 2012-04-12 22:25

It's been a year since I moved to my new role as Product Manager for Enterprise Manager, and time flew, especially with activities like product betas, pre-launch activity, Oracle OpenWorld, the product launch, collateral creation (white papers, videos, demos, etc.) and hundreds of other things. Now I have finally decided to revive this blog and start sharing my experience with EM12c.

Categories: DBA Blogs

Remote Debugging with SQL Developer revisited.

Barry McGillin - Thu, 2012-04-12 16:41
As part of the development process, we all have to work out the bugs in our code.  For all of us who use SQL Developer, we know how to debug with it: compile for debug, set a breakpoint, and go.  However, people still get confused by what remote debugging is and how it works.  At its most basic, it allows us to run a procedure in one session and debug it from another.

So, let's say we have a simple function on the employees table like this.

create or replace
FUNCTION GET_EMP_NAME
(
ID IN NUMBER
) RETURN VARCHAR2 AS
name varchar2(100);
BEGIN
select first_name||' '||last_name into name from employees
where employee_id = ID;
RETURN name;
END GET_EMP_NAME;


We can compile this for debug in SQL Developer as normal.  Now, for remote debugging, we want to go to another session and run this function there.  For clarity, we can do it in SQL*Plus.  Before that, however, we need to switch on the remote debug listener so we can attach to a session.  So, first, right-click on your connection and choose Remote Debug, which will pop up a little window.

For our purposes, on localhost, we don't need to add any other information, but if you are connecting to another database on another machine, add the host name to the Local Address field and choose an appropriate port.  When you click OK on this, the Run Manager is shown with the listener details, as shown above.

Now, here we are in SQL*Plus; fire it up with our demo user and make sure to execute the command

 execute DBMS_DEBUG_JDWP.CONNECT_TCP('127.0.0.1',4000);

and then we can run our function as described above.

[oracle@localhost ~]$ sqlplus hrdemo/hrdemo

SQL*Plus: Release 11.2.0.2.0 Production on Thu Apr 12 19:16:37 2012

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

HRDEMO@ORCL> set serveroutput on
HRDEMO@ORCL> execute DBMS_DEBUG_JDWP.CONNECT_TCP('127.0.0.1',4000);

PL/SQL procedure successfully completed.

HRDEMO@ORCL> begin
2 dbms_output.put_line(get_emp_name(100));
3 end;
4 /


Once we run the anonymous block, the remote debugger kicks in and we stop at the appropriate breakpoint in the code.

On a last note, this works well in Application Express too: when you make a call to a function which you have remote debug switched on for, the debugger will break on the line, as long as you have debugging switched on in the developer toolbar.

Convert sqlplus spool output to CSV

Tyler Muth - Wed, 2012-04-11 15:47
I’ve been doing some work lately with sizing / capacity planning where the sources are a few of Karl Arao’s excellent AWR Mining scripts. Had I been reading his blog more I would have known he was going down a parallel path using Tableau as I was moving in the direction of R.  Anyway, I […]
Categories: DBA Blogs, Development

Extract from SalesForce, Load to Oracle

Mike Moore - Wed, 2012-04-11 15:28

I did it this way for 2 reasons.
1)These are the tools I know how to use.
2)There are things I am not allowed to do at work due to access rights. 


This is not intended to be a perfect document, it's just intended to be helpful.
No doubt, you will find problems which I did not encounter. If you do, please leave notes in the comments to help the next guy out.


---------------------OVERVIEW--------------------------------------------------------
NOTE: When I mean something literally I use the left-bracket and right-bracket
       character like this:
      [that's literally what "I" mean]


This is the result of dozens of failed attempts due to SQL*Plus idiosyncrasies in dealing with
multi-line text. If you don't have multi-line text, many of the following edits won't be applicable. 


The process I use is basically
  1) extract data from Salesforce into a file
  2) Use Excel to format salesforce data into Oracle INSERT statements
  3) edit all the things out of the file which SQLPLUS will find objectionable
   and put in a few additional script commands like COMMIT
  4) run SQLPLUS 
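For what it's worth, steps 2 and 3 can also be scripted instead of done by hand in Excel. Here is a rough Python sketch (the function and table names are mine, purely illustrative) that turns the extract file's rows into q-quoted INSERT statements, truncating long values much like the MID() formula does in the Excel steps. It assumes no column value contains a tilde.

```python
import csv

# Rough sketch of steps 2-3: turn Data Loader CSV rows into q-quoted
# INSERT statements. make_inserts is an illustrative name; the first
# CSV line must be the column headers, matching the Oracle table.
def make_inserts(csv_lines, table, max_len=3200):
    reader = csv.reader(csv_lines)
    cols = next(reader)                       # header row -> column list
    col_list = ", ".join(cols)
    stmts = []
    for row in reader:
        # q'~...~' quoting avoids doubling embedded single quotes;
        # truncate long values to stay under SQL*Plus limits
        vals = ", ".join("q'~%s~'" % v[:max_len] for v in row)
        stmts.append("INSERT INTO %s (%s) VALUES (%s);" % (table, col_list, vals))
    return stmts
```

Passing an open file object works too, e.g. `make_inserts(open('extract.csv'), 'ZD_OPPORTUNITY')`. The output still needs a clean-up pass for embedded semicolons, lone slashes and the like, as described in the editing steps below.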


This document assumes that the columns you extract from SalesForce will match exactly with the Oracle
DB table you want to load. 


Software you will need.
  1) The Salesforce Data Loader program
  2) TextPad (get it off the web. It's the only text editor that won't screw up your files.)
  3) Excel 2007 or later (maybe earlier versions would work too)
  4) An Oracle database.


-----------------------INSTRUCTIONS---------------------------------------------------------------      
1) Use the Salesforce Data Loader to create the extract .csv file (henceforth called 'the extract file').
   If you don't know how to do this, Google it. There is much better documentation on this than
   I could ever provide. 


2) At this point, I copy all of the column headers in the .csv file and use them to create an Oracle table. 
   I define all the columns as VARCHAR2(4000) just for ease, but you can use whatever matches the data 
   if you want.


3) Use Excel to open the extract file from step 1 and do SAVE AS sf_extract_edited.xlsx. In other words,
   immediately save it back under a new name so that you don't overwrite the extract file from step 1.
   It does not really matter what you name it, but it is a good idea to include the word "edited" so that
   you know that this is an edited version.


   3.1) Rename the first worksheet tab (at the bottom) to [raw_data]. 
   3.2) Do a FIND on [#NAME] using the following options ... 
        3.2.1) ctrl-f
        3.2.2) click on [Options] button on the FIND-dialog pop-up window.
        3.2.3) for [Look in:] dropdown, choose [Values]  <--IMPORTANT !
        3.2.4) make sure the entire sheet will be searched and click on Find Next
        3.2.5) if the find gets a hit, note that in the formula field you will
               probably see a value that starts with [=]
        3.2.6) on the formula field, insert a ['] before the [=] (put a single quote before the equal sign)
               this will prevent excel from interpreting the equal sign as a formula indicator. 
        3.2.7) repeat this process until no more hits on the FIND.
        
   3.3) Insert an empty column to the left of column A (this empty column will become the new column A)
        I'm doing this so that the cells line up with the cells of the next sheet which we are going to create


   3.4) create a new "Sheet", name the new sheet 'inserts'
   3.5) Create an insert statement that matches the table you will be loading then 
        position cursor at cell A2 and paste in the first part of your INSERT statement, for example:


INSERT INTO
 ZD_OPPORTUNITY (ID, ISDELETED, ACCOUNTID, RECORDTYPEID, NAME,
 DESCRIPTION, STAGENAME, AMOUNT, PROBABILITY, EXPECTEDREVENUE,
 CLOSEDATE, TYPE, NEXTSTEP, LEADSOURCE, ISCLOSED, ISWON, FORECASTCATEGORY,
 FORECASTCATEGORYNAME, HASOPPORTUNITYLINEITEM,
 PRICEBOOK2ID, OWNERID, CREATEDDATE, CREATEDBYID, LASTMODIFIEDDATE,
 LASTMODIFIEDBYID, SYSTEMMODSTAMP, LASTACTIVITYDATE,
 FISCALQUARTER, FISCALYEAR, FISCAL, SPLIT_TOTAL__C) VALUES (


 3.6) select cell B2 and enter the formula  [=CONCATENATE("q'~",raw_data!B2,"~',")] without
      the outer brackets, on the formula line
    3.6.1) If any text has more than 4000 characters, modify the above formula for that column to be ..
           [=CONCATENATE("q'~",  MID( raw_data!N11409,1,3200),"~',")]  
            excel and sqlplus count chars differently hence 3200 for a safe margin.


 3.7) drag the cell out to the right to cover all of the columns that have data in the raw_data sheet
 3.8) edit the right-most cell's formula so that your INSERT statement will end with [);] and not [,]
      3.8.1) i.e. change this: [=CONCATENATE("q'~",raw_data!CQ2,"~',")]
                      to this: [=CONCATENATE("q'~",raw_data!CQ2,"~');")]
 3.9) select the formula cells in row 2 and drag down to cover all rows in the raw_data sheet
 3.10) save your work
 4.0) open a new (empty) file using TextPad. You can get TextPad for free trial off the internet.
 4.1) copy-paste the entire excel content of the "inserts" tab to the empty TextPad window.
 4.2) insert the following at the top of the textpad window:
       set sqlblanklines on
       set define off
       spool opp.log
 4.3) insert SPOOL OFF at the bottom
 4.4) save and quit out of EXCEL, we won't need it any more after this.
 4.5) Save the TextPad file now, just for safety. Several edits will now need to be made to the TextPad file. 
   4.5.1) n/a
   4.5.2) Replace all ["INSERT] with [INSERT]  i.e. remove the quote mark
   4.5.3) Replace all [VALUES ("] with [VALUES (] i.e. remove the quote mark
   4.5.4) This one is a little tricky and requires using a regular expression for the Replace.
          The intent is that no text line should start with a pound-sign so we are inserting a blank.
          On the Replace dialog box, put a check mark in the Regular expression option, then do
          Replace all [^#] with [ #]
         Now un-check the Regular expression option.
   4.5.5) Replace all ["q'~] with [q'~]
   4.5.6) Replace all [~',"] with [~',] 
   4.5.7) Replace all [);"]  with [);]


   4.5.8) Next we will change all lines that end with [;], except those that end with [');], to end with [:]
          This is to pick up cases such as the following multi-line text example:


                 q'~A Multi-line text;
                    where a text line ends with semi-colon
                    will cause SQLPLUS to think the line has ended and you will get
                    ERROR:
                    ORA-01756: quoted string not properly terminated~'


          1) first replace all [');] with [````]. So they won't be picked up by our next replace.


          2) then replace all [; *$] with [:] using regular expression
                  This says, If the line ends with semi-colon, or semi-colon and a bunch of spaces,
                  replace it with a colon. If it is not acceptable for you to make slight changes
                  like this, then you will have to figure something else out.
                  WARNING: If you already coded COMMIT; it is now COMMIT:, go fix it. 


          3) change all  [````] back to [');]


          4) Replace all [^/$] with [ ] using regular expression
                 SQLPLUS does not like a text line that only contains a slash.


          5) Replace all [^\.$] with [ ] using regular expression
                 SQLPLUS does not like a text line that only contains a dot.
     
   4.5.9) optional - remove all tab characters 
          1) check the Hex box and replace [09] with []
   4.5.10) Save the file and close TextPad
 5.0) Get into SQLPLUS and run the INSERTs you just created.
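If you would rather script the edits in step 4.5 than click through them in TextPad, a Python sketch of the same substitutions, applied in the same order, might look like this (`clean_for_sqlplus` is my own name for it; test it against your own data before trusting it):

```python
import re

# Sketch of the step 4.5 clean-up as one function instead of manual
# TextPad replaces; clean_for_sqlplus is an illustrative name. It
# operates on the whole file contents as a single string.
def clean_for_sqlplus(text):
    text = text.replace('"INSERT', 'INSERT')        # drop Excel's leading quote
    text = text.replace('VALUES ("', 'VALUES (')
    text = re.sub(r'^#', ' #', text, flags=re.M)    # no line may start with #
    text = text.replace('"q\'~', "q'~")
    text = text.replace('~\',"', "~',")
    text = text.replace(');"', ');')
    # lines ending in ';' become ':', but statement-ending [');] survives:
    text = text.replace("');", "````")              # park the real terminators
    text = re.sub(r'; *$', ':', text, flags=re.M)
    text = text.replace("````", "');")              # restore them
    text = re.sub(r'^/$', ' ', text, flags=re.M)    # lone slash breaks SQL*Plus
    text = re.sub(r'^\.$', ' ', text, flags=re.M)   # so does a lone dot
    return text.replace('\t', '')                   # optional: drop tabs
```

As in the manual steps, a trailing semicolon inside your text data is changed to a colon, so skip or adapt that substitution if your data cannot tolerate it.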


One last thing: after loading the table, carefully check any columns that are supposed to have Salesforce dates. Excel sometimes turns these into its numeric serial dates. If you end up with Excel serial dates in your Oracle tables, you can translate them by doing:



SELECT   to_date(to_char(2415019+closedate),'J') as closedate


where closedate is the Excel numeric date value.
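As a sanity check on that constant (my own verification, not from the original post): for modern values, Excel's serial dates count days from an effective day zero of 1899-12-30, and Julian day 2415019 falls on that same date, which is why adding 2415019 and converting with the 'J' format works. The equivalent conversion in Python:

```python
from datetime import date, timedelta

# Excel serial dates count days from an effective epoch of 1899-12-30
# (for dates after Feb 1900, due to Excel's 1900 leap-year quirk).
# Julian day 2415019 falls on the same date, matching the SELECT above.
EXCEL_EPOCH = date(1899, 12, 30)

def excel_serial_to_date(serial):
    """Convert an Excel serial date number to a Python date."""
    return EXCEL_EPOCH + timedelta(days=serial)
```

For example, serial 40909 comes out as 2012-01-01, the same date Oracle returns for to_date(to_char(2415019 + 40909), 'J').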

Subscribe to Oracle FAQ aggregator