Feed aggregator

Managing Windows scheduled tasks - SCHTASKS output misleading

Nigel Thomas - Thu, 2012-01-26 07:04
Here's a little gem - found on Windows Server SP2 but still there on Windows Server 2008 R2 SP1 at least.

I wanted to write a little script to disable some scheduled tasks (for maintenance) and then, after a predetermined time, re-enable them. This is a common support problem, and I find I often complete the maintenance and forget to re-enable the tasks, which results in alarms going off - but maybe not until the start of the next working day.

Anyhow, Windows gives you (at least) two ways of interacting with scheduled tasks:

  1. SCHTASKS
  2. PowerShell and the PowerShellPack which includes a TaskScheduler module
Although PowerShell is an attractive option for scripting, PowerShellPack is poorly documented and the TaskScheduler module is a bit lacking. You can create, start, stop, register or get a task, but there doesn't seem to be a cmdlet for actually enabling or disabling a task.

So, back to using Groovy as a wrapper around SCHTASKS. All fine: we can use Groovy's execute() method to create a CMD process that calls SCHTASKS /query to get the task status. Here's an example using the easy-to-parse CSV format:

C:\>schtasks /query /fo csv /tn "\Apple\AppleSoftwareUpdate"
"TaskName","Next Run Time","Status"
"\Apple\AppleSoftwareUpdate","31/01/2012 11:15:00","Ready"

We can see that "Status" is in field 3 on the heading line, and its value is "Ready" on the data line. That's great.
To disable the task, we can then:

C:\>schtasks /change /disable  /tn "\Apple\AppleSoftwareUpdate"
SUCCESS: The parameters of scheduled task "\Apple\AppleSoftwareUpdate" have been changed.

Now let's check the status again:

C:\>schtasks /query /fo csv /tn "\Apple\AppleSoftwareUpdate"

"TaskName","Next Run Time","Status"
"\Apple\AppleSoftwareUpdate","Disabled",""

Yay! The task is indeed disabled - but look how the status has shifted into field 2, under "Next Run Time", presumably because there is no next run time while the task is disabled. A blank 3rd field value has been provided, but it is in the wrong place. Whichever way you list out the data, the error is still there:

C:\>schtasks /query /fo table /tn "\Apple\AppleSoftwareUpdate"

Folder: \Apple
TaskName Next Run Time Status
======================================== ====================== ===============
AppleSoftwareUpdate Disabled


C:\>schtasks /query /fo list /tn "\Apple\AppleSoftwareUpdate"

Folder: \Apple
HostName: PIERO
TaskName: \Apple\AppleSoftwareUpdate
Next Run Time: Disabled
Status:
Logon Mode: Interactive/Background

OK, now I know this, I can work around it. But it's another example of MS inconsistency (which no doubt is now firmly baked in for "backward compatibility" for ever and a day...).
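For anyone scripting around this, here is a minimal workaround sketch in Python rather than the Groovy wrapper described above (the function name and the normalization logic are mine, not from the original script):

```python
import csv
import io

def parse_task_status(csv_text):
    """Parse `schtasks /query /fo csv` output for a single task,
    working around the bug where a disabled task's status shifts
    into the "Next Run Time" field."""
    reader = csv.reader(io.StringIO(csv_text))
    header = next(reader)            # ["TaskName", "Next Run Time", "Status"]
    row = next(reader)
    task_name, next_run, status = row[0], row[1], row[2]
    # Workaround: for a disabled task, "Disabled" appears in field 2
    # and the real Status field (field 3) is blank.
    if next_run == "Disabled" and not status:
        status = "Disabled"
        next_run = None              # there is no next run time
    return task_name, next_run, status
```

In a real script you would feed this the captured stdout of `schtasks /query /fo csv /tn "<task>"`.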

tnsping without the TNSNAMES.ORA entry

Peter O'Brien - Wed, 2012-01-25 11:04
It isn't immediately obvious from the tnsping documentation, but it is possible to perform the ping without having the Transparent Network Substrate configuration (i.e. a net service name) in place on the client for the database in question.

$ORACLE_HOME/bin/tnsping <hostname>:<port>/<sid>

Oracle Service Bus Cookbook

Marc Kelderman - Wed, 2012-01-25 05:37
Former Oracle colleagues and business partners wrote a nice cookbook on the usage of the Oracle Service Bus. The book is full of examples and guides you through the workings and usage of the Oracle Service Bus. It is very technical and useful for developers who just want to start with OSB. Experienced developers will use this book for the complete examples of the different technologies used with OSB, such as JMS and EJB.

When you use this book, it is expected that you understand the concepts of XML, XSLT, WSDL, Web Services, JMS and SOAP, and have basic knowledge of SQL. These are not explained in the book, which is an advantage! Refer to w3schools for such things.



You can obtain it via Packt Publishing.

Some subjects I miss in the book: best practices on exception handling, throttling, and deployment.

The examples in the book are based on Oracle Service Bus patch set #3, but can also be used on top of patch set #4. That release has been available since August 2010; why didn't the authors use this version? I expect that the examples in the book can also be applied to the upcoming patch set release #5.


Resumes & Job Objectives

Jeff Hunter - Tue, 2012-01-24 17:11
I've been reviewing a lot of resumes lately.  Please tell me, what is the purpose of stating your "Job Objective"?  Isn't it implied that your objective is to find a new job, specifically my job? I assume you're dynamic and technical and that your vast expertise will help my company conquer the world. Also, what is the purpose of summarizing your experience on the first page and then giving me eight

Kscope12 - Schedule is Published

Look Smarter Than You Are - Mon, 2012-01-23 01:08
Waterpark at the JW Marriott Hill Country

2012 will be my final year as conference chair for the Kscope conference.  In case you haven't heard elsewhere, in 2012 it's June 24-28 in San Antonio, TX at the gorgeous JW Marriott Hill Country.  The schedule has been finalized, published, and (I'll humbly admit this since I really had very little to do with it) it's the best schedule I've seen for a conference in my memory.  Yes, historians, the schedule is better than the last Hyperion Solutions conference because there are no marketing sessions (beyond one timeslot for clearly marked vendor sessions) and the content is deep and not just broad.

It's also not dominated by the software vendor (unlike Solutions).  When the maker of the software speaks (Oracle, in our case), it's because they're asked to speak and on topics we care about.  On the subject of Hyperion, for instance, Oracle is hosting an entire day-long symposium on what's going to be released in the future for Hyperion, Essbase, and OBIEE.  It's led by product development and not the Oracle product marketing guys.

One of the expansion areas this year is BI and EPM, where they're adding more business and introductory content.  Here are all the dedicated BI/EPM tracks for Kscope12, amounting to over 150 sessions (click on the name of each to get a page about that track):
- Business Intelligence.  This track is led by some of the best OBIEE (and other Oracle BI product areas) people in the business.  The track has been expanded this year as the importance of BI has grown tremendously within Oracle.
- Essbase.  They have over 50 sessions all on Essbase this year.  This is more than any other conference in the world.  This track will cover intermediate to advanced Essbase sessions you won't get at conferences like Collaborate, Connection Point, or OpenWorld.
- Essbase Beginner.  This is a new track that allows people who are just getting started in the world of Hyperion to get some introductory training from the best in the business.
- Hyperion Applications. Hyperion Planning, HFM (Hyperion Financial Management), Hyperion Strategic Finance, and all the other Hyperion applications finally get a track of their own... and it has over 50 sessions dedicated to the Hyperion applications.  Like the Essbase track, this makes it the largest Hyperion application track of any conference in the world.
- Hyperion Business Content. For the first time in 2012, we are adding a track devoted to business users.  If you're a director, manager, VP, controller, power user, or any type of person who primarily uses or manages Hyperion/Essbase instead of implementing it, you finally have a place to turn.  Since Solutions ended in 2007, true Hyperion or EPM business users haven't had dedicated content at any conference.  Collaborate tried (and, no offense, failed).  OpenWorld missed dramatically by assuming most users were either CFOs or users with hard-core IT backgrounds.  Business people, welcome to Kscope.

In addition to those 150+ sessions on Business Intelligence and Enterprise Performance Management, there are other tracks serving the non-BI/EPM community:

If you haven't registered for the conference yet, I will save you $100.  If you've already registered, it's too late.  When you register, put in promo code IRC (it stands for interRel Consulting) and it'll take $100 off whatever the prevailing rate is.  Consider that my gift for you reading this far in the blog (for which $100 is not nearly enough, I'm sure you're thinking).
Categories: BI & Warehousing

Hyperion Solutions Roadshow to Denver

Look Smarter Than You Are - Fri, 2012-01-13 13:07
Hyperion Solutions Roadshow to Denver - Agenda

I just finished booking my travel to Denver for the big Hyperion event on the 24th at the Hyatt Regency (downtown by the convention center).  It's the closest Denver has come to a HUG (Hyperion User Group) meeting since Hyperion got acquired 5 years ago (wow, it's hard to believe Hyperion was acquired in 2007).  Oracle and interRel are putting on a 5+ hour event split across two educational tracks.


The first track is an introductory track that introduces some products and also covers what's new in Oracle EPM/BI 11.1.2:
  • What’s New in Oracle EPM 11.1.2.1 and OBIEE 11g: A Customer Story with Catholic Health Initiatives
  • Taking Control of Your Hierarchies with Data Relationship Management 
  • Quick Start to Hyperion Financial Close Solutions
The second track is for people that have more intermediate to advanced experience with Hyperion:
  • Hyperion Financial Reporting: Top 10 Tips & Tricks
  • Thinking Outside the Cube: Non-Financial Applications of Oracle Essbase
  • 10 Reasons Why You Don’t Have to ‘Code’ or ‘Customize’ Hyperion Planning
I will be giving some of the sessions, Essbase expert and fellow Oracle ACE Director, Glenn Schwartzberg, will be delivering some others, and Oracle and Catholic Health will be splitting the rest.  I'm most excited that Toufic Wakim (one of the greatest Product Development guys in the EPM/BI business unit at Oracle) will be delivering the keynote to start off the day.  He'll be talking about the future of Hyperion in an interactive discussion.  Among other things, Toufic is responsible for development of Smart View and the classic Essbase Excel Add-In, and you've seen how much those products have evolved recently under Toufic's tutelage.  For anyone that's had a chance to hear Toufic speak, his sessions are always hugely attended, hilarious, and full of information.  I will actually be attending his keynote and taking notes (and hopefully, blogging whatever we're allowed to publicly restate).  


Throughout the day, we'll be having networking time, and at the end of the day we're going to have a group dinner and then go to the Colorado Avalanche game after.  I think the Avs play hockey (it's a Canadian ice sport played with sticks, I think), but I'm primarily going to the game to meet the local Rocky Mountain Hyperion users.  It's time that the users get back together and form a community.  If you're an Oracle client and can fly in on January 24th, send an e-mail to Danielle White and she'll send you more information on registering.  Flights in and out of Denver are cheap and the event is free, so I hope to see you there.
Categories: BI & Warehousing

Exalytics - Pricing Has Been Announced

Look Smarter Than You Are - Thu, 2012-01-12 13:24
The official catchy Oracle name is "Exalytics In-Memory Machine X2-4", which come to think of it is not very catchy but does sound techie.  Larry Ellison announced Exalytics at OpenWorld 2011 to great fanfare and few details.  In a nutshell, it's Essbase, OBIEE, and TimesTen running in-memory on a really powerful server.  How powerful?  40 Intel cores (4 Intel Xeon E7-4800 processors with 10 cores each), a terabyte of RAM, an InfiniBand backbone (40 Gb/s when talking to Exadata), two 10 Gb/s Ethernet ports for connecting to non-Exadata sources, and 3.6 TB of hard disk.  Imagine Essbase running fully in memory with Ethernet speeds so fast it's like you're running Essbase locally (subject to the speed of your actual corporate network, of course).

It's an exciting development for those people who want to make BI virtually real-time.  There's even a slightly modified front-end on the OBIEE side of things to make queries a more interactive "speed of thought" activity.  If you want to make Essbase even faster, this is the solution for you.  Early benchmarks have been all over the map (I've seen 5 times improvement all the way up to 80 times improvement), but suffice it to say that once you've tuned your Essbase cubes for running in-memory, you'll be looking at a five-fold improvement at the bare minimum.  If you want to learn more, Oracle has an in-depth whitepaper at:

Various rumors have leaked out on the pricing for Exalytics, but it's now been finalized and posted on the Oracle website. While there are a few places where you can find this on the web this morning (including the actual PDF of the pricing from Oracle), the best summary I've read comes from Chris Kanaracus at IDG.

Here are the pricing highlights:
  • Hardware: $135,000
  • Processor Licenses of TimesTen: $34,500
  • Named User Licenses of TimesTen: $300
  • Processor Licenses of BI Foundation Suite: $450,000
  • Named User Licenses of BI Foundation Suite: $3,675
Some additional points:
  • Annual Maintenance is the typical 22% of net.
  • Licenses of TimesTen and BI Foundation Suite must be equal (if I'm reading a footnote on page 8 of the price list correctly).
  • BI Foundation Suite includes Essbase, OBIEE, and Oracle Strategy & Scorecard Management.  The pricing above is the current pricing for BI Foundation Suite (technology price list, page 5).
  • Processors must be licensed for every core, meaning full list price at processor licensing for every core on the box is almost $20,000,000 (though the article points out that Oracle would probably drop that as much as 70%). That's still a lot of money, so I foresee most companies going with the named user license.
  • Oracle will probably discount named users as well. Assume ~50% discount on these (though Chris Kanaracus points out that it can go as high as 70% for large deals). Hardware, following Oracle's traditional appliance discounting, will be discounted at most 25%.
Following the math, list price for 100 users (the minimum you're allowed to buy) would be about:
  • Hardware: $135,000
  • Software: $397,500
  • List Total: $532,500
  • Discount: $232,500 (25% hardware, 50% software)
  • Net Total: $300,000
  • Maintenance: $66,000 (due on signing for 1st year)
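As a sanity check, the list math above can be reproduced with a small Python sketch (illustrative arithmetic only; the prices and discount rates are the ones quoted above):

```python
users = 100                      # minimum purchase quoted above
hardware = 135_000
timesten_per_user = 300          # named-user list price, TimesTen
bi_foundation_per_user = 3_675   # named-user list price, BI Foundation Suite

software = users * (timesten_per_user + bi_foundation_per_user)
list_total = hardware + software
discount = 0.25 * hardware + 0.50 * software   # 25% hardware, 50% software
net_total = list_total - discount
maintenance = 0.22 * net_total                  # 22% of net, per year

print(software, list_total, discount, net_total, maintenance)
```

Running it reproduces the $397,500 / $532,500 / $232,500 / $300,000 / $66,000 figures above.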
It's expensive, to say the least, but keep in mind that list for 100 users of just Essbase is $290,000 and this gives you some great hardware, Essbase, OBIEE, and TimesTen with everything pre-installed and configured (reducing your infrastructure costs).  I don't know what Oracle will do if you already own licenses of BI Foundation Suite.  My guess is (and I don't work for Oracle) that they won't make you pay for it again, but you'll at least have to pay for the full hardware and TimesTen.



Before I leave the subject of Exalytics, I have to point out just how worried SAP is about Exalytics competing with their HANA solution.  SAP's Sanjay Poonen (President, Corporate Officer of Global Solutions at SAP) wrote one of the worst attack pieces I've ever read right after Exalytics was announced.  To summarize his point: Essbase is an old, dying OLAP technology that's been around for 20 years and is therefore worthless.  First of all, Sanjay, the relational database has been around a lot longer than that, and no one is saying that RDBMSes are going away.  But my main problem with his article is that, if you take him at face value, he has no idea about Essbase beyond 10-year-old bad competitive intelligence.  To quote from the article he paid to post on Forbes.com:
Essbase even with all its “optimization” cannot efficiently run in-memory – you still have to do pre-calculations and pre-aggregates, with no ability to do calculations on the fly. You’d have to limit how far the Essbase calculations propagate to ensure performance doesn’t blow up, and insert operations force the indexes in the database to be rebuilt, thus ruining performance...
Um, not to imply that no one fact-checked your essay, Mr. Poonen, but you're talking about Essbase Block Storage (the 20-year-old technology - which most would take to mean it's more reliable than something released in the last 2 years).  Essbase Aggregate Storage (created about 6 years ago) was created to solve all of these problems.  It's a fundamentally different architecture from Essbase Block Storage: it doesn't need to be aggregated, it doesn't need to be pre-calculated, and it does all formulas and calculations on the fly.  There is no separate index that needs to be rebuilt.  Basically, all the problems you're listing (forgetting that there are many things Essbase Block Storage does better than any OLAP technology out there) apply only to the Essbase Block Storage engine.

I would forgive Sanjay Poonen for just using out-dated information under the excuse that he doesn't have access to Essbase directly, but a simple Google search takes you to the Essbase Wikipedia page where it defines Essbase Aggregate Storage:
Although block storage effectively minimizes storage requirements without impacting retrieval time, it has limitations in its treatment of aggregate data in large applications, motivating the introduction of a second storage engine, named Aggregate Storage Option (Essbase ASO) or more recently, Enterprise Analytics. This storage option makes the database behave much more similarly to OLAP databases like SQL Server Analysis Services.  Following a data load, Essbase ASO does not store any aggregate values, but instead calculates them on demand.

That text has been on Essbase's Wikipedia page for a few years, so the only conclusion I can draw is that either Sanjay doesn't know how to use Google, or he was blatantly ignoring the facts.  Assuming he's not a moron, SAP must be very afraid of Exalytics to put this piece together and hope no one pointed out how fundamentally errant the whole discussion is.  I don't have time to point out every one of the wrong things in his article, but if you wish to comment on his article, visit here, and feel free to correct anything you disagree with.

And just in case Sanjay thinks I'm not willing to stand behind what I write, I challenge him to a cube build-off.  Let's get together and put whatever cube technology SAP is pushing today (SAP BW?  SAP BIW?  Business Objects?  HANA?) up against Essbase.  You and I can jointly benchmark cube build time, query time, calculation time, whatever you want, and we'll both jointly publish the results.  If you're not afraid of how the results will come out, call my office at 01-972-735-8716.  Ask for Edward Roske and say it's Sanjay Poonen calling.  I'll make sure my receptionist knows to forward your call to my cell anywhere I am in the world.  I look forward to hearing from you.

When does Exalytics release?
Exalytics should be generally available soon, but it has to wait until, among other things, Essbase 11.1.2.2 comes out since they're tweaking Essbase to run better in-memory.  If I had to guess, I'd say before the end of Oracle's fiscal year (May 2012).  Exalytics will continue to make Oracle Essbase and OBIEE a factor to be reckoned with going forward.  I'm told there's a waiting list for the first Exalytics boxes to come off the line, so call your Oracle rep now if you're interested.
Categories: BI & Warehousing

Oracle NoSQL Database Performance Tests

Charles Lamb - Thu, 2012-01-12 05:40

Our colleagues at Cisco gave us access to their Unified Computing and Servers (UCS) labs for some Oracle NoSQL Database performance testing.  Specifically, they let us use a dozen C210 servers for hosting the Oracle NoSQL Database Rep Nodes and a handful of C200 servers for driving load.

The C210 machines were configured with 96GB RAM, dual Xeon X5670 CPUs (2.93 GHz), and 16 x 7200 rpm SAS drives.  The drives were configured into two sets of 8 drives, each in a RAID-0 array using the hardware controller, and then combined into one large RAID-0 volume using the OS.  The OS was Linux 2.6.32-130.el6.x86_64.

Cisco 10GigE switches were used to connect all the machines (Rep Nodes and load drivers).

We used the Yahoo! Cloud System Benchmark as the client for the tests.  Our keysize was 13 bytes and the datasize 1108 bytes (that's how our serialization turned out for 1K of data).  We ran two phases: a load, and a 50/50 read/update benchmark.  Because YCSB only supports a Java integer's worth of records (2.1 billion), we created 400 million records per NoSQL Database Rep Group.  The "KVS size" column shows the total number of records in the K/V Store followed by the number of rep groups and replication factor in ()'s.  For example, "400m(1x3)" means 400m total records in a K/V Store consisting of 1 Rep Group with a Replication Factor of 3 (3 Replication Nodes total).

The clients ran on the C200 nodes, which were configured with dual X5670 Xeon CPUs and 96GB of memory, although really only the CPU speed matters on that side of the equation since they were not memory or IO bound.  Typically, we ran with 90 client threads per YCSB client process.  In the table below, the total number of client processes is shown in the "Clients" column, and at 90 threads/client (in general), the total client threads is shown in the "Total Client Threads" column.

The Oracle NoSQL Database Rep Node cache sizes were configured such that the B+Tree internal nodes fit into memory, but the leaf nodes (the data) did not.  Specifically, we configured them with 32GB of JVM heap and 22GB of cache.  Therefore, the 50/50 Read/Update results show a single I/O per YCSB operation. The Durability was the NoSQL Database recommended (and default) value of no_sync, simple_majority, no_sync. The Consistency that we used for the 50/50 read/update test was Consistency.NONE.

Insert Results

KVS size   | Clients | Total Client Threads | Time (sec) | Throughput (inserts/sec) | Insert Avg Latency (ms) | 95% Latency (ms) | 99% Latency (ms)
400m(1x3)  | 3       | 90                   | 15,139     | 26,498                   | 3.3                     | 5                | 7
1200m(3x3) | 3       | 270                  | 16,738     | 71,684                   | 3.6                     | 7                | 11
1600m(4x3) | 4       | 360                  | 17,053     | 94,441                   | 3.7                     | 7                | 18

50/50 Read/Update Results

KVS size   | Clients | Total Client Threads | Total Throughput (ops/sec) | Avg Read Latency (ms) | 95% Read (ms) | 99% Read (ms) | Avg Update Latency (ms) | 95% Update (ms) | 99% Update (ms)
400m(1x3)  | 3       | 30                   | 5,595                      | 4.8                   | 13            | 50            | 5.6                     | 13              | 52
1200m(3x3) | 3       | 270                  | 17,097                     | 4.0                   | 13            | 53            | 5.7                     | 15              | 57
1600m(4x3) | 4       | 360                  | 24,893                     | 4.0                   | 12            | 43            | 5.3                     | 14              | 51


The results demonstrate excellent scalability, throughput, and latency of Oracle NoSQL Database.
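As a rough illustration of that scalability claim, dividing the insert throughput by the number of rep groups (numbers taken from the Insert Results table above) shows per-group throughput staying roughly flat as the store grows:

```python
# Insert throughput per replication group, from the Insert Results table:
# rep groups -> total inserts/sec
results = {1: 26_498, 3: 71_684, 4: 94_441}

for groups, throughput in results.items():
    per_group = throughput / groups
    print(f"{groups} rep group(s): {per_group:,.0f} inserts/sec per group")
# Per-group throughput stays in the ~24k-26k range, i.e. near-linear scaling.
```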

I want to say "thank you" to my colleagues at Cisco for sharing their extremely capable hardware, lab, and staff with us for these tests.


Oracle Database 10.2 De-Supported

Tyler Muth - Wed, 2012-01-11 09:54
OK, that’s an alarmist title and is not really true. However, I feel this topic is important enough to warrant the title. I’ve talked to a lot of people lately that were not aware of the actual support status of Database 10.2 and below (it changed on July 2010 and the 1 year grace period […]
Categories: DBA Blogs, Development

Meet me @ Oracle Partner Event Partner Community Forum – February 7th & 8th 2012, Malaga Spain

Marc Kelderman - Tue, 2012-01-10 07:34
I will join the OPN Forum in Malaga, Spain, on February 7th & 8th 2012. If you want to join, just click on the link:



At this event you will meet fellow professionals working on related topics around WebLogic 12c, BPM, SOA, ADF and WebCenter.


  • learn how to sell the value of Fusion Middleware by combining SOA, BPM, WebCenter and WebLogic solutions
  • meet with Oracle SOA, BPM, WebCenter and WebLogic Product Management
  • exchange knowledge and have access to competitive intelligence
  • learn from successful SOA, BPM, WebCenter and WebLogic implementations
  • learn about WebCenter Sites and WebLogic 12c
  • network within the Oracle SOA & BPM Partner Community, the Oracle WebCenter Partner Community and the Oracle WebLogic Partner Community

For People That Have Managers

Jeff Hunter - Mon, 2012-01-09 20:00
Interesting take on what managers are thinking: http://quickbase.intuit.com/blog/2012/01/09/10-things-your-boss-isnt-telling-you/

RAC11.2.0.2 redundant interconnect and the Cluster Health Monitor

Alejandro Vargas - Mon, 2012-01-09 15:54

There are 2 interesting new features in RAC 11.2.0.2.

The first is the cluster HAIP resource, which makes it possible to have up to 4 redundant interconnects that are automatically managed by the cluster for failover and load balancing.

The second is the Cluster Health Monitor. It was previously available as a utility that you could download and install; now it is a resource on the cluster and starts collecting valuable OS statistics from the moment the cluster is installed.

You can see details about both features in this file: HAIP and CHM 11.2.0.2 RAC Features

Categories: DBA Blogs


Display only the active archive log destinations

Jared Still - Fri, 2012-01-06 12:25
One thing I find annoying is when I want to see the archive log destinations in an Oracle database:
I usually want to see only those that are enabled and have a non-null value for the destination.

show parameter log_archive_dest shows more than I care to look at.

Try this:


select name, value
  from v$parameter
 where name = 'log_archive_dest'
   and value is not null
union all
select p.name, p.value
  from v$parameter p
 where p.name like 'log_archive_dest%'
   and p.name not like '%state%'
   and p.value is not null
   and 'enable' = (
          select lower(p2.value)
            from v$parameter p2
           where p2.name = substr(p.name,1,instr(p.name,'_',-1)) || 'state' || substr(p.name,instr(p.name,'_',-1))
       )
union all
select p.name, p.value
  from v$parameter p
 where p.name like 'log_archive_dest_stat%'
   and lower(p.value) = 'enable'
   and (
          select p2.value
            from v$parameter p2
           where name = substr(p.name,1,16) || substr(p.name,instr(p.name,'_',-1))
       ) is not null
/


Categories: DBA Blogs

OCP Advisor To Present At COLLABORATE12

OCP Advisor - Fri, 2012-01-06 02:29
Your blog author will be presenting at COLLABORATE12, the annual Oracle user group conference jointly hosted by OAUG, IOUG and Quest. This will be OCP Advisor's 55th presentation at an Oracle conference. Since the conference is being hosted in Las Vegas, the presentation is appropriately titled: Show Me The Money!

The session abstract and objectives are reproduced below:
Equinix implemented Oracle Credit and Collections Suite to automate credit review and improve cash flow through automated collection strategies. This implementation case study describes rolling out Oracle Credit Management, Oracle Advanced Collections, Dun & Bradstreet Toolkit and integration with Oracle Order Management, Oracle Customer Online and Oracle Receivables. The presenters will share how the functionality was extended to include support for consolidated billing. This project promises to be an ROI winner!


Objective 1: Learn how to automate credit review using Oracle Credit Management
Objective 2: Learn how to automate collection strategies using Oracle Advanced Collections
Objective 3: Learn how to migrate to Oracle Advanced Collections as an Oracle Receivables user
Objective 4: Learn how to extend the functionality of Advanced Collections to support the Consolidated Billing feature.

If you can't beat 'em join 'em

Chris Muir - Sat, 2011-12-31 22:28
A New Year has brought a desire for new challenges. As a result early in the year I'll be taking on a new role as a product manager for ADF at Oracle Corporation.

The decision to move was certainly a difficult one. I've had an excellent 10+ years at SAGE Computing Services under the leadership of Oracle ACE Penny Cookson and the SAGE team, all of whom have been inspiring to work with. In turn I was fortunate enough to have two offers on my table, both excellent, but each providing a different outcome. Choices, choices.

The end decision has me moving to Oracle Corporation in early February, still based in Perth Australia for now.

One ramification of the move to Oracle is that I give up my Oracle ACE Director status. This is a sad moment in many ways because, like SAGE, I owe the ACE Director program a lot. I feel the program has allowed me to grow and extend my skills and experience significantly. The chance to mix with other ACEs and Oracle staff, living up to their experience and expectations, and the chance to attend and present at conferences and share my enthusiasm with delegates, has been incredibly rewarding. As a result my thanks go out to the OTN team for running the program and providing the opportunity, and also to all the ACEs and ACE Directors, Oracle staff, user group reps and Oracle enthusiasts out there I've had the pleasure of meeting and befriending over the last 5 years. Seriously, your friendship, advice and generosity have meant a lot to me.

With that little bit of news out of the way I'd like to wish everyone a happy New Year and I hope to see you at a conference somewhere soon.

(Post edit: as some people have kindly taken the time to point out, yes it is in fact true, the real reason for the move is I just couldn't bear to be apart from Richard Foote ;-)

Invoice Image Processing Architecture in Fusion Payables

Krishanu Bose - Thu, 2011-12-29 04:01

Fusion Payables is tightly integrated with Oracle Document Capture (ODC), Oracle Imaging and Process Management (IPM), Oracle Content Management and Oracle BPEL Process Manager to provide a seamlessly integrated solution supporting the entire payables cycle: from scanning of physical invoices, through invoice image recognition using OCR to pre-populate the invoice header, to routing of the scanned invoices to AP entry specialists and the subsequent approval and payment of invoices. Oracle is the only vendor in the market today offering a fully integrated solution without the use of third-party bolt-on solutions.

Once an invoice arrives in a centralized mail room, the imaging specialist sorts and prepares the invoices based on parameters like invoice amount, due date, supplier, etc., and then scans them using ODC. Next, the images are automatically sent to Forms Recognition, which uses Optical Character Recognition (OCR) to extract key invoice header data such as PO number, supplier, invoice number, invoice amount and invoice date from the scanned images. Once the key header data recognition is complete, the invoice images are sent to Oracle Imaging and Process Management for storage and subsequent routing to accounts payable invoice entry specialists via Oracle BPEL Process Manager workflows. A BPEL process instance is created whenever an invoice image is saved successfully in Imaging and Process Management, and the image is then routed to an AP invoice entry operator based on pre-configured rules like invoice amount, supplier, etc. The AP specialist views the scanned image and fills in the remaining fields of the invoice to kick off the subsequent process of invoice validation, approval, accounting and payment.
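To make the routing step concrete, here is a purely hypothetical sketch of what "pre-configured rules like invoice amount, supplier, etc." could look like. The queue names and thresholds are invented for illustration and are not Oracle's actual configuration:

```python
# Hypothetical routing rules: assign a scanned invoice to an AP entry
# specialist queue based on criteria such as invoice amount and supplier,
# evaluated in order with first match winning.
RULES = [
    (lambda inv: inv["amount"] >= 100_000, "senior-ap-specialists"),
    (lambda inv: inv["supplier"] in {"ACME Corp"}, "strategic-suppliers"),
    (lambda inv: True, "general-ap-queue"),  # catch-all default
]

def route_invoice(invoice):
    """Return the queue name for the first matching rule."""
    for predicate, queue in RULES:
        if predicate(invoice):
            return queue

print(route_invoice({"amount": 250_000, "supplier": "Initech"}))
# prints: senior-ap-specialists
```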


You Don’t Know SQL

alt.oracle - Wed, 2011-12-28 20:17

Or maybe you do. However, I’ve talked to a lot of DBAs (pretty much the target audience for this blog) and you might be surprised how often the SQL skills of the average DBA dwindle over time. In today’s role-specific market, developers do developer stuff while DBAs do database stuff. Somewhere in between falls SQL – the red-headed stepchild of the programming world. Ask a DBA and they’ll probably say SQL is a legitimate fourth-generation language. Tell a Java programmer that and they’ll laugh themselves into a seizure. It’s strange that DBAs become less familiar with SQL over time, since it’s probably the first thing you learned as an Oracle newbie. Maybe you learned about pmon and archivelog mode first, but more likely you struggled with how to use a SELECT statement to form a join between two tables. I know that’s how I started.


So that leads me into my excuse for not posting, lo, these many months. It’s because I wrote a book. A book about SQL. The fine folks at Packt Publishing approached me at the end of 2010 and asked me to write the first in a series of books to help folks earn an Oracle Certification. I’ve been teaching students that stuff for eight years, so it seemed like a good fit. This book, aptly named “OCA Oracle Database 11g: SQL Fundamentals I: A Real World Certification Guide” was published a few weeks ago, and may also hold the record for the longest title in history for an Oracle book.

This is the link to the book on Packt's site  and this is the link on Amazon

Here is the lovely cover.


I'd take my "bridge to nowhere" picture over some weird animal cover any day.  I'm talking to you, O'Reilly Publishing...


I’ll write more about the book soon, but my point here is about its subject – SQL. If you’re a DBA, you might be able to do a nifty join between v$process and v$session to find a user's database username and OS process id, but could you write it in Oracle’s newer ANSI join syntax? Do you know how a correlated subquery works? Ever try a conditional multitable insert with an INSERT FIRST ... WHEN statement? No? Then buy my book and all these mysteries will be revealed!
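To make a couple of those ideas concrete, here's a minimal sketch of an ANSI-style join and a correlated subquery. It uses SQLite via Python's standard library as a stand-in (the classic emp/dept tables and their values are made up for illustration; the v$ views and INSERT FIRST ... WHEN are Oracle-specific and aren't available in SQLite, but the join and subquery syntax carry over directly).

```python
import sqlite3

# Toy schema and data (hypothetical, in the spirit of Oracle's EMP/DEPT demo tables)
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE dept (deptno INTEGER PRIMARY KEY, dname TEXT);
    CREATE TABLE emp  (empno INTEGER PRIMARY KEY, ename TEXT,
                       sal REAL, deptno INTEGER REFERENCES dept);
    INSERT INTO dept VALUES (10, 'ACCOUNTING'), (20, 'RESEARCH');
    INSERT INTO emp VALUES (1, 'KING', 5000, 10),
                           (2, 'SMITH', 800, 20),
                           (3, 'FORD', 3000, 20);
""")

# ANSI join syntax: JOIN ... ON instead of the old comma-and-WHERE form
cur.execute("""
    SELECT e.ename, d.dname
      FROM emp e JOIN dept d ON e.deptno = d.deptno
     ORDER BY e.ename
""")
print(cur.fetchall())
# [('FORD', 'RESEARCH'), ('KING', 'ACCOUNTING'), ('SMITH', 'RESEARCH')]

# Correlated subquery: the inner query references the outer row (e.deptno),
# so it is conceptually re-evaluated for each candidate row. Here: employees
# paid above their own department's average salary.
cur.execute("""
    SELECT ename
      FROM emp e
     WHERE sal > (SELECT AVG(sal) FROM emp i WHERE i.deptno = e.deptno)
""")
print(cur.fetchall())
# [('FORD',)]
```

Note that KING doesn't appear in the second result: he's the only employee in department 10, so his salary equals (rather than exceeds) his department's average.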

Seriously, though, even if you’re not interested in being certified and your daily job description doesn’t include any correlated subqueries, it never hurts to be reminded that SQL is actually *why* we have relational databases in the first place. An Oracle DBA should always try to understand as much about Oracle as he or she can. So don't go to rust – bust out those mad SQL skillz!
Categories: DBA Blogs

Oracle NoSQL Database in 5 Minutes

Charles Lamb - Tue, 2011-12-20 06:40

Inspired by some other "Getting started in 5 minutes" guides, we now have a Quick Start Guide for Oracle NoSQL Database.  kvlite, the single process Oracle NoSQL Database, makes it incredibly easy to get up and running.  I have to say the standard disclaimer: kvlite is only meant for kicking the tires on the API.  It is not meant for any kind of performance evaluation or production use.

Oracle NoSQL Database - A Quickstart In 5 Minutes

Install Oracle NoSQL Database
  • Download the tar.gz file from http://www.oracle.com/technetwork/database/nosqldb/downloads/index.html.
  • gunzip and untar the .tar.gz package (or unzip if you downloaded the .zip package). Oracle NoSQL Database version 1.2.116 Community Edition is used in this example.

    $ gunzip kv-ce-1.2.116.tar.gz
    $ tar xvf kv-ce-1.2.116.tar
    kv-1.2.116/
    kv-1.2.116/bin/
    kv-1.2.116/bin/kvctl
    kv-1.2.116/bin/run-kvlite.sh
    kv-1.2.116/doc/
    ...
    kv-1.2.116/lib/servlet-api-2.5.jar
    kv-1.2.116/lib/kvclient-1.2.116.jar
    kv-1.2.116/lib/kvstore-1.2.116.jar
    kv-1.2.116/LICENSE.txt
    kv-1.2.116/README.txt
    $
    
Start up KVLite

KVLite is a single-process version of Oracle NoSQL Database. KVLite is not tuned for performance, but it does give you easy access to a simple key/value store so that you can test the API.

  • cd into the kv-1.2.116 directory to start the NoSQL Database server.

    $ cd kv-1.2.116
    $ java -jar lib/kvstore-1.2.116.jar kvlite
    Created new kvlite store with args:
    -root ./kvroot -store kvstore -host myhost -port 5000 -admin 5001
    
  • In a second shell, cd into the kv-1.2.116 directory and ping your KVLite instance to test that it's alive.

    $ cd kv-1.2.116
    $ java -jar lib/kvstore-1.2.116.jar ping -port 5000 -host myhost
    Pinging components of store kvstore based upon topology sequence #14
    kvstore comprises 10 partitions and 1 Storage Nodes
    Storage Node [sn1] on myhost:5000    Datacenter: KVLite [dc1]    Status: RUNNING   Ver: 11gR2.1.2.116
            Rep Node [rg1-rn1]      Status: RUNNING,MASTER at sequence number: 31 haPort: 5011
    
  • Compile and run the Hello World example. This opens the Oracle NoSQL Database and writes a single record.

    $ javac -cp examples:lib/kvclient-1.2.116.jar examples/hello/HelloBigDataWorld.java
    $ java -cp examples:lib/kvclient-1.2.116.jar hello.HelloBigDataWorld
    Hello Big Data World!
    $
    
  • Peruse the Hello World example code and expand it to experiment more with the Oracle NoSQL Database API.

Learn more about Oracle NoSQL Database

Open the doc landing page (either locally in kv-1.2.116/doc/index.html or on OTN). From there, the Getting Started Guide (HTML | PDF) and Javadoc will introduce you to the NoSQL Database API. The Oracle NoSQL Database Administrator's Guide (HTML | PDF) will help you understand how to plan and deploy a larger installation.

Remember, KVLite should only be used to become familiar with the NoSQL Database API. Any serious evaluation of the system should be done with a multi-process, multi-node configuration.

  • To install a standard, multi-node system, you need to repeat the instructions above on how to unpack the package on any nodes that do not yet have the software accessible. Then follow a few additional steps, described in the Admin Guide Installation chapter. Be sure to run ntp on each node in the system.
  • If you want to get started with a multi-node installation right away, here's a sample script for creating a 3-node configuration on a set of nodes named compute01, compute02, compute03. You can execute it using the NoSQL Database CLI.
    configure "mystore"
    plan -execute deploy-datacenter BurlDC Burlington
    plan -execute deploy-sn 1 compute01 5000 Compute01StorageNode
    plan -execute deploy-admin 1 5001
    addpool mySNPool
    joinpool mySNPool 1
    plan -execute deploy-sn 1 compute02 5000 Compute02StorageNode
    joinpool mySNPool 2
    plan -execute deploy-sn 1 compute03 5000 Compute03StorageNode
    joinpool mySNPool 3
    plan -execute deploy-store mySNPool 3 100
    show plans
    show topology
    quit
    
  • You can access the Administrative Console at http://compute01:5001/ at any time after the plan -execute deploy-admin command to view the status of your store.
  • To evaluate performance, you will want to be sure to set JVM and cache size parameters to values appropriate for your target hosts. See Planning Your Installation for information on how to determine those values. The following commands are sample parameters for target machines that have more than 32GB of memory. These commands would be invoked after the configure "mystore" command.
    set policy "javaMiscParams=-server -d64 -XX:+UseCompressedOops -XX:+AlwaysPreTouch -Xms32000m -Xmx32000m -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/tmp/gc-kv.log"
    set policy "cacheSize=22423814540"
    

You can ask questions, or make comments on the Oracle NoSQL Database OTN forum.

Oracle NoSQL Database 1.2.123 Community and Enterprise Editions Available for Download

Charles Lamb - Mon, 2011-12-19 23:14

Oracle NoSQL Database release 1.2.123, both Community Edition (new) and Enterprise Edition, are now available for download on OTN:

http://www.oracle.com/technetwork/database/nosqldb/downloads/index.html

In addition to some minor bug fixes, a performance improvement to the snapshot function, and the deprecation of kvctl (see the changelog for details), this release introduces the Community Edition. The CE package includes source code, and the license for CE is AGPLv3. The license for EE remains the same as before (the standard OTN license).

