Feed aggregator

eDVD

Bradley Brown - Mon, 2014-12-22 11:27
We've struggled to figure out what to call this next generation of video delivery.  Is it "video on demand?"  The industry insiders are very specific in calling it TVOD and SVOD, which stand for transactional and subscription video on demand respectively.  Transactional video on demand means that consumers can buy or rent a video and watch it on their device.  Subscription video on demand means that consumers buy a subscription and can watch a grouping of videos as part of their subscription.

But what does all of this have to do with consumers and how they talk about "video on demand?"  I certainly don't hear consumers using that term.  In fact, when my son recently posted a video on Facebook, my mother-in-law (his grandma) said "that was a really cool DVD Austin."  Later she asked "that was a DVD, right?"  You could hear her questioning herself about the use of the term DVD.  Austin was a little taken aback by the question, paused and said "yes grandma."  He didn't want to get into the details that a DVD is a physical implementation of storage, not a method of playing a video on Facebook.

That got me thinking about what I originally labelled this new technology: the eDVD.  This would allow people to continue referring to online videos as DVDs - specifically eDVDs.  So how do we change the world's view of these terms and get everyone to start calling them eDVDs?  Now that I've declared it, the world knows, right?  Well...not quite yet, but I'm sure very soon :)  Spread the word!

Happy Holidays and a Prosperous New Year from VitalSoftTech!

VitalSoftTech - Sun, 2014-12-21 09:01
We at VST want to thank you, our prestigious members, for making 2014 a memorable year for us!  We are ever so grateful for your continuous support, participation and words of encouragement.  As technological mavens, you help us sustain this community with quality feedback that drives continued success for us all! How about mastering a […]
Categories: DBA Blogs

Digital Delivery "Badge"

Bradley Brown - Sun, 2014-12-21 00:44
At InteliVideo we have come to understand that we need to do everything we can to help our clients sell more digital content. It seems obvious that consumers want to watch videos on devices like their phones, tablets, laptops, and TVs, but it's not so obvious to everyone. They have been using DVDs for a number of years - and likely VHS tapes before that. We believe it’s important for your customers to understand why they would want to purchase a digital product rather than a physical product (i.e. a DVD).

Better buttons drive sales.  Across all our apps and clients, we know we need to really nail our asset delivery process with split tests and our button and banner catalog.  We've simplified the addition of a badge to a client's page: they only have to add four lines of HTML in order to add our digital delivery badge.
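The exact markup is client-specific, but the embed is on the order of this sketch (the URLs, class names and data attributes below are made up for illustration - the real snippet comes from InteliVideo):

```html
<!-- Hypothetical badge embed; the real URLs and IDs come from InteliVideo -->
<link rel="stylesheet" href="https://cdn.example-intelivideo.com/badge/badge.css">
<script src="https://cdn.example-intelivideo.com/badge/badge.js"></script>
<a class="iv-badge" data-client="YOUR_CLIENT_ID" href="#">Watch Now on Any Device</a>
<div id="iv-why-digital"></div> <!-- lightbox target for the "Why Digital" popup -->
```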

Our clients can use any of the images that InteliVideo provides or we’re happy to provide an editable image file (EPS format) so they can make their own image.  Here are some of our badges that we created:

Screenshot 2014-12-16 19.39.25.png

On our client's web page, it looks something like this:

Screenshot 2014-12-17 14.01.11.png

The image above (Watch Now on Any Device) is the important component.  This is the component that our clients are placing somewhere on their web page(s).  When this is clicked, the existing page will be dimmed and the lightbox will popup and display the “Why Digital” message:

Screenshot 2014-12-17 16.31.54.png

What do your client's customers need to know about in order to help you sell more?

Log Buffer #402, A Carnival of the Vanities for DBAs

Pakistan's First Oracle Blog - Sat, 2014-12-20 18:39
This Log Buffer edition hits the ball out of the park, smashing yet another record by surfacing a unique collection of blog posts from various database technologies. Enjoy!!!

Oracle:

EM12c and the Optimizer Statistics Console.
SUCCESS and FAILURE Columns in DBA_STMT_AUDIT_OPTS.
OBIEE and ODI on Hadoop : Next-Generation Initiatives To Improve Hive Performance.
Oracle 12.1.0.2 Bundle Patching.
Performance Issues with the Sequence NEXTVAL Call.

SQL Server:

GUIDs GUIDs everywhere, but how is my data unique?
Questions About T-SQL Transaction Isolation Levels You Were Too Shy to Ask.
Introduction to Advanced Transact SQL Stairway and Using the CROSS JOIN Operator.
Introduction to Azure SQL Database Scalability.
What To Do When the Import and Export Wizard Fails.

MySQL:

Orchestrator 1.2.9 GA released.
Making HAProxy 1.5 replication lag aware in MySQL.
Monitor MySQL Performance Interactively With VividCortex.
InnoDB’s multi-versioning handling can be Achilles’ heel.
Memory summary tables in Performance Schema in MySQL 5.7.

Also published here.
Categories: DBA Blogs

What does an App Cost?

Bradley Brown - Sat, 2014-12-20 17:59
People commonly ask me this question, and the answer has a very wide range.  You can get an app built on oDesk for nearly free - i.e. $2000 or less.  Will it provide the functionality you need?  It might!  Do you need a website that does the same thing?  Do you need a database (i.e. something beyond the app) to store your data for your customers?

Our first round of apps at InteliVideo cost us $2,000-10,000 each to develop.  We spent a LOT of money on the backend server code.  Our first versions were pretty fragile (i.e. broke fairly easily) and weren't very sexy.  We decided that we needed to revamp our apps from stem to stern...APIs to easy branding to UI.

Here's a look at our prior version.  Our customers (people who buy videos) aren't typically buying from more than 1 of our clients - yet.  But in the prior version I saw a list of all of the products I had purchased.  It's not a very sexy UI - just a simple list of videos:


When I drilled into a specific product, again I see a list of videos within the product:


I can download or play a video in a product:


Here's what it looks like for The Dailey Method:



Here's the new version demonstrating the branding for Chris Burandt.  I've purchased a yearly subscription that currently includes 73 videos.  I scroll (right not down) through those 73 videos here:


Or if I click on the title, I get to see a list of the videos in more detail:


Notice the colors (branding) are shown everywhere here.  I scrolled up to look through those videos:


Here's a specific video about a technique to get your sled unstuck:


Here's what the app looks like for a The Dailey Method customer.  Again, notice the branding everywhere:


Looking at a specific video and its details:


We built native apps for iOS (iPad, iPhone, iPod), Android, Windows and Mac that all have the same look, feel, functionality, etc.  This was a MAJOR undertaking!

The good news is that if you want to start a business and build an MVP (Minimum Viable Product) to see if there is actually a market for your product, you don't have to spend hundreds of thousands to do so...but you might have to later!


PeopleTools 8.54 Feature: Support for Oracle Database Materialized Views

Javier Delgado - Fri, 2014-12-19 17:04
One of the new features of PeopleTools 8.54 is the support of Oracle Database Materialized Views. In a nutshell, Materialized Views can be seen as a snapshot of a given view. When you query a Materialized View, the data is not necessarily accessed online, but instead it is retrieved from the latest snapshot. This can greatly contribute to improve query performance, particularly for complex SQLs or Pivot Grids.

Materialized Views Features
Apart from the performance benefits associated with them, one of the most interesting features of Materialized Views is how the data refresh is handled. Oracle Database supports two ways of refreshing data:


  • On Commit: data is refreshed whenever a commit takes place in any of the underlying tables. In a way, this method is equivalent to maintaining a staging table (the Materialized View) through triggers whenever the source table changes, but all this complexity is hidden from the developer. Unfortunately, this method is only available for join-based or single-table aggregate views.

Although it has the benefit of keeping the information nearly current, you would normally use On Commit for views based on tables that do not change very often: because the Materialized View is refreshed every time a commit is made, insert, update and delete performance on the source tables will be affected. Hint: You would normally use the On Commit method for views based on Control tables, not Transactional tables.
  • On Demand: data is refreshed on demand. This option is valid for all types of views, and implies that the Materialized View data is only refreshed when requested by the administrator. PeopleTools 8.54 includes a page named Materialized View Maintenance where on demand refreshes can be configured to run periodically.




In case you choose the On Demand method, the data refresh can actually be done following two different methods:


  • Fast, which refreshes only the rows in the Materialized View affected by the changes made to the source records.


  • Full, which fully recalculates the Materialized View contents. This method is preferable when large volumes of changes are usually performed against the source records between refreshes. It is also required after certain types of updates on the source records (i.e. INSERT statements using the APPEND hint), and when one of the source records is itself a Materialized View that has been refreshed using the Full method.
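As an illustration of the two refresh modes at the database level (the table and view names here are hypothetical, and PeopleTools generates the real DDL for you - this is only the Oracle syntax underneath):

```sql
-- On Commit (single-table aggregate): needs a materialized view log on the
-- source table; the MV is then updated as part of each committing transaction.
CREATE MATERIALIZED VIEW LOG ON ps_job
  WITH ROWID, SEQUENCE (deptid) INCLUDING NEW VALUES;

CREATE MATERIALIZED VIEW dept_headcount_mv
  REFRESH FAST ON COMMIT
  AS SELECT deptid, COUNT(*) AS headcount FROM ps_job GROUP BY deptid;

-- On Demand: declared with ON DEMAND and refreshed only when requested,
-- e.g. by the Materialized View Maintenance page or a manual call.
CREATE MATERIALIZED VIEW payroll_summary_mv
  REFRESH FORCE ON DEMAND
  AS SELECT deptid, SUM(annual_rt) AS total_pay FROM ps_job GROUP BY deptid;

-- method => 'F' requests a Fast refresh, 'C' (Complete) a Full one.
BEGIN
  DBMS_MVIEW.REFRESH('PAYROLL_SUMMARY_MV', method => 'C');
END;
/
```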


How can we use them in PeopleTools?
Before PeopleTools 8.54, Materialized Views could be used as an Oracle Database feature, but the DBA would be responsible for editing the Application Designer build scripts to include the specific syntax for this kind of view. On top of that, the DBA would need to schedule the data refresh directly in the database.

PeopleTools 8.54 introduces support within the PeopleSoft tools. First, Application Designer now shows new options for View records:



We have already seen what Refresh Mode and Refresh Method mean. The Build Options indicate to Application Designer whether the Materialized View data needs to be populated when the build is executed, or whether it can be delayed until the first refresh is requested from the Materialized View Maintenance page.

This page is used to determine when to refresh the Materialized Views. The refresh can be executed for multiple views at once and scheduled using the usual PeopleTools Process Scheduler recurrence features. Alternatively, the Refresh Interval [seconds] may be used to tell the database that this view needs to be refreshed every n seconds.

Limitations
The main disadvantage of Materialized Views is that they are specific to Oracle Database. On any other platform, the record behaves like a normal view, which keeps similar functional behaviour but loses all the performance advantages of Materialized Views.

Conclusions
All in all, Materialized Views are a very interesting feature for improving system performance while keeping the information reasonably up to date. Personally, I wish I'd had this feature available for many of the reports I've built over the years... :-)

Do You Really Need a Content Delivery Network (CDN)?

Bradley Brown - Fri, 2014-12-19 10:39
When I first heard about Amazon's offering called CloudFront I really didn't understand what it offered and who would want to use it.  I don't think they initially called it a content delivery network (CDN), but I could be wrong about that.  Maybe it was just something I didn't think I needed at that time.

Amazon states it well today (as you might expect).  The offering "gives developers and businesses an easy way to distribute content to end users with low latency, and high data transfer speeds."

So when you hear the word "content" what is it that you think about?  What is content?  First off, it's digital content.  So...website pages?  That's what I initially thought of.  But it's really any digital content.  Audio books, videos, PDFs - files of any type, any size.

When it comes to distributing this digital content, why would you need to do this with low latency and/or high transfer speeds?  Sure, this is important if your website traffic scales up from 1-10 concurrent viewers to millions overnight.  How realistic is that for your business?  What about the other types of content - such as videos?  Yep, now I'm referring to what we do at InteliVideo!

A CDN allows you to scale up to any number of customers viewing or downloading your content concurrently.  Latency translates to "slowness" when you're trying to download a video in Japan, because the file has to move across the ocean.  Amazon handles this by moving the file between its data centers over its own fast pipes (high speed internet), so the customer effectively downloads the file directly from a location in Japan.

Imagine that you have this amazing set of videos that you want to bundle up and sell to millions of people.  You don't know when your sales will go viral, but when it happens you want to be ready!  So how do you implement a CDN for your videos, audios, and other content?  Leave that to us!

So back to the original question.  Do you really need a content delivery network?  Well...what if you could get all of the benefits of having one without having to lift a finger?  Would you do it then?  Of course you would!  That's exactly what we do for you.  We make it SUPER simple - i.e. it's done 100% automatically for our clients and their customers.  Do you really need a CDN?  It depends on how many concurrent people are viewing your content and where they are located.

For my Oracle training classes that I offer through BDB Software, I have customers from around the world, which I personally find so cool!  Does BDB Software need a CDN?  It absolutely makes for a better customer experience and I have to do NOTHING to get this benefit!

What Do Oracle Audit Vault Collection Agents Do?

The Oracle Audit Vault is installed on a server, and collector agents are installed on the hosts running the source databases.  These collector agents communicate with the audit vault server. 

If the collection agents are not active, no audit data is lost, as long as the source database continues to collect the audit data.  When the collection agent is restarted, it will capture the audit data that the source database had collected during the time the collection agent was inactive.

There are three types of agent collectors for Oracle databases.  There are other collectors for third-party database vendors such as SAP Sybase, Microsoft SQL-Server, and IBM DB2.

Audit Vault Collectors for Oracle Databases*

Database audit trail (collector: DBAUD)

For standard audit records: the AUDIT_TRAIL initialization parameter is set to DB or DB, EXTENDED.

For fine-grained audit records: the audit_trail parameter of the DBMS_FGA.ADD_POLICY procedure is set to DBMS_FGA.DB or DBMS_FGA.DB + DBMS_FGA.EXTENDED.

Operating system audit trail (collector: OSAUD)

For standard audit records: the AUDIT_TRAIL initialization parameter is set to OS, XML, or XML, EXTENDED.

For syslog audit trails: AUDIT_TRAIL is set to OS and the AUDIT_SYS_OPERATIONS parameter is set to TRUE.  In addition, the AUDIT_SYSLOG_LEVEL parameter must be set.

For fine-grained audit records: the audit_trail parameter of the DBMS_FGA.ADD_POLICY procedure is set to DBMS_FGA.XML or DBMS_FGA.XML + DBMS_FGA.EXTENDED.

Redo log files (collector: REDO)

The table that you want to audit must be eligible.  See "Creating Capture Rules for Redo Log File Auditing" for more information.

*Note: if using Oracle 12c, the assumption is that Mixed Mode Unified Auditing is being used.
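To make the settings above concrete, the configuration behind the DBAUD collector looks roughly like this (the FGA policy name and table are hypothetical; the parameter values are the ones listed above):

```sql
-- Standard auditing to the database audit trail (DBAUD collector);
-- a restart is required for the change to take effect.
ALTER SYSTEM SET audit_trail = DB, EXTENDED SCOPE = SPFILE;

-- Fine-grained auditing routed to the database audit trail:
BEGIN
  DBMS_FGA.ADD_POLICY(
    object_schema => 'APPS',
    object_name   => 'SOME_SENSITIVE_TABLE',  -- hypothetical
    policy_name   => 'FGA_SENSITIVE_READS',   -- hypothetical
    audit_trail   => DBMS_FGA.DB + DBMS_FGA.EXTENDED);
END;
/
```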

If you have questions, please contact us at info@integrigy.com.

Reference
Auditing, Oracle Audit Vault, Oracle Database
Categories: APPS Blogs, Security Blogs

Elephants and Tigers - V8 of the Website

Bradley Brown - Thu, 2014-12-18 21:54
It's amazing how much work goes into a one page website these days!  We've been working on the new version of our website (which is basically one page) for the last month or so.  The content is the "easy" part; the look and feel / experience is the time consuming part.  To put it another way, it's all about the entire experience, not just the text/content.

Since we're a video company, it's important that the first page shows some video...which required production and editing.  We're hunting elephants, so we need to tell the full story of the implementations that we've done for our large clients.  What all can you sell on our platform?  A video?  Audio books?  Movies?  TV Shows?  What else?  We needed to talk about our onboarding process for the big guys.  What does the shopping cart integration look like?  We have an entirely new round of apps coming out soon, so we need to show those off.  We need to answer the question of "What do our apps look like?"  Everybody wants analytics, right?  You want to know who watched what - for how long, when and where!  What about all of the ways you can monetize - subscriptions (SVOD), transactional (TVOD) - rentals and purchases, credit-based purchases, and more.  What about those enterprises who need to restrict (or allow) viewing based on location?

Yes, it's quite a story that we've learned over the past few years.  Enterprises (a.k.a. Elephants) need it all.  We're "enterprise guys" after all.  It's natural for us to hunt Elephants.

Let's walk through this step-by-step.  In some ways it's like producing a movie.  A lot of moving parts, a lot of post editing, and ultimately it comes down to the final cut.

What is it that you want to deliver?  Spoken word?  TV Shows?  Training?  Workouts?  Maybe you want to jump right into why digital, how to customize or other topics...


Let's talk about why to go digital.  Does it seem obvious to you?  It's not obvious to everyone.  Companies are still selling a lot of DVDs.


Any device, anywhere, any time!  That's how your customers want the content.


We have everything from APIs to Single Sign On, and SO much more...we are in fact an enterprise solution.


It's time to talk about the benefits.  We have these awesome apps that we've spent a fortune developing, and they allow our clients a full branding experience, as you see here for UFC FIT.


We integrate with most of our large customers' existing shopping carts.  We simply receive an instant payment notification from them to authorize a new customer.


I'm a data guy at heart, so we track everything about who's watching what, where they are watching from and so much more.  Our analytics reporting shows you this data.  Ultimately this leads to strategic upsells to existing customers.  It's always easier to sell to someone who's already purchased than to a new customer.


What website would be complete without a full list of client testimonials?


If you can dream up a way to monetize your content, we can implement it.  Credit based subscription systems to straight out purchase...we have it all!


What if you want to sell through affiliates?  How about selling the InteliVideo platform as an affiliate?  Our founders came from ClickBank, so we understand Affiliate payments and how to process them.


Do you need a step-by-step guide to our implementation process?  Well...if so, here you have it!  It's as simple as 5 steps.  For some customers this is a matter of hours and for others it's months.  The first step is simply signing up for an InteliVideo account at: http://intelivideo.com/sign-up/ 


We handle payment processing for you if you would like.  But...most big companies have already negotiated their merchant processing rates AND they typically already have a shopping cart.  So we integrate as needed.


Loading up your content is pretty easy with our platform.  Then again, we have customers with as few as one product and others with thousands of products and 10s of thousands of assets (videos, audio files, etc.).  Most of our big customers simply send us a drive.  We have a bulk upload process where you give us your drive and all of the metadata (descriptions) and the mapping of each...and we load it all up for you.


Our customers can use our own sales pages and/or membership area...or we have a template engine that allows for comprehensive redesign of the entire look and feel.  Out of the box implementations are simple...


Once our clients sign off on everything and our implementation team does as well...it's time to buy your media, promote your products and start selling.  We handle the delivery.


For those who would like to sign up or need more information, what website would be complete without a contact me page?  There are other pages (like our blog, about us, etc), but this page has a lot of information.  It's a story.  At the bottom of the page there is a "Small Business" link, which takes you to the prior version of our website...for small businesses.


As I said at the beginning of this blog post...it's amazing how much thought goes into a new web page!  We're very excited about our business.  Hopefully this post helped you think through how you want to tell the stories about your business.  How should you focus on your elephants and tigers?  How often should you update your website?  Go forth and crush it!

This new version of our website should be live in the next day or so...as always, I'd love to hear your feedback!

Elephant Hunting

Bradley Brown - Wed, 2014-12-17 11:00
Most every startup that I've watched (and been part of) has grand plans of virality.  Build it and they will not just come to you, but they will flock to you!  There is a dream that what you have built is going to change the world and it's going to be so obvious to everyone that they will want to share the news with all of their friends.  It's a good dream and there is a dose of reality that hits you dead in the face at some point.

When I started InteliVideo, it seemed SO clear to me that we had developed an amazing offering and that everyone would tell all of their friends about us.  It was also clear to me that all of my friends who were in the training business (doing training in person or virtually - via WebEx) would choose to start offering their training through our platform.  After all, they have a brand, they have a customer base and they want to provide their training to their customers.  They certainly don't want to put their training on YouTube and serve it up for free.  They certainly don't want their customers to watch their training and then, at the end of the video, see 10 of their competitors' videos to choose from.  This seemed so obvious to me.  But...it clearly wasn't clear to them, because they didn't flock to our platform - even though I offered it to them repeatedly.

After all, I knew just how easy it was for me to create my content (i.e. record a video lesson), bundle a series of lessons into a product, set a price and away I went, selling my training online.  I knew just how excited and energized many of my students were to be able to watch my training.  They could watch it one time or 1000 times - at their own learning pace.  I could see their progress!  In fact, many of my students came to me and asked for additional custom lessons, which I charged them a consulting fee to produce (i.e. $200 for one lesson).  I set up the lesson at $200 in the platform (without any videos in it), asked them to pay for the lesson, then recorded it and attached it to the product...and reduced the price of the lesson for future purchasers to $15-25.  In other words, I created new content for a fee AND I was able to sell it time and time again.

You see, I've written 6 technical books (on average about 1000 pages each) that each took 6-12 months of my life to write.  Sure, a book generates credibility in a subject area, but it doesn't generate a lot of direct revenue.  Recording and then selling a video-based course requires less than one hundredth of the effort of writing a book, for similar, and arguably better, output.

Where am I going with all of this?  Well...after trying to convince 1000s of small business owners that they should use our platform, offering them free trials to see just how easy it is, and talking endlessly about what's in it for them, we concluded that this futile pursuit of virality was insanity.  The common definition I hear for insanity is doing the same thing and expecting different results.  We continued to try to convince people - sure, with more convincing messages - but the "conversion rate" (the number of people who signed up and were successful) was not good.

When we stopped and looked at who our real customers are - the ones who generate real revenue - we quickly discovered that they are what we might refer to as elephants: big companies who completely understand how to develop, curate, sell, and ultimately deliver valuable content to their customers...who buy from them time and time again.

So we changed our approach and our website to communicate to the elephants.  This new approach will go live today.  The "old approach" will show up as a "Small Business" link at the bottom of the page.  The new approach explains the deeper details of integration, APIs and things that are important to the larger companies who know how to sell their valuable content.

We've had GREAT success with our elephants and we're VERY excited about where they are taking us!  We have a TON of new functionality that we continue to roll out each week.  We have integrated with a number of shopping carts.  We've created a new template system that will allow us to create a completely different look and feel for each of our clients.  We're launching a whole new series of brandable apps in the next few weeks.  We completely understand just how important our apps are to our success and have spent a fortune recreating our apps from the ground up.

It's been an exceptional ride over the last year.  We finalized our series A round this summer. Startups are an adrenaline junkie's dream job.  One day you're riding high on your laurels of success and the next day you're wondering how you're going to get to a cash flow positive position.  All the while, life, real life, goes on.  Your family continues to age, grow up, build their own businesses, and maybe you're not out having as much "fun" as you might like to.  For me that translates to not riding my dirt bike or snowmobile as much as I would like.  But I'm having fun in the business - that's the tradeoff.

That's what I call opportunity cost.  Each day you could be doing what you're doing or something else.  Take a minute to think about the cost of what you're doing right now.  Should you be hunting virality or elephants?

Oracle E-Business Suite Database 12c Upgrade Security Notes

When upgrading the Oracle E-Business Suite database to Oracle Database 12c (12.1), there are a number of security considerations and steps that should be included in the upgrade procedure.  Oracle Support Note ID 1524398.1 Interoperability Notes EBS 12.0 or 12.1 with RDBMS 12cR1 details the upgrade steps.  Here, we will document steps that should be included or modified to improve database security.  All references to steps are the steps in Note ID 1524398.1.

Step 8

"While not mandatory for the interoperability of Oracle E-Business Suite with the Oracle Database, customers may choose to apply Database Patch Set Updates (PSU) on their Oracle E-Business Suite Database ...".

After any database upgrade, the latest CPU patch (either PSU or SPU) should always be applied.  The database upgrade only has the latest CPU patch available at the time of release of the database upgrade patch.  In the case of 12.1.0.1, the database upgrade will be current as of July 2013 and be missing the latest five CPU patches.  Database upgrade patches reset the CPU level - so even if you had applied the latest CPU patch prior to the upgrade, the upgrade will revert the CPU patch level to July 2013.

From a security perspective, the latest PSU patch should be considered mandatory.

Step 11

It is important to note from a security perspective that Database Vault must be disabled during the upgrade process.  Any protections enabled in Database Vault intended for DBAs will be disabled during the upgrade.

Step 15

The DMSYS schema is no longer used with Oracle E-Business Suite and can be safely dropped.  We recommend you drop the schema as part of this step to reduce the attack surface of the database and remove unused components.  Use the following SQL to remove the DMSYS user --

DROP USER DMSYS CASCADE;
Step 16

As part of the upgrade, it is a good time to review that security-related initialization parameters are set correctly.  Verify the following parameters are set -

o7_dictionary_accessibility = FALSE
audit_trail = <set to a value other than none>
sec_case_sensitive_logon = TRUE (patch 12964564 may have to be applied)
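A quick way to verify all three at once (a sketch; run as a DBA user):

```sql
SELECT name, value
  FROM v$parameter
 WHERE name IN ('o7_dictionary_accessibility',
                'audit_trail',
                'sec_case_sensitive_logon');
```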
Step 20

For Oracle E-Business Suite 12.1, the sqlnet_ifile.ora should contain the following parameter to correspond with the initialization parameter sec_case_sensitive_logon = true -

SQLNET.ALLOWED_LOGON_VERSION_SERVER = 10


Oracle E-Business Suite, DBA
Categories: APPS Blogs, Security Blogs

SLOB 2.2 Not Generating AWR reports? Testing Large User Counts With Think Time? Think Processes and SLOB_DEBUG.

Kevin Closson - Tue, 2014-12-16 17:30

I’ve gotten a lot of reports of folks branching out into SLOB 2.2 large user count testing with the SLOB 2.2 Think Time feature. I’m also getting reports that some of the same folks are not getting the resultant AWR reports one expects from a SLOB test.

If you are not getting your AWR reports there is the old issue I blogged about here (click here). That old issue was related to a Redhat bug.  However, if you have addressed that problem, and still are not getting your AWR reports from large user count testing, it might be something as simple as the processes initialization parameter. After all, most folks have been accustomed to generating massive amounts of physical I/O with SLOB at low session counts.

I’ve made a few changes to runit.sh that will help future folks should they fall prey to the simple processes initialization parameter folly. The fixes will go into SLOB 2.2.1.3. The following is a screen shot of these fixes and what one should expect to see in such a situation in the future. In the meantime, do take note of SLOB_DEBUG as mentioned in the screenshot:

slob2.2-processes-folly
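If large user count runs are failing for this reason, the check and fix are simple (the 1200 below is just an example value - size processes above your planned SLOB session count plus background processes):

```sql
-- In SQL*Plus, as SYSDBA:
SHOW PARAMETER processes

ALTER SYSTEM SET processes = 1200 SCOPE = SPFILE;  -- example value
SHUTDOWN IMMEDIATE
STARTUP
```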


Filed under: oracle

SLOB Data Loading Case Studies – Part II. SLOB 2.2 For High-Bandwidth Data Loading.

Kevin Closson - Tue, 2014-12-16 00:44

BLOG UPDATE 2015.07.24: For all testing recipes please visit the SLOB Recipes section of kevinclosson.net/slob

This is Part II in a series. Part I can be found here (click here). Part I in the series covered a very simple case of SLOB data loading. This installment is aimed at how one can use SLOB as a platform test for a unique blend of concurrent, high-bandwidth data loading, index creation and CBO statistics gathering.

Put SLOB On The Box – Not In a Box

As a reminder, the latest SLOB kit is always available here: kevinclosson.net/slob .

Often I hear folks speak of what SLOB is useful for and the list is really short. The list is so short that a single acronym seems to cover it—IOPS, just IOPS and nothing else. SLOB is useful for so much more than just testing a platform for IOPS capability. I aim to make a few blog installments to make this point.

SLOB for More Than Physical IOPS

I routinely speak about how to use SLOB to study host characteristics such as NUMA and processor threading (e.g., Simultaneous Multithreading on modern Intel Xeons). This sort of testing is possible when the sum of all SLOB schemas fit into the SGA buffer pool. When testing in this fashion, the key performance indicators (KPI) are LIOPS (Logical I/O per second) and SQL Executions per second.

This blog post is aimed at suggesting yet another manner of platform testing with SLOB–specifically concurrent bulk data loading.

The SLOB data loader (~SLOB/setup.sh) offers the ability to test non-parallel, concurrent table loading, index creation and CBO statistics collection.

In this blog post I’d like to share a “SLOB data loading recipe kit” for those who wish to test high performance SLOB data loading. The contents of the recipe will be listed below. First, I’d like to share a platform measurement I took using the data loading recipe. The host was a 2s20c40t E5-2600v2 server with 4 active 8GFC paths to an XtremIO array.

The tar archive kit I’ll refer to below has the full slob.conf in it, but for now I’ll just use a screen shot. Using this slob.conf and loading 512 SLOB schema users generates 1TB of data in the IOPS tablespace. Please note the attention I’ve drawn to the slob.conf parameters SCALE and LOAD_PARALLEL_DEGREE. The size of the aggregate of SLOB data is a product of SCALE and the number of schemas being loaded. I drew attention to LOAD_PARALLEL_DEGREE because that is the key setting in increasing the concurrency level during data loading. Most SLOB users are quite likely not accustomed to pushing concurrency up to that level. I hope this blog post makes doing so seem more worthwhile in certain cases.
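The sizing arithmetic above can be sketched directly. Note the roughly 2GB-per-schema figure below is inferred from this recipe's outcome (1TB across 512 schemas), not a SLOB default:

```python
# Aggregate SLOB data = per-schema data size * number of loaded schemas.
# 512 schemas at roughly 2 GB each (as implied by this recipe) ~= 1 TB.
schemas = 512
per_schema_gb = 2            # inferred from SCALE for this recipe
total_gb = schemas * per_schema_gb
assert total_gb == 1024      # i.e., 1 TB in the IOPS tablespace
```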

SLOB-dataload-slob.conf

The following is a screenshot of the output from the SLOB 2.2 data loader. The screenshot shows that the concurrent data loading portion of the procedure took 1,474 seconds. On the surface that would appear to be a data loading rate of approximately 2.5TB/h. One thing to remember, however, is that SLOB data is loaded in batches controlled by LOAD_PARALLEL_DEGREE. Each batch loads LOAD_PARALLEL_DEGREE number of tables and then creates unique indexes and performs CBO statistics gathering. So the overall “data loading” time is really data loading plus these ancillary tasks. To put that another way, it’s true this is a 2.5TB/h data loading use case but there is more going on than just simple data loading. If this were a pure and simple data loading processing stream then the results would be much higher than 2.5TB/h. I’ll likely blog about that soon.

slob2.2-load-1TB

As the screenshot shows the latest SLOB 2.2 data loader isolates the concurrent loading portion of setup.sh. In this case, the seed table (user1) was loaded in 20 seconds and then the concurrent loading portion completed in 1,474 seconds.

That Sounds Like A Good Amount Of Physical I/O But What’s That Look Like?

To help you visualize the physical I/O load this manner of testing places on a host, please consider the following screenshot. The screenshot shows peaks in the vmstat 30-second interval reporting of approximately 2.8GB/s physical read I/O combined with about 435MB/s write I/O, for a combined total of about 3.2GB/s. This host has but 4 active 8GFC fibre channel paths to storage so that particular bottleneck is simple to solve by adding another 4-port HBA! Note also how very little host CPU is utilized to generate the 4x8GFC saturating workload. User mode cycles are but 15% and kernel mode utilization was 9%. It’s true that 24% sounds like a lot, however, this is a 2s20c40t host and therefore 24% accounts for only 9.6 processor threads–or roughly 5 cores’ worth of bandwidth. There may be some readers who were not aware that 5 “paltry” Ivy Bridge Xeon cores are capable of driving this much data loading!
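The processor-thread arithmetic in the paragraph above checks out directly (a 2s20c40t host is 2 sockets x 10 cores x 2 SMT threads = 40 threads):

```python
# 2s20c40t host: 24% total CPU utilization expressed as threads and cores.
threads = 2 * 10 * 2           # sockets * cores * SMT threads = 40
utilization = 0.15 + 0.09      # user mode + kernel mode = 24%
busy_threads = utilization * threads
busy_cores = busy_threads / 2  # 2 SMT threads per core
assert abs(busy_threads - 9.6) < 1e-9
assert abs(busy_cores - 4.8) < 1e-9  # ~5 cores' worth of bandwidth
```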

NOTE: The SLOB method is centered on sparse blocks. Naturally, fewer CPU cycles are required for loading data into sparse blocks.

Please note, the following vmstat shows peaks and valleys. I need to remind you that SLOB data loading consists of concurrent processing of not only data loading (Insert as Select) but also unique index creation and CBO statistics gathering. As one would expect, I/O will wane as the loading process shifts from the bulk data load to the index creation phase and then back again.

vmstat-SLOB-dataload

Finally, the following screenshot shows the very minimalist init.ora settings I used during this testing.

SLOB-dataload-load.ora

The Recipe Kit

The recipe kit can be found in the following downloadable tar archive. The kit contains the necessary files one would need to reproduce this SLOB data loading time so long as the platform has sufficient performance attributes. The tar archive also has all output generated by setup.sh as the following screenshot shows:

slob-data-load-kit

The SLOB 2.2 data loading recipe kit can be downloaded here (click here). Please note, the screenshot immediately above shows the md5 checksum for the tar archive.

Summary

This post shows how one can tune the SLOB 2.2 data loading tool (setup.sh) to load 1 terabyte of SLOB data in well under 25 minutes. I hope this is helpful information and that, perhaps, it will encourage SLOB users to consider using SLOB for more than just physical IOPS testing.


Filed under: oracle

LAF 1.7.7 new gift for Christmas

Francois Degrelle - Tue, 2014-12-16 00:44
Hello there, It's been a long time. This is an amazing new .FMB Forms module, a kind of "Tetris"-like game. It needs the latest 1.7.7 version of the LAF to run. The game is a little bit buggy, but the aim here is only to demonstrate what you can do with...

Is Oracle 12c REST ready?

Marcelo Ochoa - Sat, 2014-12-13 16:33
This post is a continuation of my previous post Is Oracle 11g REST Ready?, and the answer is yes.
Again, the embedded JVM in the Oracle RDBMS allows us to run a complete REST stack and application.
To show how to implement a simple Hello World REST application, this time I decided to use the Jersey REST stack.
Oracle 12c ships with two JDKs (1.6 and 1.7), and to compile and run Jersey we have to switch the RDBMS from the default 1.6 to the 1.7 JDK. Follow this guide to do that, but remember that in a CDB/PDB environment switching the JDK changes the JDK compatibility for all PDBs; here is another good post on that topic, DB 12c update java to version 7.
Once our RDBMS is ready with JDK 1.7, we need Jersey compiled and ready to upload. Here are my steps:
a.- Check JDK version:
    mochoa@localhost:~$ export JAVA_HOME=/usr/local/jdk1.7
    mochoa@localhost:~$ export PATH=$JAVA_HOME/bin:$PATH
    mochoa@localhost:~$ type java
    java is /usr/local/jdk1.7/bin/java
    mochoa@localhost:~$ java -version
    java version "1.7.0_55"
    Java(TM) SE Runtime Environment (build 1.7.0_55-b13)
    Java HotSpot(TM) 64-Bit Server VM (build 24.55-b03, mixed mode)
    b.- Check Maven:
    mochoa@localhost:~$ mvn -version
    Apache Maven 3.2.3 (33f8c3e1027c3ddde99d3cdebad2656a31e8fdf4; 2014-08-11T17:58:10-03:00)
    Maven home: /usr/local/apache-maven-3.2.3
    Java version: 1.7.0_55, vendor: Oracle Corporation
    Java home: /home/usr/local/jdk1.7.0_55/jre
    Default locale: en_US, platform encoding: UTF-8
    OS name: "linux", version: "3.13.0-40-generic", arch: "amd64", family: "unix"
    c.- Download and build Jersey using this guide Building and Testing Jersey. After a successful build of Jersey, all components will be located in our local Maven repository; on Linux it is at $HOME/.m2/repository
    d.- Add a new container implementation for the Oracle XMLDB Servlet; sources can be downloaded using this link. This new container implementation is a cloned version of jersey-servlet-core, downgrading Servlet 2.3 to the 2.2 level implemented by XMLDB. Here are the steps:
    mochoa@localhost:~/jdeveloper/mywork/jersey$ cd containers/
    mochoa@localhost:~/jdeveloper/mywork/jersey/containers$ tar xvfz /tmp/xdb-servlet.tar.gz
    mochoa@localhost:~/jdeveloper/mywork/jersey/containers$ cd xdb-servlet/
    mochoa@localhost:~/jdeveloper/mywork/jersey/containers/xdb-servlet$ mvn -Dmaven.test.skip=true clean install
    [INFO] Scanning for projects...
    [INFO]                                                                        
    [INFO] ------------------------------------------------------------------------
    [INFO] Building jersey-container-servlet-xdb 2.14-SNAPSHOT
    .... lot of stuff here ....
    [INFO] BUILD SUCCESS
    [INFO] ------------------------------------------------------------------------
    [INFO] Total time: 18.146 s
    [INFO] Finished at: 2014-12-13T17:07:21-03:00
    [INFO] Final Memory: 29M/351M
    [INFO] ------------------------------------------------------------------------
    e.- Create a new user in the RDBMS; this user will contain the entire Jersey stack:
    SQL> select tablespace_name from dba_tablespaces;
    TABLESPACE_NAME
    ------------------------------
    SYSTEM
    SYSAUX
    TEMP
    USERS
    EXAMPLE
    SQL> create user jersey identified by jersey
         default tablespace users
         temporary tablespace temp
         quota unlimited on users;
    User created.
    SQL> grant connect,resource,create public synonym to jersey;
    Grant succeeded.
    f.- Upload all libraries to the RDBMS. The list of libraries built from the Jersey sources, plus their dependencies, is:
    1. javax/servlet/servlet-api/2.2/servlet-api-2.2.jar (the version already installed in the DB)
    2. javax/persistence/persistence-api/1.0/persistence-api-1.0.jar
    3. org/glassfish/hk2/external/javax.inject/2.4.0-b06/javax.inject-2.4.0-b06.jar
    4. org/glassfish/hk2/hk2-utils/2.4.0-b06/hk2-utils-2.4.0-b06.jar
    5. org/osgi/org.osgi.core/4.2.0/org.osgi.core-4.2.0.jar
    6. org/glassfish/hk2/osgi-resource-locator/1.0.1/osgi-resource-locator-1.0.1.jar
    7. org/glassfish/hk2/hk2-api/2.4.0-b06/hk2-api-2.4.0-b06.jar
    8. javax/ws/rs/javax.ws.rs-api/2.0.1/javax.ws.rs-api-2.0.1.jar
    9. org/glassfish/jersey/bundles/repackaged/jersey-guava/2.14-SNAPSHOT/jersey-guava-2.14-SNAPSHOT.jar
    10. javax/annotation/javax.annotation-api/1.2/javax.annotation-api-1.2.jar
    11. org/glassfish/jersey/core/jersey-common/2.14-SNAPSHOT/jersey-common-2.14-SNAPSHOT.jar
    12. org/glassfish/jersey/core/jersey-client/2.14-SNAPSHOT/jersey-client-2.14-SNAPSHOT.jar
    13. javax/validation/validation-api/1.1.0.Final/validation-api-1.1.0.Final.jar
    14. javassist/javassist/3.12.1.GA/javassist-3.12.1.GA.jar
    15. org/glassfish/hk2/external/aopalliance-repackaged/2.4.0-b06/aopalliance-repackaged-2.4.0-b06.jar
    16. org/glassfish/hk2/hk2-locator/2.4.0-b06/hk2-locator-2.4.0-b06.jar
    17. org/glassfish/jersey/core/jersey-server/2.14-SNAPSHOT/jersey-server-2.14-SNAPSHOT.jar
    library [1] should never be uploaded into the RDBMS because it is part of the XMLDB implementation, so here are the steps to upload libraries [2]-[17]:

    $ cd $HOME/.m2/repository
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[2]
    Classes Loaded: 91
    Resources Loaded: 2
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 0
    Synonyms Created: 91
    Errors: 0
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[3]
    Classes Loaded: 6
    Resources Loaded: 3
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 0
    Synonyms Created: 6
    Errors: 0
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[4]
    Classes Loaded: 60
    Resources Loaded: 5
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 0
    Synonyms Created: 60
    Errors: 0
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[5]
    Some errors, but no resolving problems found.
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[6]
    Classes Loaded: 0
    Resources Loaded: 3
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 12
    Synonyms Created: 12
    Errors: 0
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[7]
    Classes Loaded: 0
    Resources Loaded: 3
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 153
    Synonyms Created: 153
    Errors: 0
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[8]
    Classes Loaded: 125
    Resources Loaded: 3
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 0
    Synonyms Created: 125
    Errors: 0
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[9]
    Classes Loaded: 1594
    Resources Loaded: 3
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 0
    Synonyms Created: 1594
    Errors: 0
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[10]
       ...Some errors not allowed in PDB, other classes OK....
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[11]
    Classes Loaded: 0
    Resources Loaded: 5
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 490
    Synonyms Created: 490
    Errors: 0
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[12]
    Classes Loaded: 99
    Resources Loaded: 4
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 0
    Synonyms Created: 99
    Errors: 0
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[13]
    Classes Loaded: 106
    Resources Loaded: 3
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 0
    Synonyms Created: 106
    Errors: 0
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[14]
    Classes Loaded: 366
    Resources Loaded: 3
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 0
    Synonyms Created: 366
    Errors: 0
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[15]
    Classes Loaded: 26
    Resources Loaded: 3
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 0
    Synonyms Created: 26
    Errors: 0
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[16]
    Classes Loaded: 0
    Resources Loaded: 4
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 92
    Synonyms Created: 92
    Errors: 0
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[17]
    Classes Loaded: 0
    Resources Loaded: 16
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 652
    Synonyms Created: 652
    Errors: 0
    We can verify the loadjava commands above, logged as jersey into the target database, using the following queries (all queries must return no rows):
    SQL> select dbms_java.longname(object_name) from all_objects where object_type = 'JAVA CLASS' AND status = 'INVALID'
    AND dbms_java.longname(object_name) like 'javax/persistence/%';
    SQL> select dbms_java.longname(object_name) from all_objects where object_type = 'JAVA CLASS' AND status = 'INVALID'
    AND dbms_java.longname(object_name) like 'javax/inject/%';
    SQL> select dbms_java.longname(object_name) from all_objects where object_type = 'JAVA CLASS' AND status = 'INVALID'
    AND dbms_java.longname(object_name) like 'javax/annotation/%';
    SQL> select dbms_java.longname(object_name) from all_objects where object_type = 'JAVA CLASS' AND status = 'INVALID'
    AND dbms_java.longname(object_name) like 'javax/validation/%';
    SQL> select dbms_java.longname(object_name) from all_objects where object_type = 'JAVA CLASS' AND status = 'INVALID'
    AND dbms_java.longname(object_name) like 'javax/ws/rs/%';
    SQL> select dbms_java.longname(object_name) from all_objects where object_type = 'JAVA CLASS' AND status = 'INVALID'
    AND dbms_java.longname(object_name) like 'jersey/%';
    SQL> select dbms_java.longname(object_name) from all_objects where object_type = 'JAVA CLASS' AND status = 'INVALID'
    AND dbms_java.longname(object_name) like 'org/%';
    g.- Finally, upload our xdb-servlet container:
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl org/glassfish/jersey/containers/jersey-container-servlet-xdb/2.14-SNAPSHOT/jersey-container-servlet-xdb-2.14-SNAPSHOT.jar
    Classes Loaded: 39
    Resources Loaded: 4
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 0
    Synonyms Created: 39
    Errors: 0
    h.- At this point we have everything uploaded into the RDBMS; now we prepare XMLDB to run Java-implemented Servlets.
    Enable XMLDB HTTP access:
    SQL> EXEC DBMS_XDB.SETHTTPPORT(8080);
    PL/SQL procedure successfully completed.
    SQL> alter system register;
    System altered.
    i.- By default XMLDB is configured with digest authentication. To change that, download the xdbconfig.xml file using FTP, update the <authentication> section as follows, and upload it again using FTP (requires the SYS user):
    <authentication>
    <allow-mechanism>basic</allow-mechanism>
    </authentication>
    j.- Grants are required for running the Jersey Servlet. We are using the JERSEY user here; for other accounts, similar grants are required either directly or by creating a new role (recommended):
    SQL> exec dbms_java.grant_permission( 'JERSEY', 'SYS:java.lang.RuntimePermission', 'getClassLoader','');
    SQL> exec dbms_java.grant_permission( 'JERSEY', 'SYS:java.lang.RuntimePermission', 'accessDeclaredMembers', '' );
    SQL> exec dbms_java.grant_permission( 'JERSEY', 'SYS:java.lang.reflect.ReflectPermission', 'suppressAccessChecks', '' );
    SQL> exec dbms_java.grant_permission( 'JERSEY','SYS:java.util.logging.LoggingPermission', 'control', '' );
    k.- Upload a simple Hello World application from the examples directory:
    mochoa@localhost:~/jdeveloper/mywork/jersey$ cd examples/helloworld
    mochoa@localhost:~/jdeveloper/mywork/jersey/examples/helloworld$ loadjava -r -v -u jersey/jersey@pdborcl target/classes/org/glassfish/jersey/examples/helloworld/HelloWorldResource.class
    arguments: '-u' 'jersey/***@pdborcl' '-r' '-v' 'target/classes/org/glassfish/jersey/examples/helloworld/HelloWorldResource.class'
    identical: org/glassfish/jersey/examples/helloworld/HelloWorldResource
    skipping : class org/glassfish/jersey/examples/helloworld/HelloWorldResource
    Classes Loaded: 0
    Resources Loaded: 0
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 1
    Synonyms Created: 0
    Errors: 0
    l.- Register the Servlet into the XMLDB Adapter (logged as SYS):
    SQL> DECLARE
        configxml SYS.XMLType;
    begin
     dbms_xdb.deleteServletMapping('JerseyServlet');
     dbms_xdb.deleteServlet('JerseyServlet');
     dbms_xdb.addServlet(name=>'JerseyServlet',language=>'Java',class=>'org.glassfish.jersey.servlet.ServletContainer',dispname=>'Jersey Servlet',schema=>'jersey');
    SELECT INSERTCHILDXML(xdburitype('/xdbconfig.xml').getXML(),'/xdbconfig/sysconfig/protocolconfig/httpconfig/webappconfig/servletconfig/servlet-list/servlet[servlet-name="JerseyServlet"]','init-param',
    XMLType('<init-param><param-name>jersey.config.server.provider.classnames</param-name><param-value>org.glassfish.jersey.examples.helloworld.HelloWorldResource</param-value><description>Hello World Application</description></init-param>'),'xmlns="http://xmlns.oracle.com/xdb/xdbconfig.xsd"') INTO configxml
    FROM DUAL;
     dbms_xdb.cfg_update(configxml);
     dbms_xdb.addServletSecRole(SERVNAME => 'JerseyServlet',ROLENAME => 'authenticatedUser',ROLELINK => 'authenticatedUser');
     dbms_xdb.addServletMapping('/jersey/*','JerseyServlet');
     commit;
    end;
    /
    m.- Finally, test the app:
    mochoa@localhost:~/jdeveloper/mywork/jersey/examples/helloworld$ curl --basic --user jersey:jersey http://localhost:8080/jersey/helloworld
    Hello World!!
    And that's all, happy 12c REST world.
    Notes on security:
    1. As you can see, when registering the Servlet we added ROLENAME => 'authenticatedUser', ROLELINK => 'authenticatedUser'; this implies that an RDBMS user name and password are required for accessing this Servlet. As in the example, we have to provide jersey/jersey, which is the owner of the Hello World app.
    2. The HTTP protocol sends the user name and password encoded as Base64 when using the basic authentication scheme; if we want to hide this information over the net when using plain HTTP, we have to move to HTTPS.
    3. If you install another Hello World application in a different schema, for example scott, it is necessary to also upload the class ServletContainer, for example using loadjava -u scott/tiger@pdborcl org/glassfish/jersey/servlet/ServletContainer.class, and obviously our new application class, finally registering the Servlet with the tag changed to <servlet-schema>scott</servlet-schema>
    4. Servlets that run without authentication are registered using ROLENAME => 'PUBLIC', ROLELINK => 'PUBLIC', but this is NOT recommended; it requires unlocking the anonymous account and these grants:
    SQL> ALTER USER ANONYMOUS ACCOUNT UNLOCK;
    User altered.
    SQL> exec dbms_java.grant_permission( 'ANONYMOUS', 'SYS:java.lang.RuntimePermission', 'getClassLoader','');
    SQL> exec dbms_java.grant_permission( 'ANONYMOUS', 'SYS:java.lang.RuntimePermission', 'accessDeclaredMembers', '' );
    SQL> exec dbms_java.grant_permission( 'ANONYMOUS', 'SYS:java.lang.reflect.ReflectPermission', 'suppressAccessChecks', '' );
    SQL> exec dbms_java.grant_permission( 'ANONYMOUS','SYS:java.util.logging.LoggingPermission', 'control', '' );
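Note 2 above is worth illustrating (this snippet is mine, not part of the original post): HTTP Basic credentials are merely Base64-encoded, not encrypted, so anyone observing plain HTTP traffic can recover them:

```python
import base64

# HTTP Basic auth simply Base64-encodes "user:password" -- it is
# encoding, not encryption, hence the recommendation to use HTTPS.
credentials = "jersey:jersey"
header_value = base64.b64encode(credentials.encode()).decode()
assert header_value == "amVyc2V5OmplcnNleQ=="

# Trivially reversible by anyone sniffing the wire:
assert base64.b64decode(header_value).decode() == "jersey:jersey"
```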



    Throw it away - Why you shouldn't keep your POC

    Rob Baillie - Sat, 2014-12-13 04:26

    "Proof of Concepts" are a vital part of many projects, particularly towards the beginning of the project lifecycle, or even in the pre-business case stages.

    They are crucial for ensuring that facts are gathered before some particularly risky decisions are made.  Technical or functional, they can address many different concerns and each one can be different, but they all have one thing in common.  They serve to answer questions.

    It can be tempting, whilst answering these questions to become attached to the code that you generate.

    I would strongly argue that you should almost never keep the code that you build during a POC.  Certainly not to put into a production system.

    I'd go so far as to say that planning to keep the code is often damaging to the proof of concept; planning to throw the code away is liberating, more efficient and makes proof of concepts more effective by focussing minds on the questions that require answers.

    Why do we set out on a proof of concept?

    The purpose of a proof of concept is to (by definition):

      * Prove:  Demonstrate the truth or existence of something by evidence or argument.
      * Concept: An idea, a plan or intention.

    In most cases, the concept being proven is a technical one.  For example:
      * Will this language be suitable for building x?
      * Can I embed x inside y and get them talking to each other?
      * If I put product x on infrastructure y will it basically stand up?

    They can also be functional, but the principles remain the same for both.

    It's hard to imagine a proof of concept that cannot be phrased as one or more questions.  In a lot of cases I'd suggest that there's only really one important question with a number of ancillary questions that are used to build a body of evidence.

    The implication of embarking on a proof of concept is that when you start you don't know the answer to the questions you're asking.  If you *do* already know the answers, then the POC is of no value to you.

    By extension, there is the implication that the questions posed require to be answered as soon as possible in order to support a decision.  If that's not the case then, again, the POC is probably not of value to you.

    As such, the only thing that the POC should aim to achieve is to answer the question posed and to do so as quickly as possible.

    This is quite different to what we set out to do in our normal software development process. 

    We normally know the answer to the main question we're asking (How do we functionally provide a solution to this problem / take advantage of this opportunity), and most of the time is spent focussed on building something that is solid, performs well and generally good enough to live in a production environment - in essence, not answering the question, but producing software.

    What process do we follow when embarking on a proof of concept?

    Since the aim of a POC is distinct from what we normally set out to achieve, the process for a POC is intrinsically different to that for the development of a production system.

    With the main question in mind, you often follow an almost scientific process.  You put forward a hypothesis, you set yourself tasks that are aimed at collecting evidence that will support or deny that hypothesis, you analyse the data, put forward a revised hypothesis and you start again.

    You keep going round in this routine until you feel you have an answer to the question and enough evidence to back that answer up.  It is an entirely exploratory process.

    Often, you will find that you spend days following avenues that don't lead anywhere, backtrack and reassess, following a zig-zag path through a minefield of wrong answers until you reach the end point.  In this kind of situation, the code you have produced is probably one of the most barnacle-riddled messes you have ever produced.

    But that's OK.  The reason for the POC wasn't to build a codebase, it was to provide an answer to a question and a body of evidence that supports that answer.

    To illustrate:

    Will this language be suitable for building x?

    You may need to check things like that you can build the right type of user interfaces, that APIs can be created, that there are ways of organising code that makes sense for the long term maintenance for the system.

    You probably don't need to build a completely functional UI, create a fully functioning API with solid error handling or define the full set of standards for implementing a production quality system in the given language.

    That said, if you were building a production system in the language you wouldn't dream of having an incomplete UI, or an API that doesn't handle errors completely, or just knocking stuff together in an ad-hoc manner.

    Can I embed x inside y and get them talking to each other?

    You will probably need to define a communication method and prove that it basically works.  Get something up and running that is at least reasonably functional in the "through the middle" test case.

    You probably don't need to develop an architecture that is clean with separation of concerns that means the systems are properly independent and backwards compatible with existing integrations. Or that all interactions are properly captured and that exceptional circumstances are dealt with correctly.

    That said, if you were building a production system, you'd need to ensure that you define the full layered architecture, understand the implications of lost messages, prove the level of chat that will occur between the systems.  On top of that you need to know that you don't impact pre-existing behaviour or APIs.

    If I put product x on infrastructure y will it basically stand up?

    You probably need to just get the software on there and run your automated tests.  Maybe you need to prove the performance and so you'll put together some ad-hoc performance scripts.

    You probably don't need to prove that your release mechanism is solid and repeatable, or ensure that your automated tests cover some of the peculiarities of the new infrastructure, or that you have a good set of long term performance test scripts that drop into your standard development and deployment process.

    That said, if you were building a production system, you'd need to know exactly how the deployments worked, fit it into your existing continuous delivery suite, performance test and analyse on an automated schedule.

    Production development and Proof of Concept development is not the same

    The point is, when you are building a production system you have to do a lot of leg-work; you know you can validate all the input being submitted in a form, or coming through an API - you just have to do it.

    You need to ensure that the functionality you're providing works in the majority of use-cases, and if you're working in a TDD environment then you will prove that by writing automated tests before you've even started creating that functionality.

    When you're building a proof of concept, not only should these tests be a lower priority, I would argue that they should be *no priority whatsoever*, unless they serve to test the concept that you're trying to prove.

    That is,  you're not usually trying to ensure that this piece of code works in all use-cases, but rather that this concept works in the general case with a degree of certainty that you can *extend* it to all cases.

    Ultimately, the important deliverable of a POC is proof that the concept works, or doesn't work; the exploration of ideas and the conclusion you come to; the journey of discovery and the destination of the answer to the question originally posed.

    That is intellectual currency, not software.  The important delivery of a production build is the software that is built.

    That is the fundamental difference, and why you should throw your code away.

    Paginated HTML is here and has been for some time ... I think!

    Tim Dexter - Fri, 2014-12-12 18:03

    We have a demo environment in my team and of course things get a little beaten up in there. Our go-to 'here's Publisher' report was looking really bad. Data was not returning or being rendered correctly on the five templates we have for it.
    So, I spent about a half hour cleaning up the report; getting things working again; clearing out the rubbish. I noticed that one of the layouts when rendered in HTML was repeatedly showing a header down the screen. Oh, I know where to get rid of that and off I click to the report properties to fix it. But what is this I see? Is it? Can it be? Are my tired old eyes deceiving me?

    Yes, Dexter, you see that right, 'View Paginated'! I nervously changed the value to 'true' and went back to the HTML output.
    Holy Amaze Balls Batman, paginated HTML, the holy grail of HTML rendered reports, the Mount Everest of ... no, that's too easy, the K2 of HTML output ... it's fan-bloody-tastic! Can you tell I'm excited? I was immediately on messenger to Leslie (doc writer extraordinaire)


    Obviously not quite as big a deal in the sane, real world outside of my head. 'Oh yeah, we have that now ...' Leslie is so calm and collected, however, she does like Maroon 5 but, we overlook that :)

    I command you 11.1.1.6+'ers to go find the property and turn it on right now and bask in the glory that is 'paginated HTML'!
    I cannot stop clicking back and forth and then to the end and then all the way back to the beginning. It's fantastic!

    Just look at those icons, just click em, you know you want to!

    Categories: BI & Warehousing
