Feed aggregator

Selling Books on Amazon vs. Intelivideo

Bradley Brown - Thu, 2015-09-17 21:44
For the last couple of years I've heard that publishers don't like selling their books through Amazon.  The way it was described to me, it sounded like Amazon was forcing (or pushing) you to sell books for under $10.  It's actually pretty complicated how their pricing works, so I've attempted to simplify it for you here.  If you want the details, you can read more here:

https://kdp.amazon.com/help?topicId=A301WJ6XCJ8KW0

Basically, you have to pick which plan you’re on: the 35% or the 70% royalty plan.  At first glance, you would ask: why would anyone pick the 35% royalty plan?  Let's see, you do all of the research, write the book, take it through 5 edits and get it to the point of publishing, and you only get to keep 35% (or 70%) of the revenue generated?  Logically, who would say they only want to keep 35%?  It's more complicated - i.e., strings are attached to each choice.

If you pick the 70% royalty plan, you keep as much as 70% (minus delivery costs, and with about 100 other rules) of whatever they sell it for.  But according to the small print, on a number of your sales you’ll actually keep only 35% of whatever they sell it for.  Here's the real kicker: if you want to keep 70% (minus delivery costs, VAT, etc.), they force you to set the list price between $2.99 and $9.99, AND they will keep 65% if they sell it in other countries, etc.  If you choose the 35% royalty plan (keep in mind, they are keeping 65%), you can set the price between $2.99 and $200.  You can sell it for less than $2.99 (down to $0.99) if you have a small book (less than a 10 MB footprint).  They also say that the digital list price must be at least 20% below the lowest list price for the physical book.  Wow - SO many rules!

So Amazon keeps 30% (plus delivery costs) to 65% (and it’s usually this amount), sets minimum and maximum prices you can charge, imposes a lot of rules, AND it’s Amazon’s customer (not yours).
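To make the two plans concrete, here is a minimal sketch of the royalty math described above. The per-megabyte delivery fee and the file size are illustrative assumptions (the actual fee varies by marketplace), and the many territory- and VAT-related exceptions are ignored:

```python
# Simplified sketch of the two KDP royalty plans described above.
# The delivery fee per megabyte is an illustrative assumption; the
# real fee varies by marketplace.

def royalty(list_price, plan, file_size_mb=1.0, delivery_fee_per_mb=0.15):
    if plan == 70:
        if not (2.99 <= list_price <= 9.99):
            raise ValueError("70% plan requires a $2.99-$9.99 list price")
        # delivery costs come out of the author's share on the 70% plan
        return 0.70 * list_price - delivery_fee_per_mb * file_size_mb
    elif plan == 35:
        if not (0.99 <= list_price <= 200.00):
            raise ValueError("35% plan requires a $0.99-$200 list price")
        return 0.35 * list_price  # no delivery-fee deduction on this plan
    raise ValueError("plan must be 35 or 70")

# A $9.99 book nets more on the 70% plan; a $20 book is only
# allowed on the 35% plan.
print(royalty(9.99, 70))   # ≈ 6.84
print(royalty(20.00, 35))  # ≈ 7.00
```

Note how the price caps interact with the percentages: above $9.99 you are forced onto the 35% plan, so a $20 book earns barely more than a $9.99 one.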

The two pricing options are explained (and are tough to understand) here:

https://kdp.amazon.com/help?topicId=A29FL26OKE7R7B

And their FAQ is here:

https://kdp.amazon.com/help?topicId=A30F3VI2TH1FR8

We're soon to release (secure) eBook functionality at Intelivideo.  So how does it work?  If you pick the Pro Plan, you keep 85% of the revenue, and you can set whatever price you want.  We have some other fine print, but overall I can assure you that our pricing is WAY better than Amazon's offer - and the customer is yours.  You can sell them more products.  You can run promotions for them.  You can upsell them.  I'm shocked by Amazon's model and now understand the frustration others have!

Oracle Priority Service Infogram for 17-SEP-2015

Oracle Infogram - Thu, 2015-09-17 16:41

OpenWorld

OpenWorld is coming up soon, and articles on how to get the most of it are starting to fill the blogosphere. Here’s one from The Data Warehouse Insider: OpenWorld 2015 on your smartphone and tablet.


RDBMS

SE2 - Some questions, some answers ..., from Upgrade your Database – NOW!


PL/SQL

A PL/SQL Inlining Primer, from Oracle Database PL/SQL and EBR.

Exadata

Exadata 12.1.2.2.0 Software is Released, from Emre Baransel, Support Engineer's Blog.


Hardware

All-Flash Oracle FS1 Storage System, from Oracle PartnerNetwork News.

SOA and BPM



Java

Concurrency on the JVM, from The Java Source.


Solaris


EBS

From the Oracle E-Business Suite Support blog:






From the Oracle E-Business Suite Technology blog:




The Fundamental Challenge of Computer System Performance

Cary Millsap - Thu, 2015-09-17 11:46
The fundamental challenge of computer system performance is for your system to have enough power to handle the work you ask it to do. It sounds really simple, but helping people meet this challenge has been the point of my whole career. It has kept me busy for 26 years, and there’s no end in sight.
Capacity and Workload

Our challenge is the relationship between a computer’s capacity and its workload. I think of capacity as an empty box representing a machine’s ability to do work over time. Workload is the work your computer does, in the form of programs that it runs for you, executed over time. Workload is the content that can fill the capacity box.


Capacity Is the One You Can Control, Right?

When the workload gets too close to filling the box, what do you do? Most people’s instinctive reaction is that, well, we need a bigger box. Slow system? Just add power. It sounds so simple, especially since—as “everyone knows”—computers get faster and cheaper every year. We call that the KIWI response: kill it with iron.
KIWI... Why Not?

As welcome as KIWI may feel, KIWI is expensive, and it doesn’t always work. Maybe you don’t have the budget right now to upgrade to a new machine. Upgrades cost more than just the hardware itself: there’s the time and money it takes to set it up, test it, and migrate your applications to it. Your software may cost more to run on faster hardware. What if your system is already the biggest and fastest one they make?

And as weird as it may sound, upgrading to a more powerful computer doesn’t always make your programs run faster. There are classes of performance problems that adding capacity never solves. (Yes, it is possible to predict when that will happen.) KIWI is not always a viable answer.
So, What Can You Do?

Performance is not just about capacity. Though many people overlook them, there are solutions on the workload side of the ledger, too. What if you could make workload smaller without compromising the value of your system?
It is usually possible to make a computer produce all of the useful results that you need without having to do as much work. You might be able to make a system run faster by making its capacity box bigger. But you might also make it run faster by trimming down that big red workload inside your existing box. If you trim off only the wasteful stuff, then nobody gets hurt, and you have a win all around.

So, how might one go about doing that?
Workload

“Workload” is a compound of two words. It is useful to think about those two words separately.


The amount of work your system does for a given program execution is determined mostly by how that program is written. A lot of programs make their systems do more work than they should. Your load, on the other hand—the number of program executions people request—is determined mostly by your users. Users can waste system capacity, too; for example, by running reports that nobody ever reads.

Both work and load are variables that, with skill, you can manipulate to your benefit. You do it by improving the code in your programs (reducing work), or by improving your business processes (reducing load). I like workload optimizations because they usually save money and work better than capacity increases. Workload optimization can seem like magic.
The Anatomy of Performance

This simple equation explains why a program consumes the time it does:

r = cl        or        response time = call count × call latency

Think of a call as a computer instruction. Call count, then, is the number of instructions that your system executes when you run a program, and call latency is how long each instruction takes. How long you wait for your answer, then—your response time—is the product of your call count and your call latency.

Some fine print: It’s really a little more complicated than this, but actually not that much. Most response times are composed of many different types of calls, all of which have different latencies (we see these in program execution profiles), so the real equation looks like r = c1·l1 + c2·l2 + ... + cn·ln. But we’ll be fine with r = cl for this article.
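The fine-print version of the equation can be sketched with a made-up execution profile. The call types, counts, and latencies below are invented for illustration, not taken from a real profile:

```python
# Response time as the sum over call types of (count × latency) —
# the r = c1·l1 + c2·l2 + ... + cn·ln form from the fine print.
# The profile below is invented for illustration only.
profile = [
    # (call type, call count, latency per call in seconds)
    ("db file sequential read", 50_000, 0.005),
    ("CPU",                      1,     12.0),
    ("SQL*Net round trip",       2_000, 0.001),
]

# 50,000 × 0.005 + 1 × 12.0 + 2,000 × 0.001 = 250 + 12 + 2 seconds
response_time = sum(count * latency for _, count, latency in profile)
print(f"response time: {response_time:.1f} s")
```

Laid out like this, it is obvious where to attack first: the 50,000 read calls dominate, so cutting the call count matters far more than shaving a little latency off each call.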

Call count depends on two things: how the code is written, and how often people run that code.
  • How the code is written (work) — If you were programming a robot to shop for you at the grocery store, you could program it to make one trip from home for each item you purchase. Go get bacon. Come home. Go get milk... It would probably be dumb if you did it that way, because the duration of your shopping experience would be dominated by the execution of clearly unnecessary travel instructions, but you’d be surprised at how often people write programs that act like this.
  • How often people run that code (load) — If you wanted your grocery store robot to buy 42 things for you, it would have to execute more instructions than if you wanted to buy only 7. If you found yourself repeatedly discarding spoiled, unused food, you might be able to reduce the number of things you shop for without compromising anything you really need.
Call latency is influenced by two types of delays: queueing delays and coherency delays.
  • Queueing delays — Whenever you request a resource that is already busy servicing other requests, you wait in line. That’s a queueing delay. It’s what happens when your robot tries to drive to the grocery store, but all the roads are clogged with robots that are going to the store to buy one item at a time. Driving to the store takes only 7 minutes, but waiting in traffic costs you another 13 minutes. The more work your robot does, the greater its chances of being delayed by queueing, and the more such delays your robot will inflict upon others as well.
  • Coherency delays — You endure a coherency delay whenever a resource you are using needs to communicate or coordinate with another resource. For example, if your robot’s cashier at the store has to talk with a specific manager or other cashier (who might already be busy with a customer), the checkout process will take longer. The more times your robot goes to the store, the worse your wait will be, and everyone else’s, too.
The Secret

This r = cl thing sure looks like the equation for a line, but because of queueing and coherency delays, the value of l increases when c increases. This causes response time to act not like a line, but instead like a hyperbola.


Because our brains tend to conceive of our world as linear, nobody expects everyone’s response times to get seven times worse when you’ve added only some new little bit of workload, but that’s the kind of thing that routinely happens with performance. ...And not just computer performance. Banks, highways, restaurants, amusement parks, and grocery-shopping robots all work the same way.
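A quick way to see the hyperbola is to model per-call latency with the classic M/M/1 queueing formula, latency = service time / (1 − utilization). This model and its numbers are my illustration, not something from the article:

```python
# Why r = c·l bends into a hyperbola: as call count grows, utilization
# of the shared resource grows, and per-call latency grows with it.
# Uses the M/M/1 approximation latency = service_time / (1 - utilization);
# service_time and capacity are invented for illustration.

service_time = 0.010   # seconds of pure service per call
capacity = 1000.0      # calls per second the resource can serve

for calls_per_second in (100, 500, 900, 990):
    utilization = calls_per_second / capacity
    latency = service_time / (1 - utilization)  # queueing inflates latency
    print(f"load={calls_per_second:>4}/s  latency={latency * 1000:8.1f} ms")
```

Going from 900 to 990 calls per second is only 10% more load, but in this model latency jumps from about 100 ms to about 1000 ms per call: exactly the nonlinear surprise described above.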

Response times are tremendously sensitive to your call counts, so the secret to great performance is to keep your call counts small. This principle is the basis for perhaps the best and most famous performance optimization advice ever rendered:
The First Rule of Program Optimization: Don’t do it.

The Second Rule of Program Optimization (for experts only!): Don’t do it yet.

The Problem

Keeping call counts small is really, really important. This makes being a vendor of information services difficult, because it is so easy for application users to make call counts grow. They can do it by running more programs, by adding more users, by adding new features or reports, or even just by the routine process of adding more data every day.

Running your application with other applications on the same computer complicates the problem. What happens when all these applications’ peak workloads overlap? It is a problem that Application Service Providers (ASPs), Software as a Service (SaaS) providers, and cloud computing providers must solve.
The Solution

The solution is a process:
  1. Call counts are sacred. They can be difficult to forecast, so you have to measure them continually. Understand that. Hire people who understand it. Hire people who know how to measure and improve the efficiency of your application programs and the systems they reside on.
  2. Give your people time to fix inefficiencies in your code. An inexpensive code fix might return many times the benefit of an expensive hardware upgrade. If you have bought your software from a software vendor, work with them to make sure they are streamlining the code they ship you.
  3. Learn when to say no. Don’t add new features (especially new long-running programs like reports) that are inefficient, that make more calls than necessary. If your users are already creating as much workload as the system can handle, then start prioritizing which workload you will and won’t allow on your system during peak hours.
  4. If you are an information service provider, charge your customers for the amount of work your systems do for them. The economic incentive to build and buy more efficient programs works wonders.

Drop It Like It's Not

Scott Spendolini - Thu, 2015-09-17 09:50
I just ran the following script:

BEGIN
  -- TABLES
  FOR x IN (SELECT table_name FROM user_tables)
  LOOP
    EXECUTE IMMEDIATE 'DROP TABLE ' || x.table_name || ' CASCADE CONSTRAINTS';
  END LOOP;

  -- SEQUENCES
  FOR x IN (SELECT sequence_name FROM user_sequences)
  LOOP
    EXECUTE IMMEDIATE 'DROP SEQUENCE ' || x.sequence_name;
  END LOOP;

  -- VIEWS
  FOR x IN (SELECT view_name FROM user_views)
  LOOP
    EXECUTE IMMEDIATE 'DROP VIEW ' || x.view_name;
  END LOOP;
END;
/

Basically, drop all tables, views and sequences.  It worked great, cleaning out those objects in my schema without touching any packages, procedures or functions.  There was just one problem: I ran it in the wrong schema.

Maybe I didn't have enough coffee, or maybe I just wasn't paying attention, but I essentially wiped out a schema that I really would rather not have.  But I didn't even flinch, and here's why.

All tables & views were safely stored in my data model.  All sequences and triggers (and packages, procedures and functions) were safely stored in scripts.  And both the data model and associated scripts were safely checked in to version control.  So re-instantiating this project was a mere inconvenience that took no more than the time it takes to drink a cup of coffee - something I clearly should have done more of earlier this morning.

The point here is simple: take the extra time to create a data model and a version control repository for your projects - and then make sure to use them!  I religiously check in code and then make sure that at least my TRUNK is backed up elsewhere.  Worst case for me, I'd lose a couple of hours of work, perhaps even less, which is far better than the alternative.

Oracle Partners ♥ UX Innovation Events

Usable Apps - Thu, 2015-09-17 09:35

I have just returned from a great Apps UX Innovation Events Internet of Things (IoT) hackathon held at Oracle Nederland in Utrecht (I was there as a judge). This was the first such event organized in cooperation with an Oracle partner, in this case eProseed.

eProseed Managing Partner Lonneke Dikmans

Design patterns maven: eProseed managing partner, SOA, BPM and UX champ, Lonneke Dikmans (@lonnekedikans) at the hackathon. Always ready to fashion a business solution in a smart, reusable way.

You can read more about what went on at the event on other blogs, but from an Oracle partner enablement perspective (my main role), this kind of participation means a partner can:  

  • Learn hands-on about the latest Oracle technology from Oracle experts in person. This event provided opportunities to dive deep into Oracle Mobile Cloud Service, Oracle IoT Cloud, Oracle Mobile Application Framework, Oracle SOA Suite, and more, to explore building awesome contextual and connected solutions across a range of devices and tech.
  • Bring a team together in one place to work on business problems, to exchange ideas, and to build relationships with the "go-to" people in Oracle's technology and user experience teams.  
  • Demonstrate their design and development expertise and show real Oracle technology leadership to potential customers, to the Oracle PartnerNetwork, and to the educational, development, and innovation ecosystem.

That an eProseed team was declared the winner of the hackathon, and that eProseed scored high on all three benefits above, is just sweet!

eProseed NL team demo parking solution

The eProseed NL team shows off its winning "painless parking" IoT solution.

Many thanks to eProseed for bringing a team from across Europe and for working with Apps UX Innovation Events to make this event such a success for everyone there!

Stay tuned for more events on the Apps UX Innovation Events blog and watch out for news of the FY16 PaaS4SaaS UX enablement for Oracle partners on this blog.

Pictures from the IoT hackathon are on the Usable Apps Instagram account.

Index Advanced Compression: Multi-Column Index Part I (There There)

Richard Foote - Thu, 2015-09-17 01:57
I’ve discussed Index Advanced Compression here a number of times previously. It’s the really cool additional capability introduced to the Advanced Compression Option with 12.1.0.2, that not only makes compressing indexes a much easier exercise but also enables indexes to be compressed more effectively than previously possible. Thought I might look at a multi-column index to highlight just […]
Categories: DBA Blogs

My Nomination for the Oracle Database Developer Choice Awards

Dietmar Aust - Thu, 2015-09-17 00:30
Actually this came as a wonderful surprise ... I have been nominated for the Oracle Database Developer Choice Awards:
I have basically devoted my entire work life to building solutions based on Oracle technology ... and you can build some pretty cool stuff with it. I have always enjoyed building software that makes a difference ... and even more so sharing what I have learned, and supporting and inspiring others to do the same. The people in the Oracle community are simply amazing and I have made a lot of friends there. If you have an account for the Oracle Technology Network (OTN), I would appreciate your vote! And if you don't feel like voting for me ... vote anyway in all the different categories ... because the Oracle community deserves the attention. Thanks, ~Dietmar.

US Consumer Law Attorney Rates

Nilesh Jethwa - Wed, 2015-09-16 22:27

The hourly rate in any consulting business or practice increases by the years of experience in the field.

Read more at: http://www.infocaptor.com/dashboard/us-consumer-law-attorney-rates

If You're In Latvia, Estonia, Romania, Slovenia or Croatia, Oracle APEX is Coming to You!

Joel Kallman - Wed, 2015-09-16 21:04
In the first part of October, my colleague Vlad Uvarov and I are taking the Oracle APEX & Oracle Database Cloud message to a number of user groups who are graciously hosting us.  These are countries for which there is growing interest in Oracle Application Express, and we wish to help support these groups and aid in fostering their growing APEX communities.

The dates and locations are:

  1. Latvian Oracle User Group, October 5, 2015
  2. Oracle User Group Estonia, Oracle Innovation Day in Tallinn, October 7, 2015
  3. Romanian Oracle User Group, October 8, 2015
  4. Oracle Romania (for Oracle employees, at the Floreasca Park office), October 8-9, 2015
  5. Slovenian Oracle User Group, SIOUG 2015, October 12-13, 2015
  6. Croatian Oracle User Group, 20th HrOUG Conference, October 13-16, 2015

You should consider attending one of these user group meetings/conferences if:

  • You're a CIO or manager, and you wish to understand what Oracle Application Express is and if it can help you and your business.
  • You're a PL/SQL developer, and you want to learn how easy or difficult it is to exploit your skills on the Web and in the Cloud.
  • You come from a client/server background and you want to understand what you can do with your skills but in Web development and Cloud development.
  • You're an Oracle DBA, and you want to understand if you can use Oracle Application Express in your daily responsibilities.
  • You know nothing about Oracle Application Express and you want to learn a bit more.

The user group meetings in Latvia, Estonia and Romania all include 2-hour instructor-led hands-on labs.  All you need to bring is a laptop, and we'll supply the rest.  But you won't merely be watching an instructor drive their mouse.  You will be the ones building something real.  I guarantee that people completely new to APEX, as well as seasoned APEX developers, will learn a number of relevant skills and techniques in these labs.

If you have any interest or questions or concerns (or complaints!) about Oracle Application Express, and you are nearby, we would be very honored to meet you in person and assist in any way we can.  We hope you can make it!

Presenting the Hotsos Symposium Training Day – 10 March 2016 (Heat)

Richard Foote - Wed, 2015-09-16 05:06
I’ve just accepted an invitation to present the Hotsos Symposium Training Day on 10 March 2016 in sunny Dallas, Texas. In the age of Exadata and In-Memory databases, it’ll be an updated and consolidated version of my Index Internals and Best Practices seminar. With an emphasis on using indexes appropriately to boost performance, it’ll feature […]
Categories: DBA Blogs

Presentation slides for my ORDS talk at KScope 2015

Dietmar Aust - Tue, 2015-09-15 12:48
Hi guys,

in June I gave a talk at the ODTUG KScope conference regarding the optimal setup of Oracle ORDS for Oracle Application Express: Setting Up the Oracle APEX Listener (Now ORDS) for Production Environments

You can certainly access the slides through the ODTUG site. They have even recorded the presentation and made it available to their members.

A paid membership seems to be a good investment at $99 per year, because you also get access to the other content from the ODTUG conferences. I am not affiliated with ODTUG, but all I can say is that the KScope conference is the best place for an Oracle developer to learn and connect with the best folks in the industry.

For everybody else who is not (yet) an ODTUG member you can download my slides and the config file here: http://www.opal-consulting.de/downloads/presentations/2015-06-ODTUG-KScope-ORDS-in-production/

Cheers and all the best,
~Dietmar.

P.S.: The configuration is based on the version 3.0.0 of ORDS. You should definitely move to 3.0.1 which is currently available.

But on the other hand, I was once again thrown off by another problem with version 3.0.1 when running the schema creation scripts for the ORDS schema users (ords_metadata and ords_public_user).

Thus I have come to the conclusion that it is best to do it step by step: the database users have to be created first. You can extract the installation scripts from the ords.war just as well:
- http://docs.oracle.com/cd/E56351_01/doc.30/e56293/install.htm#CHDDIFEC
- http://docs.oracle.com/cd/E56351_01/doc.30/e56293/install.htm#CHDFJHEA



Copycat blog

Vikram Das - Tue, 2015-09-15 03:50
While doing a Google search today, I noticed that there is another blog that has copied all the content from my blog, posted it as its own, and even kept a similar-sounding name: http://oracleapps-technology.blogspot.com .  I made a DMCA complaint to Google about this.  The Google team asked me to provide a list of URLs, so I had to go through the copycat's whole blog and create a spreadsheet with two columns: one with the URL of my original post, and a second with the URL of the copycat's post.  There were 498 entries.  I patiently did it, sent the spreadsheet to the Google team, and got a reply within 2 hours:


Hello,
In accordance with the Digital Millennium Copyright Act, we have completed processing your infringement notice. We are in the process of disabling access to the content in question at the following URL(s):

http://oracleapps-technology.blogspot.com/

The content will be removed shortly.

Regards,
The Google Team 
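Assembling a two-column URL mapping like the one described above is easy to script. This sketch uses Python's csv module; the URLs are invented placeholders, since in practice they would come from the two blogs' archives:

```python
# Sketch of building the two-column spreadsheet (original URL vs.
# copycat URL) for a DMCA notice. The URL lists are invented
# placeholders for illustration.
import csv

original_urls = [
    "http://original.blogspot.com/2015/01/post-one.html",
    "http://original.blogspot.com/2015/02/post-two.html",
]
copycat_urls = [
    "http://oracleapps-technology.blogspot.com/2015/01/post-one.html",
    "http://oracleapps-technology.blogspot.com/2015/02/post-two.html",
]

with open("dmca_mapping.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["original_url", "copycat_url"])
    writer.writerows(zip(original_urls, copycat_urls))
```

With 498 entries, pairing the lists programmatically and spot-checking a sample is far less error-prone than pasting rows by hand.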
Categories: APPS Blogs

Adding the "Deploy to Bluemix" Button to my Bluemix Applications in GitHub

Pas Apicella - Mon, 2015-09-14 19:25
Not before time, I finally added my first "Deploy to Bluemix" button to my GitHub projects for Bluemix applications. The screenshot below shows this for the Spring Session - Spring Boot Portable Cloud Ready HTTP Session demo.


Here is what it looks like when I deploy using the "Deploy to Bluemix" button; it requires me to log in to IBM Bluemix. When you use this button, it forks the project code into your own DevOps projects, adds a pipeline to compile/deploy the code, and finally deploys it as you expect it to.



More Information

https://developer.ibm.com/devops-services/2015/02/18/share-code-new-deploy-bluemix-button/
Categories: Fusion Middleware

EM12c 12.1.0.5 Upgrade Tasks

Arun Bavera - Mon, 2015-09-14 14:51
1.      Upgrade primary OMR and OMS using the 12.1.0.5 installer   - 2 hours
   Check whether the OMR requires an upgrade:
12c Database has been Certified as an EM 12.1.0.4 or 12.1.0.5 Repository with Certain Patchset and PSU Restrictions (Doc ID 1987905.1)
12.1.0.2 Patch Set Updates - List of Fixes in each PSU (Doc ID 1924126.1)
12.1.0.2 Patch Set - Availability and Known Issues (Doc ID 1683799.1)
Quick Reference to Patch Numbers for Database PSU, SPU(CPU), Bundle Patches and Patchsets (Doc ID 1454618.1)


Applying Enterprise Manager 12c Recommended Patches (Doc ID 1664074.1)

2.     Upgrade Primary Agent      - 6 Minutes

3.      Cleanup Agent

4.      Cleanup OMS

5.      Upgrade Secondary  OMS     - 30 Minutes

6.      Cleanup Agent

7.      Cleanup OMS

8.      Apply Monthly Agent/OMS Patches available 
    Oracle Recommended Patches (PSU) for Enterprise Manager Base Platform (All Releases) (Doc ID 822485.1)
    Document 2038446.1 - Enterprise Manager 12.1.0.5.0 (PS4) Master Bundle Patch List

9.  Install Latest JDK 1.6 (Note: 1944044.1) JDK 1.6.0.95.. 
        Refer:
All Java SE Downloads on MOS (Doc ID 1439822.1)
  How to Upgrade JDK to 1.6 Update 95 on OMS 12.1.0.4 or 12.1.0.5 (Doc ID 2059426.1)
How to Upgrade the JDK Used by Oracle WebLogic Server 11g to a Different Version (Doc ID 1309855.1)
How to Upgrade the JDK Used by Oracle WebLogic Server 12c to a Different Version (Doc ID 1616397.1)
         How to Install and Maintain the Java SE Installed or Used with FMW 11g/12c Products (Doc ID 1492980.1)

10.     Install Weblogic latest PSU (1470197.1)  

11.  Verify Load Balancer

12.  OMS Sizing 

Refer:
Enterprise Manager Cloud Control Upgrade Guide

EM 12c R5: Checklist for Upgrading Enterprise Manager Cloud Control from Version 12.1.0.2/3/4 to 12.1.0.5 (Doc ID 2022505.1)

12c Database has been Certified as an EM 12.1.0.4 or 12.1.0.5 Repository with Certain Patchset and PSU Restrictions (Doc ID 1987905.1)

EM 12c: How to Patch the EM-Integrated Oracle BI Publisher (Doc ID 1982656.1)

http://oraforms.blogspot.com/2014/05/oracle-em12c-release-and-patch-schedules.html


Categories: Development

Forcing Garbage Collection in JDK manually using JVisualVM

Arun Bavera - Mon, 2015-09-14 14:43
You might have seen the heap crossing its limit many times, with the GC algorithm not working properly and keeping old objects around for a long time.
Even though forcing a major GC manually is not advised, if you come across such a situation you can use the following method to clear the heap.
Note: if the heap is huge (more than 6 GB), doing a major GC may cause the application to pause for a couple of seconds. Also, make sure you have enough system memory (RAM) to invoke the JVisualVM tool.
This is a typical method at many companies where X-Windows is not installed on their *NIX machines and the app account is locked down for direct login.
1) Log in as yourself to the Linux/Unix machine and make sure your laptop/desktop has an X emulator like Xming running.
2) Note down the authorized X-keys:    xauth list
3) Log in as the app owner:     sudo su - oracle
4) Add the X-keys to the oracle (app owner) session:
xauth add <full string from xauth list in the previous session>

5) Run ps -ef | grep java, note down the JDK directory, and go directly to the JDK bin directory (/opt/app/oracle/jdk1.7.0_55/bin in this case; we are using JDK 7).
6) Invoke  ./jvisualvm &
7) Choose the WebLogic PID, make sure in the Overview tab that the server name is the one you are interested in, and perform a manual GC.
  Note: From JDK 7 onwards, if your heap size is more than 6 GB, the G1GC algorithm works best.
     Also refer to: https://blogs.oracle.com/g1gc/

Categories: Development

Report Carousel in APEX 5 UT

Dimitri Gielis - Mon, 2015-09-14 10:45
The Universal Theme in APEX 5.0 is full of nice things.

Did you already see the Carousel template for regions?
When you add a region to your page with a couple of sub-regions and you give the parent region the "Carousel Container" template, it turns the regions into a carousel, so you can flip between regions.

I was asked to provide the same functionality, but for dynamic content.
So I decided to build a report template that would be shown as a carousel. Here's the result:



I really like carousels :)

Here's how you can have this report template in your app:
1) Create a new Report Template:


Make sure to select Named Column for the Template Type:


Add following HTML into the template at the given points:




That's it for the template.

Now you can create a new report on your page and give it the template you just created.
Here's the SQL Statement I used:

select PRODUCT_ID          as id,
       PRODUCT_NAME        as title,
       PRODUCT_DESCRIPTION as description,
       product_id,       
       dbms_lob.getlength(PRODUCT_IMAGE) as image,
       'no-icon'           as icon,
       null                as link_url 
  from DEMO_PRODUCT_INFO

Note 1: you have to use the same column aliases as you defined in the template.
Note 2: make sure you keep the real id of your image in the query too, as otherwise you'll get an error (no data found).

To make the carousel a bit nicer I added following CSS to the page, but you could add it to your own CSS file or in the custom css section of Theme Roller.


Note: the carousel can work with an icon or an image. If you want to see an icon you can use for example "fa-edit fa-4x". When using an image, define the icon as no-icon.

Eager for more Universal Theme tips and tricks? Check out our APEX 5.0 UI training in Birmingham on December 10th. :)

For easier copy/paste into your template, you find the source below:

 *** Before Rows ***  
<div class="t-Region t-Region--carousel t-Region--showCarouselControls t-Region--hiddenOverflow" id="R1" role="group" aria-labelledby="R1_heading">
<div class="t-Region-bodyWrap">
<div class="t-Region-body">
<div class="t-Region-carouselRegions">
*** Column Template ***
<div data-label="#TITLE#" id="SR_R#ID#">
<a href="#LINK_URL#">
<div class="t-HeroRegion " id="R#ID#">
<div class="t-HeroRegion-wrap">
<div class="t-HeroRegion-col t-HeroRegion-col--left">
<span class="t-HeroRegion-icon t-Icon #ICON#"></span>
#IMAGE#
</div>
<div class="t-HeroRegion-col t-HeroRegion-col--content">
<h2 class="t-HeroRegion-title">#TITLE#</h2>
#DESCRIPTION#
</div>
<div class="t-HeroRegion-col t-HeroRegion-col--right"><div class="t-HeroRegion-form"></div><div class="t-HeroRegion-buttons"></div></div>
</div>
</div>
</a>
</div>
*** After Rows ***
</div>
</div>
</div>
</div>
*** Inline CSS ***
.t-HeroRegion-col.t-HeroRegion-col--left {
padding-left:60px;
}
.t-HeroRegion {
padding:25px;
border-bottom:0px solid #CCC;
}
.t-Region--carousel {
border: 1px solid #d6dfe6 !important;
}
.t-HeroRegion-col--left img {
max-height: 90px;
max-width: 130px;
}
.no-icon {
display:none;
}
Categories: Development

Bigfoot vs UFO analytics

Nilesh Jethwa - Sat, 2015-09-12 21:29

Bigfoot and UFO remain elusive but know their ways to make news from time to time.

Read more at: http://www.infocaptor.com/dashboard/bigfoot-vs-ufo-analytics

Here We Go Again

Floyd Teter - Thu, 2015-09-10 13:59
Yup, moving on one more time.  Hopefully for the last time.  I’m leaving Sierra-Cedar Inc. for a position as Sr. Director with Oracle's HCM Center of Excellence team.

As an enterprise software guy, I see the evolution of SaaS and Cloud as the significant drivers of change in the field.  I want to be involved, I want to contribute in a meaningful way, I want to learn more, and I want to be at the center of it all.  And there is no better place for all that than Oracle.  I had the opportunity to meet most of the folks I’ll be working alongside… I knew many of them and met a few new faces.  And I’m excited to work with them. So when the opportunity presented itself, I was happy to follow through on it.

I’ll also freely admit that I’ve seen…and experienced…a pretty substantial amount of upheaval regarding Oracle services partners over the past several years.  Some are fighting the cloud-driven changes in the marketplace, others have accepted the change but have yet to adapt, a few are substantially shifting their business model to provide relevant services as the sand shifts under their feet.  Personally, I’ve had enough upheaval for a bit.

The first mission at Oracle:  develop tools and methods to meaningfully reduce the lead time between customer subscription and customer go-live.  Pretty cool, as it lets me work on my #beat39 passion.  I’ll be starting with building tools to convert data from legacy HCM applications to HCM Cloud through the HCM Data Loader (“HDL”).


While I regret leaving a group of great people at SCI, I’m really looking forward to rejoining Oracle.  I kind of feel like a minion hitting the banana goldmine!

Building an Oracle NoSQL cluster using Docker

Marcelo Ochoa - Thu, 2015-09-10 09:43
Continuing with my previous post about using Docker in a development/testing environment, this time the goal is to build an Oracle NoSQL cluster on a single machine using Docker.
I assume you already have Docker installed and running; there are plenty of tutorials about that, and in my case on Ubuntu it is just a two-step install using apt-get :)
My starting point was some ideas from another Docker project for building a Hadoop cluster.
That project builds on another great idea, Serf/Dnsmasq on Docker; the motivation, extracted from its README.md file, is:
This image aims to provide resolvable fully qualified domain names,
between dynamically created docker containers on ubuntu.
## The problem
By default **/etc/hosts** is readonly in docker containers. The usual
solution is to start a DNS server (probably as a docker container) and pass
a reference when starting docker instances: `docker run -dns `
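The usual workaround described above looks roughly like this sketch (the image and container names here are placeholders, not taken from this post):

```shell
# Sketch of the usual workaround: run a DNS server as a container,
# then point every other container at it via --dns.
# "some/dnsmasq-image" and "some/app-image" are placeholder names.
docker run -d --name dns some/dnsmasq-image
DNS_IP=$(docker inspect --format='{{.NetworkSettings.IPAddress}}' dns)
docker run -d --dns "$DNS_IP" some/app-image
```

The serf/dnsmasq approach avoids having to manage that extra DNS container by hand.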
So with this idea in mind I wrote this Dockerfile:

FROM java:openjdk-7-jdk
MAINTAINER marcelo.ochoa@gmail.com
RUN export DEBIAN_FRONTEND=noninteractive && \
    apt-get update && \
    apt-get install -y dnsmasq unzip curl ant ant-contrib junit
# dnsmasq configuration
ADD dnsmasq.conf /etc/dnsmasq.conf
ADD resolv.dnsmasq.conf /etc/resolv.dnsmasq.conf
# install serfdom.io
RUN curl -Lo /tmp/serf.zip https://dl.bintray.com/mitchellh/serf/0.5.0_linux_amd64.zip
RUN curl -Lo /tmp/kv-ce-3.3.4.zip http://download.oracle.com/otn-pub/otn_software/nosql-database/kv-ce-3.3.4.zip
RUN unzip /tmp/serf.zip -d /bin
RUN unzip /tmp/kv-ce-3.3.4.zip -d /opt
RUN rm -f /tmp/serf.zip
RUN rm -f /tmp/kv-ce-3.3.4.zip
ENV SERF_CONFIG_DIR /etc/serf
# configure serf
ADD serf-config.json $SERF_CONFIG_DIR/serf-config.json
ADD event-router.sh $SERF_CONFIG_DIR/event-router.sh
RUN chmod +x  $SERF_CONFIG_DIR/event-router.sh
ADD handlers $SERF_CONFIG_DIR/handlers
ADD start-serf-agent.sh  $SERF_CONFIG_DIR/start-serf-agent.sh
RUN chmod +x  $SERF_CONFIG_DIR/start-serf-agent.sh
EXPOSE 7373 7946 5000 5001 5010 5011 5012 5013 5014 5015 5016 5017 5018 5019 5020
CMD /etc/serf/start-serf-agent.sh
The relevant lines are explained here:
  • FROM java:openjdk-7-jdk: this Docker base image already includes Ubuntu and Java 7, so only a few additions are required
  • RUN curl .. /0.5.0_linux_amd64.zip: this is a compiled version of the Serf implementation, ready to run on Ubuntu
  • RUN curl -Lo .. /kv-ce-3.3.4.zip: this is the Community Edition of Oracle NoSQL, a free download
  • CMD /etc/serf/start-serf-agent.sh: this script, modified from the original Docker/serf project, configures Oracle NoSQL right after the image boots.
The last point deserves special explanation. There are three bash functions for starting the database, stopping it, and creating the bootconfig file for the NoSQL nodes; here are the relevant sections:
stop_database() {
        java -Xmx256m -Xms256m -jar $KVHOME/lib/kvstore.jar stop -root $KVROOT
exit
}
start_database() {
nohup java -Xmx256m -Xms256m -jar $KVHOME/lib/kvstore.jar start -root $KVROOT &
}
create_bootconfig() {
        [[ -n $NODE_TYPE ]] && [[ $NODE_TYPE = "m" ]] && java -jar $KVHOME/lib/kvstore.jar makebootconfig -root $KVROOT -port 5000 -admin 5001 -host "$(hostname -f)" -harange 5010,5020 -store-security none -capacity 1 -num_cpus 0 -memory_mb 0
        [[ -n $NODE_TYPE ]] && [[ $NODE_TYPE = "s" ]] && java -jar $KVHOME/lib/kvstore.jar makebootconfig -root $KVROOT -port 5000 -host "$(hostname -f)" -harange 5010,5020 -store-security none -capacity 1 -num_cpus 0 -memory_mb 0
}
The last function (create_bootconfig) works differently depending on whether the node is designated as master ($NODE_TYPE = "m") or slave ($NODE_TYPE = "s").
I decided not to persist the NoSQL storage after the containers stop, but it is possible to map the directory where the NoSQL nodes store their data to the host machine, as I showed in my previous post; with that configuration the NoSQL store is not re-created at every boot.
With the above in place, we can create the Docker image using:
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# docker build -t "oracle-nosql/serf" .
The complete list of required files can be downloaded as a zip from this location.
Once the image is built, we can start a cluster of three nodes simply by executing the script start-cluster.sh; it creates a node named master.mycorp.com and two slaves, slave[1..2].mycorp.com. Here is the output:
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# ./start-cluster.sh
WARNING: Localhost DNS setting (--dns=127.0.0.1) may fail in containers.
e4932053780227f2a99e167f6efb0b1eeb9fda93fba2aa9206c7a9f05bacc25c
WARNING: Localhost DNS setting (--dns=127.0.0.1) may fail in containers.
d6d0387c6893263141d58efa80933065be23aa3c98651dc6358bf7d7688d32cf
WARNING: Localhost DNS setting (--dns=127.0.0.1) may fail in containers.
4fc18aebf466ec67de18c72c22739337499b5a76830f86d90a6533ff3bb6e314
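The start-cluster.sh script itself ships in the zip linked above; based on the persistent variant shown later in the addendum, it plausibly looks like this sketch (the exact flags may differ from the real file):

```shell
# Sketch of start-cluster.sh, inferred from the persistent variant later
# in this post: start the master (publishing the NoSQL ports), grab its
# IP, then start the two slaves pointing serf at the master via JOIN_IP.
docker run -d -t --publish=5000:5000 --publish=5001:5001 --dns 127.0.0.1 \
  -e NODE_TYPE=m -P --name master -h master.mycorp.com oracle-nosql/serf
FIRST_IP=$(docker inspect --format='{{.NetworkSettings.IPAddress}}' master)
for i in 1 2; do
  docker run -d -t --dns 127.0.0.1 -e NODE_TYPE=s -e JOIN_IP="$FIRST_IP" \
    -P --name slave$i -h slave$i.mycorp.com oracle-nosql/serf
done
```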
You can check the status of the cluster by executing a serf command on the master node, for example:
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# docker exec -ti master serf members
master.mycorp.com  172.17.0.71:7946  alive
slave1.mycorp.com  172.17.0.72:7946  alive
slave2.mycorp.com  172.17.0.73:7946  alive
At this point the three NoSQL nodes are up but still unconfigured; here is the output of the NoSQL ping command:
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# docker exec -ti master java -jar /opt/kv-3.3.4/lib/kvstore.jar ping -host master -port 5000
SNA at hostname: master, registry port: 5000 is not registered.
No further information is available
Using the examples from the Oracle NoSQL documentation, we can create a store using this plan (script.txt):
configure -name mystore
plan deploy-zone -name "Boston" -rf 3 -wait
plan deploy-sn -zn zn1 -host master.mycorp.com -port 5000 -wait
plan deploy-admin -sn sn1 -port 5001 -wait
pool create -name BostonPool
pool join -name BostonPool -sn sn1
plan deploy-sn -zn zn1 -host slave1.mycorp.com -port 5000 -wait
pool join -name BostonPool -sn sn2
plan deploy-sn -zn zn1 -host slave2.mycorp.com -port 5000 -wait
pool join -name BostonPool -sn sn3
topology create -name topo -pool BostonPool -partitions 300
plan deploy-topology -name topo -wait
show topology
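A plan file like this is normally fed to the admin CLI with the runadmin/load commands; a hedged sketch (the in-container path to script.txt is an assumption):

```shell
# Load the plan into the admin CLI running against the master node.
# The path /etc/serf/script.txt is an assumption; the file must be
# reachable inside the container (e.g. ADDed in the Dockerfile).
docker exec -ti master java -jar /opt/kv-3.3.4/lib/kvstore.jar \
  runadmin -host master -port 5000 load -file /etc/serf/script.txt
```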
To submit this plan to the NoSQL nodes there is a script named deploy-store.sh; here is the output:
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# ./deploy-store.sh 
Store configured: mystore
Executed plan 1, waiting for completion...
Plan 1 ended successfully
Executed plan 2, waiting for completion...
Plan 2 ended successfully
Executed plan 3, waiting for completion...
Plan 3 ended successfully
Added Storage Node(s) [sn1] to pool BostonPool
Executed plan 4, waiting for completion...
Plan 4 ended successfully
Added Storage Node(s) [sn2] to pool BostonPool
Executed plan 5, waiting for completion...
Plan 5 ended successfully
Added Storage Node(s) [sn3] to pool BostonPool
Created: topo
Executed plan 6, waiting for completion...
Plan 6 ended successfully
store=mystore  numPartitions=300 sequence=308
  zn: id=zn1 name="Boston" repFactor=3 type=PRIMARY
  sn=[sn1] zn:[id=zn1 name="Boston"] master.mycorp.com:5000 capacity=1 RUNNING
    [rg1-rn1] RUNNING
          No performance info available
  sn=[sn2] zn:[id=zn1 name="Boston"] slave1.mycorp.com:5000 capacity=1 RUNNING
    [rg1-rn2] RUNNING
          No performance info available
  sn=[sn3] zn:[id=zn1 name="Boston"] slave2.mycorp.com:5000 capacity=1 RUNNING
    [rg1-rn3] RUNNING
          No performance info available
  shard=[rg1] num partitions=300
    [rg1-rn1] sn=sn1
    [rg1-rn2] sn=sn2
    [rg1-rn3] sn=sn3
You can also access the NoSQL admin page at http://localhost:5001/, because the start-cluster.sh script publishes this port outside the master container.
The cluster is ready! Have fun storing your data.

Addendum
Persistent NoSQL store: as I mentioned earlier in this post, if we map /var/kvroot to the host machine, the NoSQL store will persist across multiple executions of the cluster. For example, create three directories:
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# mkdir /tmp/kvroot1
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# mkdir /tmp/kvroot2
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# mkdir /tmp/kvroot3
and create a new shell script (start-cluster-persistent.sh) that starts the cluster with those directories mapped:
docker run -d -t --volume=/tmp/kvroot1:/var/kvroot --publish=5000:5000 --publish=5001:5001 --dns 127.0.0.1 -e NODE_TYPE=m -P --name master -h master.mycorp.com oracle-nosql/serf
FIRST_IP=$(docker inspect --format="{{.NetworkSettings.IPAddress}}" master)
docker run -d -t --volume=/tmp/kvroot2:/var/kvroot --dns 127.0.0.1 -e NODE_TYPE=s -e JOIN_IP=$FIRST_IP -P --name slave1 -h slave1.mycorp.com oracle-nosql/serf
docker run -d -t --volume=/tmp/kvroot3:/var/kvroot --dns 127.0.0.1 -e NODE_TYPE=s -e JOIN_IP=$FIRST_IP -P --name slave2 -h slave2.mycorp.com oracle-nosql/serf
We can then start and deploy the store for the first time using:
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# ./start-cluster-persistent.sh
... output here...
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# ls -ltr /tmp/kvroot1
total 8
-rw-r--r-- 1 root root  52 sep 10 20:19 security.policy
-rw-r--r-- 1 root root 781 sep 10 20:19 config.xml
...
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# ./deploy-store.sh 
... output here ...
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# docker exec -ti master java -jar /opt/kv-3.3.4/lib/kvstore.jar ping -host master -port 5000
Pinging components of store mystore based upon topology sequence #308
300 partitions and 3 storage nodes
Time: 2015-09-10 23:20:18 UTC   Version: 12.1.3.3.4
Shard Status: total:1 healthy:1 degraded:0 noQuorum:0 offline:0
Zone [name="Boston" id=zn1 type=PRIMARY]   RN Status: total:3 online:3 maxDelayMillis:0 maxCatchupTimeSecs:0
Storage Node [sn1] on master.mycorp.com:5000    Zone: [name="Boston" id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.3.4 2015-04-24 09:01:17 UTC  Build id: e3ae28b507bf
Admin [admin1] Status: RUNNING,MASTER
Rep Node [rg1-rn1] Status: RUNNING,MASTER sequenceNumber:627 haPort:5011
Storage Node [sn2] on slave1.mycorp.com:5000    Zone: [name="Boston" id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.3.4 2015-04-24 09:01:17 UTC  Build id: e3ae28b507bf
Rep Node [rg1-rn2] Status: RUNNING,REPLICA sequenceNumber:627 haPort:5010 delayMillis:0 catchupTimeSecs:0
Storage Node [sn3] on slave2.mycorp.com:5000    Zone: [name="Boston" id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.3.4 2015-04-24 09:01:17 UTC  Build id: e3ae28b507bf
Rep Node [rg1-rn3] Status: RUNNING,REPLICA sequenceNumber:627 haPort:5010 delayMillis:0 catchupTimeSecs:0
As you can see, the cluster is ready to store data. Now we stop and start it again to show that it is not necessary to redeploy the configuration:
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# ./stop-cluster.sh
... output here ...
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# ./start-cluster-persistent.sh
... output here ...
root@local:/var/lib/docker/dockerfiles/build-oracle-nosql# docker exec -ti master java -jar /opt/kv-3.3.4/lib/kvstore.jar ping -host master -port 5000
Pinging components of store mystore based upon topology sequence #308
300 partitions and 3 storage nodes
Time: 2015-09-10 23:34:15 UTC   Version: 12.1.3.3.4
Shard Status: total:1 healthy:1 degraded:0 noQuorum:0 offline:0
Zone [name="Boston" id=zn1 type=PRIMARY]   RN Status: total:3 online:3 maxDelayMillis:2342 maxCatchupTimeSecs:-4
Storage Node [sn1] on master.mycorp.com:5000    Zone: [name="Boston" id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.3.4 2015-04-24 09:01:17 UTC  Build id: e3ae28b507bf
Admin [admin1] Status: RUNNING,MASTER
Rep Node [rg1-rn1] Status: RUNNING,REPLICA sequenceNumber:639 haPort:5011 delayMillis:2342 catchupTimeSecs:-4
Storage Node [sn2] on slave1.mycorp.com:5000    Zone: [name="Boston" id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.3.4 2015-04-24 09:01:17 UTC  Build id: e3ae28b507bf
Rep Node [rg1-rn2] Status: RUNNING,REPLICA sequenceNumber:639 haPort:5010 delayMillis:0 catchupTimeSecs:0
Storage Node [sn3] on slave2.mycorp.com:5000    Zone: [name="Boston" id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.3.4 2015-04-24 09:01:17 UTC  Build id: e3ae28b507bf
Rep Node [rg1-rn3] Status: RUNNING,MASTER sequenceNumber:639 haPort:5010
And that's it: the last ping command shows that the store survives the stop/remove/start container cycle.


Spring Session - Spring Boot application for IBM Bluemix

Pas Apicella - Thu, 2015-09-10 07:28
The following guide shows how to use Spring Session to transparently leverage Redis to back a web application’s HttpSession when using Spring Boot.

http://docs.spring.io/spring-session/docs/current/reference/html5/guides/boot.html

The demo below is a simple Spring Boot / Thymeleaf / Bootstrap application that tests session replication using Spring Session with Spring Boot on IBM Bluemix. The same demo will also run on Pivotal Cloud Foundry.
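To try the demo on Bluemix or Pivotal Cloud Foundry, the usual cf workflow applies; a rough sketch (the service, plan, and artifact names below are assumptions — check the project's README and your marketplace catalog):

```shell
# Sketch only: service/plan/jar names are assumptions, not from the project.
# Create a Redis service for Spring Session to back the HttpSession,
# push the built Spring Boot jar, bind the service, and restage.
cf create-service rediscloud 30mb session-redis
cf push SpringBootHTTPSession -p target/SpringBootHTTPSession.jar
cf bind-service SpringBootHTTPSession session-redis
cf restage SpringBootHTTPSession
```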

IBM DevOps URL ->

https://hub.jazz.net/project/pasapples/SpringBootHTTPSession/overview

Sample Project on GitHub ->

https://github.com/papicella/SpringBootHTTPSession



More Information

The Portable, Cloud-Ready HTTP Session
https://spring.io/blog/2015/03/01/the-portable-cloud-ready-http-session
Categories: Fusion Middleware

Subscribe to Oracle FAQ aggregator