
Feed aggregator

Presentation slides for my ORDS talk at KScope 2015

Dietmar Aust - Tue, 2015-09-15 11:48
Hi guys,

in June I gave a talk at the ODTUG KScope conference regarding the optimal setup of Oracle ORDS for Oracle Application Express: Setting Up the Oracle APEX Listener (Now ORDS) for Production Environments

You can certainly access the slides through the ODTUG site. They have even recorded the presentation and made it available to their members.

A paid membership seems to be a good investment at $99 per year, because you also get access to the other content from the ODTUG conferences. I am not affiliated with ODTUG, but all I can say is that the KScope conference is the best place for an Oracle developer to learn and connect with the best folks in the industry.

For everybody else who is not (yet) an ODTUG member you can download my slides and the config file here:

Cheers and all the best,

P.S.: The configuration is based on version 3.0.0 of ORDS. You should definitely move to 3.0.1, which is currently available.

But on the other hand, I was once again thrown off by another problem with version 3.0.1 when running the schema creation scripts for the ORDS schema users (ords_metadata and ords_public_user).

Thus I have come to the conclusion that it is best to do it step by step: the database users have to be created first. You can just as well extract the installation scripts from the ords.war:
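A WAR file is just a ZIP archive, so one way to get at those scripts is to extract the archive and look inside. A minimal sketch (the working directory is arbitrary, and the exact location of the SQL scripts inside the archive may differ between ORDS versions, so check what you actually extract):

mkdir ords_extracted && cd ords_extracted
jar xf /path/to/ords.war                  # or: unzip /path/to/ords.war
# list the installation SQL scripts somewhere in the extracted tree
find . -iname "*.sql" | sort
# run the scripts that create the database users (ords_metadata, ords_public_user) first, then continue step by step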

OTN Tour of Latin America 2015 : It’s a Wrap!

Tim Hall - Tue, 2015-09-15 11:28


I just realised I didn’t write a closing post for the OTN Tour of Latin America 2015, so here goes.

Here are the links to all the posts I wrote during the two weeks that related to the main body of the tour.

Here are the links to the posts I wrote during the little trip to Machu Picchu.

Overall it was a really fun tour. Ignoring my illness at Machu Picchu, I think I coped a lot better with it than I have the previous couple of tours, which was good news.

Big thanks to the organisers and attendees at all the events. I hope to see you all again soon! Thanks also to the ACE Program for giving me the opportunity to fly the flag! I must also say a thank you to my fellow speakers for putting up with me for all that time. I know I can be hard work, so you are all deserving of an “I survived a tour with Tim” badge, if one existed. :)

Sorry for the delay in writing this post! See you soon!




Cloud Control : It’s production upgrade day…

Tim Hall - Tue, 2015-09-15 11:08

I mentioned a couple of months ago that I was planning to upgrade our production Enterprise Manager Cloud Control installation. Well, today was the day. I held back a while because I knew I would be out of the country on the Latin America tour and I didn’t want to make a big change before I ran away. :)

So today I pretty much did exactly what was in my upgrade article and everything went well. I upgraded the OMS and the local agent and I’ll run like that for a couple of days before I start pushing out the agent updates to the monitored hosts.

Happy days!

If you are interested, you can see some of my Cloud Control articles here.




OTBI Enterprise

Dylan's BI Notes - Tue, 2015-09-15 09:39
OTBI Enterprise is the BI cloud service, a SaaS deployment of OBIA. It uses a data warehouse based architecture. The ETL processes are handled within the cloud. The data are first loaded from either on-premise or cloud sources, using various means, in their original formats. The data are first loaded into the […]
Categories: BI & Warehousing

Creating a Database “War Room”: A Team of Remote DBAs

Chris Foot - Tue, 2015-09-15 07:00

Any business can attest to how important their data is to the functions of their company; without it, everything comes to a halt. But what is sometimes overlooked are the individuals tasked with administering that data and its surrounding environments. In the worst of cases, a business experiences a catastrophic event, like a corrupted server, and slips into panic mode as no one at the company knows how to correct the issue – or the sole individual in charge of the databases can’t be reached. Businesses know they can’t afford to have this happen, but:

  • An in-house DBA team isn’t in the budget
  • And the current employee in charge of your data can only handle so much

24×7 database management services are what your business needs to avoid another critical event. But where can your business turn to get the database support needed to not only monitor and protect your data, but also provide specialized expertise and solve critical issues with a “War Room Strategy”?

Database Administration Services

Plenty of businesses face the same struggles when trying to find a solution to the database management challenges plaguing them. Even for businesses with one or a handful of in-house DBAs, their expertise can only take them so far in an industry with so many platforms, processes, and best practices to follow. Dealing with day-to-day database administration, database monitoring, and critical issues that arise is time-consuming, often taking away an in-house DBA’s ability to properly work on other pressing activities, such as:

  • Strategic (global) improvements to the current architecture to increase the database environment’s performance, availability, security and usability
  • Participating in database application design, build and deployment projects – DBAs act as in-depth technical advisors to support business projects focused on increasing revenue or reducing the cost of goods/services sold
  • Leveraging advanced database features and technologies to improve future service
  • Additional high ROI projects which are oftentimes pushed back as day-to-day tasks take precedence

Many businesses are now outsourcing work handled by database administrators to companies specializing in remote database management, like RDX. Our remote DBAs can function in a couple of different ways:

  1. Your own full service DBA team handling 100% of your database administration needs – assuming complete responsibility for the security, performance, and improvement of your infrastructure.
  2. Remote database support specialists helping your internal team to juggle the daily and long-term tasks required to keep your database environment functioning as needed. The neglected responsibilities listed above can now be properly addressed with remote DBAs providing back-up to your in-house administrators.

Whether your internal team just needs some extra help, or our database administration services become the sole answer to monitoring, maintaining and protecting your critical data, your business will have 24×7 remote database support from a dedicated DBA team.

Experienced Remote DBAs for Your Business

When RDX becomes part of your team, the primary remote DBA on your account isn’t standing alone. A secondary DBA also assumes ownership of your environment and contributes to the ongoing maintenance, support and improvement of your databases. There are always two points of contact who know everything about your organization’s database infrastructure and the best ways to grow it with your company.

At RDX, each remote DBA is a Subject Matter Expert (SME) in a specific area, and the primary and secondary DBAs assigned to your account are the best match for your database needs. If your infrastructure requires frequent SQL server performance tuning and high availability, then the areas of expertise of your assigned DBAs will reflect those needs.

However, your environment’s needs are constantly changing, and the most crucial area of database support can shift at a moment’s notice. Database replication may be the most pressing issue one week, and the next week your infrastructure may be in need of an advanced security implementation – this is where access to our deep dive specialists comes into play. While your primary and secondary DBAs specialize in areas most relevant to your database needs, deep dive SMEs can quickly be pulled onto your account to provide expert advice and support in their own specialized areas. RDX has SMEs in all key areas of support – Highly Available (HA) architectures, database performance, SQL statement tuning, security, and more – all of which are available to your infrastructure when needed.

Tap into the Collective Knowledge of our Remote Database Administrators

While your account has a primary and secondary DBA attached to it, there’s truly an army of support behind your data with RDX. Our remote DBAs never act alone, but as a unit that draws from the decades of experience handling vastly different accounts across a wide range of industries. Remote database management services from RDX never limit you to just the expertise of the primary and secondary remote DBAs handling your account on a day-to-day basis. From deep dive specialists to the remote DBA at an adjacent workstation, your primary and secondary DBAs have a team of experts to assist with any issue that arises.

Unlike the sole in-house DBA who can only draw from his own knowledge when a problem arises, the remote DBAs at RDX draw from the collective pool of experience across our offices. If a problem arises that one Database Administrator hasn’t dealt with, there’s a good chance they can find a quick solution with the help of one or more of their teammates; this is the IT “War Room Strategy” in effect. The culture at RDX revolves around working as a team – from sharing industry best practices amongst SQL Server DBAs to a group of Oracle DBAs staying late to ensure proper replication on one client’s server – we carry this team atmosphere to each environment we manage. Whether our clients use us as their full service DBA team or support for their own internal team, they have peace of mind knowing that the RDX Database “War Room” is on-hand to tackle any critical problems that occur.

Whether you’re in need of full remote DBA services or supplemental DBA support, RDX brings you the collective knowledge of dozens of database professionals to ensure your environment is maintained, supported and evolving with your business needs. To see how RDX’s remote DBA services can help you tackle today’s toughest data store challenges, contact us to speak with one of our qualified representatives.



The post Creating a Database “War Room”: A Team of Remote DBAs appeared first on Remote DBA Experts.

Managing Impala and Other Mixed Workloads on the Oracle Big Data Appliance

Rittman Mead Consulting - Tue, 2015-09-15 04:56

One of our current client projects uses Cloudera Impala to provide fast ad-hoc querying of the data we’re loading into their Oracle Big Data Appliance Hadoop environment. Impala bypasses MapReduce to provide faster queries than Hive, but to do so it does a lot of processing in-memory and runs server processes on each node in the cluster, leading in some cases to runaway queries blocking other workloads in the same way that OBIEE queries on an Oracle Database can sometimes block ETL and application workloads. Several projects share this same Big Data Appliance, so to try and limit the impact Impala could have on other cluster workloads, the client had disabled the Impala Daemons on nine of the twelve nodes in their Big Data Appliance; our concern with this approach was that an Impala query could access data from any datanode in the Big Data Appliance cluster, so whilst HDFS data is typically stored and replicated to three nodes in the cluster, running the Impala daemons on just a quarter of the available nodes was likely to lead to data locality issues for Impala, and to blocks getting shipped across the network unnecessarily.

Going back to OBIEE and the Oracle Database, Oracle have a resource management feature for the Oracle database that allows you to put users and queries into separate resource pools and manage the share of overall resources that each pool gets. I covered this concept on the blog a few years ago, and the version of Cloudera Hadoop (CDH5.3) as used on the client’s Big Data Appliance has a feature called “YARN”, or Yet Another Resource Negotiator, that splits out the resource management and scheduling parts that were bound into MapReduce in Hadoop 1.0, so that MapReduce then just runs as a workload type on Hadoop, making it possible to run other workload types, for example Apache Spark, on that same cluster management framework.



Impala isn’t, however, configured to use YARN by default and uses an internal scheduler to govern how concurrent queries run and use cluster resources, but it can be configured to use YARN in what Cloudera term “Integrated Resource Management”, and our initial response was to recommend this approach; however YARN is really optimised for longer-running batch jobs and not the shorter jobs that Impala generates (such that Cloudera recommends you don’t actually use YARN, and control Impala resource usage via service-level process constraints or through a new Impala feature called Admission Control instead). Taking a step back though, how do we actually see what resources Impala is using across the cluster when a query runs, and is there a feature similar to the Oracle Database’s SQL Explain Plan to help us understand how an Impala SQL query is executed? Then, using this and the various resource management options available to us, can we understand how YARN and the other options will affect the Impala users on the client’s cluster if we enable them? And, given that we were going to test this all out on one of our development Hadoop clusters running back at the office on VMWare, how well could we simulate the multiple concurrent queries and mixed workload we’d then encounter on the real customer Big Data Appliance?

When trying to understand what goes on when a Cloudera Impala SQL query runs, the two main tools in your toolbox are EXPLAIN plans and query profiles. The concept of EXPLAIN plans will be familiar to Oracle developers, and putting “explain” before your Impala SQL query when you’re using the Impala Shell (or pressing the “Explain” button when you’re using the Impala Editor in Hue) will display an output like the one below, showing the steps the optimiser plans to take to return the query results:

[] > explain
select sum( as total_flights, d.dest_city
from flight_delays f join geog_dest d on f.dest = d.dest
join geog_origin o on f.orig = o.orig
where d.dest_state = 'California'
and   o.orig_state in ('Florida','New York','Alaska')
group by d.dest_city
having total_flights > 3000;
Query: explain select sum( as total_flights, d.dest_city
from flight_delays f join geog_dest d on f.dest = d.dest
join geog_origin o on f.orig = o.orig
where d.dest_state = 'California'
and   o.orig_state in ('Florida','New York','Alaska')
group by d.dest_city
having total_flights > 3000
| Explain String                                                      |
| Estimated Per-Host Requirements: Memory=154.01MB VCores=2           |
|                                                                     |
| 10:EXCHANGE [UNPARTITIONED]                                         |
| |                                                                   |
| 09:AGGREGATE [FINALIZE]                                             |
| |  output: sum:merge(                                     |
| |  group by: d.dest_city                                            |
| |  having: sum( > 3000                                    |
| |                                                                   |
| 08:EXCHANGE [HASH(d.dest_city)]                                     |
| |                                                                   |
| 05:AGGREGATE                                                        |
| |  output: sum(                                           |
| |  group by: d.dest_city                                            |
| |                                                                   |
| 04:HASH JOIN [INNER JOIN, BROADCAST]                                |
| |  hash predicates: f.orig = o.orig                                 |
| |                                                                   |
| |--07:EXCHANGE [BROADCAST]                                          |
| |  |                                                                |
| |  02:SCAN HDFS [airlines.geog_origin o]                            |
| |     partitions=1/1 files=1 size=147.08KB                          |
| |     predicates: o.orig_state IN ('Florida', 'New York', 'Alaska') |
| |                                                                   |
| 03:HASH JOIN [INNER JOIN, BROADCAST]                                |
| |  hash predicates: f.dest = d.dest                                 |
| |                                                                   |
| |--06:EXCHANGE [BROADCAST]                                          |
| |  |                                                                |
| |  01:SCAN HDFS [airlines.geog_dest d]                              |
| |     partitions=1/1 files=1 size=147.08KB                          |
| |     predicates: d.dest_state = 'California'                       |
| |                                                                   |
| 00:SCAN HDFS [airlines.flight_delays f]                             |
|    partitions=1/1 files=1 size=64.00MB                              |
Fetched 35 row(s) in 0.21s

Like an Oracle SQL explain plan, Impala’s cost-based optimiser uses table and partition stats that you should have gathered previously using Impala’s “compute stats” command to determine what it thinks is the optimal execution plan for your query. To see the actual cost and timings for the various plan steps that are run for a query, you can then use the “summary” statement after your query has run (or, for more detail, the “profile” statement) to see the actual timings and stats for each step in the query execution.
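(As an aside, gathering those table and partition stats is a single statement per table; a minimal sketch from the Impala shell, assuming the example airlines tables used in this post:)

[] > compute stats airlines.flight_delays;
[] > compute stats airlines.geog_dest;
[] > compute stats airlines.geog_origin;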

[] > summary;
| Operator        | #Hosts | Avg Time | Max Time | #Rows   | Est. #Rows | Peak Mem  | Est. Peak Mem | Detail                        |
| 10:EXCHANGE     | 1      | 20.35us  | 20.35us  | 7       | 193        | 0 B       | -1 B          | UNPARTITIONED                 |
| 09:AGGREGATE    | 6      | 142.18ms | 180.81ms | 7       | 193        | 6.28 MB   | 10.00 MB      | FINALIZE                      |
| 08:EXCHANGE     | 6      | 59.86us  | 123.39us | 60      | 1.93K      | 0 B       | 0 B           | HASH(d.dest_city)             |
| 05:AGGREGATE    | 6      | 171.72ms | 208.36ms | 60      | 1.93K      | 22.73 MB  | 10.00 MB      |                               |
| 04:HASH JOIN    | 6      | 89.42ms  | 101.82ms | 540.04K | 131.88M    | 12.79 MB  | 5.41 KB       | INNER JOIN, BROADCAST         |
| |--07:EXCHANGE  | 6      | 16.32us  | 19.63us  | 2.81K   | 117        | 0 B       | 0 B           | BROADCAST                     |
| |  02:SCAN HDFS | 1      | 302.83ms | 302.83ms | 469     | 117        | 309.00 KB | 32.00 MB      | airlines.geog_origin o        |
| 03:HASH JOIN    | 6      | 936.71ms | 1.10s    | 15.68M  | 131.88M    | 12.14 MB  | 3.02 KB       | INNER JOIN, BROADCAST         |
| |--06:EXCHANGE  | 6      | 19.02us  | 46.49us  | 1.04K   | 39         | 0 B       | 0 B           | BROADCAST                     |
| |  01:SCAN HDFS | 1      | 266.99ms | 266.99ms | 173     | 39         | 325.00 KB | 32.00 MB      | airlines.geog_dest d          |
| 00:SCAN HDFS    | 6      | 1.07s    | 1.90s    | 131.88M | 131.88M    | 74.03 MB  | 480.00 MB     | airlines.flight_delays_full f |

Output from the Summary statement gives us some useful information in working out the impact of the various resource management options for the Oracle Big Data Appliance, at least in terms of its impact on individual Impala queries – we’ll look at the impact on the overall Hadoop cluster and individual nodes later on. From the output of the above Summary report I can see that my query ran on all six nodes in the cluster (queries I ran earlier on a smaller version of the fact table ran on just a single node), and I can see how long each step in the query actually took to run. So what happens if I run the same query again on the cluster but disable the Impala daemon service role on three of the nodes, using Cloudera Manager?


Here’s the Summary output after running the query again:

[] > summary;
| Operator        | #Hosts | Avg Time | Max Time | #Rows   | Est. #Rows | Peak Mem  | Est. Peak Mem | Detail                        |
| 10:EXCHANGE     | 1      | 22.01us  | 22.01us  | 7       | 193        | 0 B       | -1 B          | UNPARTITIONED                 |
| 09:AGGREGATE    | 3      | 111.12ms | 117.24ms | 7       | 193        | 6.27 MB   | 10.00 MB      | FINALIZE                      |
| 08:EXCHANGE     | 3      | 30.09us  | 39.02us  | 30      | 1.93K      | 0 B       | 0 B           | HASH(d.dest_city)             |
| 05:AGGREGATE    | 3      | 161.26ms | 173.57ms | 30      | 1.93K      | 22.84 MB  | 10.00 MB      |                               |
| 04:HASH JOIN    | 3      | 156.50ms | 238.90ms | 540.04K | 131.88M    | 12.81 MB  | 5.41 KB       | INNER JOIN, BROADCAST         |
| |--07:EXCHANGE  | 3      | 20.19us  | 28.93us  | 1.41K   | 117        | 0 B       | 0 B           | BROADCAST                     |
| |  02:SCAN HDFS | 1      | 477.38ms | 477.38ms | 469     | 117        | 309.00 KB | 32.00 MB      | airlines.geog_origin o        |
| 03:HASH JOIN    | 3      | 1.48s    | 1.66s    | 15.68M  | 131.88M    | 12.14 MB  | 3.02 KB       | INNER JOIN, BROADCAST         |
| |--06:EXCHANGE  | 3      | 12.07us  | 14.89us  | 519     | 39         | 0 B       | 0 B           | BROADCAST                     |
| |  01:SCAN HDFS | 1      | 308.83ms | 308.83ms | 173     | 39         | 325.00 KB | 32.00 MB      | airlines.geog_dest d          |
| 00:SCAN HDFS    | 3      | 3.39s    | 6.85s    | 131.88M | 131.88M    | 74.11 MB  | 480.00 MB     | airlines.flight_delays_full f |

What the Summary statement doesn’t show you is the overall time the query took to run: the query run against three nodes took 9.48s compared to 3.59s for the one before, where I had all six nodes’ Impala daemons enabled. In fact I’d expect a query running on the client’s BDA with just three out of twelve nodes enabled to run even slower because of the block locality issue – Impala has a feature called block locality tracking which keeps track of where HDFS data blocks are actually located on the cluster and tries to run impalad tasks on the right nodes, but three out of twelve nodes running makes that job really hard – but the other factor that we need to consider is how running multiple queries concurrently affects things when only a few nodes are handling all the Impala user queries.

To try and simulate concurrent queries running, I opened six terminal sessions against nodes actually running Impala Daemon service roles and submitted the same query from each session, with a second or two gap between each query; with all six nodes enabled the average response time rose to about 6s, but with just three enabled the response rose fairly consistently to around 27s.
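(For repeatability this can also be scripted rather than driven from six terminals; a rough sketch, assuming impala-shell is on the path, the test query is saved in query.sql, and the node names are placeholders for hosts running the Impala daemon:)

# submit the same query to several impalad nodes almost simultaneously and log the elapsed times
for node in bdanode1 bdanode2 bdanode3 bdanode4 bdanode5 bdanode6; do
  ( time impala-shell -i ${node}:21000 -f query.sql > /dev/null ) 2>> timings.log &
  sleep 1   # roughly the one or two second gap between submissions used in the manual test
done
wait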


This is of course what you’d expect when everything was trying to run on the same three (now resource-starved) server nodes, and again I’d expect this to be even more pronounced on the client’s twelve-node BDA. What this test of course didn’t cover was running workloads other than Impala on the same cluster, or running queries against different datasets, but it did at least show us how response-time increases fairly dramatically (albeit consistently) as more Impala users come onto the system.

So now we have some baseline benchmarking figures, let’s configure Impala to use YARN, using Cloudera Manager on the CDH5.3 setup used on the client’s BDA and our development cluster back in the office. There are actually two parts to Impala running on YARN in CDH5.x: YARN itself as the overall cluster resource management layer, and another component called Llama (Low-Latency, or “Long-Lived”, Application Master) that sits between YARN and Impala and reduces the time that each Impala query takes to obtain YARN resource allocations.


Enabling YARN and Llama (and if you want to, configuring Llama and thereby Impala for high-availability) is done through a wizard in CDH5.3 that also offers to set up a Linux feature called Cgroups that YARN can use to limit the “containers” it uses for resource management at the OS level.

Once you’ve run through the wizard and restarted the cluster, Impala should be configured to use YARN instead of its own scheduler to request resources, which in-theory will allow Hadoop and the Big Data Appliance to consider Impala workloads alongside MapReduce, Spark and HBase when scheduling jobs across the cluster. Before we get into the options YARN gives us for managing these workloads I ran the same Impala queries again, first as a single query and then with six running concurrently, to see what impact YARN on its own had on query response times.

The single query on its own took around the same time as without YARN to run (3-4s), but when I ran six concurrent queries together the response time went up from the 3-4s that I saw without YARN enabled to between 5s and 18s depending on the session, with quite a bit of variation between response times compared to the consistent times I saw when YARN wasn’t being used – which surprised me, as one of the stated benefits of YARN is making job execution times more predictable and smooth, though this could be more of an overall-cluster thing, and there are also recommendations in the Cloudera docs around making YARN and Llama’s resource estimation more efficient for Impala.

[] > summary;
| Operator        | #Hosts | Avg Time | Max Time | #Rows   | Est. #Rows | Peak Mem  | Est. Peak Mem | Detail                        |
| 10:EXCHANGE     | 1      | 41.38us  | 41.38us  | 7       | 193        | 0 B       | -1 B          | UNPARTITIONED                 |
| 09:AGGREGATE    | 6      | 115.28ms | 123.04ms | 7       | 193        | 6.28 MB   | 10.00 MB      | FINALIZE                      |
| 08:EXCHANGE     | 6      | 44.44us  | 67.62us  | 60      | 1.93K      | 0 B       | 0 B           | HASH(d.dest_city)             |
| 05:AGGREGATE    | 6      | 170.91ms | 201.47ms | 60      | 1.93K      | 22.82 MB  | 10.00 MB      |                               |
| 04:HASH JOIN    | 6      | 82.25ms  | 98.34ms  | 540.04K | 131.88M    | 12.81 MB  | 5.41 KB       | INNER JOIN, BROADCAST         |
| |--07:EXCHANGE  | 6      | 15.39us  | 18.99us  | 2.81K   | 117        | 0 B       | 0 B           | BROADCAST                     |
| |  02:SCAN HDFS | 1      | 244.40ms | 244.40ms | 469     | 117        | 309.00 KB | 32.00 MB      | airlines.geog_origin o        |
| 03:HASH JOIN    | 6      | 850.55ms | 942.47ms | 15.68M  | 131.88M    | 12.14 MB  | 3.02 KB       | INNER JOIN, BROADCAST         |
| |--06:EXCHANGE  | 6      | 13.99us  | 19.05us  | 1.04K   | 39         | 0 B       | 0 B           | BROADCAST                     |
| |  01:SCAN HDFS | 1      | 222.03ms | 222.03ms | 173     | 39         | 325.00 KB | 32.00 MB      | airlines.geog_dest d          |
| 00:SCAN HDFS    | 6      | 1.54s    | 2.88s    | 131.88M | 131.88M    | 74.03 MB  | 480.00 MB     | airlines.flight_delays_full f |

But it seems clear that users of Impala on the client cluster should expect some sort of overhead from using YARN to manage Impala’s resources, with the payoff being better balance between Impala workloads and the other uses they’re putting the BDA cluster to – however I think there’s more we can do to fine-tune how Llama and YARN allocate memory to Impala queries up-front (allocating a set amount of memory for all queries, rather than making an estimate and then adding more memory mid-query if it’s needed), and of course we’ve not really tested it on a cluster with a full, mixed workload running. But what about our original scenario, where only a certain percentage of the overall cluster resources or nodes are allocated to Impala query processing? To set up that sort of division of resources we can use another feature of YARN called dynamic allocation, and dynamic resource pools that we can set up through Cloudera Manager again.

Dynamic allocation is one of the ways that YARN can be configured to manage multiple workloads on a Hadoop cluster (the other way is through static service pools, and I’ll come to those in a moment). Using dynamic allocation I can set up a resource pool for the airline flight delays application that my Impala SQL queries are associated with and allocate it 25% of overall cluster resources, with the remainder of cluster resources allocated to other applications. I can keep that weighting simple as I have done in the screenshot below, or I can allocate resources based on virtual cores and memory, but I found it simpler to just set these overall weightings and let YARN worry about cores and RAM. 


Depending on the scheduling policy you select, YARN will prioritise Impala and other jobs in different ways, but the recommended scheduling policy for mixed workloads is dominant resource fairness, which balances RAM and CPU depending on which resource pool needs them most at a particular time. Note also that Impala can either be managed as part of the overall YARN workload or separately, a choice you can make in the Impala service configuration settings in Cloudera Manager (the “Enable Dynamic Resource Pools” setting that’s checked below, but was unchecked for the screenshot above).


There’s also a separate control you can place on Impala queries, called Admission Control, which limits the number of queries that can run or be queued for a resource pool at any particular time. The docs are a bit vague on when to use admission control, when to use YARN or not, and so on, but my take on this is that if it’s just Impala queries you’re worried about and throttling their use solves the problem, then use this feature and leave Impala outside of YARN, but if you need to manage overall mixed workloads then do it all through YARN. For my testing example though I just went with simple resource pool weighting, and you can see from the screenshot below that, where multiple queries are running at once for my pool, CPU and RAM resources are constrained as expected.


To make a particular Impala query run within a specific resource pool you can either allocate that user to a named resource pool, or you can specify the resource pool in your Impala shell session like this:

[] > set request_pool = airlines; 
REQUEST_POOL set to airlines
[] > select sum( as total_flights, d.dest_city
from airlines.flight_delays_full f join airlines.geog_dest d on f.dest = d.dest
join airlines.geog_origin o on f.orig = o.orig
where d.dest_state = 'California'
and o.orig_state in ('Florida','New York','Alaska')
group by d.dest_city
having total_flights > 3000;

Looking then at a typical summary output for a query running with these restrictions (25% of resources overall) and other queries running concurrently, the numbers don’t look all that different to before, and results took between 8s and 30s to return – again I was surprised at the variance, but I think YARN is more about overall cluster performance rather than individual queries, and you shouldn’t read too much into specific times on a dev server with an unrepresentative overall workload.

[] > summary;
| Operator        | #Hosts | Avg Time | Max Time | #Rows   | Est. #Rows | Peak Mem  | Est. Peak Mem | Detail                        |
| 10:EXCHANGE     | 1      | 26.78us  | 26.78us  | 7       | 193        | 0 B       | -1 B          | UNPARTITIONED                 |
| 09:AGGREGATE    | 6      | 209.10ms | 262.02ms | 7       | 193        | 6.28 MB   | 10.00 MB      | FINALIZE                      |
| 08:EXCHANGE     | 6      | 63.20us  | 118.89us | 60      | 1.93K      | 0 B       | 0 B           | HASH(d.dest_city)             |
| 05:AGGREGATE    | 6      | 282.56ms | 401.37ms | 60      | 1.93K      | 22.76 MB  | 10.00 MB      |                               |
| 04:HASH JOIN    | 6      | 99.56ms  | 114.14ms | 540.04K | 131.88M    | 12.85 MB  | 5.41 KB       | INNER JOIN, BROADCAST         |
| |--07:EXCHANGE  | 6      | 15.49us  | 17.94us  | 2.81K   | 117        | 0 B       | 0 B           | BROADCAST                     |
| |  02:SCAN HDFS | 1      | 531.08ms | 531.08ms | 469     | 117        | 309.00 KB | 32.00 MB      | airlines.geog_origin o        |
| 03:HASH JOIN    | 6      | 1.20s    | 1.54s    | 15.68M  | 131.88M    | 12.14 MB  | 3.02 KB       | INNER JOIN, BROADCAST         |
| |--06:EXCHANGE  | 6      | 24.29us  | 68.23us  | 1.04K   | 39         | 0 B       | 0 B           | BROADCAST                     |
| |  01:SCAN HDFS | 1      | 287.31ms | 287.31ms | 173     | 39         | 325.00 KB | 32.00 MB      | airlines.geog_dest d          |
| 00:SCAN HDFS    | 6      | 2.34s    | 3.13s    | 131.88M | 131.88M    | 74.03 MB  | 480.00 MB     | airlines.flight_delays_full f |

A point to note is that I found it very hard to get Impala queries to run when I got down to specifying virtual core and memory limits rather than just overall weightings, so I’d go with these high-level resource pool prioritisations which seemed to work and didn’t unduly affect query response times. For example the setting below looked clever, but queries always seemed to time out and I never really got a satisfactory setup working properly.


Note that for YARN dynamic resource pools to be used, all Linux/CDH users will need to be assigned to resource pools so they don’t run as “unconstrained”; this can also be done from the Dynamic Resource Pools configuration page.

Finally though, all of this management through resource pools might not be the best way to control resource usage by YARN. The Cloudera docs say quite clearly on the Integrated Resource Management page that:

“When using YARN with Impala, Cloudera recommends using the static partitioning technique (through a static service pool) rather than the combination of YARN and Llama. YARN is a central, synchronous scheduler and thus introduces higher latency and variance which is better suited for batch processing than for interactive workloads like Impala (especially with higher concurrency). Currently, YARN allocates memory throughout the query, making it hard to reason about out-of-memory and timeout conditions.”

What this means in practice is that, if you’ve got a single project using the Big Data Appliance and you just want to specify at a high level what proportion of resources Impala, HBase, MapReduce and the other services under YARN management use, you can define this as static service pool settings in Cloudera Manager and have these restrictions enforced by Linux Cgroups. In the screenshot below I unwound all of the dynamic resource pool settings I created a moment ago and allocated 25% of overall cluster resources to Impala, with the wizard then using those top-level values to set limits for services across all nodes in the cluster based on their actual RAM and CPU, the services running on them and so on.


Then, going back to Cloudera Manager and running some queries, you can see these static service pool limits being applied in real-time and their effect in the form of graphs for particular cluster resources.


So given all of this, what was our recommendation to the client about how best to set up resource management for Impala and other workloads on their Big Data Appliance? Not too much should be read into individual numbers – it’s hard to simulate a proper mixed workload on a development server, and of course their BDA has 12 nodes, more memory, faster CPUs. However it’s probably fair to say these are the obvious conclusions:

  • Running Impala daemons on just a subset of nodes isn’t actually a bad way to constrain resources used by Impala, but it’ll only work on clusters with a small number of nodes, so that there’s a good chance one node will have one of the three copies of a data block. On a system of the scale of our customer’s, we’ll probably hit unacceptable overheads in terms of block locality. I would not carry on with this approach because of that.
  • If the customer BDA will be running a mixed workload, i.e. data loading, long-running Hive/Pig/Spark jobs as well as short-running Impala jobs, enabling Impala for YARN and setting overall resource pools for applications would be the best approach, but individual Impala queries will probably run slower than now (even given the restriction in resources), due to the overhead YARN imposes when scheduling and running jobs. But this will be the best way to allocate resources between applications and provide a generally “smoother” experience for users.
  • If the BDA needs to be optimised mostly for Impala queries, then don’t manage Impala under YARN; leave it outside of this and just use static service pools to allocate Impala roughly 25% of resources across all nodes. In both this and the previous case (Impala on YARN), all nodes should be re-enabled for Impala so as to minimize issues over block locality.
  • If the only real issue is Impala queries for a particular application taking all resources/becoming runaway, Impala could be left outside of YARN but enabled for admission control so as to limit the total number of running/queued queries for a particular application.
Categories: BI & Warehousing

MobaXterm 8.2

Tim Hall - Tue, 2015-09-15 03:47

MobaXterm 8.2 is now available.

Downloads and changelog in the usual places.

This is a must for Windows users who use SSH and X Emulation!




ADF BC Inline View Criteria for Hierarchical Search

Andrejus Baranovski - Tue, 2015-09-15 03:19
ADF BC View Criteria allows you to implement Inline View Criteria to execute a hierarchical search. This is especially useful when you have a Master-Detail relationship and want to filter Master records based on an attribute value from the Detail. Keep in mind, Inline View Criteria for hierarchical search will not work if the VO is based on a custom SQL query; it works only with declarative VOs.

This is how a View Criteria with the detail search option looks. If there is a View Link available, you can select the dependent VO from the list of attributes. JDeveloper automatically sets the Inline View Criteria option and you can select any attribute from the detail VO. All criteria attributes will be rendered in a single search block on the UI:

The Inline View Criteria is created based on the View Link:

Make sure not to use a custom SQL based VO; Inline View Criteria works only with declarative (non-custom SQL) VOs:

The search filters Master results (departments) based on Detail filtering (employees). I'm searching for employees with a salary greater than or equal to 12000. This query returns only those departments where employees with such a salary exist:

If I search with a lower salary, more departments are present in the result:

Download sample application -

Copycat blog

Vikram Das - Tue, 2015-09-15 02:50
While doing a Google search today, I noticed that there is another blog that has copied all the content from my blog, posted it as their own, and even kept a similar-sounding name: . I have made a DMCA complaint to Google about this. I will update this article on how it goes.
Categories: APPS Blogs

Turkish Hadoop User Group(TRHUG) 2015 meeting

H.Tonguç Yılmaz - Tue, 2015-09-15 01:39
The Turkish Hadoop User Group (TRHUG) 2015 annual meeting will be on Tuesday, October 6, at İTÜ Maslak Campus, İstanbul. Dilişim / Oracle / Cloudera / Intel are the event sponsors this year. You can catch one of the last free tickets and check out the presentations of the day from this link:

Adding the "Deploy to Bluemix" Button to my Bluemix Applications in GitHub

Pas Apicella - Mon, 2015-09-14 18:25
Not before time I finally added my first "Deploy to Bluemix" button on my GitHub projects for Bluemix applications. The screen shot below shows this for the Spring Session - Spring Boot Portable Cloud Ready HTTP Session demo.

Here is what it looks like when I deploy this using the "Deploy to Bluemix" button, which requires me to log in to IBM Bluemix. When you use this button, it adds the project code via a fork to your own DevOps projects, adds a pipeline to compile/deploy the code, and finally deploys it as you expect it to.

More Information
Categories: Fusion Middleware

EM12c Upgrade Tasks

Arun Bavera - Mon, 2015-09-14 13:51
1. Upgrade Primary OMR, OMS using Installer of  – 2 hours
2. Upgrade Primary Agent – 6 minutes
3. Cleanup Agent
4. Cleanup OMS
5. Upgrade Secondary OMS – 30 minutes
6. Cleanup Agent
7. Cleanup OMS
8. No monthly Agent/OMS patches available yet for  – Jul-14-2015, expected Jul-30-2015
9. Install latest JDK 1.6 (Note: 1944044.1) – JDK updated to
10. Install latest WebLogic PSU (1470197.1)
11. Verify Load Balancer
12. OMS Sizing
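Around each of these steps it is worth confirming OMS and agent status from the command line; a minimal sketch (the home locations are placeholders for wherever your OMS and agent are installed):

$OMS_HOME/bin/emctl status oms -details    # confirm the OMS, WebLogic and repository details after the upgrade
$AGENT_HOME/bin/emctl status agent         # confirm the agent is up and healthy
$AGENT_HOME/bin/emctl upload agent         # force an upload after the agent upgrade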

Categories: Development

Forcing Garbage Collection in JDK manually using JVisualVM

Arun Bavera - Mon, 2015-09-14 13:43
You might have seen the heap cross its limit many times while the GC algorithm does not work properly and keeps old objects around for a long time.
Even though it is not advised to force a major GC manually, if you come across such a situation you can use the following method to clear the heap.
Note: if the heap size is huge (more than 6 GB), a major GC may cause the application to pause for a couple of seconds. Also, make sure you have enough system memory (RAM) to invoke the JVisualVM tool.
This is a typical method in many corporate environments where X-Windows is not installed on the *NIX machines and the app account is locked down for direct login.
1) Log in as yourself to the Linux/Unix machine and make sure your laptop/desktop has an X emulator like Xming running.
2) Note down the authorized X keys:  xauth list
3) Log in as the app owner:  sudo su - oracle
4) Add the X keys to oracle (app owner session):
xauth add <full string from xauth list from previous session>

5) Run ps -ef | grep java, note down the JDK directory and go directly to the JDK bin (/opt/app/oracle/jdk1.7.0_55/bin); in this case we are using JDK 7.
6) Invoke:  ./jvisualvm &
7) Choose the WebLogic PID, make sure in the Overview tab that the server name is the one you are interested in, and perform a manual GC.
  Note: from JDK 7 onwards, if your heap size is more than 6 GB, the G1GC algorithm works best.
     Also refer:
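Putting the steps above together, a rough sketch of the whole session (the user names, the JDK path and the xauth string are placeholders for whatever your environment shows):

# on the *NIX box, logged in as yourself, with Xming (or another X server) running locally
xauth list                                  # note the authorization entry for your display
sudo su - oracle                            # switch to the application owner
xauth add <full string from the xauth list output above>
ps -ef | grep java                          # find the running JVM and the JDK it was started from
cd /opt/app/oracle/jdk1.7.0_55/bin          # JDK path taken from the ps output
./jvisualvm &                               # launch VisualVM over the forwarded X display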

Categories: Development

AZORA – Arizona Oracle User Group new location

Bobby Durrett's DBA Blog - Mon, 2015-09-14 13:35

The Arizona Oracle User Group has moved tomorrow’s meeting to Oracle’s offices on Camelback road:

Meetup link with meeting details


Categories: DBA Blogs

College Scorecard: An example from UMUC on fundamental flaw in the data

Michael Feldstein - Mon, 2015-09-14 13:33

By Phil HillMore Posts (365)

Russ Poulin at WCET has a handy summary of the new College Scorecard produced by the Education Department (ED) and the White House. This is a “first read” given the scorecard’s Friday release, but it is quite valuable since Russ participated on an ED Data Panel related to the now-abandoned Ratings System, the precursor to the Scorecard. Russ describes the good, the “not so good”, and the “are you kidding me?” elements. One area in particular highlighted by Russ is the usage of the “dreaded first-time, full-time completion rates”:

I knew this would be the case, but it really irks me. Under current data collected by the Department’s IPEDS surveys, they define the group on which they base their “Graduation Rate” as: “Data are collected on the number of students entering the institution as full-time, first-time, degree/certificate-seeking undergraduate students in a particular year (cohort), by race/ethnicity and gender; the number completing their program within 150 percent of normal time to completion; the number that transfer to other institutions if transfer is part of the institution’s mission.”

This rate has long been a massive disservice to institutions focused on serving adults and community colleges. Here are some example rates: Empire State: 28%, Western Governors University: 26%, University of Maryland University College: 4%, Charter Oak Colleges: no data, and Excelsior College: no data. The problem is that these numbers are based on incredibly small samples for these schools and do not reflect the progress of the bulk of the student body.

I won’t quote data for community colleges because they are all negatively impacted. They often serve a large number of students who are not “first-time” or define “success” in other ways.

I know that they are working on a fix to this problem in the future. Meanwhile, who atones for the damage this causes to these institutions’ reputations? This data display rewards colleges who shy away from non-traditional or disadvantaged students. Is this what we want?

Russ is not the only one noting this problem. Consider this analysis from Friday [emphasis added]:

The most commonly referenced completion rates are those reported to IPEDS and are included on the College Scorecard (measuring completion within 150 percent, or six years, for predominantly four-year colleges; and within four years for predominantly two- or less-than-two-year schools). However, they rely on a school’s population of full-time students who are enrolled in college for the first-time. This is increasingly divergent from the profile of the typical college student, particularly at many two-year institutions and some four-year schools. For instance, Marylhurst University in Oregon, a four-year institution that has been recognized for serving adult students, reportedly had a 23 percent, six-year completion rate – namely because a very small subset of its students (just one percent) fall in the first-time, full-time cohort used to calculate completion rates. As with many schools that serve students who already have some college experience, this rate is, therefore, hardly representative of the school’s student body.

Who wrote this critical analysis, you ask? The Education Department in their own Policy Paper on the College Scorecard (p 17). Further down the page:

The Department has previously announced plans to work with colleges and universities to improve the graduation rates measured by the IPEDS system. Beginning in 2016, colleges will begin reporting completion rates for the other subsets of their students: first-time, part-time students; non-first-time, full-time students; and non-first-time, part-time students. In the meantime, by using data on federal financial aid recipients that the Department maintains in the National Student Loan Data System (NSLDS) for the purposes of distributing federal grants and loans, we constructed completion rates of all students receiving Title IV aid at each institution. For many institutions, Title IV completion rates are likely more representative of the student body than IPEDS completion rates – about 70 percent of all graduating postsecondary students receive federal Pell Grants and/or federal loans.

Given concerns about the quality of historical data, these NSLDS completion rates are provided on the technical page, rather than on the College Scorecard itself.

In other words, ED is fully aware of the problems of using IPEDS first-time full-time completion data, and they have plans to help improve the data, yet they chose to make fundamentally-flawed data a centerpiece of the College Scorecard.

Furthermore, the Policy Paper also addressed the need to understand transfer rates and not just graduation rates (p 18) [emphasis in original]:

The Administration also believes it is important that the College Scorecard address students who transfer to a higher degree program. Many students receive great value in attending a two-year institution first, and eventually transferring to a four-year college to obtain their bachelor’s degrees. In many cases, the transfer students do not formally complete the two-year program and so do not receive an associate degree prior to transferring. When done well, with articulation agreements that allow students to transfer their credits, this pathway can be an affordable and important way for students to receive four-year degrees. In particular, according to a recent report from the National Center of Education Statistics (NCES), students were best able to transfer credits when they moved from two-year to four-year institutions, compared with horizontal and reverse transfers.

To address this important issue, ED put the transfer data they have not on the consumer website but in the technical and data site (massive spreadsheets, data dictionaries, crosswalks all found here). Why did they not make this data easier to find? The answer is in a footnote:

We hope to be able to produce those figures for consumers after correcting for the same reporting limitations as exist for the completion rates.

To their credit, ED does address these limitations thoroughly in the Policy Paper and the Technical Paper, but very few people will read them. The end result is a consumer website that is quite misleading. Knowing all the problems of the data, this is what you see for UMUC.


Consider what prospective students will think when seeing this page: “UMUC sucks, I’m likely to never graduate.”

UMUC points out in this document that less than 2% of their student body are first-time full-time, and that the real results paint a different picture.


Consider the harm done to prospective UMUC students by seeing the flawed, over-simplified ED College Scorecard data, and consider the harm done to UMUC as they have to play defense and explain why prospects should see a different situation. Given the estimate that non-traditional students – those who would not be covered at all in IPEDS graduation rates – comprise more than 70% of all students, you can see how UMUC is not alone. Community colleges face an even bigger problem with the lack of transfer rate reporting.

And this is how the ED is going to help consumers make informed choices?

Count me as in agreement with Russ in his conclusions:

The site is a good beginning at addressing the needs of the traditional student leaving high school and seeking a college. It leaves much to be desired for the non-traditional students who now comprise a very large portion of the college-seeking population.

I applaud the consumer-focused vision and hope that feedback continues to improve the site. I actually think this could be a fantastic service. I just worry that in the haste to get it out that we did not wait until we had the data to do it correctly.

The post College Scorecard: An example from UMUC on fundamental flaw in the data appeared first on e-Literate.

Join Oracle Service Cloud at OpenWorld 2015 to Talk Trends, Best Practices, Product Strategy, and Gain Business Value

Linda Fishman Hoyle - Mon, 2015-09-14 13:21

A Guest Post by Director Christine Skalkotos, Product Strategy Programs, Oracle Service Cloud (pictured left)

Oracle Service Cloud @ CX Central is returning to San Francisco, October 25-29, 2015!

Oracle Service Cloud is once again excited to join the OpenWorld 2015 customer experience (CX) activities and conversations happening in Moscone West on the second floor. The team has an engaging lineup of more than 20 sessions and demonstrations available for service professionals. You will have the opportunity to discuss pressing industry trends, examine solution best practices, and gain insights into upcoming product strategy to help drive continual business value. You also will get to hear from leading service innovators such as HQ Air Reserve Personnel Center, LinkedIn, and SiriusXM.

Visit Service―CX Central @ OpenWorld

All Oracle Service Cloud sessions will be hosted in Rooms 2006 and 2016 in Moscone West on the second floor. Explore the Oracle Service Cloud demo zone, which is also located on the second floor. Details, including all session dates, times, and room numbers, are published at Service―CX Central @ OpenWorld for your convenience.

What’s New and Different?

  • Sessions that explain the roadmaps for Oracle Service Cloud
  • Sessions showing how Oracle Service Cloud integrates with existing applications
  • Sessions led by partners sharing the latest insights on recent implementations

Guest Customers and Partner Appearances include:

Academy Sports+Outdoors, Dish Network, HQ Air Reserve Personnel Center, Kohls, KP OnCall, LinkedIn, Mazda, Overhead Door Corporation, Pella, Riverbed Technology, SiriusXM, SoftClouds LLC, TCS, and more!

Start the Experience with the Service Cloud General Session!

Oracle’s CIO, Mark Sunday, joins David Vap, GVP Product Development for Oracle Service Cloud, to ignite the CX-Service track at 1:00 – 2:15 p.m. on Monday, October 26 in Room 2006. Walk through Oracle’s product strategy and market trends impacting service professionals, while hearing best practices from innovative brands, like Academy Sports+Outdoors, Mazda, and our Oracle Service Cloud Partner Sponsor TCS, that are meeting today’s customer experience challenges. [GEN9837]

Roadmap and Product Conference Sessions

Oracle Service Cloud @ CX Central will be hosting more than 20 conference sessions. These sessions begin at 2:45 p.m. on Monday, October 26, are led by Oracle Service Cloud product management team members, and highlight customer and partner case studies. Many sessions are aligned with our strategic engagements model called “Roadmap to Modern Customer Service." Here is a listing of sessions:

  • Modern Service for a Changing World: a customer panel featuring HQ Air Reserve Personnel Center, Kohls, LinkedIn, and SiriusXM [CON10020]
  • The Future of Customer Service. Are You Ready? [CON9884]
  • Get Ahead: Strategic Roadmap to Modern Customer Service [CON10325]
  • Tailoring the Agent Experience [HOL10509]
  • Getting the Most Out of Web Self-Service [HOL10510]
  • “Get Going” Sessions: Leading with Connected Customers
    • Get Going: Tear Down This Wall: How Web Self-Service and Communities Are Combining [CON9839]
    • Get Going: Improving Service Engagements with Chat and Cobrowse [CON9891]
    • Get Going: Proven Techniques for Right Channeling within an Online Service [CON9892]
    • Get Going: Make Your Customer Service a Differentiator with Policy Automation (featuring KP OnCall) [CON9896]
  • “Get Better” Sessions: Recognized for Service Quality & Innovation
    • Get Better: Oracle Service Cloud Customer Engagement Center Overview and Roadmap [CON9924]
    • Get Better: Cut Through the Complexity: Delivering Customer Service Excellence in the Engagement Center (featuring Pella and SiriusXM) [CON9925]
    • Get Better: Oracle Service Cloud Knowledge Management Overview and Roadmap [CON9836]
    • Get Better: Knowledge at the Heart of Service Makes Customer Service Hum (featuring Mazda and SoftClouds LLC) [CON9922]
    • Get Better: Raise the Bar: Empower Agents and Enable Change with Knowledge Centered Support (featuring Riverbed Technology) [CON9923]
    • Get Better: Accelerating Oracle Service Cloud and Siebel Integration [CON9887]
    • Get Better: Accelerating Oracle Service Cloud and Oracle E-Business Suite Integration (featuring Overhead Door Corporation) [CON9889]
  • “Get Ahead” Sessions: Differentiated and Leading with Personalized Service
    • Get Ahead: Field Service in the Age of Uber: Vision and Roadmap [CON9899]
    • Get Ahead: Transforming Field Operations: DISH Network and Oracle Field Service Cloud (featuring Dish Network) [CON9898]
    • Get Ahead: Oracle Service Cloud Integration Strategy: The Spectrum of Integrations [CON9843]
    • Get Ahead: Step into the Engine Room: Discover the Platform That Powers Modern Service  [CON9897]

Service Demo Zone

Benefit by engaging with Oracle Service Cloud product demonstrations led by members of the Oracle Service Cloud product management and sales consulting teams.

  • Get Going with Digital Service
  • Get Going with Policy Automation
  • Get Better with Contact Center
  • Get Better with Knowledge Management
  • Get Better with Siebel & EBS Integration
  • Get Ahead with Field Service Cloud
  • Get Ahead with Personalized Service
  • Get Ahead with Oracle Cloud Platform

Customer Events

Finally, a preview of Oracle Service Cloud at OpenWorld would not be complete without a mention of customer appreciation events:

  • Monday, October 26: Oracle Service Cloud Customer Appreciation Reception at Oracle OpenWorld (by invitation only), a chance to network with Oracle Service Cloud product management and peers
  • Tuesday, October 27: CX Central customer appreciation event; planning is in progress!
  • Wednesday, October 28: Oracle Appreciation Event at Treasure Island!

At a Glance

Visit Oracle OpenWorld for full details on speakers, conference sessions, exhibits, and entertainment!

How 1and1 failed me

Sean Hull - Mon, 2015-09-14 11:13
I manage this blog myself. Not just the content, but also the technology it runs on. The systems & servers are from a hosting company called 1and1, and recently I had some serious problems. Join 31,000 others and follow Sean Hull on twitter @hullsean. The publishing platform, WordPress, was a few versions out of date. … Continue reading How 1and1 failed me →

Report Carousel in APEX 5 UT

Dimitri Gielis - Mon, 2015-09-14 09:45
The Universal Theme in APEX 5.0 is full of nice things.
Did you already see the Carousel template for regions? When you add a region with a couple of sub-regions to your page and give the parent region the "Carousel Container" template, the sub-regions are turned into a carousel, so you can flip between them.
I was asked to provide the same functionality, but for dynamic content. So I decided to build a report template that is rendered as a carousel. Here's the result:

You can see it in action at
I really like carousels :)
Here's how you can have this report template in your app:
1) Create a new Report Template:

Make sure to select Named Column for the Template Type:

Add the following HTML into the template at the given points:

That's it for the template.

Now you can create a new report on your page and give it the template you just created.
Here's the SQL Statement I used:

select PRODUCT_ID          as id,
       PRODUCT_NAME        as title,
       PRODUCT_DESCRIPTION as description,
       dbms_lob.getlength(PRODUCT_IMAGE) as image,
       'no-icon'           as icon,
       null                as link_url
  from DEMO_PRODUCT_INFO -- assumed table: the column names match DEMO_PRODUCT_INFO from the APEX Sample Database Application; adjust to your own table

Note 1: you have to use the same column aliases as you defined in the template.
Note 2: make sure you keep the real id of your image in the query too, otherwise you'll get an error (no data found).

To make the carousel a bit nicer I added the following CSS to the page, but you could add it to your own CSS file or in the Custom CSS section of Theme Roller.

Note: the carousel can work with an icon or an image. If you want to show an icon you can use, for example, "fa-edit fa-4x". When using an image, define the icon as no-icon.
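For example, here is a small sketch of how the icon column in the query above could switch between the two modes automatically; the CASE expression and the fa-shopping-cart class are only an illustration, not part of the original template:

case
  when PRODUCT_IMAGE is null then 'fa-shopping-cart fa-4x' -- no stored image, fall back to an icon
  else 'no-icon'                                           -- an image exists, suppress the icon
end as icon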

Eager for more Universal Theme tips and tricks? Check out our APEX 5.0 UI training in Birmingham on December 10th. :)

For easier copy/paste into your template, you find the source below:

 *** Before Rows ***  
<div class="t-Region t-Region--carousel t-Region--showCarouselControls t-Region--hiddenOverflow" id="R1" role="group" aria-labelledby="R1_heading">
<div class="t-Region-bodyWrap">
<div class="t-Region-body">
<div class="t-Region-carouselRegions">
*** Column Template ***
<div data-label="#TITLE#" id="SR_R#ID#">
<a href="#LINK_URL#">
<div class="t-HeroRegion " id="R#ID#">
<div class="t-HeroRegion-wrap">
<div class="t-HeroRegion-col t-HeroRegion-col--left">
<span class="t-HeroRegion-icon t-Icon #ICON#"></span>
</div>
<div class="t-HeroRegion-col t-HeroRegion-col--content">
<h2 class="t-HeroRegion-title">#TITLE#</h2>
</div>
<div class="t-HeroRegion-col t-HeroRegion-col--right"><div class="t-HeroRegion-form"></div><div class="t-HeroRegion-buttons"></div></div>
</div> <!-- close t-HeroRegion-wrap -->
</div> <!-- close t-HeroRegion -->
</a>
</div> <!-- close SR_R#ID# -->
*** After Rows ***
</div> <!-- close t-Region-carouselRegions -->
</div> <!-- close t-Region-body -->
</div> <!-- close t-Region-bodyWrap -->
</div> <!-- close t-Region--carousel -->
*** Inline CSS ***
.t-HeroRegion-col.t-HeroRegion-col--left {
}
.t-HeroRegion {
  border-bottom: 0px solid #CCC;
}
.t-Region--carousel {
  border: 1px solid #d6dfe6 !important;
}
.t-HeroRegion-col--left img {
  max-height: 90px;
  max-width: 130px;
}
.no-icon {
}
Categories: Development

Discovery and Monitor Oracle Database Appliance (#ODA) using #EM12C

DBASolved - Mon, 2015-09-14 09:40

A few months ago, I heard that Oracle was releasing a plug-in for the Oracle Database Appliance (ODA, pronounced "Oh Dah"). At first I couldn't find anything on this plug-in, but then I found it under Self Update for plug-ins (Extensibility -> Self Update -> Plug-ins).

After finding the plug-in, it needed to be deployed to the Oracle Management Server (OMS). Once deployed, it can be used to monitor the ODA; however, this plug-in is different from plug-ins like the Exadata plug-in, where a wizard configures monitoring of the hardware associated with the engineered system. In order to use this plug-in, both servers in the ODA have to have EM agents installed on them. Here is a list of articles, by some great guys plus myself, on installing agents in EM12c (if you don't know how to do that already).

Tim Hall
Gavin Soorma
Gokhan Atil
Javier Ruiz

Once the agents are installed, the plug-in has to be added to them. This is achieved by pushing the plug-in from the same screen where it was deployed to the OMS; only this time, it is deployed to the newly added agents (Extensibility -> Plug-Ins -> Deploy On -> Management Agent). If you prefer the command line, the same deployment can be scripted with emcli, as sketched below.
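A minimal emcli sketch; the plug-in id is a placeholder (look it up under Self Update), and the agent names are made up for illustration:

# log in and refresh the local emcli metadata
emcli login -username=sysman
emcli sync
# deploy the plug-in to the OMS first (if not already done through the console)
emcli deploy_plugin_on_server -plugin=<oda_plugin_id>
# push the plug-in to the agents on both ODA server nodes
emcli deploy_plugin_on_agent -agent_names="odanode1.example.com:3872;odanode2.example.com:3872" -plugin=<oda_plugin_id>
# check how the deployment is progressing
emcli get_plugin_deployment_status -plugin=<oda_plugin_id>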

Once the plug-in is deployed to the agents on the ODA servers, the ODA can be added to OEM.

With the plug-in in place, the ODA can now be added through the Add Targets Manually process, using the Add Targets Using Guided Process option, which starts a wizard that walks through adding the ODA.


When starting the discovery, OEM provides a Discover Now button, which initiates the wizard for discovering the ODA components such as the ILOM and the servers.

When the wizard starts, it asks for an agent URL. This is the agent installed on the first node of the ODA. Then provide the host root login, either one stored in Named Credentials or a new login; a named credential can also be created ahead of time with emcli, as sketched below.
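A rough sketch of creating that named credential from the command line; the credential name is made up for illustration:

emcli create_named_credential \
  -cred_name=ODA_ROOT \
  -auth_target_type=host \
  -cred_type=HostCreds \
  -attributes="HostUserName:root;HostPassword:<root_password>"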

The next step of the wizard provides a list of all the discovered targets in the ODA.

On the credential screen, the wizard asks for the root password for both the host and the ILOM. If the passwords are the same across the ODA, there is an option to use the same password for both.

The Tag Cloud step is interesting. You really don’t have to put anything here; however, you can tag the ODA to help identify what is being added. There is a Create Tag button at the bottom if you want to create a tag. (I didn’t create one so I didn’t include a picture in this post).

Finally, the review step shows what is going to be added to OEM as the ODA. Once the targets have been promoted successfully, you will see a green flag in the Discovery Status block. At this point the ODA has been added to OEM; a quick command-line check is sketched below.
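As a rough sanity check, the promoted targets can also be listed with emcli; the grep filter is only an illustration, so adjust it to your target names:

emcli login -username=sysman
emcli get_targets | grep -i oda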

Now that the ODA has been added to OEM, it can be viewed from Targets -> All Targets -> Target Type -> Engineered Systems -> Oracle Database Appliance System. From here, OEM takes you to the home page for the ODA.

The ODA home page provides an overview of everything going on with the ODA. Click around and have some fun reviewing what is happening with the ODA being monitored.


Filed under: OEM
Categories: DBA Blogs