
Feed aggregator

Car Logos

iAdvise - Mon, 2016-01-25 21:02
Symbols and elaborate images in car logos can be confusing. Many famous brands use the same animals or intricate images, which may seem appealing at first but are so similar to one another that you can't tell one company from the next unless you're really an expert in the field.
How many auto brands do you know that have used a jungle cat or a horse or a hawk's wings in their trademark?
There are just too many to count.
So how can you create a design for your automobile company that is easy to remember and also sets you apart from the crowd?
Why not use your corporation name in the business mark?
How many of us confuse the Honda trademark with Hyundai's or Mini's with Bentley's?
But that won't happen if your car logo and name are the same.
Think of Ford's and BMW's business images, or MG's and Nissan's. The one characteristic that makes them easy to remember is the company name in the brand mark.
But it's not really that easy to design a trademark with the corporation name. Since the only things that can make your car brand mark appealing are the fonts and colors, you need to make sure that you use the right ones to make your logo distinct and easy to remember.
What colors to use?
When using the corporation name in the trademark, the rule is very simple: use one solid color for the text and one solid color for the background. Silver text on a red or dark blue background looks appealing, but you can experiment with other colors as well. White text on a dark green background will make your design identifiable from afar. Don't be afraid to use a bright-colored background, but make sure the text color complements the background instead of contrasting with it.
What kind of fonts to use?
Big, straight fonts may be easier to read from a distance, but the font styles that look intricate and appealing to customers, and give your design a classic look, are the curvier ones. Just make sure the text is not so curvy that it loses readability. You can use Times Roman in italics, or some other professional font style with a curved effect, to keep the text readable and rounded at the same time.
Remember the Ford logo? It may just be white text on a blue background, but it's the curvy script that sets it apart from the rest. The Smart car logo achieves the same effect.
What shapes to use?
The vehicle business image has to be enclosed in a shape, of course. The most commonly used shape is a circle, but you can use an oval, a loose square, or even the Superman diamond to enclose your design. Just make sure your chosen shape does not have so many sides that it complicates the mark.
The whole idea of a car corporation mark is to make it easily memorable and recognizable, and to make it a classic. The ideas mentioned above can certainly do that for your trademark.
Beverly Houston works as a Senior Design Consultant at a Professional Logo Design Company. For more information on car logos and names find her competitive rates at Logo Design Consultant.
Categories: APPS Blogs

Oracle Midlands : Event #13

Tim Hall - Mon, 2016-01-25 16:41

Tomorrow is Oracle Midlands Event #13.


Franck is a super-smart guy, so please show your support and start the year as you mean to go on!



Oracle Midlands : Event #13 was first posted on January 25, 2016 at 11:41 pm.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

Hybrid Databases: Key to Your Success

Pythian Group - Mon, 2016-01-25 14:51

Almost every company’s success is contingent upon its ability to effectively turn data into knowledge. Data, combined with the power of analytics, brings new, previously unavailable business insights that help companies truly transform the way they do business—allowing them to make more informed decisions, improve customer engagement, and predict trends.

Companies can only achieve these insights if they have a holistic view of their data. Unfortunately, most companies have hybrid data—different types of data, living in different systems that don’t talk to each other. What companies need is a way to consolidate the data they currently have while allowing for the integration of new data.

The Hybrid Data Challenge

There are two primary reasons why data starts life in separate systems. The first is team segregation: each team in an organization builds its systems independently of the others, based on its own objectives and data requirements. The second, which partly follows from the first, is specialization: teams often choose a database system that excels at their specific application tasks. Sometimes they need to prioritize scalability and performance; other times, flexibility and reliability are more important.

Businesses need to evolve their data strategy to establish a “data singularity”: a consolidated view of the total of their information assets that serves as a competitive, strategic advantage and enables real-time knowledge. A key objective of this strategy should be to reduce data movement, especially during processing and while executing queries, because both are very expensive operations. In addition, data movement, and data replication in general, is a fragile operation, often requiring significant support resources to run smoothly.

Enter The World Of Hybrid Databases

A hybrid database is a flexible, high-performing data store that can store and access multiple, different data types, including unstructured (pictures, videos, free text) and semi-structured data (XML, JSON). It can locate individual records quickly, handle both analytical and transactional workloads simultaneously, and perform analytical queries at scale. Analytical queries can be quite resource-intensive so a hybrid database needs to be able to scale out and scale linearly. It must also be highly available and include remote replication capabilities to ensure the data is always accessible.

In the past, many organizations consolidated their data in a data warehouse. Data warehouses gave us the ability to access all of our data, and while these systems were highly optimized for analytical long-running queries, they were strictly batch-oriented with weekly or nightly loads. Today, we demand results in real time and query times in milliseconds.

When Cassandra and Hadoop first entered the market, in 2008 and 2011, respectively, they addressed the scalability limitations of traditional relational database systems and data warehouses, but they had restricted functionality. Hadoop offered infinite linear scalability for storing data, but no support for SQL or any kind of defined data structure. Cassandra, a popular NoSQL option, supported semi-structured, distributed document formats, but couldn’t do analytics. They both required significant integration efforts to get the results organizations were looking for.

In 2008, Oracle introduced the first “mixed use” database appliance: Exadata. Oracle Exadata brought a very high-performance reference hardware configuration, an engineered system, as well as unique features in query performance and data compression, on top of an already excellent transactional processing system with full SQL support, document-style datatypes, spatial and graph extensions, and many other features.

In recent years, vendors have started pursuing the hybrid market more aggressively, and products have started emerging that cross boundaries. We can now run Hadoop-style MapReduce jobs on Cassandra data, and SQL on Hadoop via Impala. Microsoft SQL Server introduced columnar-format storage for analytical data sets, and In-Memory OLTP, a high-performance transactional engine with log-based disk storage. Oracle also introduced improvements to its product line with Oracle In-Memory, a dedicated high-performance memory store for the data warehouse that brings extreme analytical performance.

Choosing A Hybrid Database Solution

Rearchitecting your data infrastructure and choosing the right platform can be complicated—and expensive. To make informed decisions, start with your end goals in mind, know what types of data you have, and ensure you have a scalable solution to meet your growing and changing organizational and business needs—all while maintaining security, accessibility, flexibility, and interoperability with your existing infrastructure. A key design principle is to reduce your data movement and keep your architecture simple. For example, a solution that relies on data being in Hadoop but performs analytics in another database engine is not optimal because large amounts of data must be copied between the two systems during query execution—a very inefficient and compute-intensive process.

Have questions? Contact Pythian. We can help analyze your business data requirements, recommend solutions, and create a roadmap for your data strategy, leveraging either hybrid or special purpose databases.

Categories: DBA Blogs

February 17, 2016: Baxters Food Group―Oracle EPM Cloud Customer Reference Forum

Linda Fishman Hoyle - Mon, 2016-01-25 11:58

Join us for another Oracle Customer Reference Forum on February 17, 2016 at 5:00 p.m. GMT / 9:00 a.m. PT. Elaine McKechnie, Group Head of MIS at Baxters Food Group, will talk about why the company chose to implement Oracle EPM Cloud, Oracle ERP, and Oracle Business Intelligence. Ms. McKechnie will share Baxters' selection process for its new Oracle EPM Cloud solution, its phased implementation approach, and the expected benefits.

Baxters is a Scottish food processing company with 1000+ employees.

Register now to attend the live forum on Wednesday, February 17, 2016 at 5:00 p.m. GMT / 9:00 a.m. PT and learn more about Baxters Food Group's experience with Oracle EPM Cloud.

Advisor Webcast: WebCenter Content 12c, Documents Cloud Service and Hybrid ECM

WebCenter Team - Mon, 2016-01-25 08:49

ADVISOR WEBCAST: WebCenter Content 12c, Documents Cloud Service and Hybrid ECM


  • Wednesday, February 17, 2016 08:00 AM (US Pacific Time)
  • Wednesday, February 17, 2016 11:00 AM (US Eastern Time)
  • Wednesday, February 17, 2016 05:00 PM (Central European Time)
  • Wednesday, February 17, 2016 09:30 PM (India Standard Time)

This one hour session is recommended for technical and functional users who use WebCenter Content. This session will focus on new release information including WebCenter Content 12c, Documents Cloud Service and Hybrid ECM. We will also briefly discuss the future roadmap for WebCenter Content.

Topics Include:
Overview of new features of WebCenter Content 12c

  • WebCenter Imaging embedded
  • Digital Asset Management improvements
  • Content UI improvements
  • Desktop Integration Suite improvements

Overview of Documents Cloud Service

  • File Access/Sync
  • Application Integration
  • Document Collaboration
  • Security

Overview of Hybrid ECM

  • Integration with on-premise WebCenter Content
  • External Collaboration

Duration: 1 hr

Current Schedule and Archived Downloads can be found in Note 740966.1

WebEx Conference Details

Topic: WebCenter Content 12c, Documents Cloud Service and Hybrid ECM
Event Number: 599 080 471
Event Passcode: 909090

Register for this Advisor Webcast:

Once the host approves your request, you will receive a confirmation email with instructions for joining the meeting.

InterCall Audio Instructions

A list of Toll-Free Numbers can be found below.

  • Participant US/Canada Dial-in #: 1866 230 1938   
  • International Toll-Free Numbers
  • Alternate International Dial-In #: +44 (0) 1452 562 665
  • Conference ID: 25721968


Kafka and more

DBMS2 - Mon, 2016-01-25 05:28

In a companion introduction to Kafka post, I observed that Kafka at its core is remarkably simple. Confluent offers a marchitecture diagram that illustrates what else is on offer, about which I’ll note:

  • The red boxes — “Ops Dashboard” and “Data Flow Audit” — are the initial closed-source part. No surprise that they sound like management tools; that’s the traditional place for closed source add-ons to start.
  • “Schema Management”
    • Is used to define fields and so on.
    • Is not equivalent to what is ordinarily meant by schema validation, in that …
    • … it allows schemas to change, but puts constraints on which changes are allowed.
    • Is done in plug-ins that live with the producer or consumer of data.
    • Is based on the Hadoop-oriented file format Avro.

Kafka offers little in the way of analytic data transformation and the like. Hence, it’s commonly used with companion products. 

  • Per Confluent/Kafka honcho Jay Kreps, the companion is generally Spark Streaming, Storm or Samza, in declining order of popularity, with Samza running a distant third.
  • Jay estimates that there’s such a companion product at around 50% of Kafka installations.
  • Conversely, Jay estimates that around 80% of Spark Streaming, Storm or Samza users also use Kafka. On the one hand, that sounds high to me; on the other, I can’t quickly name a counterexample, unless Storm originator Twitter is one such.
  • Jay’s views on the Storm/Spark comparison include:
    • Storm is more mature than Spark Streaming, which makes sense given their histories.
    • Storm’s distributed processing capabilities are more questionable than Spark Streaming’s.
    • Spark Streaming is generally used by folks in the heavily overlapping categories of:
      • Spark users.
      • Analytics types.
      • People who need to share stuff between the batch and stream processing worlds.
    • Storm is generally used by people coding up more operational apps.

If we recognize that Jay’s interests are obviously streaming-centric, this distinction maps pretty well to the three use cases Cloudera recently called out.

Complicating this discussion further is Confluent 2.1, which is expected late this quarter. Confluent 2.1 will include, among other things, a stream processing layer that works differently from any of the alternatives I cited, in that:

  • It’s a library running in client applications that can interrogate the core Kafka server, rather than …
  • … a separate thing running on a separate cluster.

The library will do joins, aggregations and so on, while relying on core Kafka for information about process health and the like. Jay sees this as more of a competitor to Storm in operational use cases than to Spark Streaming in analytic ones.
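
As a toy illustration of this library-in-the-application model, consider an aggregation running as plain code inside a client process, with a simple event list standing in for a Kafka-fed stream. Class and method names here are hypothetical, not part of any Confluent API:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of client-side stream aggregation: in the library approach,
// code like this runs inside your own application process, with core
// Kafka supplying the events rather than a separate processing cluster.
public class ClientSideAggregation {

    // Count occurrences per key, a stand-in for a streaming aggregation.
    public static Map<String, Integer> countByKey(List<String> events) {
        Map<String, Integer> counts = new HashMap<>();
        for (String key : events) {
            counts.merge(key, 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        List<String> events = List.of("click", "view", "click", "click");
        System.out.println("clicks = " + countByKey(events).get("click")); // prints clicks = 3
    }
}
```

In the real library the input would be a Kafka topic and fault tolerance would come from core Kafka, but the shape is the same: functions over streams, hosted in the client rather than on a separate cluster.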

We didn’t discuss other Confluent 2.1 features much, and frankly they all sounded to me like items from the “You mean you didn’t have that already??” list any young product has.


Categories: Other

Kafka and Confluent

DBMS2 - Mon, 2016-01-25 05:27

For starters:

  • Kafka has gotten considerable attention and adoption in streaming.
  • Kafka is open source, out of LinkedIn.
  • Folks who built it there, led by Jay Kreps, now have a company called Confluent.
  • Confluent seems to be pursuing a fairly standard open source business model around Kafka.
  • Confluent seems to be in the low to mid teens in paying customers.
  • Confluent believes 1000s of Kafka clusters are in production.
  • Confluent reports 40 employees and $31 million raised.

At its core Kafka is very simple:

  • Kafka accepts streams of data in substantially any format, and then streams the data back out, potentially in a highly parallel way.
  • Any producer or consumer of data can connect to Kafka, via what can reasonably be called a publish/subscribe model.
  • Kafka handles various issues of scaling, load balancing, fault tolerance and so on.
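
Those three bullets can be pictured with a toy append-only log. This is an illustration of the publish/subscribe-over-a-log idea only, not the real Kafka client API, and the names are made up:

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of Kafka's core abstraction: an append-only log. Producers
// append; each consumer reads from its own offset, so many consumers can
// share one log without coordinating with each other.
public class ToyLog {

    private final List<String> messages = new ArrayList<>();

    // Producer side: append a message and return its offset in the log.
    public int publish(String message) {
        messages.add(message);
        return messages.size() - 1;
    }

    // Consumer side: read everything from a given offset onward.
    public List<String> consumeFrom(int offset) {
        return new ArrayList<>(messages.subList(offset, messages.size()));
    }

    public static void main(String[] args) {
        ToyLog log = new ToyLog();
        log.publish("page_view");
        log.publish("click");
        // A late-joining consumer can still replay from offset 0.
        System.out.println(log.consumeFrom(0)); // prints [page_view, click]
    }
}
```

Real Kafka layers partitioning, replication and load balancing on top of this model, which is where the hub and telecom-switch analogies come from.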

So it seems fair to say:

  • Kafka offers the benefits of hub vs. point-to-point connectivity.
  • Kafka acts like a kind of switch, in the telecom sense. (However, this is probably not a very useful metaphor in practice.)

Jay also views Kafka as something like a file system. Kafka doesn’t actually have a file-system-like interface for managing streams, but he acknowledges that as a need and presumably a roadmap item.

The most noteworthy technical point for me was that Kafka persists data, for reasons of buffering, fault-tolerance and the like. The duration of the persistence is configurable, and can be different for different feeds, with two main options:

  • Guaranteed to have the last update of anything.
  • Complete for the past N days.
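
In Kafka, these two options correspond to per-topic log cleanup settings, roughly as below (real Kafka topic-level configuration keys; the retention value is illustrative, and the two policies are alternatives, not used together):

```properties
# "Complete for the past N days": time-based retention.
# 604800000 ms is 7 days.
cleanup.policy=delete
retention.ms=604800000

# "Guaranteed to have the last update of anything": log compaction,
# which keeps at least the latest message for each key.
cleanup.policy=compact
```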

Jay thinks this is a major difference vs. messaging systems that have come before. As you might expect, given that data arrives in timestamp order and then hangs around for a while:

  • Kafka can offer strong guarantees of delivering data in the correct order.
  • Persisted data is automagically broken up into partitions.

Technical tidbits include:

  • Data is generally fresh to within 1.5 milliseconds.
  • 100s of MB/sec/server is claimed. I didn’t ask how big a server was.
  • LinkedIn runs >1 trillion messages/day through Kafka.
  • Others in that throughput range include but are not limited to Microsoft and Netflix.
  • A message is commonly 1 KB or less.
  • At a guesstimate, 50%ish of messages are in Avro. JSON is another frequent format.

Jay’s answer to any concern about performance overhead for current or future features is usually to point out that anything other than the most basic functionality:

  • Runs in different processes from core Kafka …
  • … if it doesn’t actually run on a different cluster.

For example, connectors have their own pools of processes.

I asked the natural open source question about who contributes what to the Apache Kafka project. Jay’s quick answers were:

  • Perhaps 80% of Kafka code comes from Confluent.
  • LinkedIn has contributed most of the rest.
  • However, as is typical in open source, the general community has contributed some connectors.
  • The general community also contributes “esoteric” bug fixes, which Jay regards as evidence that Kafka is in demanding production use.

Jay has a rather erudite and wry approach to naming and so on.

  • Kafka got its name because it was replacing something he regarded as Kafkaesque. OK.
  • Samza is an associated project that has something to do with transformations. Good name. (The central character of The Metamorphosis was Gregor Samsa, and the opening sentence of the story mentions a transformation.)
  • In his short book about logs, Jay has a picture caption “ETL in Ancient Greece. Not much has changed.” The picture appears to be of Sisyphus. I love it.
  • I still don’t know why he named a key-value store Voldemort. Perhaps it was something not to be spoken of.

What he and his team do not yet have is a clear name for their product category. Difficulties in naming include:

Confluent seems to be using “stream data platform” as a placeholder. As per the link above, I once suggested Data Stream Management System, or more concisely Datastream Manager. “Event”, “event stream” or “event series” could perhaps be mixed in as well. I don’t really have an opinion yet, and probably won’t until I’ve studied the space in a little more detail.

And on that note, I’ll end this post for reasons of length, and discuss Kafka-related technology separately.


Categories: Other

ServletContextAware Controller class with Spring

Pas Apicella - Mon, 2016-01-25 03:55
I rarely need to save state within the Servlet Context via an application scope, but recently I did, and here is what your controller class would look like to get access to the ServletContext with Spring. I was using Spring Boot 1.3.2.RELEASE.

In short, you implement the "org.springframework.web.context.ServletContextAware" interface as shown below. In this example we retrieve an application-scope attribute.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.json.JsonParser;
import org.springframework.boot.json.JsonParserFactory;
import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.context.ServletContextAware;

import javax.servlet.ServletContext;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

@Controller
public class CommentatorController implements ServletContextAware {

    private static final Logger log = LoggerFactory.getLogger(CommentatorController.class);
    private static final JsonParser parser = JsonParserFactory.getJsonParser();

    private ServletContext context;

    @Override
    public void setServletContext(ServletContext servletContext) {
        this.context = servletContext;
    }

    @RequestMapping(value = "/", method = RequestMethod.GET)
    public String listTeams(Model model) {
        String jsonString = (String) context.getAttribute("riderData");
        List<Rider> riders = new ArrayList<>();

        if (jsonString != null && jsonString.trim().length() != 0) {
            Map<String, Object> jsonMap = parser.parseMap(jsonString);
            List<Object> riderList = (List<Object>) jsonMap.get("Riders");

            for (Object rider : riderList) {
                Map m = (Map) rider;
                // Remaining Rider constructor arguments were truncated in the original listing.
                riders.add(new Rider((String) m.get("RiderId")));
            }

            log.info("Riders = " + riders.size());
            model.addAttribute("ridercount", riders.size());
        } else {
            model.addAttribute("ridercount", 0);
        }

        model.addAttribute("riders", riders);

        return "commentator";
    }
}

Categories: Fusion Middleware

SafeDrop – Part 2: Function and Benefits

Oracle AppsLab - Mon, 2016-01-25 02:14

SafeDrop is a secure box for receiving a physical package delivery, without the need of recipient to be present. If you recall, it was my team’s project at the AT&T Developer Summit Hackathon earlier this month.

SafeDrop is implemented with Intel Edison board at its core, coordinating various peripheral devices to produce a secure receiving product, and it won second place in the “Best Use of Intel Edison” at the hackathon.

SafeDrop box with scanner

Components built in the SafeDrop

While many companies have focused on the online security of eCommerce, package delivery at the last mile is still very much insecure. ECommerce is ubiquitous, and somehow people need to receive the physical goods.

The delivery company would tell you the order will be delivered on a particular day. You can wait at home all day to receive the package, or let the package sit in front of your house and risk someone stealing it.

Every year there are reports of package theft during holiday season, but more importantly, the inconvenience of staying home to accept goods and the lack of peace of mind, really annoys many people.

With SafeDrop, your package is delivered, securely!

How SafeDrop works:

1. When a recipient is notified of a package delivery with a tracking number, he enters that tracking number in a mobile app, which registers it with the SafeDrop box so the box knows to expect a package with that tracking number.

2. When the delivery person arrives, he simply scans the package's tracking-number barcode at the SafeDrop box; that barcode is the unique key to open the SafeDrop. Once the tracking number is verified, the SafeDrop opens.

Delivery person scans the package in front of SafeDrop

3. While the SafeDrop is open, a video is recorded for the entire time the door is open, as a security measure. If the SafeDrop is not closed, a loud buzzer sounds until it is closed properly.

Inside of SafeDrop: Intel Edison board, sensor, buzzer, LED, servo, and webcam

4. Once the package is inside the SafeDrop, a notification is sent to the recipient's mobile app indicating that the expected package has been delivered, along with a link to the recorded video.

In the future, SafeDrop could be integrated with USPS, UPS, and FedEx, so the package tracking number is verified for the SafeDrop automatically. When the delivery person scans the tracking number at the SafeDrop, it would automatically update the status to “delivered” in the tracking record in the delivery company's database as well. That way, the entire delivery process is automated in a secure fashion.
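
The registration and verification logic from steps 1 and 2 could be sketched like this (a hypothetical sketch only; the class and method names are illustrative, and in the real build this logic drives the servo lock on the Intel Edison):

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch of SafeDrop's lock logic: the recipient registers
// expected tracking numbers, and the box opens only for a scanned barcode
// that matches one of them.
public class SafeDropLock {

    private final Set<String> expected = new HashSet<>();

    // Step 1: the mobile app registers a tracking number with the box.
    public void expectPackage(String trackingNumber) {
        expected.add(trackingNumber);
    }

    // Step 2: the delivery person scans the barcode; open only on a match,
    // and consume the tracking number so the same barcode cannot reopen
    // the box after delivery.
    public boolean scan(String scannedBarcode) {
        return expected.remove(scannedBarcode);
    }

    public static void main(String[] args) {
        SafeDropLock box = new SafeDropLock();
        box.expectPackage("TRACK-12345");
        System.out.println(box.scan("TRACK-12345")); // prints true: door opens
        System.out.println(box.scan("TRACK-12345")); // prints false: already used
    }
}
```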

This SafeDrop design highlights three advantages:

1. Tracking number barcode as the key to the SafeDrop.
That barcode is tracked during the entire delivery, it stays with the package at all times, and it is fitting to use it as the “key” to open its destination. We do not introduce anything new or additional into the process.

2. Accurate delivery, which eliminates human mistakes.
Human error sometimes causes deliveries to the wrong address. With SafeDrop integrated into the shipping system, the focus is on a package (with tracking number as package ID) going to a target (SafeDrop ID associated with an address).

In a sense, the package (package ID) has its intended target (SafeDrop ID). The package can only be deposited into one and only one SafeDrop, which eliminates the wrong delivery issue.

3. Non-disputable delivery.
This dispute could happen: delivery company says a package has been delivered, but the recipient says that it has not arrived. The possible reasons: a) delivery person didn’t really deliver it; b) delivery person dropped it to a wrong address; c) a thief came by and took it; d) the recipient got it but was making a false claim.

SafeDrop makes things clear! If it is really delivered to SafeDrop, it is being recorded, and delivery company has done its job. If it is in the SafeDrop, the recipient has it. Really there is no dispute.

I will be showing SafeDrop at Modern Supply Chain Experience in San Jose, January 26 and 27, in the Maker Zone in the Solutions Showcase. If you're attending the show, come by and check out SafeDrop.


OBIEE12c – Three Months In, What’s the Verdict?

Rittman Mead Consulting - Sun, 2016-01-24 13:22

I’m over in Redwood Shores, California this week for the BIWA Summit 2016 conference where I’ll be speaking about BI development on Oracle’s database and big data platforms. As it’s around three months since OBIEE12c came out and we were here for Openworld, I thought it’d be a good opportunity to reflect on how OBIEE12c has been received by ourselves, the partner community and of course by customers. Given OBIEE11g was with us for around five years it’s clearly still early days in the overall lifecycle of this release, but it’s also interesting to compare back to where we were around the same time with 11g and see if we can spot any similarities and differences to take-up then.

Starting with the positives; Visual Analyzer (note – not the Data Visualization option, I’ll get to that later) has gone down well with customers, at least over in the UK. The major attraction for customers seems to be “Tableau with a shared governed data model and integrated with the rest of our BI platform” (see Oracle’s slide from Oracle Openworld 2015 below), and seeing as the Data Visualization Option is around the same price as Tableau Server the synergies around not having to deploy and support a new platform seem to outweigh the best-of-breed query functionality of Tableau and Qlikview.


Given that VA is an extra-cost option what I’m seeing is customers planning to upgrade their base OBIEE platform from 11g to 12c as part of their regular platform refresh, and considering the VA part after the main upgrade as part of a separate cost/benefit exercise. VA however seems to be the trigger for customer interest in an upgrade, with the business typically now holding the budget for BI rather than IT and Visual Analyzer (like Mobile with 11g) being the new capability that unlocks the upgrade spend.

On the negative side, Oracle charging for VA hasn't gone down well, either from the customer side, who ask what it is they actually get for their 22% upgrade and maintenance fee if they have to pay for anything new that comes with the upgrade; or from partners, who now see little in the 12c release to incentivise customers to upgrade that's not an additional cost option. My response is usually to point to previous releases – 11g with Scorecards and Mobile, the database with In-Memory, RAC and so on – and say that it's always the case that anything net-new comes at extra cost, whereas the upgrade should be something you do anyway to keep the platform up-to-date and be prepared to uptake new features. My observation over the past month or so is that this objection seems to be going away as people get used to the fact that VA costs extra. The other push-back I get a lot is from IT, who don't want to introduce data mashups into their environment, partly I guess out of fear of the unknown, but also partly because of concerns around governance and how well it'll work in the real world. I'd say overall VA has gone down well, at least once we got past the “shock” of it costing extra. I'd expect there'll be some bundle along the lines of BI Foundation Suite (BI Foundation Suite Plus?) in the next year or so that'll bundle BIF with the DV option, and maybe include some upcoming features in 12c that aren't exposed yet but might round out the release. We'll see.

The other area where OBIEE12c has gone down well, surprisingly well, is with the IT department for the new back-end features. I’ve been telling people that whilst everyone thinks 12c is about the new front-end features (VA, new look-and-feel etc) it’s actually the back-end that has the most changes, and will lead to the most financial and cost-saving benefits to customers – again note the slide below from last year’s Openworld summarising these improvements.


Simplifying install, cloning, dev-to-test and so on will make BI provisioning considerably faster and cheaper to do, whilst the introduction of new concepts such as BI Modules, Service Instances, layered RPD customizations and BAR files paves the way for private cloud-style hosting of multiple BI applications on a single OBIEE12c domain, hybrid cloud deployments and mass customisation of hosted BI environments similar to what we’ve seen with Oracle Database over the past few years.

What’s interesting with 12c at this point though is that these back-end features are only half-deployed within the platform; the lack of a proper RPD upload tool, BI Modules and Services Instances only being in the singular and so on point to a future release where all this functionality gets rounded-off and fully realised in the platform, so where we are now is that 12c seems oddly half-finished and over-complicated for what it is, but it’s what’s coming over the rest of the lifecycle that will make this part of the product most interesting – see the slide below from Openworld 2014 where this full vision was set-out, but in Openworld this year was presumably left-out of the launch slides as the initial release only included the foundation and not the full capability.


Compared back to where we were with OBIEE11g (at the start of that product cycle), which was largely feature-complete but had significant product quality issues, with 12c we've got less of the platform built-out but (with a couple of notable exceptions) generally good product quality; this half-completed nature of the back-end must confuse some customers and partners who aren't really aware of the full vision for the platform.

And finally, cloud; BICS had an update some while ago where it gained Visual Analyzer and data mashups earlier than the on-premise release, and as I covered in my recent UKOUG Tech’15 conference presentation it’s now possible to upload an on-premise RPD (but not the accompanying catalog, yet) and run it in BICS, giving you the benefit of immediate availability of VA and data mashups without having to do a full platform upgrade to 12c.


In practice there are still some significant blockers for customers looking to move their BI platform wholesale into Oracle Cloud: there’s no ability yet to link your on-premise Active Directory or other security setup to BICS, meaning that you need to recreate all your users as Oracle Cloud users, and there’s very limited support for multiple subject areas, access to on-premise data sources and other more “enterprise” characteristics of an Oracle BI platform. And Data Visualisation Cloud Service (DVCS) has so far been a non-event; for partners the question is why we would get involved and sell this given the very low cost and the lack of any area where we can really add value, while for customers it’s perceived as interesting but too limited to be of any real practical use. Of course, over the long term this is the future – I expect on-premise installs of OBIEE will be the exception rather than the rule in 5 or 10 years’ time – but for now Cloud is more “one to monitor for the future” than something to plan for now, as we’re doing with 12c upgrades and new implementations.

So in summary, I’d say with OBIEE12c we were pleased and surprised to see it out so early, and VA in particular has driven a lot of interest and awareness from customers, which has manifested itself in enquiries around upgrades and new-features presentations. The back-end is for me the most interesting new part of the release, promising significant time-saving and quality-improving benefits for the IT department, but at present these benefits are more theoretical than realisable until the full BI Modules/multiple Service Instances feature is rolled out later this year or next. Cloud is still “one for the future”; there’s significant interest from customers in moving part or all of their BI platform to the cloud, but given the enterprise nature of OBIEE it’s likely BI will follow wider adoption of Oracle Cloud at the customer rather than being the trailblazer, given the need to integrate with cloud security and data sources and to wait for some other product enhancements to match on-premise functionality.

The post OBIEE12c – Three Months In, What’s the Verdict? appeared first on Rittman Mead Consulting.

Categories: BI & Warehousing


Jonathan Lewis - Sun, 2016-01-24 05:42

Here’s one of those odd little tricks that (a) may help in a couple of very special cases and (b) may show up at some future date – or maybe it already does – in the optimizer if it is recognised as a solution to a more popular problem. It’s about an apparent restriction on how the optimizer uses the BITMAP MERGE operation, and to demonstrate a very simple case I’ll start with a data set with just one bitmap index:

create table t1
as
with generator as (
        select  --+ materialize
                rownum id
        from dual
        connect by
                level <= 1e4
)
select
        rownum                  id,
        mod(rownum,1000)        n1,
        rpad('x',10,'x')        small_vc,
        rpad('x',100,'x')       padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e6
;

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T1',
                method_opt       => 'for all columns size 1'
        );
end;
/

create bitmap index t1_b1 on t1(n1);

select index_name, leaf_blocks, num_rows from user_indexes;

INDEX_NAME           LEAF_BLOCKS   NUM_ROWS
-------------------- ----------- ----------
T1_B1                        500       1000

Realistically we don’t expect to use a single bitmap index to access data from a large table; usually we expect queries that give the optimizer the option to choose and combine several bitmap indexes (possibly driving through dimension tables first) to reduce the target row set in the table to a cost-effective level.

In this example, though, I’ve created a column data set that many people might view as “inappropriate” as the target for a bitmap index – in one million rows I have one thousand distinct values, it’s not a “low cardinality” column – but, as Richard Foote (among others) has often had to point out, it’s wrong to think that bitmap indexes are only suitable for columns with a very small number of distinct values. Moreover, it’s the only index on the table, so no chance of combining bitmaps.

Another thing to notice about my data set is that the n1 column has been generated by the mod() function; because of this the column cycles through the 1,000 values I’ve allowed for it, and this means that the rows for any given value are scattered widely across the table, but it also means that if I find a row with the value X in it then there could well be a row with the value X+4 (say) in the same block.
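That clustering effect can be sketched in a few lines of Python; note that ROWS_PER_BLOCK is an illustrative guess at the packing of this table, not a measured figure:

```python
# Toy model of the row layout: mod(rownum,1000) makes the column value cycle,
# so a row with n1 = 1 and a row with n1 = 5 are only four rows apart and
# typically land in the same block.

ROWS_PER_BLOCK = 68                      # hypothetical: ~8K block / ~117-byte rows

def block_of(rownum):
    """Block an Oracle rownum would fall into under this toy packing model."""
    return (rownum - 1) // ROWS_PER_BLOCK

first_n1_1 = 1                           # first row where mod(rownum,1000) = 1
first_n1_5 = 5                           # first row where mod(rownum,1000) = 5
print(block_of(first_n1_1) == block_of(first_n1_5))   # True: same block
```

The exact rows-per-block figure doesn’t matter; any packing of more than a handful of rows per block puts values X and X+4 side by side.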

I’ve reported the statistics from user_indexes at the end of the sample code. This shows you that the index holds 1,000 “rows” – i.e. each key value requires only one bitmap entry to cover the whole table, with two rows per leaf block.  (By comparison, a B-tree index on the column was 2,077 leaf blocks uncompressed, or 1,538 leaf blocks when compressed.)

So here’s the query I want to play with, followed by the run-time execution plan with stats (in this case from an instance):

alter session set statistics_level = all;

select
        max(small_vc)
from    t1
where
        n1 in (1,5)
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last outline'));

| Id  | Operation                             | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
|   0 | SELECT STATEMENT                      |       |      1 |        |      1 |00:00:00.03 |    2006 |      4 |
|   1 |  SORT AGGREGATE                       |       |      1 |      1 |      1 |00:00:00.03 |    2006 |      4 |
|   2 |   INLIST ITERATOR                     |       |      1 |        |   2000 |00:00:00.03 |    2006 |      4 |
|   3 |    TABLE ACCESS BY INDEX ROWID BATCHED| T1    |      2 |   2000 |   2000 |00:00:00.02 |    2006 |      4 |
|   4 |     BITMAP CONVERSION TO ROWIDS       |       |      2 |        |   2000 |00:00:00.01 |       6 |      4 |
|*  5 |      BITMAP INDEX SINGLE VALUE        | T1_B1 |      2 |        |      2 |00:00:00.01 |       6 |      4 |

Predicate Information (identified by operation id):
   5 - access(("N1"=1 OR "N1"=5))

The query is selecting 2,000 rows from the table, for n1 = 1 and n1 = 5, and the plan shows us that the optimizer probes the bitmap index twice (operation 5), once for each value, fetching all the rows for n1 = 1, then fetching all the rows for n1 = 5. This entails 2,000 buffer gets. However, we know that for every row where n1 = 1 there is another row nearby (probably in the same block) where n1 = 5 – it would be nice if we could pick up the 1 and the 5 at the same time and do less work.

Technically the optimizer has the necessary facility to do this – it’s known as the BITMAP MERGE – Oracle can read two or more entries from a bitmap index, superimpose the bits (effectively a BITMAP OR), then convert to rowids and visit the table. Unfortunately there are cases (and it seems to be only the simple cases) where this doesn’t appear to be allowed even when we – the users – can see that it might be a very effective strategy. So can we make it happen – and since I’ve asked the question you know that the answer is almost sure to be yes.
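Before looking at the SQL, here’s a toy illustration (plain Python, nothing to do with Oracle’s actual internals) of what superimposing the bits buys us: OR-ing the two bitmaps and walking the merged result yields row positions in table order, interleaving the two values rather than returning all of one value and then all of the other:

```python
def bitmap_or(*bitmaps):
    """OR together equal-length bit lists; return positions of set bits in order."""
    merged = [any(bits) for bits in zip(*bitmaps)]
    return [pos for pos, bit in enumerate(merged) if bit]

n1 = [r % 4 for r in range(12)]           # toy column: values cycle 0..3
bitmap_1 = [v == 1 for v in n1]           # bitmap index entry for n1 = 1
bitmap_3 = [v == 3 for v in n1]           # bitmap index entry for n1 = 3

print(bitmap_or(bitmap_1, bitmap_3))      # [1, 3, 5, 7, 9, 11] - interleaved
```

Because the merged positions come out in table order, each block is visited once for both values instead of once per value.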

Here’s an alternate (messier) SQL statement that achieves the same result:

select
        max(small_vc)
from    t1
where
        n1 in (
                select  /*+ qb_name(subq) semijoin_driver */
                        *
                from    (
                        select 1 from dual
                        union all
                        select 5 from dual
                        )
        )
;
| Id  | Operation                            | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
|   0 | SELECT STATEMENT                     |       |      1 |        |      1 |00:00:00.02 |    1074 |       |       |          |
|   1 |  SORT AGGREGATE                      |       |      1 |      1 |      1 |00:00:00.02 |    1074 |       |       |          |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| T1    |      1 |   2000 |   2000 |00:00:00.02 |    1074 |       |       |          |
|   3 |    BITMAP CONVERSION TO ROWIDS       |       |      1 |        |   2000 |00:00:00.01 |       6 |       |       |          |
|   4 |     BITMAP MERGE                     |       |      1 |        |      1 |00:00:00.01 |       6 |  1024K|   512K| 8192  (0)|
|   5 |      BITMAP KEY ITERATION            |       |      1 |        |      2 |00:00:00.01 |       6 |       |       |          |
|   6 |       VIEW                           |       |      1 |      2 |      2 |00:00:00.01 |       0 |       |       |          |
|   7 |        UNION-ALL                     |       |      1 |        |      2 |00:00:00.01 |       0 |       |       |          |
|   8 |         FAST DUAL                    |       |      1 |      1 |      1 |00:00:00.01 |       0 |       |       |          |
|   9 |         FAST DUAL                    |       |      1 |      1 |      1 |00:00:00.01 |       0 |       |       |          |
|* 10 |       BITMAP INDEX RANGE SCAN        | T1_B1 |      2 |        |      2 |00:00:00.01 |       6 |       |       |          |

Predicate Information (identified by operation id):
  10 - access("N1"="from$_subquery$_002"."1")

Key points from this plan – and I’ll comment on the SQL in a moment: the number of buffer visits is roughly halved (in many cases we picked up two rows as we visited each buffer); operation 4 shows us that we did a BITMAP MERGE, and we can see in operations 5 to 10 that we did a BITMAP KEY ITERATION (which is a bit like a nested loop join – “for each row returned by child 1 (operation 6) we executed child 2 (operation 10)”) to probe the index twice and get two strings of bits that operation 4 could merge before operation 3 converted to rowids.

For a clearer picture of how we visit the table, here are the first few rows and last few rows from a version of the two queries where we simply select the ID column rather than aggregating on the small_vc column:

select  id from ...

Original query structure:

2000 rows selected.

Modified query structure:

2000 rows selected.

As you can see, one query returns all the n1 = 1 rows then all the n1 = 5 rows while the other query alternates as it walks through the merged bitmap. You may recall the Exadata indexing problem (now addressed, of course) from a few years back where the order in which rows were visited after a (B-tree) index range scan made a big difference to performance. This is the same type of issue – when the optimizer’s default plan gets the right data in the wrong order we may be able to find ways of modifying the SQL to visit the data in a more efficient order. In this case we save only fractions of a second because all the data is buffered, but it’s possible that in a production environment with much larger tables many, or all, of the re-visits could turn into physical reads.

Coming back to the SQL, the key to the re-write is to turn my IN-list into a subquery, and then tell the optimizer to use that subquery as a “semijoin driver”. This is essentially the mechanism used by the Star Transformation, where the optimizer rewrites a simple join so that each dimension table (typically) appears twice, first as an IN subquery driving the bitmap selection, then as a “joinback”. But (according to the manuals) a star transformation requires at least two dimension tables to be involved in a join to the central fact table – and that may be why the semi-join approach is not considered in this (and slightly more complex) case.



My reference: bitmap_merge.sql, star_hack3.sql

Trace Files -- 10c : Query and DML (INSERT)

Hemant K Chitale - Sun, 2016-01-24 04:52
In the previous posts, I have traced either queries (SELECTs) or DML statements.

I have pointed out that the block statistics are reported as “FETCH” statistics for SELECTs and as “EXECUTE” statistics for DMLs.

What if we have an INSERT ... AS SELECT ?

SQL ID: 5pf0puy1pmcvc Plan Hash: 2393429040

insert into all_objects_many_list select * from all_objects_short_list

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.01       0.01          0          0          0          0
Execute      1      0.73      16.19        166       1655      14104      28114
Fetch        0      0.00       0.00          0          0          0          0
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        2      0.74      16.21        166       1655      14104      28114

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 87
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
0 0 0 LOAD TABLE CONVENTIONAL (cr=1766 pr=166 pw=0 time=16204380 us)
28114 28114 28114 TABLE ACCESS FULL ALL_OBJECTS_SHORT_LIST (cr=579 pr=0 pw=0 time=127463 us cost=158 size=2614881 card=28117)

insert into all_objects_many_list
select * from all_objects_short_list
where rownum < 11

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.00       0.00          0          0          0          0
Execute      1      0.00       0.00          0          6         11         10
Fetch        0      0.00       0.00          0          0          0          0
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        2      0.00       0.00          0          6         11         10

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 87
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
0 0 0 LOAD TABLE CONVENTIONAL (cr=6 pr=0 pw=0 time=494 us)
10 10 10 COUNT STOPKEY (cr=3 pr=0 pw=0 time=167 us)
10 10 10 TABLE ACCESS FULL ALL_OBJECTS_SHORT_LIST (cr=3 pr=0 pw=0 time=83 us cost=158 size=2614881 card=28117)

Since it is an INSERT statement, the block statistics are reported against the EXECUTE (nothing is reported against the FETCH), even though the Row Source Operations section shows a Table Access (i.e. SELECT) against the ALL_OBJECTS_SHORT_LIST table. Also note, as we have seen in the previous trace on INSERTs, that the target table does not get reported in the Row Source Operations.

Here's another example.

SQL ID: 5fgnrpgk3uumc Plan Hash: 2703984749

insert into all_objects_many_list select * from dba_objects

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.04       0.25          0          0          0          0
Execute      1      0.63       3.94         82       2622      13386      28125
Fetch        0      0.00       0.00          0          0          0          0
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        2      0.67       4.20         82       2622      13386      28125

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 87
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
0 0 0 LOAD TABLE CONVENTIONAL (cr=2736 pr=82 pw=0 time=3954384 us)
28125 28125 28125 VIEW DBA_OBJECTS (cr=360 pr=2 pw=0 time=1898588 us cost=105 size=5820219 card=28117)
28125 28125 28125 UNION-ALL (cr=360 pr=2 pw=0 time=1735711 us)
0 0 0 TABLE ACCESS BY INDEX ROWID SUM$ (cr=2 pr=2 pw=0 time=65154 us cost=1 size=11 card=1)
1 1 1 INDEX UNIQUE SCAN I_SUM$_1 (cr=1 pr=1 pw=0 time=61664 us cost=0 size=0 card=1)(object id 986)
0 0 0 TABLE ACCESS BY INDEX ROWID OBJ$ (cr=0 pr=0 pw=0 time=0 us cost=3 size=25 card=1)
0 0 0 INDEX RANGE SCAN I_OBJ1 (cr=0 pr=0 pw=0 time=0 us cost=2 size=0 card=1)(object id 36)
28124 28124 28124 FILTER (cr=354 pr=0 pw=0 time=1094905 us)
28124 28124 28124 HASH JOIN (cr=354 pr=0 pw=0 time=911322 us cost=102 size=3402036 card=28116)
68 68 68 TABLE ACCESS FULL USER$ (cr=5 pr=0 pw=0 time=328 us cost=3 size=1224 card=68)
28124 28124 28124 HASH JOIN (cr=349 pr=0 pw=0 time=612040 us cost=99 size=2895948 card=28116)
68 68 68 INDEX FULL SCAN I_USER2 (cr=1 pr=0 pw=0 time=159 us cost=1 size=1496 card=68)(object id 47)
28124 28124 28124 TABLE ACCESS FULL OBJ$ (cr=348 pr=0 pw=0 time=147446 us cost=98 size=2277396 card=28116)
0 0 0 NESTED LOOPS (cr=0 pr=0 pw=0 time=0 us cost=2 size=30 card=1)
0 0 0 INDEX SKIP SCAN I_USER2 (cr=0 pr=0 pw=0 time=0 us cost=1 size=20 card=1)(object id 47)
0 0 0 INDEX RANGE SCAN I_OBJ4 (cr=0 pr=0 pw=0 time=0 us cost=1 size=10 card=1)(object id 39)
1 1 1 NESTED LOOPS (cr=4 pr=0 pw=0 time=262 us cost=3 size=38 card=1)
1 1 1 TABLE ACCESS FULL LINK$ (cr=2 pr=0 pw=0 time=180 us cost=2 size=20 card=1)
1 1 1 TABLE ACCESS CLUSTER USER$ (cr=2 pr=0 pw=0 time=50 us cost=1 size=18 card=1)
1 1 1 INDEX UNIQUE SCAN I_USER# (cr=1 pr=0 pw=0 time=21 us cost=0 size=0 card=1)(object id 11)

Only the source tables are reported in the Row Source Operations section. All the blocks are reported in the EXECUTE phase. Why does the “query” count in the EXECUTE statistics differ from the “cr” count reported for the LOAD TABLE CONVENTIONAL? (LOAD TABLE CONVENTIONAL indicates a regular, non-direct-path INSERT.)

Categories: DBA Blogs

Database Change Notification Listener Implementation

Andrejus Baranovski - Sat, 2016-01-23 08:43
Oracle DB can notify a client when table data changes. Data could be changed by a third-party process or by a different client session. Notification support is important when we want to inform the client about changes in the data without manually re-querying. I had a post about ADS (Active Data Service), where notifications were received from the DB through a change notification listener - Practical Example for ADF Active Data Service. Now I would like to focus on the change notification listener itself, because it can be used in ADF not only with ADS, but also with WebSockets. ADF BC uses change notification to reload a VO when configured for auto refresh - Auto Refresh for ADF BC Cached LOV.

Change notification is initiated by the DB and handled by a JDBC listener. There are two important steps for such a listener - start and stop. The sample application implements the servlet DBNotificationManagerImpl:

I'm starting and stopping the DB change listener in the servlet init and destroy methods:

The DB listener initialization code is straightforward. Data changes are handled in the onDatabaseChangeNotification method. The listener is registered against the DB table accessed by the initial query:

In my example I'm listening for changes in the Employees table. Each time a change happens, the query is re-executed to calculate new average salary values for jobs:

On first load, when the servlet is initialized, the listener is registered. The initial data read happens and the listener is reported as started:

I can change a value in the DB and commit it:

The DB change listener is notified and the query is executed to fetch the new averages. The value for SA_REP has changed:
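Putting the pieces together, the listener lifecycle can be sketched language-neutrally (here in Python; ChangeNotifier and its methods are illustrative stand-ins for the Oracle JDBC change-notification registration calls, not the real driver API):

```python
class ChangeNotifier:
    """Stand-in for DB change notification registration (illustrative only)."""
    def __init__(self):
        self.listeners = []

    def register(self, callback):        # done in servlet init(): start listener
        self.listeners.append(callback)

    def unregister(self, callback):      # done in servlet destroy(): stop listener
        self.listeners.remove(callback)

    def table_changed(self, table):      # DB fires a notification on commit
        for callback in self.listeners:
            callback(table)

refreshed = []

def on_database_change(table):
    # Mirrors onDatabaseChangeNotification: re-run the average-salary
    # query for the changed table instead of polling.
    refreshed.append(table)

notifier = ChangeNotifier()
notifier.register(on_database_change)    # start: init()
notifier.table_changed("EMPLOYEES")      # another session commits a change
notifier.unregister(on_database_change)  # stop: destroy()
print(refreshed)                         # ['EMPLOYEES']
```

The point of the structure is that the re-query runs only when the DB pushes a change, and the registration is torn down cleanly when the servlet is destroyed.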

I hope this makes it clearer how to use the DB change listener. In future posts I will describe how to push changes to JET through WebSockets.

Packt - Time to learn Oracle and Linux

Surachart Opun - Sat, 2016-01-23 00:01
What is your resolution for learning? Learn Oracle, learn Linux, or both? It's good news for people who are interested in improving their Oracle and Linux skills: Packt is offering a 50% discount on eBooks & videos from today until 23rd Feb, 2016.

Oracle (Code: XM6lxr0)
Linux (Code: ILYTW)

Written By: Surachart Opun
Categories: DBA Blogs

Formatting a Download Link

Scott Spendolini - Fri, 2016-01-22 14:38
Providing file upload and download capabilities has been native functionality in APEX for a couple of major releases now. In 5.0, it's even more streamlined and 100% declarative.
In the interest of saving screen real estate, I wanted to represent the download link in an IR with an icon - specifically fa-download. This is a simple task to achieve - edit the column and set the Download Text to this:
<i class="fa fa-lg fa-download"></i>
The fa-lg will make the icon a bit larger, and is not required. Now, instead of a "download" link, you'll see the icon rendered in each row. Clicking on the icon will download the corresponding file. However, when you hover over the icon, instead of getting meaningful descriptive text, the tooltip that displays is unhelpful.
Clearly not optimal, and very uninformative. Let's fix this with a quick Dynamic Action. I placed mine on the global page, as this application has several places where it can download files. You can do the same or simply put it on the page that needs it.
The Dynamic Action will fire on Page Load, and has one true action - a small JavaScript snippet:
$(".fa-download").prop('title','Download File');
This will find any instance of fa-download and set its title to the text "Download File", which is what now appears on hover.
If you're using a different icon for your download, or want it to say something different, then be sure to alter the code accordingly.

Blackboard Did What It Said It Would Do. Eventually.

Michael Feldstein - Fri, 2016-01-22 10:21

By Michael Feldstein

Today we have a prime example of how Blackboard has been failing by not succeeding fast enough. The company issued a press release announcing “availability of new SaaS offerings.” After last year’s BbWorld, I wrote a post about how badly the company was communicating with its customers about important issues. One of the examples I cited was the confusion around their new SaaS offerings versus managed hosting:

What is “Premium SaaS”? Is it managed hosting? Is it private cloud? What does it mean for current managed hosting customers? What we have found is that there doesn’t seem to be complete shared understanding even among the Blackboard management team about what the answers to these questions are.

A week later (as I wrote at the time), the company acted to clarify the situation. We got some documentation on what the forthcoming SaaS tiers would look like and how they related to existing managed hosting options. Good on them for responding quickly and appropriately to criticism.

Now, half a year after the announcement, the company has released said SaaS offerings. Along with it, they put out an FAQ and a comparison of the tiers. So they said what they were going to do, they did it, and they said what they did. All good. But half a year later?

In my recent post about Blackboard’s new CEO, I wrote,

Ballhaus inherits a company with a number of problems. Their customers are increasingly unhappy with the support they are getting on the current platform, unclear about how they will be affected by future development plans, and unconvinced that Blackboard will deliver a next-generation product in the near future that will be a compelling alternative to the competitors in the market. Schools going out to market for an LMS seem less and less likely to take Blackboard seriously as a contender, which is particularly bad news since a significant proportion of those schools are currently Blackboard schools. The losses have been incremental so far, but it feels like we are at an inflection point. The dam is leaking, and it could burst.

Tick-tock tick-tock.

The post Blackboard Did What It Said It Would Do. Eventually. appeared first on e-Literate.

[Cloud | On-Premise | Outsourcing | In-Sourcing] and why you will fail!

Tim Hall - Fri, 2016-01-22 07:47

I was reading this article about UK government in-sourcing all the work they previously outsourced.

This could be a story about any one of a number of failed outsourcing or cloud migration projects I’ve read about over the years. They all follow the same pattern.

  • The company is having an internal problem that they don’t know how to solve. It could be related to costs, productivity, a paradigm shift in business practices or just an existing internal project that is failing.
  • They decide to launch down a path of outsourcing or cloud migration with unrealistic expectations of what they can achieve and no real idea about what benefits they will get, other than what Gartner told them.
  • When it doesn’t go to plan, they blame the outsourcing company, the cloud provider, the business analysts, Gartner, terrorists etc. Notably, the only thing that doesn’t get linked to the failure is themselves.

You might have heard this saying,

“You can’t outsource a problem!”

Just hoping to push your problems on to someone else is a guaranteed fail. If you can’t clearly articulate what you want and understand the consequences of your choices, how will you ever get a result you are happy with?

Over the years we’ve seen a number of high profile consultancies get kicked off government projects. The replacement consultancy comes in, hires all the same staff that failed last time, then continue on the failure train. I’m not going to mention names, but if you have paid any attention to UK government IT projects over the last decade you will know who and what I mean.

Every time you hear someone complaining about failing projects or problems with a specific model (cloud, on-premise, outsourcing, in-sourcing), it’s worth taking a step back and asking yourself where the problem really is. It’s much easier to blame other people than admit you’re part of the problem! These sayings spring to mind.

“Garbage in, garbage out!”

“A bad workman blames his tools!”



PS. I’ve never done anything wrong. It’s the rest of the world that is to blame… :)

Update: I wasn’t suggesting this is only an issue in public sector projects. It just so happens this rant was sparked by a story about public sector stuff. :)

[Cloud | On-Premise | Outsourcing | In-Sourcing] and why you will fail! was first posted on January 22, 2016 at 2:47 pm.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

FREE Live Demo class with Hands-On : Oracle Access Manager : Sunday

Online Apps DBA - Fri, 2016-01-22 04:42
This entry is part 5 of 5 in the series Oracle Access Manager


This Sunday 24th Jan at 7:30 AM PST / 10:30 AM EST / 3:30 PM GMT / 9 PM IST I'll be running a FREE demo class on Oracle Access Manager, with hands-on exercises, for my upcoming Oracle Access Manager course.

But before I jump into this FREE demo class with some hands-on exercises, I need your help!

I’m starting this class just for you… that’s why I’ve decided to ask for your suggestions.

I want to get a crystal clear idea of where you need help with Oracle Access Manager.

  • What topics would you like me to cover in the class?
  • Which topics do you feel you are lacking in and would like to know more about?

If I know exactly what kind of struggles you have with Oracle Access Manager and its integration with Oracle Applications, I can tailor this class specifically to what you need right now.

Would you be so kind as to take 5 minutes to give me your suggestions?

Just use the button below to reserve your seat for the demo class on Oracle Access Manager, and let me know what you want me to cover when you register.

Click Here to Join Class

If you do, I’ve got something for you in return! If you provide your valuable suggestions, I’ll be very glad to give you an extra discount on my Oracle Access Manager training, which starts on 31st Jan.

You’ll have first-round access to this demo class, which will give you a sneak peek. It will fill up fast, but you will be guaranteed a seat!

Thanks so much for helping me make this class amazing. I can’t wait to show it to you!

Click here  to get your premium access to January’s free Demo class.

Don’t forget to share this post if you think it could be useful to others, and subscribe to this blog for more such FREE webinars and useful Oracle content.

The post FREE Live Demo class with Hands-On : Oracle Access Manager : Sunday appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs