Feed aggregator

Get Proactive - Follow the Oracle Support Events Calendar

Joshua Solomin - Mon, 2015-11-30 16:36
See Upcoming Support Events with the Get Proactive Events Calendar
A web application that automatically tracks Advisor Webcasts, newsletter releases, and Support training events, and synchronizes events you select into your own calendar.

Oracle Support sponsors a variety of activities (like our popular Advisor Webcasts) to help customers work more effectively with their Oracle products. Follow our Event Calendar to stay up to date on upcoming Webcasts and events.

The web app allows you to filter activities by product line, making it easier to see the most relevant items. As new events are added to the schedule, the calendar updates automatically to include sessions, dates, and times. For consistency, displayed times automatically adjust to your time zone.

The calendar is built using the standard iCalendar format, so you can integrate the calendar data directly into Outlook and Thunderbird. Follow the instructions below to set up your integration and take advantage of it.

[Image: Get Proactive Events Calendar – click the image to visit the app]
Calendar Integration
  1. Go to the calendar link here.
  2. Follow the instructions on the page to add the calendar to your email/calendar client.

We've written a brief document detailing some of the features for the calendar. Visit Document 125716.1 to find out more.

Watch featured OTN Virtual Technology Summit Replay Sessions

OTN TechBlog - Mon, 2015-11-30 16:08

Today we are featuring a session from each OTN Virtual Technology Summit Replay Group.  See session titles and abstracts below.  Watch right away and then join the group to interact with other community members and stay up to date on when NEW content is coming!

Best Practices for Migrating On-Premises Databases to the Cloud

By Leighton Nelson, Oracle ACE
Oracle Multitenant is helping organizations reduce IT costs by simplifying database consolidation, provisioning, upgrades, and more. Now you can combine the advantages of multitenant databases with the benefits of the cloud by leveraging Database as a Service (DBaaS). In this session, you’ll learn about key best practices for moving your databases from on-premises environments to the Oracle Database Cloud and back again.

What's New for Oracle and .NET - (Part 1)
By Alex Keh, Senior Principal Product Manager, Oracle
With the release of ODAC 12c Release 4 and Oracle Database 12c, .NET developers have many more features to increase productivity and ease development. These sessions explore new features introduced in recent releases with code and tool demonstrations using Visual Studio 2015.

Docker for Java Developers
By Roland Huss, Principal Software Engineer at Red Hat
Docker, the OS-level virtualization platform, is taking the IT world by storm. In this session, we will see what features Docker has for us Java developers. It is now possible to create truly isolated, self-contained and robust integration tests in which external dependencies are realized as Docker containers. Docker also changes the way we ship applications in that we are not only deploying application artifacts like WARs or EARs but also their execution contexts. Besides elaborating on these concepts and more, this presentation will focus on how Docker can best be integrated into the Java build process by introducing a dedicated Docker Maven plugin which is shown in a live demo.

Debugging Weblogic Authentication
By Maarten Smeets, Senior Oracle SOA / ADF Developer, AMIS
Enterprises often centrally manage login information and group memberships (identity). Many systems use this information to achieve Single Sign On (SSO) functionality, for example. Surprisingly, access to the Weblogic Server Console is often not centrally managed. This video explains why centralizing management of these identities not only increases security, but can also reduce operational cost and even increase developer productivity. The video demonstrates several methods for debugging authentication using an external LDAP server in order to lower the bar to apply this pattern. This technically-oriented presentation will be especially useful for people working in operations who are responsible for managing Weblogic Servers.

Designing a Multi-Layered Security Strategy
By Glenn Brunette, Cybersecurity, Oracle Public Sector, Oracle
Security is a concern of every IT manager and it is clear that perimeter defense, trying to keep hackers out of your network, is not enough. At some point someone with bad intentions will penetrate your network and to prevent significant damage it is necessary to make sure there are multiple layers of defense. Hear about Oracle’s defense in depth for data centers including some new and unique security features built into the new SPARC M7 processor.

Licensing Cloud Control

Laurent Schneider - Mon, 2015-11-30 12:08

I just read the Enterprise Manager Licensing Information User Manual today. There are a lot of packs there, and you may not even know that autodiscovering targets is part of the lifecycle management pack or that blackouts are part of the diagnostic pack.

Have a look

RAM is the new disk – and how to measure its performance – Part 3 – CPU Instructions & Cycles

Tanel Poder - Mon, 2015-11-30 00:45

If you haven’t read the previous parts of this series yet, here are the links: [ Part 1 | Part 2 ].

A Refresher

In the first part of this series I said that RAM access is the slow component of a modern in-memory database engine, and that for performance you’d want to reduce RAM access as much as possible. Reduced memory traffic thanks to the new columnar data formats is the most important enabler for the awesome In-Memory processing performance, and SIMD is just icing on the cake.

In the second part I also showed how to measure the CPU efficiency of your (Oracle) process using the Linux perf stat command. How well your applications actually utilize your CPU execution units depends on many factors. The biggest factor is your process’s cache efficiency, which depends on the CPU cache size and your application’s memory access patterns. Regardless of what OS CPU accounting tools like top or vmstat may show you, your “100% busy” CPUs may actually spend a significant amount of their cycles internally idle, with a stalled pipeline, waiting for some event (like a memory line arriving from RAM) to happen.

Luckily there are plenty of tools for measuring what’s actually going on inside the CPUs, thanks to modern processors having CPU Performance Counters (CPC) built in to them.
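For reference, the kind of invocation involved looks roughly like this (a sketch only – the PID comes from the snippet below and the exact command line is covered in part 2):

perf stat -e task-clock,cycles,instructions,stalled-cycles-frontend,stalled-cycles-backend,cache-references,cache-misses -p 34783 sleep 30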

A key derived metric for understanding CPU efficiency is IPC (instructions per cycle). Years ago people were actually talking about the inverse metric, CPI (cycles per instruction), as on average it took more than one CPU cycle to complete an instruction’s execution (again, due to the above-mentioned reasons like memory stalls). However, thanks to today’s superscalar processors with out-of-order execution across a modern CPU’s multiple execution units – and with large CPU caches – a well-optimized application can execute multiple instructions per CPU cycle, so it’s more natural to use the IPC (instructions per cycle) metric. With IPC, higher is better.

Here’s a trimmed snippet from the previous article, a process that was doing a fully cached full table scan of an Oracle table (stored in plain old row-oriented format):

Performance counter stats for process id '34783':

      27373.819908 task-clock                #    0.912 CPUs utilized
    86,428,653,040 cycles                    #    3.157 GHz                     [33.33%]
    32,115,412,877 instructions              #    0.37  insns per cycle
                                             #    2.39  stalled cycles per insn [40.00%]
    76,697,049,420 stalled-cycles-frontend   #   88.74% frontend cycles idle    [40.00%]
    58,627,393,395 stalled-cycles-backend    #   67.83% backend  cycles idle    [40.00%]
       256,440,384 cache-references          #    9.368 M/sec                   [26.67%]
       222,036,981 cache-misses              #   86.584 % of all cache refs     [26.66%]

      30.000601214 seconds time elapsed

The IPC of the above task is pretty bad – the CPU managed to complete only 0.37 instructions per CPU cycle. On average every instruction execution was stalled in the execution pipeline for 2.39 CPU cycles.
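Both derived figures fall straight out of the raw counters above:

IPC                     = instructions / cycles
                        = 32,115,412,877 / 86,428,653,040 ≈ 0.37
stalled cycles per insn = stalled-cycles-frontend / instructions
                        = 76,697,049,420 / 32,115,412,877 ≈ 2.39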

Note: Various additional metrics can be used for drilling down into why the CPUs spent so much time stalling (like cache misses & RAM access). I covered the typical perf stat metrics in part 2 of this series so won’t go into more detail here.

Test Scenarios

The goal of my experiments was to measure the CPU efficiency of different data scanning approaches in Oracle – on different data storage formats. I focused only on data scanning and filtering, not joins or aggregations. I ensured that everything would be cached in Oracle’s buffer cache or in-memory column store for all test runs – so disk IO was not a factor here (again, read more about my test environment setup in part 2 of this series).
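For illustration, a minimal sketch of the kind of setup this involves (the actual steps and sizing are in part 2; the table name is from the tests below and the compression clause is an assumption):

ALTER TABLE customers_nopart INMEMORY MEMCOMPRESS FOR QUERY LOW;

-- scan once to warm the buffer cache and trigger in-memory population
SELECT /*+ FULL(c) */ COUNT(*) FROM customers_nopart c;

-- check that the segment is fully populated in the IM column store
SELECT segment_name, populate_status, bytes_not_populated FROM v$im_segments;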

The queries I ran were mostly variations of this:

SELECT COUNT(cust_valid) FROM customers_nopart c WHERE cust_id > 0

Although I was after testing the full table scanning speeds, I also added two examples of scanning through the entire table’s rows via index range scans. This allows me to show how inefficient index range scans can be when accessing a large part of a table’s rows, even when everything is cached in memory. Even though you see different WHERE clauses in some of the tests, they are all designed so that they go through all rows of the table (just using different access patterns and code paths).

The descriptions of test runs should be self-explanatory:

1. INDEX RANGE SCAN BAD CLUSTERING FACTOR

SELECT /*+ MONITOR INDEX(c(cust_postal_code)) */ COUNT(cust_valid)
FROM customers_nopart c WHERE cust_postal_code > '0';

2. INDEX RANGE SCAN GOOD CLUSTERING FACTOR

SELECT /*+ MONITOR INDEX(c(cust_id)) */ COUNT(cust_valid)
FROM customers_nopart c WHERE cust_id > 0;

3. FULL TABLE SCAN BUFFER CACHE (NO INMEMORY)

SELECT /*+ MONITOR FULL(c) NO_INMEMORY */ COUNT(cust_valid) 
FROM customers_nopart c WHERE cust_id > 0;

4. FULL TABLE SCAN IN MEMORY WITH WHERE cust_id > 0

SELECT /*+ MONITOR FULL(c) INMEMORY */ COUNT(cust_valid) 
FROM customers_nopart c WHERE cust_id > 0;

5. FULL TABLE SCAN IN MEMORY WITHOUT WHERE CLAUSE

SELECT /*+ MONITOR FULL(c) INMEMORY */ COUNT(cust_valid) 
FROM customers_nopart c;

6. FULL TABLE SCAN VIA BUFFER CACHE OF HCC QUERY LOW COLUMNAR-COMPRESSED TABLE

SELECT /*+ MONITOR */ COUNT(cust_valid) 
FROM customers_nopart_hcc_ql WHERE cust_id > 0

Note how all experiments except the last one are scanning the same physical table just with different options (like index scan or in-memory access path) enabled. The last experiment is against a copy of the same table (same columns, same rows), but just physically formatted in the HCC format (and fully cached in buffer cache).

Test Results: Raw Numbers

It is not enough to just look at the CPU performance counters of the different experiments – they are too low level. For the full picture, we also want to know how much work (like logical IOs etc.) the application was doing and how many rows were eventually processed in each case. I also verified, using the usual Oracle metrics, that I got exactly the desired execution plans and access paths and that no physical IOs or other wait events occurred (see the log below).
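As an illustration, the sanity check boils down to something like the query below against the test session (a sketch only – Snapper automates this, and :sid is the test session’s SID):

SELECT sn.name, ss.value
FROM   v$sesstat ss JOIN v$statname sn ON sn.statistic# = ss.statistic#
WHERE  ss.sid = :sid
AND    sn.name IN ('session logical reads', 'physical reads', 'buffer is pinned count');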

Here’s the experiment log file with full performance numbers from SQL Monitoring reports, Snapper and perf stat:

I also put all these numbers (plus some derived values) into a spreadsheet. I’ve pasted a screenshot of the data below for convenience, but you can access the entire spreadsheet with its raw data and charts here (note that the spreadsheet has multiple tabs and configurable pivot charts in it):

Raw perf stat data from the experiments:

[Screenshot: raw perf stat data from the experiments]

Now let’s plot some charts!

Test Results: CPU Instructions

Let’s start from something simple and gradually work our way deeper. I will start from listing the task-clock-ms metric that shows the CPU time usage of the Oracle process in milliseconds for each of my test table scans. This metric comes from the OS-level and not from within the CPU:

[Chart: CPU time used for scanning the dataset (task-clock, in milliseconds)]

As I mentioned earlier, I added two index (full) range scan based approaches for comparison. Looks like the index-based “full table scans” seen in the first and second columns use the most CPU time as the OS sees it (~120 and close to 40 seconds of CPU respectively).

Now let’s see how many CPU instructions (how much work “requested” from CPU) the Oracle process executed for scanning the same dataset using different access paths and storage formats:

[Chart: CPU instructions executed for scanning the dataset]

Wow, the index-based approaches seem to be issuing multiple times more CPU instructions per query execution than any of the full table scans. Whatever loops the Oracle process is executing for processing the index-based query, it runs more of them. Or whatever functions it calls within those loops, the functions are “fatter”. Or both.

Let’s look at the Oracle-level metric session logical reads to see how many buffer gets it is doing:

[Chart: buffer gets done for a table scan]

Wow, using the index with the bad clustering factor (1st bar) causes Oracle to do over 60M logical IOs, while the table scans do around 1.6M logical IOs each. Retrieving all rows of a table via an index range scan is super-inefficient, given that the underlying table size is only 1,613,824 blocks.

This inefficiency is due to index range scans having to re-visit the same datablocks multiple times (up to one visit per row, depending on the clustering factor of the index used). Each re-visit costs another logical IO and more CPU cycles, except in cases where Oracle has managed to keep the buffer pinned since the last visit. The index range scan with a good clustering factor needs to do far fewer logical IOs because, with its more “local” clustered table access pattern, the re-visited buffers are much more likely to be found already looked-up and pinned (shown as the buffer is pinned count metric in V$SESSTAT).

Knowing that my test table has 69,642,625 rows in it, I can also derive an average CPU instructions per row processed metric from the total instruction amounts:

[Chart: average CPU instructions per row processed]

The same numbers in tabular form:

[Screenshot: the same numbers in tabular form]

Indeed there seem to be radical code path differences (that come from underlying data and cache structure differences) that make an index-based lookup use thousands of instructions per row processed, while an in-memory scan with a single predicate used only 102 instructions per row processed on average. The in-memory counting without any predicates didn’t need to execute any data comparison logic in it, so could do its data access and counting with only 43 instructions per row on average.
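The derivation itself is just a division; for example, applying it to the buffered row-format scan from the perf snippet earlier in this post (assuming that run was scanning the same 69,642,625-row table):

instructions per row = total instructions / rows processed
                     = 32,115,412,877 / 69,642,625 ≈ 461 instructions per row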

So far I’ve shown you some basic stuff. As this article is about studying the full table scan efficiency, I will omit the index-access metrics from further charts. The raw metrics are all available in the raw text file and spreadsheet mentioned above.

Here are again the buffer gets of only the four different full table scan test cases:

[Chart: buffer gets done for full table scans]

All test cases except the HCC-compressed table scan cause the same amount of buffer gets (~1.6M), as this is the original table’s size in blocks. The HCC table is only slightly smaller – I didn’t get great compression with the query low setting.

Now let’s check the number of CPU instructions executed by these test runs:

[Chart: CPU instructions executed for full table scans]

Wow, despite the table sizes and number of logical IOs being relatively similar, the amount of machine code the Oracle process executes is wildly different! Remember, all my query is doing is scanning and filtering the data, followed by a basic COUNT(column) operation – no additional sorting or joining is done. The in-memory access paths (columns 3 & 4) get away with executing much fewer CPU instructions than the regular buffered tables in row format and HCC format (columns 1 & 2 in the chart).

All the above shows that not all logical IOs are equal. Depending on your workload and execution plans (how many block visits, how many rows extracted per block visit) and underlying storage formats (regular row format, HCC in the buffer cache, or compressed columns in the In-Memory column store), you may end up doing a different amount of CPU work per row retrieved for your query.

This was true before the In-Memory option and even more noticeable with the In-Memory option. But more about this in a future article.

Test Results: CPU Cycles

Let’s go deeper. We already looked at how many buffer gets and CPU instructions the process executed for the different test cases. Now let’s look at how much actual CPU time (in the form of CPU cycles) these tests consumed. I added the CPU cycles metric alongside instructions for that:

[Chart: CPU instructions and cycles used for full table scans]

Hey, what? How come the regular row-oriented block format table scan (TABLE BUFCACHE) uses more than twice as many CPU cycles as the instructions it executed?

Also, how come all the other table access methods use noticeably fewer CPU cycles than the number of instructions they’ve executed?

If you paid attention to this article (and the previous ones) you’ll already know why. In the 1st example (TABLE BUFCACHE) the CPU must have been “waiting” for something a lot, with instructions spending multiple cycles “idle”, stalled in the pipeline, waiting for some event or necessary condition to happen (like a memory line arriving from RAM).

For example, if you are constantly waiting for the “random” RAM lines you want to access due to inefficient memory structures for scanning (like Oracle’s row-oriented datablocks), the CPU will be bottlenecked by RAM access. The CPU’s internal execution units, other than the load-store units, would be idle most of the time. The OS top command would still show you 100% utilization of a CPU by your process, but in reality you could squeeze much more out of your CPU if it didn’t have to wait for RAM so much.

In the other 3 examples above (columns 2-4), apparently there is no serious RAM (or other pipeline-stalling) bottleneck as in all cases we are able to use the multiple execution units of modern superscalar CPUs to complete more than one instruction per CPU cycle. Of course more improvements might be possible, but more about this in a following post.

For now I’ll conclude this (lengthy) post with one more chart with the fundamental derived metric instructions per cycle (IPC):

[Chart: instructions per cycle (IPC) for full table scans]

The IPC metric is derived from the previously shown instructions and CPU cycles metrics by a simple division. Higher IPC is better as it means that your CPU execution units are better utilized and get more done. However, as IPC is a ratio, you should never look at the IPC value alone; always look at it together with the instructions and cycles metrics. It’s better to execute 1 million instructions with an IPC of 0.5 than 1 billion instructions with an IPC of 3 – looking at IPC in isolation doesn’t tell you how much work was actually done. Additionally, you’d want to use your application-level metrics that give you an indication of how much application work got done (I used Oracle’s buffer gets and rows processed metrics for this).

Looks like there are at least two more parts left in this series (advanced metrics and a summary), but let’s see how it goes. Sorry for any typos, it’s getting quite late and I’ll fix ’em some other day :)

 

NB! If you want to move to the "New World" - offload your data and workloads to Hadoop, without having to re-write your existing applications - check out Gluent. We are making history! ;-)

Learn About Hyperion & Oracle BI... 5 Minutes at a Time

Look Smarter Than You Are - Fri, 2015-11-27 14:13
Since early 2015, we've been trying to figure out how to help educate more people around the world on Oracle BI and Oracle EPM. Back in 2006, interRel launched a webcast series that started out once every two weeks and then rapidly progressed to 2-3 times per week. We presented over 125 webcasts last year to 5,000+ people from our customers, prospective customers, Oracle employees, and our competitors.

In 2007, we launched our first book and in the last 8 years, we've released over 10 books on Essbase, Planning, Smart View, Essbase Studio, and more. (We even wrote a few books we didn't get to publish on Financial Reporting and the dearly departed Web Analysis.) In 2009, we started doing free day-long, multi-track conferences across North America and participating in OTN tours around the world. We've also been trying to speak at as many user groups and conferences as we can possibly fit in. Side note, if you haven't signed up for Kscope16 yet, it's the greatest conference ever: go to kscope16.com and register (make sure you use code IRC at registration to take $100 off each person's costs).

We've been trying to innovate our education offerings since then to make sure there were as many happy Hyperion, OBIEE, and Essbase customers around the world as possible. Since we started webcasts, books, and free training days, others have started doing them too which is awesome in that it shares the Oracle Business Analytics message with even more people.

The problem is that the time we have for learning and the way we learn has changed. We can no longer take the time to sit and read an entire book. We can't schedule an hour a week at a specific time to watch an hour webcast when we might only be interested in a few minutes of the content. We can't always take days out of our lives to attend conferences no matter how good they are.  So in June 2015 at Kscope15, we launched the next evolution in training (epm.bi/videos):


#PlayItForward is our attempt to make it easier for people to learn by making it into a series of free videos.  Each one focuses on a single topic. Here's one I did that attempts to explain What Is Big Data? in under 12 minutes:

As you can see from the video, the goal is to teach you a specific topic with marketing kept to an absolute minimum (notice that there's not a single slide in there explaining what interRel is). We figure if we remove the marketing, people will not only be more likely to watch the videos but share them as well (competitors: please feel free to watch, learn, and share too). We wanted to get to the point and not teach multiple things in each video.

Various people from interRel have recorded videos in several different categories including What's New (new features in the new versions of various products), What Is? (introductions to various products), Tips & Tricks, deep-dive series (topics that take a few videos to cover completely), random things we think are interesting, and my personal pet project, the Essbase Technical Reference.
Essbase Technical Reference on Video

Yes, I'm trying to convert the Essbase Technical Reference into current, easy-to-use videos. This is a labor of love (there are hundreds of videos to be made on just Essbase calc functions alone) and I needed to start somewhere. For the most part, I'm focusing on Essbase Calc Script functions and commands first, because that's where I get the most questions (and where some of the examples in the TechRef are especially horrendous). I've done a few Essbase.CFG settings that are relevant to calculations and a few others I just find interesting. I'm not the only one at interRel doing them, because if we waited for me to finish, well, we'd never finish. The good news is that there are lots of people at interRel who learned things and want to pass them on.

I started by doing the big ones (like CALC DIM and AGG) but then decided to tackle a specific function category: the @IS... boolean functions. I have one more of those to go and then I'm not sure what I'm tackling next. For the full ever-increasing list, go to http://bit.ly/EssTechRef, but here's the list as of this posting: 
To see all the videos we have at the moment, go to epm.bi/videos. I'm looking for advice on which TechRef videos I should record next. I'm trying to do a lot more calculation functions and Essbase.CFG settings before I move on to things like MDX functions and MaxL commands, but others may take up that mantle. If you have functions you'd like to see a video on, go over to epm.bi/videos, click on the discussion tab, and make a suggestion or two. If you like the videos and find them helpful (or you have suggestions on how to make them more helpful), please feel free to comment too.

I think I'm going to go start working on my video on FIXPARALLEL.
Categories: BI & Warehousing

What's new in Forms 12c, part 2

Gerd Volberg - Thu, 2015-11-26 02:19
Let's now look into the Form Builder and what changes we got there.

First we see the facelift in the Open dialog. It's now the typical Windows dialog, where you have much more flexibility.


New features in the Convert-Dialog:

  • New Feature "Converting to and from XML"
  • Checkbox "Overwrite"
  • Checkbox "Keep window open", if you have to convert many files at once.


Preferences-Dialog, Tab "General":

  • The checkbox "Hide PL/SQL Compile Dialog" is new
  • Web Browser location (Forms 11g: Tab "Runtime")
  • Compiler Output is new

Tab "Subclass": No changes


Tab "Wizards": No changes


 Tab "Runtime":
  • The checkbox "Show URL Parameters" is new
  • Application Server URL is much bigger!
  • The Web Browser location vanished to Tab "General"



Have fun
Gerd

UX Empathy and the Art of Storytelling

Usable Apps - Wed, 2015-11-25 13:14

At this year’s Web Summit in Dublin, Ireland, I had the opportunity to observe thousands of attendees. They came from 135 different countries and represented different generations.

Despite these enormous differences, they came together and communicated.

But how? With all of the hype about how different communication styles are among the Baby Boomers, Gen Xers, Millennials, and Generation Zers, I expected to see lots of small groupings of attendees based on generation. And I thought that session audiences would mimic this, too. But I could not have been more wrong.

How exactly, then, did speakers, panelists, and interviewers keep the attention of attendees in the 50+ crowd, the 40+ crowd, and the 20+ crowd while they sat in the same room?

The answer is far simpler than I could have imagined: Authenticity. They kept their messages simple, specific, honest, and in context of the audience and the medium in which they were delivering them.

Web Summit: Estée Lalonde (@EsteeLalonde) in conversation at the Fashion Summit session "Height, shoe size and Instagram followers please?"

Simplicity in messaging was key across Web Summit sessions: Each session was limited to 20 minutes, no matter whether the stage was occupied by one speaker or a panel of interviewees. For this to be successful, those onstage needed to understand their brands as well as the audience and what they were there to hear.

Attention spans are shortening, so it’s increasingly critical to deliver an honest, authentic, personally engaging story. Runar Reistrup, Depop, said it well at the Web Summit when he said:

Web Summit: Runar Reistrup (@runarreistrup) in conversation during the Fashion Summit session "A branding lesson from the fashion industry"

While lots of research, thought, and hard work goes into designing and building products, today’s brand awareness is built with social media. Users need to understand the story you’re telling but not be overwhelmed by contrived messaging.

People want to connect with stories and learn key messages through those stories. Storytelling is the important challenge of our age. And how we use each social medium to tell a story is equally important. Storytelling across mediums is not a one-size-fits-all experience; each medium deserves a unique messaging style. As Mark Little (@marklittlenews), founder of Storyful, makes a point of saying, "This is the golden age of storytelling."

The Oracle Applications User Experience team recognizes the significance of storytelling and the importance of communicating the personality of our brand. We take time to nurture connections and relationships with those who use our applications, which enables us to empathize with our users in authentic ways.

Web Summit: Áine Kerr (@AineKerr) talking about the art of storytelling

The Oracle simplified user interface is designed with consideration of our brand and the real people—like you—who use our applications. We want you to be as comfortable using our applications as you are having a conversation in your living room. We build intuitive applications that are based on real-world stories—yours—and that solve real-world challenges to help make your work easier.

We experiment quite a bit, and we purposefully “think as if there is no box.” (Maria Hatzistefanis, Rodial)

Web Summit: Maria Hatzistefanis (@MrsRodial) in conversation during the Fashion Summit session "Communication with your customer in the digital age"

We strive to find that authentic connection between the simplified user interface design and the user. We use context and content (words) to help shape and inform what message we promote on each user interface page. We choose the words we use, as well as the tone, carefully because we recognize the significance of messaging, whether the message is a two-word field label or a tweet. And we test, modify, and retest our designs with real users before we build applications to ensure that the designs respond to you and your needs.

If you want to take advantage of our design approach and practices, download our simplified user experience design patterns eBook for free and design a user experience that mimics the one we deliver in the simplified user interface. And if you do, please let us know what you think at @usableapps.

Oracle Priority Support Infogram for 25-NOV-2015 1000th posting!

Oracle Infogram - Wed, 2015-11-25 11:44

This marks the 1000th post to the Infogram. I am awarding myself a low-carb lollipop.

Data Warehouse


Oracle VM

Oracle VM Performance and Tuning - Part 4, from Virtually All The Time.

Fusion

Changing Appearances: Give The Apps Your Corporate Look, from Fusion Applications Developer Relations.

SmartScan


DRM

Patch Set Update: Oracle Data Relationship Management 11.1.2.4.321, from Business Analytics - Proactive Support.

ACM

Oracle and Adaptive Case Management: Part 1, from SOA & BPM Partner Community Blog.

NetBeans


EBS

From the Oracle E-Business Suite Support blog:


From the Oracle E-Business Suite Technology blog:



Why Data Virtualization Is so Vital

Kubilay Çilkara - Tue, 2015-11-24 16:35
In today’s day and age, it probably seems like every week you hear about a new form of software you absolutely have to have. However, as you’re about to find out, data virtualization is actually worth the hype.

The Old Ways of Doing Things

Traditionally, data management has been a cumbersome process, to say the least. Usually, it means data replication, data management or using intermediary connectors and servers to pull off point-to-point integration. Of course, in some situations, it’s a combination of the three.

Like we just said, though, these methods were never really ideal. Instead, they were just the only options given the complete lack of alternatives available. That’s the main reason you’re seeing these methods less and less. The moment something better came along, companies jumped all over it.
However, their diminishing utility can also be traced to three main factors. These would be:

  • High costs related to data movement
  • The astronomical growth in data (also referred to as Big Data)
  • Customers that expect real-time information
These three elements are probably fairly self-explanatory, but that last one is especially interesting to elaborate on. Customers these days really don’t understand why they can’t get all the information they want exactly when they want it. How could they possibly make sense of that when they can go online and get their hands on practically any data they could ever want thanks to the likes of Google? If you’re trying to explain to them that your company can’t do this, they’re most likely going to have a hard time believing you. Worse, they may believe you, but assume that this is a problem relative to your company and that some competitor won’t have this issue.

Introducing Data Virtualization

It was only a matter of time before this problem was eventually addressed. Obviously, when so many companies are struggling with this kind of challenge, there’s quite the incentive for another one to solve it.

That’s where data virtualization comes into play. Companies whose critical information is spread out across the entire enterprise in all kinds of formats and locations no longer have to worry about the hardship of getting their hands on it. Instead, they can use virtualization platforms to search out what they need.

Flexible Searches for Better Results

It wouldn’t make much sense for this type of software to not have a certain amount of agility built in. After all, that’s sort of its main selling point. The whole reason companies invest in it is because it doesn’t get held back by issues with layout or formatting. Whatever you need, it can find.

Still, for best results, many now offer a single interface that can be used to separate and extract aggregates of data in all kinds of ways. The end result is a flexible search which can be leveraged toward all kinds of ends. It’s no longer about being able to find any type of information you need, but finding it in the most efficient and productive way possible.

Keep Your Mainframe

One misconception that some companies have about data virtualization is that it will need certain adjustments to be made to your mainframe before it can truly be effective. This makes sense because, for many platforms, this is definitely the case. These are earlier versions, though, and some that just aren’t of the highest quality.

With really good versions, though, you can basically transform your company’s mainframe into a virtualization platform. Such an approach isn’t just cost-effective. It also makes sure you aren’t wasting resources, including time, addressing the shortcomings of your current mainframe, something no company wants to do.

Don’t get turned off from taking a virtualization approach to your cache of data because you’re imagining a long list of chores that will be necessary for transforming your mainframe. Instead, just be sure you invest in a high-end version that will actually transform your current version into something much better.

A Better Approach to Your Current Mainframe

Let’s look at some further benefits that come from taking this approach. First, if the program you choose comes with the use of a high-performance server, you’ll immediately eliminate the redundancy of integrating from point-to-point. This will definitely give you better performance in terms of manageability. Plus, if you ever want to scale up, this will make it much easier to do so.

Proper data migration is key to a good virtualization process. If it is done right, the end user won’t have to worry about corrupted data, and communication between machines will be crystal clear.
If you divert the processing-intensive data mapping and transformation processes away from the General Purpose Processor of your mainframe to the zIIP specialty engine, you’ll dramatically reduce your MIPS capacity usage and, therefore, also reduce your company’s TCO (Total Cost of Ownership).

Lastly, maybe you’d like to exploit every last piece of value you derive from your mainframe data. If so, good virtualization software will not only make this possible, but do so in a way that lets you turn all of your non-relational mainframe data into relational formats that any business analytics or intelligence application can use.

Key Features to Look for in Your Virtualization Platform

If you’re now sold on the idea of investing in a virtualization platform, the next step is getting smart about what to look for. As you can imagine, you won’t have trouble finding a program to buy, but you want to make sure it’s actually going to be worth every penny.

The first would be, simply, the number of data providers supported. You want to be able to address everything from big data to machine data to syslogs, distributed and mainframe. Obviously, this will depend a lot on your current needs, but think about the future too.

Then, there’s the same to think about in terms of data consumers. We’re talking about the cloud, analytics, business intelligence and, of course, the web. Making sure you will be able to stay current for some time is very important. Technology changes quickly, and the better your virtualization process is, the longer you’ll have before having to upgrade. Look closely at the migration process, and whether or not the provider can utilize your IT team to increase work flow. This will help your company get back on track more quickly and with better results.

Finally, don’t forget to look at costs, especially where scalability is concerned. If you have plans of getting bigger in the future, you don’t want it to take a burdensome investment to do so.
As you can see, virtualization platforms definitely live up to the hype. You just have to be sure you spend your money on the right kind.

Mike Miranda writes about enterprise software and covers products offered by software companies like Rocket Software on topics such as terminal emulation, enterprise mobility and more.
Categories: DBA Blogs

The Times They Are A-Changin'

Floyd Teter - Mon, 2015-11-23 19:36
Come gather 'round people
Wherever you roam
And admit that the waters
Around you have grown
And accept it that soon
You'll be drenched to the bone
If your time to you
Is worth savin'
Then you better start swimmin'
Or you'll sink like a stone
For the times they are a-changin'.

                     -- From Bob Dylan's "The Times They Are A-Changin'"


Spent some time with more really smart folks at Oracle last week.  Yeah, these people are really smart...I'm still wondering how they let me in the door.

During that time, I probably had three separate conversations with different people on how SaaS changes the consulting model.  Frankly, implementation is no longer a big money maker in the SaaS game.  The combination of reducing the technology overhead, reducing customizations, and a sharp focus on customer success is killing the IT consulting goose that lays the golden eggs:  application implementation.  You can see indications of it just in the average cycle times between subscription and go-live:  they're down to about 4.5 months and still on a down trend.  Bringing SaaS customers up in less than 30 days is something Oracle can see on the near horizon.  Unfortunately, as the cycle times for SaaS implementations shorten, it gets more difficult for an implementation partner to generate significant revenues and margins.  The entire model is built around 12-to-24 month implementations - SaaS makes those time frames a thing of the past.

So, if I were a SaaS implementation partner today, what would I do?  Frankly, I'd be switching to a relationship - retainer approach with my customers (not my idea...all those smart people I mentioned suggested it).  I'd dedicate teams that would implement SaaS, extend SaaS functionality, test new upgrades prior to rollout, and maintain your SaaS apps.  I'd build a relationship with those customers rather than simply attempt to sell implementation services.  The value to customers?  Your workforce focuses on the business rather than the software.  You need new reports or business intelligence?  Covered in our agreement.  Test this new release before we upgrade our production instance?  Covered in our agreement.  Some new fields on a user page or an add-on business process?  Covered in our agreement.  Something not working?  Let my team deal with Oracle Support...covered in our agreement.

Other ideas?  The comments await.

The times they are a-changin'...quickly.  Better start swimmin'.


I Had Low Expectations for the APEX Gaming Competition 2015 from ODTUG. Wow, Was I Ever Wrong!

Joel Kallman - Mon, 2015-11-23 16:27


When the APEX Gaming Competition 2015 was announced by Vincent Morneau from Insum at the ODTUG Kscope15 conference this past year, I was very skeptical.  I've seen many contests over the years that always seemed to have very few participants, and only one or two people would really put forth effort.  Why would this "Gaming Competition" be any different?  Who has time for games, right?  Well...I could not have been more wrong.

I was given the honor of being a judge for the APEX Gaming Competition 2015, and not only did I get to see the front-end of these games, I was also able to see them behind the scenes - how they were constructed, how much of the database they used, how much of declarative APEX they used, etc.  I was completely blown away by the creativity and quality of these games.  There were 15 games submitted, in all, and as I explained to the other judges, it was clear that people really put their heart and soul into these games.  I saw excellent programming practices, extraordinarily smart use of APEX, SQL and PL/SQL, and an unbelievable amount of creativity and inventiveness.

I hated having to pick "winners", because these were all simply a magnificent collection of modern Web development and Oracle Database programming.  If you haven't seen the actual games and the code behind them, I encourage you to take a look at any one of them.

I truly don't know how these people found the time to work on these games.  It takes time and effort to produce such high quality.  These are people who have day jobs and families and responsibilities and no time.  In an effort to simply acknowledge and offer our praise to these contributors, I'd like to list them all here (sorted by last name descending, just to be different):

Scott Wesley
Maxime Tremblay
Douglas Rofes
Anderson Rodrigues
Pavel
Matt Mulvaney
Jari Laine
Daniel Hochleitner
Marc Hassan
Nihad Hasković
Lev Erusalimskiy
Gabriel Dragoi
Nick Buytaert
Marcelo Burgos

Thanks to each of you for being such a great champion for the global #orclapex community.  You're all proud members of the #LetsWreckThisTogether club!

P.S.  Thanks to ODTUG for sponsoring this event and Vincent Morneau for orchestrating the whole contest.

Join OTN Virtual Technology Summit Replay Groups Today!

OTN TechBlog - Mon, 2015-11-23 10:31

Join one of the  OTN Virtual Technology Summit Replay Libraries to view video tech sessions produced by Oracle ACEs and Oracle Product Team Experts. These videos present technical insight and expertise through highly technical presentations and demos created to help you master the skills you need to meet today’s IT challenges.


Group membership entitles you not only to access the library of VTS session videos, but also to engage directly with session presenters through online discussion, for answers to your questions about any of the material in the presentations.

What are you waiting for? Join one of the groups below today!

Doughnut Chart - a Yummy Addition to Oracle ADF Faces

Shay Shmeltzer - Mon, 2015-11-23 02:16

Another new feature in Oracle ADF 12.2.1 is the new Doughnut Chart capability.

It looks like this:

When I first tried to create this one, I couldn't find the option for doughnut chart in the JDeveloper wizard.

Then I was told that a doughnut is just a pie with a hole in the center - so you actually just need to create a pie chart, and then specify some properties.

And indeed, if you look at the property inspector for pie charts you'll see a few new properties you can leverage.

For example there is the innerRadius property - it expects a value between 0 and 1 - this controls how big the hole in your doughnut is.

Another nice capability is that you can put some filling in your doughnut - basically, some text that will go in the middle empty area. You do this by using the centerLabel property. In the example above I used the center of the doughnut to report the total salary of a department - using a Groovy sum expression in the Departments ViewObject - learn how here.
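For reference, the kind of Groovy expression I mean looks roughly like this (a sketch only - it assumes a transient SumSalary attribute on the Departments view object and a view link accessor named EmployeesView; your accessor and attribute names may differ):

// Groovy expression for the hypothetical SumSalary transient attribute
EmployeesView.sum("Salary")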

(Don't forget to use the centerLabelStyle property to assign it a bigger font - a best practice from the Oracle Alta UI book).

Here is the code from the JSF page:

<dvt:pieChart selectionListener="#{bindings.EmployeesView4.collectionModel.makeCurrent}"
   dataSelection="single" id="pieChart1" var="row"
   value="#{bindings.EmployeesView4.collectionModel}"
   centerLabel="Total Salary: #{bindings.SumSalary.inputValue}" innerRadius="0.7"
   centerLabelStyle="font-size:large;" title="Salary BreakDown" 
   sliceLabelPosition="inside">
         <dvt:chartLegend id="cl1" position="bottom"/>
         <dvt:pieDataItem id="di1" label="#{row.LastName}" value="#{row.Salary}"/>
</dvt:pieChart>

Try it out - it's a yummy new addition to Oracle's set of bakery based charts. 

Categories: Development

DOAG 2015 - "Oracle 12c Parallel Execution New Features" presentation material

Randolf Geist - Sun, 2015-11-22 13:16
Thanks to the many attendees that came to my presentation "Oracle 12c Parallel Execution New Features" at the DOAG conference 2015. You can download the presentation material here in PowerPoint or PDF format, as well as check the Slideshare upload.

Note that the PowerPoint format adds value in the sense that many of the slides come with additional explanations in the notes section.

If you are interested in more details I recommend visiting this post which links to many other posts describing the different new features in greater depth.

3 Lessons from the Darkness for Cloud Developers: Design Patterns

Usable Apps - Sun, 2015-11-22 08:39
Simplified UI UX Design Patterns eBook

A visit to a very unusual restaurant in Berlin reveals how following familiar and established user experience (UX) design patterns makes things easy for developers and users of cloud applications alike.

Meat-eaters may like to dive right in and consume the free Oracle Cloud Applications simplified UI UX design patterns first. 

That UX Homework Assignment

Just returned from Berlin. While I was there I completed a reverse UX homework assignment given to me by Oracle partner Certus Solutions Cloud Services VP Debra Lilley (@debralilley): to visit a restaurant called Dunkel.

Dunkel Unsicht-Bar and Restaurant is where you are seated, served, and eat in total darkness (Dunkel means dark in German).

To begin with, you order from a set menu, in the light. Then, your assigned server appears, asks you to put your hands on their shoulders, and to follow them downstairs into the darkness of the restaurant itself.

I entered a world that was pitch black. Really. No smartphone UIs glowing, no luminous wristwatch dials twinkling, not even the blink of an optical heart rate monitor sensor on a smartwatch could be glimpsed anywhere.

The server seats you and gives you a quick verbal orientation as to what is, and will be, in front of you.

All around me was the sound of other diners enjoying themselves.

Yet, I enjoyed one of the best vegetarian meals I’ve had in years.


Instagram pic of the awesome meal I had in Dunkel.

I had no problems whatsoever in finding or using the cutlery, the breadbasket, or eating any of the food served (four courses) in the total darkness. I ate as normal, at my usual pace, and when the meal was complete, I emerged into the light, again guided by the server, and without looking like I had been in a food fight. 

An amazing, one of a kind, experience! I even left a tip! Try it yourself if you visit Berlin.

Lessons from the Darkness

So, what are the UX lessons from Dunkel? Why was it that I could so easily eat there, without ending up in a complete mess, screaming for help?

  1. Firstly, keep it simple. I didn’t have to deal with, for example, a complex floral arrangement or other decoration shoved into the middle of the table. Everything in front of me was functional or consumable.
  2. Secondly, the experience must be what consumers expect and be about things they are familiar with from everyday use. The layout of the cutlery (and yes, there was more than one spoon and no, I never used my hands), the positioning of the plates, even where my drink was placed, was familiar to me and as expected. They followed a pattern. No nasty surprises!
  3. Thirdly, if you do need to provide guidance, keep it short and about completing the task at hand, but encourage discovery. For example, my dessert was made of three parts (of crème of pomegranate, mango chili sauce, and homemade pralines) and served in one of those little swing-top glass bottles you need to flip open. But, again, no issue in consuming the lot.

Keeping things simple and familiar, providing concise task guidance, and playing on a sense of discovery is an experiential approach also evident in the simplified UIs in Oracle’s Cloud Applications. The UX follows design patterns.

The Oracle Cloud Applications simplified UI UX design patterns for Release 10 eBook is available for free.

Your UX Assignment's Solution

If you’re an Oracle ADF developer or partner building Oracle Cloud Applications Release 10 solutions, you can now get the Oracle Cloud Applications simplified UI UX design patterns for free in eBook format and make it easy for yourself and your users too.

Looking forward to my next UX homework exchange with Debra!

Enabling CORS for ADF Business Component REST Services

Shay Shmeltzer - Fri, 2015-11-20 05:12

CORS (which stands for Cross-Origin Resource Sharing) is a mechanism that enables your REST services running on one server to be invoked from applications running on another server.

I first encountered this when I was trying to run an Oracle JET project in my NetBeans IDE that accesses a set of REST services I exposed using Oracle ADF Business Components in my JDeveloper environment. Since NetBeans runs the HTML on a GlassFish instance, while JDeveloper runs the ADF BC layer on a WebLogic instance, I got the dreaded No 'Access-Control-Allow-Origin' header is present error:

 XMLHttpRequest cannot load http://127.0.0.1:7101/Application14-RESTWebService-context-root/rest/1/dept/20. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost:8383' is therefore not allowed access.

There is no built-in functionality to enable CORS for ADF BC in JDeveloper, but I found it very easy to leverage the CORS Filter libraries to do this. All you need to do is add the two JAR files it provides to your project and configure the web.xml to support the filter and the specific REST operations you want to enable CORS for.

Here is a quick video showing you the complete setup (using the REST ADF BC project created here).

The web.xml addition is:

   <filter>
    <filter-name>CORS</filter-name>
    <filter-class>com.thetransactioncompany.cors.CORSFilter</filter-class>
            <init-param>
                <param-name>cors.supportedMethods</param-name>
                <param-value>GET, POST, HEAD, PUT, PATCH, DELETE</param-value>
        </init-param>
  </filter>
  <filter-mapping>
    <filter-name>CORS</filter-name>
    <url-pattern>/*</url-pattern>
  </filter-mapping>
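By default the filter allows requests from any origin. If you want to lock things down to just your HTML project's server, the CORS Filter also accepts an allowed-origins parameter - something like the following inside the same <filter> element (a sketch based on the filter's documentation; adjust the origin to wherever your client application runs, e.g. the GlassFish origin from the error above):

            <init-param>
                <param-name>cors.allowOrigin</param-name>
                <param-value>http://localhost:8383</param-value>
            </init-param>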

If you follow my approach in the video and add the JARs as a new user library to JDeveloper, don't forget to check the "Deploy by Default" check box for the library.

Categories: Development

IBM Bluemix Secure Gateway Service with Oracle

Pas Apicella - Thu, 2015-11-19 22:47
I previously blogged about using the IBM Bluemix Secure Gateway Service as follows

http://theblasfrompas.blogspot.com.au/2015/11/ibm-bluemix-secure-gateway-service-step.html

I decided I would extend on this and connect a Spring Boot application to Oracle, consuming Oracle data through the Secure Gateway Service.

The full demo is as follows

https://dl.dropboxusercontent.com/u/15829935/bluemix-docs/secure-gateway-oracle/index.html


Categories: Fusion Middleware

Oracle Priority Support Infogram for 19-NOV-2015

Oracle Infogram - Thu, 2015-11-19 16:11

OpenWorld

Some more recaps and reminiscences of OOW this week:

Database In-Memory OOW Recap, from the Oracle Database Insider Blog.

Oracle OpenWorld 2015 Highlights, from the SOA & BPM Partner Community Blog.


HA

Oracle Database Disaster Recovery on Public Cloud, from Oracle Partner Hub: ISV Migration Center Team.

Exalogic and Exadata


New Exadata and Systems Public References, from Exadata Partner Community – EMEA.

VM Server

Oracle VM Server for SPARC Support, from the Ops Center blog.

Data Integration

Introducing Oracle Data Integrator (ODI) Exchange, from the Data Integration blog.

Oracle Coherence

Coherence Forums and Support, from the Oracle Coherence blog.

WLS




Fusion

Fusion Middleware 12.2.1 is available, from WebLogic Partner Community EMEA.

Java

FlexDeploy and Java Cloud Service (Part II) , from WebLogic Partner Community EMEA.


Oracle R


BAM

Getting started with BAM 12c, from the SOA & BPM Partner Community Blog.

WebCenter

Using WebCenter Sites with a CDN, from PDIT Collaborative Application Services.

PeopleSoft


Primavera

Top Questions to Answer Before Setting Up Primavera Analytics, from the Oracle Primavera Analytics Blog.

Demantra


EBS

From the Oracle E-Business Suite Support blog:




From the Oracle E-Business Suite Technology blog:








Lost SYSMAN password OEM CC 12gR5

Hans Forbrich - Thu, 2015-11-19 10:42
I run my own licensed Oracle products in-house.  Since it is a very simple environment, largely used to learn how things run and verify what I see at customer sites, it is not very active at all.  But it is important enough to me to keep it maintained.

After a bit of a hiatus in looking at the OEM, which is at 12cR5 patched, I went back and noted that I was using the wrong password.  No problem, I thought: since OMS uses VPD and database security, just change the password in the database.

While I'm there, might as well change the SYSMAN password as well, since I have a policy of rotated passwords.

A few things to highlight (as a reminder for next time):


  • Use the right emctl.  There is an emctl for the OMS, the AGENT and the REPO DB.  In this case, I've installed the OMS under middleware, therefore  
    • /u01/app/oracle/middleware/oms/bin/emctl
  • Check the repository and the listener
  • Start the OMS.  
    • If the message is "Failed to connect to repository database. OMS will be automatically restarted once it identifies that database and listener are up." there are a few possibilities:
      • database is down
      • listener is down
    • If the message is "Connection to the repository failed. Verify that the repository connection information provided is correct." check whether 
      • SYSMAN password is changed or 
      • SYSMAN is locked out
  • To change the sysman password:
    • In database 
      • sqlplus / as sysdba
      • alter user SYSMAN identified by new_pwd account unlock;
    • In oms
      • ./emctl stop oms
      • ./emctl config oms -change_repos_pwd -use_sys_pwd -sys_pwd sys_pwd -new_pwd new_pwd
      • ./emctl stop oms 
      • ./emctl start oms
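For reference, the steps above consolidated into one sequence (a sketch only - <sys_pwd> and <new_pwd> are placeholders, and the OMS home is the one shown earlier):

sqlplus / as sysdba
  alter user SYSMAN identified by <new_pwd> account unlock;
  exit

cd /u01/app/oracle/middleware/oms/bin
./emctl stop oms
./emctl config oms -change_repos_pwd -use_sys_pwd -sys_pwd <sys_pwd> -new_pwd <new_pwd>
./emctl stop oms
./emctl start oms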
And test it out using the browser ...
Categories: DBA Blogs
