Feed aggregator

Watch featured OTN Virtual Technology Summit Replay Sessions

OTN TechBlog - Mon, 2015-11-30 16:08

Today we are featuring a session from each OTN Virtual Technology Summit Replay Group.  See session titles and abstracts below.  Watch right away and then join the group to interact with other community members and stay up to date on when NEW content is coming!

Best Practices for Migrating On-Premises Databases to the Cloud

By Leighton Nelson, Oracle ACE
Oracle Multitenant is helping organizations reduce IT costs by simplifying database consolidation, provisioning, upgrades, and more. Now you can combine the advantages of multitenant databases with the benefits of the cloud by leveraging Database as a Service (DBaaS). In this session, you’ll learn about key best practices for moving your databases from on-premises environments to the Oracle Database Cloud and back again.

What's New for Oracle and .NET - (Part 1)
By Alex Keh, Senior Principal Product Manager, Oracle
With the release of ODAC 12c Release 4 and Oracle Database 12c, .NET developers have many more features to increase productivity and ease development. These sessions explore new features introduced in recent releases with code and tool demonstrations using Visual Studio 2015.

Docker for Java Developers
By Roland Huss, Principal Software Engineer at Red Hat
Docker, the OS-level virtualization platform, is taking the IT world by storm. In this session, we will see what features Docker has for us Java developers. It is now possible to create truly isolated, self-contained and robust integration tests in which external dependencies are realized as Docker containers. Docker also changes the way we ship applications in that we are not only deploying application artifacts like WARs or EARs but also their execution contexts. Besides elaborating on these concepts and more, this presentation will focus on how Docker can best be integrated into the Java build process by introducing a dedicated Docker Maven plugin which is shown in a live demo.

Debugging Weblogic Authentication
By Maarten Smeets, Senior Oracle SOA / ADF Developer, AMIS
Enterprises often centrally manage login information and group memberships (identity). Many systems use this information to achieve Single Sign On (SSO) functionality, for example. Surprisingly, access to the Weblogic Server Console is often not centrally managed. This video explains why centralizing management of these identities not only increases security, but can also reduce operational cost and even increase developer productivity. The video demonstrates several methods for debugging authentication using an external LDAP server in order to lower the bar to apply this pattern. This technically-oriented presentation will be especially useful for people working in operations who are responsible for managing Weblogic Servers.

Designing a Multi-Layered Security Strategy
By Glenn Brunette, Cybersecurity, Oracle Public Sector, Oracle
Security is a concern of every IT manager and it is clear that perimeter defense, trying to keep hackers out of your network, is not enough. At some point someone with bad intentions will penetrate your network and to prevent significant damage it is necessary to make sure there are multiple layers of defense. Hear about Oracle’s defense in depth for data centers including some new and unique security features built into the new SPARC M7 processor.

Multiple Tenant Support

Dylan's BI Notes - Mon, 2015-11-30 15:57
The definition of multi-tenancy varies. Some people think that tenants are just data striping, and that adding a tenant ID to every table is enough to support multiple tenants. One such example is Oracle BI Applications. We added the tenant_id everywhere and assumed that we could use it later to partition the data. In the cloud […]
Categories: BI & Warehousing

Licensing Cloud Control

Laurent Schneider - Mon, 2015-11-30 12:08

I just read the Enterprise Manager Licensing Information User Manual today. There are a lot of packs there, and you may not even know that autodiscovering targets is part of the lifecycle management pack or that blackouts are part of the diagnostic pack.

Have a look

RAM is the new disk – and how to measure its performance – Part 3 – CPU Instructions & Cycles

Tanel Poder - Mon, 2015-11-30 00:45

If you haven’t read the previous parts of this series yet, here are the links: [ Part 1 | Part 2 ].

A Refresher

In the first part of this series I said that RAM access is the slow component of a modern in-memory database engine and for performance you’d want to reduce RAM access as much as possible. Reduced memory traffic thanks to the new columnar data formats is the most important enabler for the awesome In-Memory processing performance and SIMD is just icing on the cake.

In the second part I also showed how to measure the CPU efficiency of your (Oracle) process using the Linux perf stat command. How well your applications actually utilize your CPU execution units depends on many factors. The biggest factor is your process's cache efficiency, which depends on the CPU cache size and your application's memory access patterns. Regardless of what OS CPU accounting tools like top or vmstat may show you, your "100% busy" CPUs may actually spend a significant amount of their cycles internally idle, with a stalled pipeline, waiting for some event (like a memory line arrival from RAM) to happen.

Luckily there are plenty of tools for measuring what’s actually going on inside the CPUs, thanks to modern processors having CPU Performance Counters (CPC) built in to them.

A key derived metric for understanding CPU-efficiency is the IPC (instructions per cycle). Years ago people were actually talking about the inverse metric CPI (cycles per instruction) as on average it took more than one CPU cycle to complete an instruction’s execution (again, due to the abovementioned reasons like memory stalls). However, thanks to today’s superscalar processors with out-of-order execution on a modern CPU’s multiple execution units – and with large CPU caches – a well-optimized application can execute multiple instructions per a single CPU cycle, thus it’s more natural to use the IPC (instructions-per-cycle) metric. With IPC, higher is better.

Here’s a trimmed snippet from the previous article, a process that was doing a fully cached full table scan of an Oracle table (stored in plain old row-oriented format):

Performance counter stats for process id '34783':

      27373.819908 task-clock                #    0.912 CPUs utilized
    86,428,653,040 cycles                    #    3.157 GHz                     [33.33%]
    32,115,412,877 instructions              #    0.37  insns per cycle
                                             #    2.39  stalled cycles per insn [40.00%]
    76,697,049,420 stalled-cycles-frontend   #   88.74% frontend cycles idle    [40.00%]
    58,627,393,395 stalled-cycles-backend    #   67.83% backend  cycles idle    [40.00%]
       256,440,384 cache-references          #    9.368 M/sec                   [26.67%]
       222,036,981 cache-misses              #   86.584 % of all cache refs     [26.66%]

      30.000601214 seconds time elapsed

The IPC of the above task is pretty bad – the CPU managed to complete only 0.37 instructions per CPU cycle. On average every instruction execution was stalled in the execution pipeline for 2.39 CPU cycles.
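
Just to make the arithmetic explicit, both derived figures come straight from the raw counters above - instructions divided by cycles, and frontend stall cycles divided by instructions (a quick check, nothing more):

SELECT ROUND(32115412877 / 86428653040, 2) AS insns_per_cycle,
       ROUND(76697049420 / 32115412877, 2) AS stalled_cycles_per_insn
FROM   dual;
-- 0.37 and 2.39, matching the perf stat output above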

Note: Various additional metrics can be used for drilling down into why the CPUs spent so much time stalling (like cache misses & RAM access). I covered the typical perf stat metrics in the part 2 of this series so won’t go in more detail here.

Test Scenarios

The goal of my experiments was to measure the CPU efficiency of different data scanning approaches in Oracle – on different data storage formats. I focused only on data scanning and filtering, not joins or aggregations. I ensured that everything would be cached in Oracle's buffer cache or in-memory column store for all test runs – so disk IO was not a factor here (again, read more about my test environment setup in part 2 of this series).
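
To give an idea of the kind of setup this implies (a minimal sketch only - the compression and priority settings here are assumptions, and the real environment is described in part 2), the test table can be marked for the In-Memory column store and its population verified before any timed runs:

-- assumption: default-ish In-Memory settings, not necessarily what was used for these tests
ALTER TABLE customers_nopart INMEMORY MEMCOMPRESS FOR QUERY LOW PRIORITY HIGH;

-- wait until the segment is fully populated (BYTES_NOT_POPULATED should reach 0)
SELECT segment_name, populate_status, bytes_not_populated
FROM   v$im_segments
WHERE  segment_name = 'CUSTOMERS_NOPART';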

The queries I ran were mostly variations of this:

SELECT COUNT(cust_valid) FROM customers_nopart c WHERE cust_id > 0

Although I was after testing the full table scanning speeds, I also added two examples of scanning through the entire table’s rows via index range scans. This allows me to show how inefficient index range scans can be when accessing a large part of a table’s rows even when all is cached in memory. Even though you see different WHERE clauses in some of the tests, they all are designed so that they go through all rows of the table (just using different access patterns and code paths).

The descriptions of test runs should be self-explanatory:

1. INDEX RANGE SCAN BAD CLUSTERING FACTOR

SELECT /*+ MONITOR INDEX(c(cust_postal_code)) */ COUNT(cust_valid)
FROM customers_nopart c WHERE cust_postal_code > '0';

2. INDEX RANGE SCAN GOOD CLUSTERING FACTOR

SELECT /*+ MONITOR INDEX(c(cust_id)) */ COUNT(cust_valid)
FROM customers_nopart c WHERE cust_id > 0;

3. FULL TABLE SCAN BUFFER CACHE (NO INMEMORY)

SELECT /*+ MONITOR FULL(c) NO_INMEMORY */ COUNT(cust_valid) 
FROM customers_nopart c WHERE cust_id > 0;

4. FULL TABLE SCAN IN MEMORY WITH WHERE cust_id > 0

SELECT /*+ MONITOR FULL(c) INMEMORY */ COUNT(cust_valid) 
FROM customers_nopart c WHERE cust_id > 0;

5. FULL TABLE SCAN IN MEMORY WITHOUT WHERE CLAUSE

SELECT /*+ MONITOR FULL(c) INMEMORY */ COUNT(cust_valid) 
FROM customers_nopart c;

6. FULL TABLE SCAN VIA BUFFER CACHE OF HCC QUERY LOW COLUMNAR-COMPRESSED TABLE

SELECT /*+ MONITOR */ COUNT(cust_valid) 
FROM customers_nopart_hcc_ql WHERE cust_id > 0

Note how all experiments except the last one are scanning the same physical table just with different options (like index scan or in-memory access path) enabled. The last experiment is against a copy of the same table (same columns, same rows), but just physically formatted in the HCC format (and fully cached in buffer cache).
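
Such a copy could be built with something along these lines (a sketch of the idea, not necessarily the exact DDL used; QUERY LOW Hybrid Columnar Compression requires Exadata or other supported storage):

CREATE TABLE customers_nopart_hcc_ql
  COMPRESS FOR QUERY LOW
  AS SELECT * FROM customers_nopart;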

Test Results: Raw Numbers

It is not enough to just look into the CPU performance counters of different experiments, they are too low level. For the full picture, we also want to know how much work (like logical IOs etc) the application was doing and how many rows were eventually processed in each case. Also I verified that I did get the exact desired execution plans, access paths and that no physical IOs or other wait events happened using the usual Oracle metrics (see the log below).

Here’s the experiment log file with full performance numbers from SQL Monitoring reports, Snapper and perf stat:

I also put all these numbers (plus some derived values) into a spreadsheet. I’ve pasted a screenshot of the data below for convenience, but you can access the entire spreadsheet with its raw data and charts here (note that the spreadsheet has multiple tabs and configurable pivot charts in it):

Raw perf stat data from the experiments:

[Screenshot: raw perf stat data from the test runs]

Now let’s plot some charts!

Test Results: CPU Instructions

Let’s start from something simple and gradually work our way deeper. I will start from listing the task-clock-ms metric that shows the CPU time usage of the Oracle process in milliseconds for each of my test table scans. This metric comes from the OS-level and not from within the CPU:

CPU time used for scanning the dataset (in milliseconds)

As I mentioned earlier, I added two index (full) range scan based approaches for comparison. Looks like the index-based “full table scans” seen in the first and second columns use the most CPU time as the OS sees it (~120 and close to 40 seconds of CPU, respectively).

Now let’s see how many CPU instructions (how much work “requested” from CPU) the Oracle process executed for scanning the same dataset using different access paths and storage formats:

CPU instructions executed for scanning the dataset

Wow, the index-based approaches seem to be issuing multiple times more CPU instructions per query execution than any of the full table scans. Whatever loops the Oracle process is executing for processing the index-based query, it runs more of them. Or whatever functions it calls within those loops, the functions are “fatter”. Or both.

Let’s look into an Oracle-level metric session logical reads to see how many buffer gets it is doing:

Buffer gets done for a table scan

 

Wow, using the index with the bad clustering factor (1st bar) causes Oracle to do over 60M logical IOs, while the table scans do around 1.6M logical IOs each. Retrieving all rows of a table via an index range scan is super-inefficient, given that the underlying table is only 1,613,824 blocks.

This inefficiency is due to index range scans having to re-visit the same datablocks multiple times (up to one visit per row, depending on the clustering factor of the index used). Each buffer re-visit causes another logical IO and burns more CPU cycles, except in cases where Oracle has managed to keep the buffer pinned since the last visit. The index range scan with a good clustering factor needs to do far fewer logical IOs because, given the more “local” clustered table access pattern, the re-visited buffers are much more likely to be found already looked up and pinned (shown as the buffer is pinned count metric in V$SESSTAT).
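
If you want to check these two things on your own system, the clustering factor lives in the data dictionary and the pinned-buffer shortcut shows up in V$SESSTAT - a simple sketch, run as the owner of the test objects (and with the usual privileges on the V$ views):

SELECT index_name, clustering_factor, num_rows
FROM   user_indexes
WHERE  table_name = 'CUSTOMERS_NOPART';

SELECT sn.name, st.value
FROM   v$statname sn
JOIN   v$sesstat  st ON st.statistic# = sn.statistic#
WHERE  st.sid = SYS_CONTEXT('USERENV', 'SID')
AND    sn.name IN ('session logical reads', 'buffer is pinned count');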

Knowing that my test table has 69,642,625 rows in it, I can also derive an average CPU instructions per row processed metric from the total instruction amounts:

[Chart: CPU instructions per row processed]

The same numbers in tabular form:

[Screenshot: the same figures in tabular form]

Indeed there seem to be radical code path differences (that come from underlying data and cache structure differences) that make an index-based lookup use thousands of instructions per row processed, while an in-memory scan with a single predicate used only 102 instructions per row processed on average. The in-memory counting without any predicates didn’t need to execute any data comparison logic in it, so could do its data access and counting with only 43 instructions per row on average.
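
The per-row figures are simply the total instruction counts divided by the 69,642,625 rows, so working backwards from the rounded averages gives a rough feel for the totals involved (approximate by construction, since only the per-row averages are quoted above):

SELECT ROUND(102 * 69642625 / 1e9, 1) AS approx_insns_billions_with_predicate,
       ROUND( 43 * 69642625 / 1e9, 1) AS approx_insns_billions_no_predicate
FROM   dual;
-- roughly 7.1 vs 3.0 billion instructions for the two in-memory scan variants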

So far I’ve shown you some basic stuff. As this article is about studying the full table scan efficiency, I will omit the index-access metrics from further charts. The raw metrics are all available in the raw text file and spreadsheet mentioned above.

Here are again the buffer gets of only the four different full table scan test cases:

Buffer gets done for full table scans

All test cases except the HCC-compressed table scan cause the same amount of buffer gets (~1.6M) as this is the original table’s size in blocks. The HCC table is only slightly smaller – didn’t get great compression with the query low setting.

Now let's check the number of CPU instructions executed by these test runs:

CPU instructions executed for full table scans

Wow, despite the table sizes and number of logical IOs being relatively similar, the amount of machine code the Oracle process executes is wildly different! Remember, all that my query is doing is scanning and filtering the data, followed by a basic COUNT(column) operation – no additional sorting or joining is done. The in-memory access paths (columns 3 & 4) get away with executing far fewer CPU instructions than the regular buffered tables in row format and HCC format (columns 1 & 2 in the chart).

All the above shows that not all logical IOs are equal. Depending on your workload and execution plans (how many block visits, how many rows extracted per block visit) and the underlying storage format (regular row format, HCC in the buffer cache or compressed columns in the In-Memory column store), you may end up doing a different amount of CPU work per row retrieved for your query.

This was true before the In-Memory option and even more noticeable with the In-Memory option. But more about this in a future article.

Test Results: CPU Cycles

Let's go deeper. We already looked into how many buffer gets and CPU instructions the process executed for the different test cases. Now let's look into how much actual CPU time (in the form of CPU cycles) these tests consumed. I added the CPU cycles metric next to the instructions for that:

CPU instructions and cycles used for full table scans

Hey, what? How come the regular row-oriented block format table scan (TABLE BUFCACHE) takes over twice as many CPU cycles as the instructions it executed?

Also, how come all the other table access methods use noticeably fewer CPU cycles than the number of instructions they've executed?

If you paid attention to this article (and the previous ones) you'll already know why. In the 1st example (TABLE BUFCACHE) the CPU must have been “waiting” for something a lot, with instructions spending multiple cycles “idle”, stalled in the pipeline, waiting for some event or necessary condition to happen (like a memory line arriving from RAM).

For example, if you are constantly waiting for the “random” RAM lines you want to access due to inefficient memory structures for scanning (like Oracle’s row-oriented datablocks), the CPU will be bottlenecked by RAM access. The CPU’s internal execution units, other than the load-store units, would be idle most of the time. The OS top command would still show you 100% utilization of a CPU by your process, but in reality you could squeeze much more out of your CPU if it didn’t have to wait for RAM so much.

In the other 3 examples above (columns 2-4), apparently there is no serious RAM (or other pipeline-stalling) bottleneck as in all cases we are able to use the multiple execution units of modern superscalar CPUs to complete more than one instruction per CPU cycle. Of course more improvements might be possible, but more about this in a following post.

For now I’ll conclude this (lengthy) post with one more chart with the fundamental derived metric instructions per cycle (IPC):

[Chart: instructions per cycle (IPC) for full table scans]

The IPC metric is derived from the previously shown instructions and CPU cycles metrics by a simple division. Higher IPC is better, as it means that your CPU execution units are better utilized and get more done per cycle. However, as IPC is a ratio, you should never look at the IPC value alone; always look at it together with the instructions and cycles metrics. It's better to execute 1 million instructions with an IPC of 0.5 than 1 billion instructions with an IPC of 3 – but looking at IPC in isolation doesn't tell you how much work was actually done. Additionally, you'd want to use your application-level metrics that give you an indication of how much application work got done (I used Oracle's buffer gets and rows processed metrics for this).

Looks like there’s at least 2 more parts left in this series (advanced metrics and a summary), but let’s see how it goes. Sorry for any typos, it’s getting quite late and I’ll fix ’em some other day :)

 

NB! If you want to move to the "New World" - offload your data and workloads to Hadoop, without having to re-write your existing applications - check out Gluent. We are making history! ;-)

Table Definitions in Oracle #GoldenGate #12c Trail Files

DBASolved - Sat, 2015-11-28 09:07

Oracle GoldenGate 12c (12.2.0.1.0) has changed the information that is stored in the trail files. All the standard information is still there. What Oracle changed has to do with the metadata that is used to define a table.

Note: If you want to understand how to use logdump and general trail information, look here.

Prior to 12.2.0.1.0 release of Oracle GoldenGate, if the column order of tables between source and target were different, you needed to generate a “definition” file using the “defgen” utility located in $OGG_HOME. This file allowed you to specify either a source or target definitions file which could be used to map the order of columns correctly. This was a nice tool when needed.

In 12.2.0.1.0, Oracle took this concept a little bit further. Instead of using a definitions file to do the mapping between source and target tables, Oracle has started to provide this information in the trail files. Review the image below, and you will see the table definition for SOE.ORDERS, which I use in my test environment.

Notice at the top, the general header information is still available for view. Directly under that, you will see a line that has the word “metadata” in it. This is the start of the “metadata” section. Below this is the name of the table and a series of numbered categories (keep these in mind). Then below this is the definition of the table, with columns and the length of the record.

A second ago, I mentioned the “numbered categories”. The categories correspond to the information defined to the right of the columns defined for the table. When comparing the table/columns between the database and the trail file, a few things stand out.

In column 2 (Data Types), the following numbers correspond to this information:

134 = NUMBER
192 = TIMESTAMP (6) WITH LOCAL TIME ZONE
64 = VARCHAR2

In column 3 (External Length) is the size of the data type:

13 = NUMBER(12,0) + 1
29 = Length of TIMESTAMP (6) WITH LOCAL TIME ZONE
8 = VARCHAR2 length of 8
15 = VARCHAR2 length of 15
30 = VARCHAR2 length of 30

There is more information that stands out, but I’ll leave a little bit for you to decode. Below is the table structure that is currently mapped to the example given so far.
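
As a purely illustrative sketch (hypothetical column names - this is not the actual SOE.ORDERS definition), a table using the data types behind those codes would look something like this:

CREATE TABLE orders_example (
  order_id     NUMBER(12,0),                       -- data type code 134, external length 13 (12 digits + 1)
  order_date   TIMESTAMP(6) WITH LOCAL TIME ZONE,  -- data type code 192, external length 29
  order_mode   VARCHAR2(8),                        -- data type code 64, external length 8
  order_status VARCHAR2(15),                       -- data type code 64, external length 15
  cust_class   VARCHAR2(30)                        -- data type code 64, external length 30
);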

Now, you may be wondering, how do you get this information to come up in the logdump interface? Oracle has provided a logdump command that is used to display/find metadata information. This command is:

SCANFORMETADATA (SFMD)

There are a few options that can be passed to this command to gather specific information. These options are:

DDR | TDR
NEXT | INDEX

If you issue:

SCANFORMETADATA DDR

You will get information related to Data Definition Records (DDR) of the table. The information this provides includes the following output:

If you issue:

SCANFORMETADATA TDR

You will get information related to the Table Definition Record (TDR) of the table. The information provided includes the output already discussed earlier.

As you can tell, Oracle now provides a lot of the information that traditionally lived in definitions files for mapping tables directly in the trail files. This will make mapping data between systems a bit easier and architectures less complicated.

Enjoy!

about.me: http://about.me/dbasolved


Filed under: Golden Gate
Categories: DBA Blogs

Learn About Hyperion & Oracle BI... 5 Minutes at a Time

Look Smarter Than You Are - Fri, 2015-11-27 14:13
Since early 2015, we've been trying to figure out how to help educate more people around the world on Oracle BI and Oracle EPM. Back in 2006, interRel launched a webcast series that started out once every two weeks and then rapidly progressed to 2-3 times per week. We presented over 125 webcasts last year to 5,000+ people from our customers, prospective customers, Oracle employees, and our competitors.

In 2007, we launched our first book and in the last 8 years, we've released over 10 books on Essbase, Planning, Smart View, Essbase Studio, and more. (We even wrote a few books we didn't get to publish on Financial Reporting and the dearly departed Web Analysis.) In 2009, we started doing free day-long, multi-track conferences across North America and participating in OTN tours around the world. We've also been trying to speak at as many user groups and conferences as we can possibly fit in. Side note, if you haven't signed up for Kscope16 yet, it's the greatest conference ever: go to kscope16.com and register (make sure you use code IRC at registration to take $100 off each person's costs).

We've been trying to innovate our education offerings since then to make sure there were as many happy Hyperion, OBIEE, and Essbase customers around the world as possible. Since we started webcasts, books, and free training days, others have started doing them too which is awesome in that it shares the Oracle Business Analytics message with even more people.

The problem is that the time we have for learning and the way we learn have changed. We can no longer take the time to sit and read an entire book. We can't schedule an hour a week at a specific time to watch an hour-long webcast when we might only be interested in a few minutes of the content. We can't always take days out of our lives to attend conferences, no matter how good they are.  So in June 2015 at Kscope15, we launched the next evolution in training (epm.bi/videos):


#PlayItForward is our attempt to make it easier for people to learn by turning training into a series of free videos.  Each one focuses on a single topic. Here's one I did that attempts to explain What Is Big Data? in under 12 minutes:

As you can see from the video, the goal is to teach you a specific topic with marketing kept to an absolute minimum (notice that there's not a single slide in there explaining what interRel is). We figure if we remove the marketing, people will not only be more likely to watch the videos but share them as well (competitors: please feel free to watch, learn, and share too). We wanted to get to the point and not teach multiple things in each video.

Various people from interRel have recorded videos in several different categories including What's New (new features in the new versions of various products), What Is? (introductions to various products), Tips & Tricks, deep-dive series (topics that take a few videos to cover completely), random things we think are interesting, and my personal pet project, the Essbase Technical Reference.
Essbase Technical Reference on VideoYes, I'm trying to convert the Essbase Technical Reference into current, easy-to-use videos. This is a labor of love (there are hundreds of videos to be made on just Essbase calc functions alone) and I needed to start somewhere. For the most part, I'm focusing on Essbase Calc Script functions and commands first, because that's where I get the most questions (and where some of the examples in the TechRef are especially horrendous). I've done a few Essbase.CFG settings that are relevant to calculations and a few others I just find interesting.  I'm not the only one at interRel doing them, because if we waited for me to finish, well, we'd never finish. The good news is that there are lots of people at interRel who learned things and want to pass them on.

I started by doing the big ones (like CALC DIM and AGG) but then decided to tackle a specific function category: the @IS... boolean functions. I have one more of those to go and then I'm not sure what I'm tackling next. For the full ever-increasing list, go to http://bit.ly/EssTechRef, but here's the list as of this posting: 
To see all the videos we have at the moment, go to epm.bi/videos. I'm looking for advice on which TechRef videos I should record next. I'm trying to do a lot more calculation functions and Essbase.CFG settings before I move on to things like MDX functions and MaxL commands, but others may take up that mantle. If you have functions you'd like to see a video on, head over to epm.bi/videos, click on the discussion tab, and make a suggestion or two. If you like the videos and find them helpful (or you have suggestions on how to make them more helpful), please feel free to comment too.

I think I'm going to go start working on my video on FIXPARALLEL.
Categories: BI & Warehousing

What's new in Forms 12c, part 2

Gerd Volberg - Thu, 2015-11-26 02:19
Let's now look into the Form Builder and what changes we got there.

First we see the facelift in the Open-Dialog. It's now the typical Windows-Dialog, where you have much more flexibility.


New features in the Convert-Dialog:

  • New Feature "Converting to and from XML"
  • Checkbox "Overwrite"
  • Checkbox "Keep window open", if you have to convert many files at once.


Preferences-Dialog, Tab "General":

  • The checkbox "Hide PL/SQL Compile Dialog" is new
  • Web Browser location (Forms 11g: Tab "Runtime")
  • Compiler Output is new

Tab "Subclass": No changes


Tab "Wizards": No changes


 Tab "Runtime":
  • The checkbox "Show URL Parameters" is new
  • Application Server URL is much bigger!
  • The Web Browser location vanished to Tab "General"



Have fun
Gerd

You Can Get Cialis at a Urology Clinic in Toyama

Marian Crkon - Wed, 2015-11-25 20:30

You can get Cialis at a urology clinic in Toyama. Urology clinics in Toyama offer three ED treatment drugs approved by the national government. One of the three is a drug called Cialis, and when a consultation determines that it is needed, you can […]

The post You Can Get Cialis at a Urology Clinic in Toyama appeared first on シアリス通販情報 (Cialis mail-order information).

You Can Get Cialis at a Urology Clinic in Toyama

The Feature - Wed, 2015-11-25 20:30

You can get Cialis at a urology clinic in Toyama. Urology clinics in Toyama offer three ED treatment drugs approved by the national government. One of the three is a drug called Cialis; if a consultation determines that you need it, you can purchase Cialis on the spot. Because the urology clinics in Toyama stock genuine ED medication, you can consult them whenever you like, even without bringing your health insurance card.
At present, going to a clinic like this is the most common way to obtain ED medication. To begin with, you need to know that attitudes toward medicines differ completely between Japan and other countries. Fundamentally, Japan screens drugs made overseas extremely strictly. Even when a drug is approved for use in specific hospitals, it is not unusual for permission for the general market to still be withheld.
Around the world, drugs that are already available on the general market are called generic drugs. Generic drugs deliver exactly the same effect as the drugs prescribed in hospitals, so the versions sold on the market are very popular. On overseas markets, however, counterfeits are in circulation, particularly among ED medications. Preventing such counterfeits from entering the country as generics is also Japan's responsibility, which is why careful screening of drugs is so important.
Cialis is the best of the ED medications. Not only does it last longer, its side effects are also far milder than those of other drugs, making it the most popular drug in the world. For that reason, a very large share of counterfeits are of Cialis. If you have it prescribed at a hospital, you can be certain of getting genuine Cialis, which is reassuring.

The post You Can Get Cialis at a Urology Clinic in Toyama appeared first on シアリス通販情報 (Cialis mail-order information).

Categories: APPS Blogs

UX Empathy and the Art of Storytelling

Usable Apps - Wed, 2015-11-25 13:14

At this year’s Web Summit in Dublin, Ireland, I had the opportunity to observe thousands of attendees. They came from 135 different countries and represented different generations.

Despite these enormous differences, they came together and communicated.

But how? With all of the hype about how different communication styles are among the Baby Boomers, Gen Xers, Millennials, and Generation Zers, I expected to see lots of small groupings of attendees based on generation. And I thought that session audiences would mimic this, too. But I could not have been more wrong.

How exactly, then, did speakers, panelists, and interviewers keep the attention of attendees in the 50+ crowd, the 40+ crowd, and the 20+ crowd while they sat in the same room?

The answer is far simpler than I could have imagined: Authenticity. They kept their messages simple, specific, honest, and in context of the audience and the medium in which they were delivering them.


Web Summit: Estée Lalonde (@EsteeLalonde) in conversation at the Fashion Summit session "Height, shoe size and Instagram followers please?"

Simplicity in messaging was key across Web Summit sessions: Each session was limited to 20 minutes, no matter whether the stage was occupied by one speaker or a panel of interviewees. For this to be successful, those onstage needed to understand their brands as well as the audience and what they were there to hear.

Attention spans are shortening, so it's increasingly critical to deliver an honest, authentic, personally engaging story. Runar Reistrup of Depop said it well at the Web Summit:


Web Summit: Runar Reistrup (@runarreistrup) in conversation during the Fashion Summit session "A branding lesson from the fashion industry"

While lots of research, thought, and hard work goes into designing and building products, today’s brand awareness is built with social media. Users need to understand the story you’re telling but not be overwhelmed by contrived messaging.

People want to connect with stories and learn key messages through those stories. Storytelling is the important challenge of our age. And how we use each social medium to tell a story is equally important. Storytelling across mediums is not a one-size-fits-all experience; each medium deserves a unique messaging style. As Mark Little (@marklittlenews), founder of Storyful, makes a point of saying, "This is the golden age of storytelling."

The Oracle Applications User Experience team recognizes the significance of storytelling and the importance of communicating the personality of our brand. We take time to nurture connections and relationships with those who use our applications, which enables us to empathize with our users in authentic ways.


Web Summit: Áine Kerr (@AineKerr) talking about the art of storytelling

The Oracle simplified user interface is designed with consideration of our brand and the real people—like you—who use our applications. We want you to be as comfortable using our applications as you are having a conversation in your living room. We build intuitive applications that are based on real-world stories—yours—and that solve real-world challenges that help make your work easier.

We experiment quite a bit, and we purposefully “think as if there is no box.” (Maria Hatzistefanis, Rodial)


Web Summit: Maria Hatzistefanis (@MrsRodial) in conversation during the Fashion Summit session "Communication with your customer in the digital age"

We strive to find that authentic connection between the simplified user interface design and the user. We use context and content (words) to help shape and inform what message we promote on each user interface page. We choose the words we use, as well as the tone, carefully because we recognize the significance of messaging, whether the message is a two-word field label or a tweet. And we test, modify, and retest our designs with real users before we build applications to ensure that the designs respond to you and your needs.

If you want to take advantage of our design approach and practices, download our simplified user experience design patterns eBook for free and design a user experience that mimics the one we deliver in the simplified user interface. And if you do, please let us know what you think at @usableapps.

Oracle Priority Support Infogram for 25-NOV-2015 1000th posting!

Oracle Infogram - Wed, 2015-11-25 11:44

This marks the 1000th post to the Infogram. I am awarding myself a low-carb lollipop.

Data Warehouse


Oracle VM

Oracle VM Performance and Tuning - Part 4, from Virtually All The Time.

Fusion

Changing Appearances: Give The Apps Your Corporate Look, from Fusion Applications Developer Relations.

SmartScan


DRM

Patch Set Update: Oracle Data Relationship Management 11.1.2.4.321, from Business Analytics - Proactive Support.

ACM

Oracle and Adaptive Case Management: Part 1 , from SOA & BPM Partner Community Blog.

NetBeans


EBS

From the Oracle E-Business Suite Support blog:


From the Oracle E-Business Suite Technology blog:



Why Data Virtualization Is so Vital

Kubilay Çilkara - Tue, 2015-11-24 16:35
In today’s day and age, it probably seems like every week you hear about a new form of software you absolutely have to have. However, as you’re about to find out, data virtualization is actually worth the hype.

The Old Ways of Doing Things

Traditionally, data management has been a cumbersome process, to say the least. Usually, it means data replication, data management or using intermediary connectors and servers to pull off point-to-point integration. Of course, in some situations, it’s a combination of the three.

Like we just said, though, these methods were never really ideal. Instead, they were just the only options given the complete lack of alternatives available. That’s the main reason you’re seeing these methods less and less. The moment something better came along, companies jumped all over them.
However, their diminishing utility can also be traced to three main factors. These would be:

  • High costs related to data movement
  • The astronomical growth in data (also referred to as Big Data)
  • Customers who expect real-time information
These three elements are probably fairly self-explanatory, but that last one is especially interesting to elaborate on. Customers these days really don’t understand why they can’t get all the information they want exactly when they want it. How could they possibly make sense of that when they can go online and get their hands on practically any data they could ever want thanks to the likes of Google? If you’re trying to explain to them that your company can’t do this, they’re most likely going to have a hard time believing you. Worse, they may believe you, but assume that this is a problem relative to your company and that some competitor won’t have this issue.

Introducing Data Virtualization

It was only a matter of time before this problem was eventually addressed. Obviously, when so many companies are struggling with this kind of challenge, there’s quite the incentive for another one to solve it.

That’s where data virtualization comes into play. Companies that are struggling with having critical information spread out across their entire enterprise in all kinds of formats and locations never have to worry about the hardships of having to get their hands on it. Instead, they can use virtualization platforms to search out what they need.

Flexible Searches for Better Results

It wouldn’t make much sense for this type of software to not have a certain amount of agility built in. After all, that’s sort of its main selling point. The whole reason companies invest in it is because it doesn’t get held back by issues with layout or formatting. Whatever you need, it can find.

Still, for best results, many now offer a single interface that can be used to separate and extract aggregates of data in all kinds of ways. The end result is a flexible search which can be leveraged toward all kinds of ends. It's no longer about being able to find any type of information you need, but finding it in the most efficient and productive way possible.

Keep Your Mainframe

One misconception that some companies have about data virtualization is that it will need certain adjustments to be made to your mainframe before it can truly be effective. This makes sense because, for many platforms, this is definitely the case. These are earlier versions, though, and some that just aren’t of the highest quality.

With really good versions, though, you can basically transform your company’s mainframe into a virtualization platform. Such an approach isn’t just cost-effective. It also makes sure you aren’t wasting resources, including time, addressing the shortcomings of your current mainframe, something no company wants to do.

Don’t get turned off from taking a virtualization approach to your cache of data because you’re imagining a long list of chores that will be necessary for transforming your mainframe. Instead, just be sure you invest in a high-end version that will actually transform your current version into something much better.

A Better Approach to Your Current Mainframe

Let’s look at some further benefits that come from taking this approach. First, if the program you choose comes with the use of a high-performance server, you’ll immediately eliminate the redundancy of integrating from point-to-point. This will definitely give you better performance in terms of manageability. Plus, if you ever want to scale up, this will make it much easier to do so.

Proper data migration is key to a good virtualization process. If it is done right, the end user won't have to worry about corrupted data, and communication between machines will be crystal clear.
If you divert processing-intensive data mapping and transformation processes away from the General Purpose Processor of your mainframe to the zIIP specialty engine, you'll get to dramatically reduce your MIPS capacity usage and, therefore, also reduce your company's TCO (Total Cost of Ownership).

Lastly, maybe you'd like to exploit every last piece of value you derive from your mainframe data. If so, good virtualization software will not only make this possible, but do so in a way that lets you turn all of your non-relational mainframe data into relational formats that any business analytics or intelligence application can use.

Key Features to Look for in Your Virtualization Platform

If you’re now sold on the idea of investing in a virtualization platform, the next step is getting smart about what to look for. As you can imagine, you won’t have trouble finding a program to buy, but you want to make sure it’s actually going to be worth every penny.

The first would be, simply, the number of data providers available. You want to be able to address everything from big data to machine data to syslogs, distributed and mainframe. Obviously, this will depend a lot on your current needs, but think about the future too.

Then, there's the same to think about in terms of data consumers. We're talking about the cloud, analytics, business intelligence and, of course, the web. Making sure you will be able to stay current for some time is very important. Technology changes quickly, and the better your virtualization process is, the longer you'll have before having to upgrade. Look closely at the migration process, and whether or not the provider can utilize your IT team to improve workflow. This will help your company get back on track more quickly and with better results.

Finally, don’t forget to look at costs, especially where scalability is concerned. If you have plans of getting bigger in the future, you don’t want it to take a burdensome investment to do so.
As you can see, virtualization platforms definitely live up to the hype. You just have to be sure you spend your money on the right kind.

Mike Miranda writes about enterprise software and covers products offered by software companies like Rocket Software, on topics such as terminal emulation, enterprise mobility and more.
Categories: DBA Blogs

Comment on The Collection in The Collection by lotharflatz

Oracle Riddle Blog - Mon, 2015-11-23 21:32

Hi Bryn,
thanks for replying. You are raising an important point here by suggesting to do the join in SQL rather than in PL/SQL. However, as I wrote, I wanted two loops rather than one. In your solution you are replacing the other loop with an IF checking for the change of the department number. I was aiming for the employees nested as a collection in the departments. Well, and you don't need to bother to limit the bulk collect (you can, if you like).
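
For illustration, the kind of nesting I mean - just a sketch using the built-in SYS.ODCIVARCHAR2LIST collection type, which can then be bulk collected in PL/SQL:

SELECT d.Deptno,
       d.Dname,
       CAST(MULTISET(SELECT e.Ename
                     FROM   Emp e
                     WHERE  e.Deptno = d.Deptno) AS SYS.ODCIVARCHAR2LIST) AS Enames
FROM   Dept d
ORDER  BY d.Deptno;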

Like

Categories: DBA Blogs

The Times They Are A-Changin'

Floyd Teter - Mon, 2015-11-23 19:36
Come gather 'round people
Wherever you roam
And admit that the waters
Around you have grown
And accept it that soon
You'll be drenched to the bone
If your time to you
Is worth savin'
Then you better start swimmin'
Or you'll sink like a stone
For the times they are a-changin'.

                     -- From Bob Dylan's "The Times They Are A-Changin'"


Spent some time with more really smart folks at Oracle last week.  Yeah, these people are really smart...I'm still wondering how they let me in the door.

During that time, I probably had three separate conversations with different people on how SaaS changes the consulting model.  Frankly, implementation is no longer a big money maker in the SaaS game.  The combination of reducing the technology overhead, reducing customizations, and a sharp focus on customer success is killing the IT consulting goose that lays the golden eggs:  application implementation.  You can see indications of it just in the average cycle times between subscription and go-live:  they're down to about 4.5 months and still on a down trend.  Bringing SaaS customers up in less than 30 days is something Oracle can see on the near horizon.  Unfortunately, as the cycle times for SaaS implementations shorten, it gets more difficult for an implementation partner to generate significant revenues and margins.  The entire model is built around 12-to-24 month implementations - SaaS makes those time frames a thing of the past.

So, if I were a SaaS implementation partner today, what would I do?  Frankly, I'd be switching to a relationship - retainer approach with my customers (not my idea...all those smart people I mentioned suggested it).  I'd dedicate teams that would implement SaaS, extend SaaS functionality, test new upgrades prior to rollout, and maintain your SaaS apps.  I'd build a relationship with those customers rather than simply attempt to sell implementation services.  The value to customers?  Your workforce focuses on the business rather than the software.  You need new reports or business intelligence?  Covered in our agreement.  Test this new release before we upgrade our production instance?  Covered in our agreement.  Some new fields on a user page or an add-on business process?  Covered in our agreement.  Something not working?  Let my team deal with Oracle Support...covered in our agreement.

Other ideas?  The comments await.

The times they are a-changin'...quickly.  Better start swimmin'.


Comment on The Collection in The Collection by Bryn

Oracle Riddle Blog - Mon, 2015-11-23 19:07

I had to tidy up the example to impose proper style (like adding “order by”) and to make it do something:

declare
  cursor c1 is
    select d.Deptno, d.Dname from Dept d order by d.Deptno;
  cursor c2 (Deptno Dept.Deptno%type) is
    select e.Empno, e.Ename from Emp e where e.Deptno = c2.Deptno order by e.Empno;
begin
  for r1 in c1 loop
    DBMS_Output.Put_Line(Chr(10)||r1.Deptno||' '||r1.Dname);
    for r2 in c2(r1.Deptno) loop
      DBMS_Output.Put_Line(' '||r2.Empno||' '||r2.Ename);
    end loop;
  end loop;
end;

It’s performing a left outer join using nested loops programmed in PL/SQL. Here is the output:

10 ACCOUNTING
7782 CLARK
7839 KING
7934 MILLER

20 RESEARCH
7369 SMITH
7566 JONES
7788 SCOTT
7876 ADAMS
7902 FORD

30 SALES
7499 ALLEN
7521 WARD
7654 MARTIN
7698 BLAKE
7844 TURNER
7900 JAMES

40 OPERATIONS

Here’s the SQL:

select Deptno, d.Dname, e.Empno, e.Ename
from Dept d left outer join Emp e using (Deptno)
order by Deptno, e.Empno

Programming a join in PL/SQL is one of the famous crimes of procedural guys who are new to SQL.

We can simply bulk collect this — using the “limit” clause if called for.
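
For completeness, here is a minimal sketch of the batched variant with the limit clause (the batch size of 100 is arbitrary):

declare
  cursor c is
    select Deptno, d.Dname, e.Empno, e.Ename
    from Dept d left outer join Emp e using (Deptno)
    order by Deptno, e.Empno;
  type Batch_t is table of c%rowtype index by pls_integer;
  Batch Batch_t;
begin
  open c;
  loop
    fetch c bulk collect into Batch limit 100;
    exit when Batch.Count = 0;
    for j in 1..Batch.Count loop
      null; -- process Batch(j) here
    end loop;
  end loop;
  close c;
end;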

I have to assume that the “complex logic” does something for each row in the driving master Dept loop and then, within that, something for each child Emp row within each master. This is like the SQL*Plus “break” report of old. So is the question actually “How to program ‘break’ logic?”

Here you are:

declare
  type Row_t is record(
    Deptno Dept.Deptno%type not null := -1,
    Dname  Dept.Dname%type,
    Empno  Emp.Empno%type,
    Ename  Emp.Ename%type);
  type Rows_t is table of Row_t index by pls_integer;
  Rows Rows_t;
  Prev_Deptno Dept.Deptno%type not null := -1;
begin
  select Deptno, d.Dname, e.Empno, e.Ename
  bulk collect into Rows
  from Dept d left outer join Emp e using (Deptno)
  order by Deptno, e.Empno;

  for j in 1..Rows.Count loop
    if Rows(j).Deptno <> Prev_Deptno then
      DBMS_Output.Put_Line(Chr(10)||Rows(j).Deptno||' '||Rows(j).Dname);
      Prev_Deptno := Rows(j).Deptno;
    end if;
    if Rows(j).Empno is null then
      DBMS_Output.Put_Line(' No employees');
    else
      DBMS_Output.Put_Line(' '||Rows(j).Empno||' '||Rows(j).Ename);
    end if;
  end loop;
end;

Here is the output:

10 ACCOUNTING
7782 CLARK
7839 KING
7934 MILLER

20 RESEARCH
7369 SMITH
7566 JONES
7788 SCOTT
7876 ADAMS
7902 FORD

30 SALES
7499 ALLEN
7521 WARD
7654 MARTIN
7698 BLAKE
7844 TURNER
7900 JAMES

40 OPERATIONS
No employees

Now tell me what in your question I’m failing to grasp.

Like

Categories: DBA Blogs

I Had Low Expectations for the APEX Gaming Competition 2015 from ODTUG. Wow, Was I Ever Wrong!

Joel Kallman - Mon, 2015-11-23 16:27


When the APEX Gaming Competition 2015 was announced by Vincent Morneau from Insum at the ODTUG Kscope15 conference this past year, I was very skeptical.  I've seen many contests over the years that always seemed to have very few participants, and only one or two people would really put forth effort.  Why would this "Gaming Competition" be any different?  Who has time for games, right?  Well...I could not have been more wrong.

I was given the honor of being a judge for the APEX Gaming Competition 2015, and not only did I get to see the front-end of these games, I also was able to see them behind the scenes as well - how they were constructed, how much of the database they used, how much of declarative APEX did they use, etc.  I was completely blown away by the creativity and quality of these games.  There were 15 games submitted, in all, and as I explained to the other judges, it was clear that people really put their heart and soul into these games.  I saw excellent programming practices, extraordinarily smart use of APEX, SQL and PL/SQL, and an unbelievable amount of creativity and inventiveness.

I hated having to pick "winners", because these were all simply a magnificent collection of modern Web development and Oracle Database programming.  If you haven't seen the actual games and the code behind them, I encourage you to take a look at any one of them.

I truly don't know how these people found the time to work on these games.  It takes time and effort to produce such high quality.  These are people who have day jobs and families and responsibilities and no time.  In an effort to simply acknowledge and offer our praise to these contributors, I'd like to list them all here (sorted by last name descending, just to be different):

Scott Wesley
Maxime Tremblay
Douglas Rofes
Anderson Rodrigues
Pavel
Matt Mulvaney
Jari Laine
Daniel Hochleitner
Marc Hassan
Nihad Hasković
Lev Erusalimskiy
Gabriel Dragoi
Nick Buytaert
Marcelo Burgos

Thanks to each of you for being such a great champion for the global #orclapex community.  You're all proud members of the #LetsWreckThisTogether club!

P.S.  Thanks to ODTUG for sponsoring this event and Vincent Morneau for orchestrating the whole contest.

Join OTN Virtual Technology Summit Replay Groups Today!

OTN TechBlog - Mon, 2015-11-23 10:31

Join one of the  OTN Virtual Technology Summit Replay Libraries to view video tech sessions produced by Oracle ACEs and Oracle Product Team Experts. These videos present technical insight and expertise through highly technical presentations and demos created to help you master the skills you need to meet today’s IT challenges.


Group membership not only gives you access to the library of VTS session videos, but also provides the means to engage directly with session presenters through online discussion, for answers to your questions about any of the material in the presentations.

What are you waiting for? Join one of the groups below today!

Doughnut Chart - a Yummy Addition to Oracle ADF Faces

Shay Shmeltzer - Mon, 2015-11-23 02:16

Another new feature in Oracle ADF 12.2.1 is the new Doughnut Chart capability.

It looks like this:

When I first tried to create this one, I couldn't find the option for doughnut chart in the JDeveloper wizard.

Then I was told that a doughnut is just a pie with a hole in the center - so you actually just need to create a pie chart, and then specify some properties.

And indeed, if you'll look at the property inspector for pie charts you'll see a few new properties you can leverage. 

For example there is the innerRadius property - it expects a value between 0 and 1 - which controls how big the hole in your doughnut is.

Another nice capability is that you can put some filling in your doughnut - basically some text that will go in the empty middle area. You do this by using the centerLabel property. In the example above I used the center of the doughnut to report the total salary of a department - using a Groovy sum expression in the Departments ViewObject - learn how here.

(Don't forget to use the centerLabelStyle property to assign it a bigger font - a best practice from the Oracle Alta UI book).

Here is the code from the JSF page:

<dvt:pieChart selectionListener="#{bindings.EmployeesView4.collectionModel.makeCurrent}"
   dataSelection="single" id="pieChart1" var="row"
   value="#{bindings.EmployeesView4.collectionModel}"
   centerLabel="Total Salary: #{bindings.SumSalary.inputValue}" innerRadius="0.7"
   centerLabelStyle="font-size:large;" title="Salary BreakDown" 
   sliceLabelPosition="inside">
         <dvt:chartLegend id="cl1" position="bottom"/>
         <dvt:pieDataItem id="di1" label="#{row.LastName}" value="#{row.Salary}"/>
</dvt:pieChart>

Try it out - it's a yummy new addition to Oracle's set of bakery based charts. 

Categories: Development

DOAG 2015 - "Oracle 12c Parallel Execution New Features" presentation material

Randolf Geist - Sun, 2015-11-22 13:16
Thanks to the many attendees that came to my presentation "Oracle 12c Parallel Execution New Features" at the DOAG conference 2015. You can download the presentation material here in PowerPoint or PDF format, as well as check the Slideshare upload.

Note that the PowerPoint format adds value in the sense that many of the slides come with additional explanations in the notes section.

If you are interested in more details I recommend visiting this post which links to many other posts describing the different new features in greater depth.
