Feed aggregator

ADF 11.1.1.9 Goodies - Conveyor Belt Component and Alta UI

Andrejus Baranovski - Sat, 2015-05-23 09:59
It doesn't seem to have been announced, but the newly released ADF 11.1.1.9 ships with Alta UI support. All you need to do is set the Alta skin name in the trinidad config file. This enables applications running on the ADF 11g platform to leverage the new Oracle UI layout and to be prepared for an ADF 12c upgrade. Besides Alta UI, there are several new UI components; one of them is the Scrollable Bar (ConveyorBelt), described in the documentation under Displaying Components in a Scrollable Bar. This is a simple but quite useful component - I'm going to demonstrate how it works.

Conveyor Belt is able to render elements horizontally or vertically. It is a perfect component for rendering key information: the user can select one of the items available on the belt and have more detail fetched. This is how it looks - a conveyor of employee pictures with first/last names, rendered in a vertical belt on the left:


If there are more items in the conveyor belt than can fit into the visible UI area, the user can scroll (right/left and up/down). Here I scroll down and select another employee, and his detail data is displayed in the main section:


As you can notice, the UI is displayed using the Alta skin. This is configured in the trinidad config file:
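For reference, a minimal trinidad-config.xml along these lines should do it; the exact skin-family and skin-version values below are my assumption and should be checked against the 11.1.1.9 skinning documentation:

<?xml version="1.0" encoding="UTF-8"?>
<trinidad-config xmlns="http://myfaces.apache.org/trinidad/config">
  <!-- switch the application to the Oracle Alta look and feel -->
  <skin-family>alta</skin-family>
  <skin-version>v1</skin-version>
</trinidad-config>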


Here is the structure for the implemented conveyor belt; items in the conveyor are stamped dynamically through an ADF UI iterator component. JavaScript client and server listeners are responsible for processing item selection and keeping the detail info in sync for the selected employee:


This is the source code for the conveyor belt UI implementation, based on the structure above:


Conveyor item selection is processed in the server listener method, where the VO current row is set based on the selected key:


If you want to try it yourself, download the sample application - ConveyorBeltApp.zip.

Another Take on Twilio Signal 2015

Oracle AppsLab - Fri, 2015-05-22 16:21

Editor’s note: Mark (@mvilrokx) and Raymond are at it again. Earlier in the week, they each provided a take on last weekend’s Bay Area Maker Faire, and this week, they both attended Twilio’s (@twilio) first developer conference, Signal. Mark’s take is here; now, it’s Raymond’s turn. Enjoy.

Twilio is no stranger to us at AppsLab. We have embedded Twilio Voice and SMS in applications such as Taleo Interview Evaluations, the IoT call at Maker Faire 2014, and the Daily Asteroid report. It is a simple yet powerful way to add some genuinely useful communication to interesting projects.

But I never imagined Twilio was so big - big enough to host a conference and draw thousands of enthusiastic attendees.

They have come a long way – at the conference, they announced a slew of new products, some of them rightfully timely and empowering. A couple of samples:

  • Twilio Authy, a perfect way of embedding two-factor authentication into your next awesome and secure application.
  • Twilio Conference (basic, global, epic) – lets you bypass carrier lock-down and the nasty roaming charges.
  • Twilio Video – a Twilio-flavored WebRTC that lets your application preserve context across communication through voice, text, and video. That’s contextual communication even when you jump from your application over to a communication channel.
  • Twilio IP Messaging – lets us embed advanced messaging in all types of mobile and web apps.

I think Twilio has played to its strengths and positioned itself really well. They strive to provide composable APIs as building blocks (just like Lego), to make it easy for developers to embed communication capabilities, and to keep things friction-free for users (no need to install anything).

In the current world, you pretty much have one app for each thing: one app for ordering pizza, one app for calling a taxi. Let Magic help you instead - it's a service built on Twilio where you just text your desire to a number, and “hopefully” your wish is fulfilled :) That’s what friction-free means!

Another use case is “Code for America”: users can text a number and get their card balance. Such a quick, easy way to look up information provides real “accessibility to information”.

And one more use case is the “American Red Cross” and disaster response, where they can form and coordinate ad-hoc groups of volunteers, even when the group is fluid.

In retrospect, our Taleo Interview Evaluation demo can be thought of as a very good use case for providing easy access to information and transactions.

With Twilio’s new releases and capabilities, I look forward to building new contextual enterprise applications for easy access and interaction.

Now here is a fun bit:

As usual, the conference gave every attendee a backpack, and this time with a twist: it has littleBits powering an 8×8 LED panel which can be attached to the backpack.

littleBit

And during the $Bash event, they had cloudBits to give out as prizes. Mark and I were determined to win a cloudBit so that we could extend the LED panel display on the backpack to be controlled remotely over the Internet! We found the most efficient way to win points, which was playing the pinball games. We worked together, and of course, we got what we aimed for.

cloudBit

And by the way, I became the champion of the night for the pinball games on the floor, by scoring over 430,000 points in one game.
That was a nice surprise to me too - apparently I have a talent for pinball!

Fluid UI Development Training Available

PeopleSoft Technology Blog - Fri, 2015-05-22 10:56
PeopleSoft's new Fluid user interface rocks. The response from customers has been enthusiastic, and many would like to learn more about developing with Fluid. Training is available from Oracle University, both in the classroom and as live virtual training. Note that Oracle University has scheduled classroom training in Belmont, California to correspond with our Open World conference at the end of October. If you are planning to attend Open World, you have a great opportunity to attend this valuable training as part of your trip.

Twilio Signal Conference 2015

Oracle AppsLab - Fri, 2015-05-22 08:40

Editor’s note: If you read here, you know we heart Twilio, especially Noel (@noelportugal). Remember the Rock ’em Sock ’em robot build?

This week, Twilio (@twilio) held its first Signal conference, and Raymond and I were there to see what’s new in the world of web-enabled communications and the like.

Signal-Twilio-Conference-640x265

For those of you not familiar with Twilio, here’s their spiel from their About page:

Twilio powers the future of business communications.  Enabling phones, VoIP, and messaging to be embedded into web, desktop, and mobile software.

For example, they provide REST APIs that can send and receive phone calls and text messages (SMS), allowing you, as a user of their services, to implement these extremely complex features in your applications - whether they are mobile, web or desktop apps - with very little effort. They provide many more capabilities and announced a bunch of new ones at the conference; see their website for the details.
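To give an idea of how little code that takes, here is a minimal sketch using the Twilio Python helper library of that era; the account SID, auth token and phone numbers are placeholders, not real values:

from twilio.rest import TwilioRestClient

# Placeholder credentials copied from the Twilio console (not real values)
account_sid = "ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
auth_token = "your_auth_token"

client = TwilioRestClient(account_sid, auth_token)

# Send an SMS from a Twilio-provisioned number to a mobile phone
message = client.messages.create(
    body="Hello from the AppsLab!",
    to="+15551234567",     # destination number (placeholder)
    from_="+15557654321")  # your Twilio number (placeholder)

print(message.sid)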

I had no idea that Twilio is as big as it is: there were 2,000 attendees at the conference and apparently, Twilio is the second largest provider of phone numbers in the US, right behind T-Mobile.

The conference started off with a pretty impressive magician’s act in which actual Twilio APIs were used - very original, I thought. It then proceeded with a bunch of keynotes, led by the CEO of Twilio, Jeff Lawson. He stressed the importance of services, comparing them to Lego blocks that, in the right hands, allow you to build anything by composing these services, just like you would do with Lego.

Among the lineup of key speakers was Werner Vogels, CTO of Amazon, who gave a history of how Amazon moved from a monolithic architecture to a more Service Oriented Architecture, then towards Micro Services and finally towards an architecture that now aggregates these services into useful components. They had to build an infrastructure to support these changes, which eventually led to what we now know as AWS - a very interesting talk.

One other interesting topic I remember from the opening presentations was Jeff Lawson mentioning that the next big evolution in communication will be for it to become context-aware. That is, rather than you having to enter your 17-digit account number on your phone and then having to identify yourself again and again to the agent that you get transferred to, with some weird question about the street you grew up in, this information should be available when a call gets made, leading to much better quality of service and a much higher throughput of calls.

The rest consisted of product announcements and partners explaining how they use Twilio in their business. We then attended a bunch of sessions, some more interesting than others; I’ll limit myself here to the more interesting ones.

4Y6A3701-640x265

Image from Twilio

I’m a huge fan of ngrok, so I was delighted to attend a session by the maker of this tool, Alan Shreve. It turns out that it was written in Go, and Alan gave a few examples of how this language made it easier to build these types of tools. He also mentioned that rewriting an existing tool in a new language is a great way to learn that new language, as you limit the scope and can focus purely on the language itself. He also stressed not to be discouraged if you discover that a tool already exists; competition is a good thing and it validates the business case.

Also very informative was a talk from Guillermo Rauch, the creator of socket.io, of which I am also a huge fan. The talk didn’t focus on socket.io itself, but on the challenges you will face when you start building realtime applications, something that socket.io allows you to do: conflict resolution, throughput, diffing etc.

Kate Heddleston gave a talk about one-click deploy for service-oriented architectures, a project she worked on that allows you to deploy (with one click) a fully operational environment, including load balancers, db servers etc., on Amazon EC2, using Docker. It seemed like an excellent alternative to the likes of Heroku, and I will definitely check this out in the near future to see if it could be leveraged somewhere for our work in the AppsLab.

Probably the most interesting talk of the whole conference, for me at least, was by Neil Mansilla from Runscope about API testing and debugging. He didn’t just give a sales pitch about Runscope but laid out a whole set of tools that you can use to test APIs, from Apache Benchmark to Charles and Wireshark. I am definitely going to check out Runscope!

What I took away most from this conference though is that APIs are the future: IT infrastructure is turning into APIs (AWS), electronics is turning into APIs (littleBits) and telecommunications is turning into APIs (Twilio, of course, but also switch). I am convinced that enterprise apps will also evolve in this direction, and enterprise APIs will enable developers to compose and integrate easily with other, non-enterprise APIs, allowing them to build new and exciting applications, just as developers started doing with telecommunications when Twilio appeared.

Connecting OBIEE 11.1.1.9 to Hive, HBase and Impala Tables for a DW-Offloading Project

Rittman Mead Consulting - Fri, 2015-05-22 07:28

In two previous posts this week I talked about a client request to offload part of their data warehouse to Hadoop, taking data from a source application and loading it into Hive tables on Hadoop for subsequent reporting by OBIEE 11g. In the first post I talked about hosting the offloaded data warehouse elements on Cloudera Hadoop CDH5.3, how I used Apache Hive and Apache HBase to support insert/update/delete activity on the fact and dimension tables, and how we’d copy the Hive-on-HBase fact table data into optimised Impala tables stored in Parquet files to make sure reports and dashboards ran fast.

NewImage

In the second post I got into the detail of how we’d keep the Hive-on-HBase tables up-to-date with new and changed data from the source system, using HiveQL bulk-inserts to load up the initial table data and a Python script to handle subsequent inserts, updates and deletes by working directly with the HBase Client and the HBase Thrift Server. Where this leaves us at the end then is with a set of fact and dimension tables stored as optimised Impala tables and updatable Hive-on-HBase tables, and our final step is to connect OBIEE11g to it and see how it works for reporting.

NewImage

As I mentioned in another post a week or so ago, the new 11.1.1.9 release of OBIEE 11g supports Cloudera Impala connections from Linux servers to Hadoop, with the Linux Impala drivers shipped by Oracle as part of the Linux download and the Windows ones, used for the Admin Tool workstation, downloadable directly from Cloudera. Once you’ve got all the drivers and OBIEE software set up, it’s then just a case of setting up the ODBC connections on the Windows and Linux environments, and you should then be in a position to connect it all up.
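On the Linux (BI Server) side, the Impala DSN ends up as an entry in odbc.ini along the lines of the sketch below; the driver path and exact key names are assumptions that depend on your Cloudera ODBC driver version, so check them against the driver documentation:

[ODBC Data Sources]
impala_bigdatalite = Cloudera ODBC Driver for Impala

[impala_bigdatalite]
# driver location is an assumption - use the path your driver install created
Driver   = /opt/cloudera/impalaodbc/lib/64/libclouderaimpalaodbc64.so
HOST     = bigdatalite.localdomain
PORT     = 21000
Database = default
# 0 = no authentication, matching a default unsecured cluster
AuthMech = 0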

NewImage

On the Impala side, I first need to create a copy of the Hive-on-HBase table I’ve been using to load the fact data into from the source system, after running the invalidate metadata command to refresh Impala’s view of the Hive metastore.

[oracle@bigdatalite ~]$ impala-shell
 
[bigdatalite.localdomain:21000] > invalidate metadata;
 
[bigdatalite.localdomain:21000] > create table impala_flight_delays
                                > stored as parquet
                                > as select * from hbase_flight_delays;

Next I import the Hive-on-HBase tables and the Impala table through the Impala ODBC connection – even though only one of the tables (the main fact table snapshot copy) was created using Impala, I still get the Impala speed benefit for the other three tables created in Hive (against the HBase source, no less). Once the table metadata is imported into the RPD physical layer, I can then create a business model and subject area as I would do normally, so my final RPD looks like this:

NewImage

Now it’s just a case of saving the repository online and creating some reports. If you’re using an older version of Impala you may need to disable the setting where a LIMIT clause is needed for every GROUP BY (see the docs for more details, but recent (CDH5+) versions will work fine without this). Something you’ll also need to do back in Impala is compute statistics for each of the tables, like this:

[bigdatalite.localdomain:21000] > compute stats default.impala_flight_delays;
Query: compute stats default.impala_flight_delays
+-----------------------------------------+
| summary                                 |
+-----------------------------------------+
| Updated 1 partition(s) and 8 column(s). |
+-----------------------------------------+
Fetched 1 row(s) in 2.73s
[bigdatalite.localdomain:21000] > show table stats impala_flight_delays;
Query: show table stats impala_flight_delays
+---------+--------+---------+--------------+---------+-------------------+
| #Rows   | #Files | Size    | Bytes Cached | Format  | Incremental stats |
+---------+--------+---------+--------------+---------+-------------------+
| 2514141 | 1      | 10.60MB | NOT CACHED   | PARQUET | false             |
+---------+--------+---------+--------------+---------+-------------------+
Fetched 1 row(s) in 0.01s


Apart from being generic “good practice” and giving the Impala query optimizer better information to form a query plan with, you might hit the error below in OBIEE if you don’t do this.

NewImage

If you do hit this error, go back to the Impala Shell or Hue and compute statistics, and it should go away next time. Then, finally, you can go and create some analyses and dashboards and you should find the queries run fine against the various tables in Hadoop, and moreover the response time is excellent if you use Impala as the main query engine.

NewImage

I did a fair bit of testing of OBIEE 11.1.1.9 running against Cloudera Impala, and my findings were that all of the main analysis features worked (prompts, hierarchies, totals and subtotals etc) and the response time was comparable with a well-tuned data warehouse, maybe even an Exalytics level of speed. If you take a look at the nqquery.log file for the Impala SQL queries OBIEE is sending to Impala, you can see they get fairly complex (which is good, as I didn’t hit any errors when running the dashboards) and you can also see where the BI Server takes a simpler approach to creating subtotals, nested queries etc compared to the GROUP BY … GROUPING SETS that you get when using a full Oracle database.

select D1.c1 as c1,
     D1.c2 as c2,
     D1.c3 as c3,
     D1.c4 as c4,
     D1.c5 as c5,
     D1.c6 as c6,
     D1.c7 as c7,
     D1.c8 as c8,
     D1.c9 as c9,
     D1.c10 as c10,
     D1.c11 as c11,
     D1.c12 as c12
from 
     (select 0 as c1,
               D1.c3 as c2,
               substring(cast(NULL as  STRING ), 1, 1 ) as c3,
               substring(cast(NULL as  STRING ), 1, 1 ) as c4,
               substring(cast(NULL as  STRING ), 1, 1 ) as c5,
               'All USA' as c6,
               substring(cast(NULL as  STRING ), 1, 1 ) as c7,
               1 as c8,
               substring(cast(NULL as  STRING ), 1, 1 ) as c9,
               substring(cast(NULL as  STRING ), 1, 1 ) as c10,
               D1.c2 as c11,
               D1.c1 as c12
          from 
               (select sum(T44037.late) as c1,
                         sum(T44037.flights) as c2,
                         T43925.carrier_desc as c3
                    from 
                         
                              hbase_carriers T43925 inner join 
                              impala_flight_delays T44037 On (T43925.key = T44037.carrier)
                    where  ( T43925.carrier_desc = 'American Airlines Inc.' or T43925.carrier_desc = 'Delta Air Lines Inc.' or T43925.carrier_desc = 'Southwest Airlines Co.' or T43925.carrier_desc = 'Spirit Air Lines' or T43925.carrier_desc = 'Virgin America' ) 
                    group by T43925.carrier_desc
               ) D1
          union all
          select 1 as c1,
               D1.c3 as c2,
               substring(cast(NULL as  STRING ), 1, 1 ) as c3,
               substring(cast(NULL as  STRING ), 1, 1 ) as c4,
               D1.c4 as c5,
               'All USA' as c6,
               substring(cast(NULL as  STRING ), 1, 1 ) as c7,
               1 as c8,
               substring(cast(NULL as  STRING ), 1, 1 ) as c9,
               D1.c4 as c10,
               D1.c2 as c11,
               D1.c1 as c12
          from 
               (select sum(T44037.late) as c1,
                         sum(T44037.flights) as c2,
                         T43925.carrier_desc as c3,
                         T43928.dest_state as c4
                    from 
                         
                              
                                   hbase_carriers T43925 inner join 
                                   impala_flight_delays T44037 On (T43925.key = T44037.carrier) inner join 
                              hbase_geog_dest T43928 On (T43928.key = T44037.dest)
                    where  ( T43925.carrier_desc = 'American Airlines Inc.' or T43925.carrier_desc = 'Delta Air Lines Inc.' or T43925.carrier_desc = 'Southwest Airlines Co.' or T43925.carrier_desc = 'Spirit Air Lines' or T43925.carrier_desc = 'Virgin America' ) 
                    group by T43925.carrier_desc, T43928.dest_state
               ) D1
          union all
          select 2 as c1,
               D1.c3 as c2,
               substring(cast(NULL as  STRING ), 1, 1 ) as c3,
               D1.c4 as c4,
               D1.c5 as c5,
               'All USA' as c6,
               substring(cast(NULL as  STRING ), 1, 1 ) as c7,
               1 as c8,
               D1.c4 as c9,
               D1.c5 as c10,
               D1.c2 as c11,
               D1.c1 as c12
          from 
               (select sum(T44037.late) as c1,
                         sum(T44037.flights) as c2,
                         T43925.carrier_desc as c3,
                         T43928.dest_city as c4,
                         T43928.dest_state as c5
                    from 
                         
                              
                                   hbase_carriers T43925 inner join 
                                   impala_flight_delays T44037 On (T43925.key = T44037.carrier) inner join 
                              hbase_geog_dest T43928 On (T43928.key = T44037.dest and T43928.dest_state = 'Georgia')
                    where  ( T43925.carrier_desc = 'American Airlines Inc.' or T43925.carrier_desc = 'Delta Air Lines Inc.' or T43925.carrier_desc = 'Southwest Airlines Co.' or T43925.carrier_desc = 'Spirit Air Lines' or T43925.carrier_desc = 'Virgin America' ) 
                    group by T43925.carrier_desc, T43928.dest_city, T43928.dest_state
               ) D1
          union all
          select 3 as c1,
               D1.c3 as c2,
               D1.c4 as c3,
               D1.c5 as c4,
               D1.c6 as c5,
               'All USA' as c6,
               D1.c4 as c7,
               1 as c8,
               D1.c5 as c9,
               D1.c6 as c10,
               D1.c2 as c11,
               D1.c1 as c12
          from 
               (select sum(T44037.late) as c1,
                         sum(T44037.flights) as c2,
                         T43925.carrier_desc as c3,
                         T43928.dest_airport_name as c4,
                         T43928.dest_city as c5,
                         T43928.dest_state as c6
                    from 
                         
                              
                                   hbase_carriers T43925 inner join 
                                   impala_flight_delays T44037 On (T43925.key = T44037.carrier) inner join 
                              hbase_geog_dest T43928 On (T43928.key = T44037.dest and T43928.dest_city = 'Atlanta, GA')
                    where  ( T43925.carrier_desc = 'American Airlines Inc.' or T43925.carrier_desc = 'Delta Air Lines Inc.' or T43925.carrier_desc = 'Southwest Airlines Co.' or T43925.carrier_desc = 'Spirit Air Lines' or T43925.carrier_desc = 'Virgin America' ) 
                    group by T43925.carrier_desc, T43928.dest_airport_name, T43928.dest_city, T43928.dest_state
               ) D1
     ) D1
order by c1, c6, c8, c5, c10, c4, c9, c3, c7, c2 limit 65001

Not bad though for a data warehouse offloaded entirely to Hadoop; it’s good to see such a system handling full updates and deletes to data as well as insert appends, and it’s also good to see OBIEE working against an Impala datasource with such good response times. If any of this interests you as a potential customer, feel free to drop me an email at mark.rittman@rittmanmead.com, or check out our Big Data Quickstart page on the website.

Categories: BI & Warehousing

Opportunities for Cloud based BI

Dylan's BI Notes - Thu, 2015-05-21 18:33
Three opportunities for the future of cloud-based BI: 1. Provide the tools for consolidating data. Investing in data matching and merging technologies is too costly for on-premise BI implementations; providing these services within the cloud BI will lower the cost. 2. Provide the external data. Bringing the external data into the individual on-premise env […]
Categories: BI & Warehousing

EM12c : Login to GUI with the correct password causes authentication failure

Pythian Group - Thu, 2015-05-21 16:47

So the other day I was trying to log in to my EM12c R4 environment with the SSA_ADMINISTRATOR user, and I got the error:

“Authentication failed. If problem persists, contact your system administrator”

I was quite sure that the password I had was correct, so I tried with the SYSMAN user and got the same error. I still wanted to verify that I had the correct password, so I used the SYSMAN user to log in to the repository database, and was successful, so I knew something else was wrong.


SQL> connect sysman/
Enter password:
Connected.

So I went to the <gc_inst>/em/EMGC_OMS1/sysman/log/emoms.log and saw the following error:


2015-05-18 21:22:06,103 [[ACTIVE] ExecuteThread: '15' for queue: 'weblogic.kernel.Default (self-tuning)'] ERROR audit.AuditManager auditLog.368 - Could not Log audit data, Error:java.sql.SQLException: ORA-14400: inserted partition key does not map to any partition
ORA-06512: at "SYSMAN.MGMT_AUDIT", line 492
ORA-06512: at "SYSMAN.MGMT_AUDIT", line 406
ORA-06512: at line 1

This led me to believe that JOB_QUEUE_PROCESSES was set to 0, but that wasn’t the case, since it was set to 50. That is still too low a value, though, so I bumped it up to 1000 and re-ran the EM12c repository DBMS Scheduler jobs as per the documentation in note 1498456.1:


SQL> show parameter JOB_QUEUE_PROCESSES

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
job_queue_processes integer 50
SQL> alter system set JOB_QUEUE_PROCESSES=1000 scope = both;

System altered.

SQL> show parameter both
SQL> show parameter job

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
job_queue_processes integer 1000
SQL> connect / as sysdba
Connected.
SQL> alter system set job_queue_processes = 0;

System altered.

SQL> connect sysman/alyarog1605
Connected.
SQL> exec emd_maintenance.remove_em_dbms_jobs;

PL/SQL procedure successfully completed.

SQL> exec gc_interval_partition_mgr.partition_maintenance;

PL/SQL procedure successfully completed.

SQL> @$OMS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_recompile_invalid.sql SYSMAN
old 11: AND owner = upper('&RECOMPILE_REPOS_USER')
new 11: AND owner = upper('SYSMAN')
old 26: dbms_utility.compile_schema(upper('&RECOMPILE_REPOS_USER'),FALSE);
new 26: dbms_utility.compile_schema(upper('SYSMAN'),FALSE);
old 41: WHERE owner = upper('&RECOMPILE_REPOS_USER')
new 41: WHERE owner = upper('SYSMAN')
old 84: AND owner = upper('&RECOMPILE_REPOS_USER')
new 84: AND owner = upper('SYSMAN')
old 104: AND ds.table_owner = upper('&RECOMPILE_REPOS_USER')
new 104: AND ds.table_owner = upper('SYSMAN')

PL/SQL procedure successfully completed.
PL/SQL procedure successfully completed.

SQL> connect / as sysdba
Connected.
SQL> alter system set job_queue_processes = 1000;

System altered.

SQL> connect sysman/
Enter password:
Connected.
SQL> exec emd_maintenance.submit_em_dbms_jobs;

PL/SQL procedure successfully completed.

SQL> commit;

Commit complete.

After this I bounced the OMS, but still kept getting the same error. And though it fixed the scheduler jobs, I was now seeing the following error in the emoms.log:


2015-05-18 22:29:09,573 [[ACTIVE] ExecuteThread: '15' for queue: 'weblogic.kernel.Default (self-tuning)'] WARN auth.EMRepLoginFilter doFilter.450 - InvalidEMUserException caught in EMRepLoginFilter: Failed to login using repository authentication for user: SSA_ADMIN
oracle.sysman.emSDK.sec.auth.InvalidEMUserException: Failed to login using repository authentication for user: SSA_ADMIN

So what I did was an update to the SYSMAN.MGMT_AUDIT_MASTER table and ran the procedure MGMT_AUDIT_ADMIN.ADD_AUDIT_PARTITION as was stated in document id 1493151.1:


oracle $ sqlplus

Enter user-name: sysman
Enter password:

SQL> update mgmt_audit_master set prepopulate_days=5 where prepopulate_days is null;

1 rows updated.

SQL> select count(1) from mgmt_audit_master where prepopulate_days is null;

COUNT(1)
----------
0

SQL> exec mgmt_audit_admin.add_audit_partition;

PL/SQL procedure successfully completed.

SQL> commit;

Commit complete.

Once I did this, I was able to login with all my EM12c administrators without any issues:


oracle@em12cr4.localdomain [emrep] /home/oracle
oracle $ emcli login -username=ssa_admin
Enter password

Login successful

Conclusion

Even though JOB_QUEUE_PROCESSES was not set to 0, it was still the cause of the failure, as its value was too low. So be careful when setting this parameter, and be sure to follow the latest installation guidelines.

Note– This was originally published on rene-ace.com

Categories: DBA Blogs

New Style Sheet Guide Posted for Working with the PeopleSoft Fluid User Experience

PeopleSoft Technology Blog - Thu, 2015-05-21 16:08

Customers and partners have asked us for guidelines and standards for developing with PeopleSoft's Fluid UI. To support that, we've just posted the PeopleSoft Fluid User Interface CSS Guide on the FLUID UI: PeopleSoft Fluid User Interface Documentation Updates page on My Oracle Support (Doc ID 1909955.1). This guide contains descriptions of CSS styles that PeopleSoft delivers. You'll find this information helpful for creating custom Fluid applications as well as for extending the CSS features currently delivered in your PeopleSoft applications. The document provides descriptions of nearly a thousand CSS styles delivered with PeopleTools 8.54. The styles are divided into these categories:

  • PeopleTools System Default CSS Styles: These styles are used to control basic elements of the fluid infrastructure provided by PeopleTools, such as the NavBar, the fluid banner, homepages, tiles and so on.
  • Application Content CSS Styles: These styles are used to control application content deployed in fluid mode.

Creating fluid applications relies heavily on CSS 3.0 for the look, feel, and layout of the runtime application. If you intend to customize or create any fluid application, expert knowledge of CSS is required. Prior to doing any fluid style development work, make sure you are familiar with the documentation provided in PeopleSoft PeopleTools 8.54: Fluid User Interface Developer’s Guide, “Adding Page Controls,” Applying Styles.

Refer to the FLUID UI: PeopleSoft Fluid User Interface Documentation Updates MOS page to receive important information about the PeopleSoft Fluid User Interface.

Loading, Updating and Deleting From HBase Tables using HiveQL and Python

Rittman Mead Consulting - Thu, 2015-05-21 14:32

Earlier in the week I blogged about a customer looking to offload part of the data warehouse platform to Hadoop, extracting data from a source system and then incrementally loading data into HBase and Hive before analysing it using OBIEE11g. One of the potential complications for this project was that the fact and dimension tables weren’t append-only; Hive and HDFS are generally considered write-once, read-many systems where data is inserted or appended into a file or table but generally then can’t be updated or overwritten without deleting the whole file and writing it again with the updated dataset.

To get around this problem we loaded our incoming data into HBase tables; HBase is a NoSQL key/value-store database that also runs on Hadoop and HDFS but permits update and delete operations on rows as well as selects and inserts. Later on we took the main fact table stored in Hive-on-HBase and copied its contents into Impala to considerably improve the response time of queries against this table and the still-Hive-on-HBase dimension tables. But going back to the insert-update-delete operations on the HBase tables, how exactly does this work and what’s the most efficient way to do it?

Taking a step back for a moment, HBase is a NoSQL, key/value-type database where each row has a key (for example, “SFO” for San Francisco airport) and then a number of columns, grouped into column families. In the Flight Delays dataset that we used in the previous blog post, an HBase table of origin airports might have a few thousand entries, with each entry, or row, keyed on a particular airport code like this:

NewImage

(Note that at the start, these key values won’t be there – they’re more for illustrative purposes)

At the time of HBase table definition, you specify one or more “column families”. These are group headers for columns you might add later, and in the case of my origin airport table I might just use the column family name “origin”, so that the HBase table DDL looks like this:

create 'geog_origin','origin'

and the conceptual view of the table would look like this:

NewImage

Now what’s neat about NoSQL-style databases like this (and Endeca Server is the same) is that you can define individual columns just by using them. For example, I could create columns for the airport name, airport city, airport state and airport code just by using their names in a data load, prefixing those column names with the name of a previously-defined column family. Using the HBase Shell, for example, I could issue the following PUT commands to insert the first row of data into this HBase table, like this:

put 'geog_origin','SFO','origin:airport_name','San Francisco, CA: San Francisco'
put 'geog_origin','SFO','origin:city','San Francisco, CA'
put 'geog_origin','SFO','origin:state','California'
put 'geog_origin','SFO','origin:id','14771'

Now my HBase table conceptually looks like this:

NewImage

If I then want to use another column under the “origin” column family for LAX, I can just do so by using it in the next set of PUT commands, like this:

put 'geog_origin','LAX','origin:airport_name','Los Angeles, CA: Los Angeles'
put 'geog_origin','LAX','origin:city','Los Angeles, CA'
put 'geog_origin','LAX','origin:state','California'
put 'geog_origin','LAX','origin:region','West Coast'
put 'geog_origin','LAX','origin:id','12892'

NewImage

Each column within column families has its values individually set, retrieved and deleted using PUT, GET and DELETE commands, and as long as you prefix the column name with one of the previously-defined column-family names and provide the key value for the row you’re interested in, HBase database tables are very flexible and were designed for simple product catalog-type applications running on hundreds of sharded server nodes for companies of the likes of Amazon, Google and Facebook (see this HBase “Powered-by” page for more examples of organizations using HBase).

But what HBase very much isn’t is a relational database like Oracle, Microsoft SQL Server or even Apache Hive, databases that we’re much more likely to store data warehouse-type data in. In the previous post I showed how Hive table structures can in fact be put over HBase tables, mapping HBase columns to Hive columns, and then HiveQL INSERT INTO TABLE … SELECT commands can be used to bulk-load these HBase tables with initial sets of data. So back to the original question – what’s the best way to then incrementally load and refresh these HBase tables, and can I still use HiveQL for this?

In my original post, I defined Hive tables over my HBase ones using the Hive-on-HBase (yum install hive-hbase) package and the associated Hive storage handler; for example, the Hive table that provided SQL access over the flight_delays HBase table was defined like this:

ADD JAR /usr/lib/hive/lib/zookeeper.jar;
ADD JAR /usr/lib/hive/lib/hive-hbase-handler.jar;
ADD JAR /usr/lib/hive/lib/guava-11.0.2.jar;
ADD JAR /usr/lib/hive/lib/hbase-client.jar;
ADD JAR /usr/lib/hive/lib/hbase-common.jar;
ADD JAR /usr/lib/hive/lib/hbase-hadoop-compat.jar;
ADD JAR /usr/lib/hive/lib/hbase-hadoop2-compat.jar;
ADD JAR /usr/lib/hive/lib/hbase-protocol.jar;
ADD JAR /usr/lib/hive/lib/hbase-server.jar;
ADD JAR /usr/lib/hive/lib/htrace-core.jar;

 
CREATE EXTERNAL TABLE hbase_flight_delays
 (key string,
  year string,
  carrier string,
  orig string,
  dest string,
  flights string,
  late   string,
  cancelled string,
  distance string
) 
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES 
("hbase.columns.mapping" = ":key,dims:year,dims:carrier,dims:orig,dims:dest,measures:flights,measures:late,measures:cancelled,measures:distance")
TBLPROPERTIES ("hbase.table.name" = "test1_flight_delays");

The underlying HBase table is defined with a key and two column families, one for dimension columns and one for fact (measure) columns – the key is a sequence number that I added to the source dataset to give each row a unique identifier.

create 'test1_flight_delays','dims','measures'

To initially populate the table, I’ve created another Hive table with the initial set of source data in it, and I just insert its values into the Hive-on-HBase table, like this:

insert into table hbase_flight_delays              
select * from flight_delays_initial_load;      
         
Total jobs = 1
...
Total MapReduce CPU Time Spent: 11 seconds 870 msec
OK
Time taken: 40.301 seconds

This initial load of 200,000 rows in this instance took 40 seconds to load; not bad, certainly acceptable for this particular project. Imagine now for every day after this we typically added another 500 or so flight records; in regular Hive this would be straightforward and we’d use the LOAD DATA or INSERT INTO TABLE … SELECT commands to add new file data to the Hive table’s underlying HDFS directories. And we can do this with the Hive-on-HBase table too, with the INSERT INTO TABLE command adding the incoming data to new rows/cells in the HBase table. Checking the row count and min/max ID values in the Hive-on-HBase table at the start, like this:

select count(*), min(cast(key as bigint)) as min_key, max(cast(key as bigint)) as max_key
from hbase_flight_delays;
 
Total jobs = 1
...
Total MapReduce CPU Time Spent: 14 seconds 660 msec
OK
200000  1  200000
Time taken: 53.076 seconds, Fetched: 1 row(s)

I can see that there’s 200,000 rows in the HBase table, starting at key value 1 and ending at key value 200,000. The table containing new data has key values going from 200,001 to 200,500, so let’s insert that new data into the Hive-on-HBase table:

insert into table hbase_flight_delays                                              
select * from flight_delays_daily_update_500_rows;     
                                 
Total jobs = 1
...
Total MapReduce CPU Time Spent: 3 seconds 870 msec
OK
Time taken: 26.368 seconds

Not bad – 26 seconds for the 500 rows, not quite as fast as the initial load but acceptable. Let’s also check that the data went in OK:

select count(*), min(cast(key as bigint)) as min_key, max(cast(key as bigint)) as max_key
from hbase_flight_delays;  
                                                      
Total jobs = 1
...
Total MapReduce CPU Time Spent: 13 seconds 580 msec
OK
200500   1   200500
Time taken: 44.552 seconds, Fetched: 1 row(s)

As I’d hoped, the number of rows has increased by 500 and the maximum key value is now 200,500. But how do we apply updates to the data in the table? I’ve got another source table that this time contains 1,000 randomly-selected rows from the initial data load dataset, where I’ve set the LATE column value to ‘999’:

hive> select * from flight_delays_daily_changes_1000_rows                                      
    > limit 5;
OK
21307  2008 WN BDL  BWI  1  999  1  283
136461  2008  OO  ORD  TYS  0  999  1  475
107768  2008  WN  BWI  ORF  0  999  1  159
102393  2008  OO  SLC  ACV  0  999  1  635
110639  2008  WN  BOI  OAK  0  999  1  511
Time taken: 0.139 seconds, Fetched: 5 row(s)

In fact the way you apply these updates is just to INSERT INTO TABLE … SELECT again, and the incoming values create new versions of existing rows/cells if needed. Some versions of HBase automatically keep a number of versions of each cell value (typically 3 versions); however, the version of HBase that comes with CDH5.2 and higher only keeps one version by default (you can increase this number per table, or system wide, using the steps in the CDH5.2 release notes - see the short aside after the next listing). Let’s try this out now, first using the HBase shell to see the values and timestamps currently held for one particular key value I know should be updated by the next dataset:

hbase(main):029:0> get 'test1_flight_delays', '102393'
COLUMN                                     CELL                                                                                                                       
 dims:carrier                              timestamp=1432236609421, value=OO                                                                                          
 dims:dest                                 timestamp=1432236609421, value=ACV                                                                                         
 dims:orig                                 timestamp=1432236609421, value=SLC                                                                                         
 dims:year                                 timestamp=1432236609421, value=2008                                                                                        
 measures:cancelled                        timestamp=1432236609421, value=1                                                                                           
 measures:distance                         timestamp=1432236609421, value=635                                                                                         
 measures:flights                          timestamp=1432236609421, value=0                                                                                           
 measures:late                             timestamp=1432236609421, value=0                                                                                           
8 row(s) in 0.0330 seconds
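As an aside, if you did want HBase to retain more than one version of each cell, so that the pre-update values shown above would still be queryable after the change, something along the following lines in the HBase shell should raise the limit per column family (a hedged sketch - the choice of 3 versions is arbitrary):

alter 'test1_flight_delays', {NAME => 'dims', VERSIONS => 3}
alter 'test1_flight_delays', {NAME => 'measures', VERSIONS => 3}
# older cell versions can then be requested explicitly, for example:
get 'test1_flight_delays', '102393', {COLUMN => 'measures:late', VERSIONS => 3}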

I’ll now use Hive to apply the updates, like this:

insert into table hbase_flight_delays                                              
select * from flight_delays_daily_changes_1000_rows;
 
Total jobs = 1
...
Total MapReduce CPU Time Spent: 4 seconds 340 msec
OK
Time taken: 24.805 seconds
 
select count(*), min(cast(key as bigint)) as min_key, max(cast(key as bigint)) as max_key
from hbase_flight_delays;                                                          
Total jobs = 1
...
Total MapReduce CPU Time Spent: 13 seconds 430 msec
OK
200500 1 200500
Time taken: 47.379 seconds, Fetched: 1 row(s)

Notice how this third INSERT didn’t create any new rows; the max key ID in the follow-up query hasn’t increased since the previous insert of new data. Querying one of the rows that I know was changed by this new table of data updates, I can see that the LATE column value has been changed:

select * from hbase_flight_delays where key = '102393';
Total jobs = 1
...
Total MapReduce CPU Time Spent: 3 seconds 600 msec
OK
102393  2008  OO  SLC  ACV  0  999  1  635

Let’s go into the HBase shell now and take a look at the column cells for that same key ID:

hbase(main):030:0> get 'test1_flight_delays', '102393'
COLUMN                                     CELL                                                                                                                       
 dims:carrier                              timestamp=1432236723680, value=OO                                                                                          
 dims:dest                                 timestamp=1432236723680, value=ACV                                                                                         
 dims:orig                                 timestamp=1432236723680, value=SLC                                                                                         
 dims:year                                 timestamp=1432236723680, value=2008                                                                                        
 measures:cancelled                        timestamp=1432236723680, value=1                                                                                           
 measures:distance                         timestamp=1432236723680, value=635                                                                                         
 measures:flights                          timestamp=1432236723680, value=0                                                                                           
 measures:late                             timestamp=1432236723680, value=999                                                                                         
8 row(s) in 0.0800 seconds

Notice how the timestamp for each of the cells has now updated? If I had more than the default 1 version of each cell enabled, I could query the previous versions to see the old values and timestamps. So this works pretty well, and all I need to do is use HiveQL and INSERT INTO TABLE … SELECT to initially populate, append to and even update values in the table. But what if I want to update HBase more “programmatically”, maybe as part of a process that reads directly from a source application (for example, Salesforce or a web service) and then writes directly into HBase without the intermediate step of landing the incoming data into a file? For this we can use the HBase Client API, for which there are libraries in many languages, the most popular being the Java API. If Java is too much though and you’d rather work in a language such as Python, as this Cloudera blog post explains you can talk to HBase through either a REST API interface or the Thrift interface.

In my case, my preferred way of programmatically working with HBase is to use Python and a developer library called Happybase, where I can also bring in other libraries, such as ones to work with Hive and even ones to work with OBIEE and Fusion Middleware, and do my work at a much higher level of abstraction. To show how this might work, I’m going to use Python, the HBase Client API and Happybase to programmatically read from my update Hive tables (in real life I’d probably connect directly to a web service if going down this more complicated route) and write a routine that reads rows from the Hive table and loads them into HBase.

Again I’m using the Oracle Big Data Lite 4.1 VM which has Python 2.7.6 already installed, and to get ready to install the Happybase library I first need to install pip, the “preferred installer program” for Python. As per the pip installation instructions, first download pip and then install it from the command-line:

sudo python get-pip.py

Then use pip to install Happybase:

sudo pip install happybase

Whilst you’re there you might as well install “pyhs2”, another Python package that in this case lets us easily connect to Hive tables via the HiveServer2 interface found on CDH5+ and the Big Data Lite 4.1 VM.

sudo pip install pyhs2

Now I can put together a Python program such as the one below, that in this case creates a connection to a Hive table, selects all rows from it into a cursor and then PUTs these rows into the HBase table, via a batch process that sends data to HBase via the Thrift interface every 10,000 rows:

import pyhs2
import happybase
 
connection = happybase.Connection('bigdatalite')
flight_delays_hbase_table = connection.table('test1_flight_delays')
b = flight_delays_hbase_table.batch(batch_size=10000)
 
with pyhs2.connect(host='bigdatalite',
               port=10000,
               authMechanism="PLAIN",
               user='oracle',
               password='welcome1',
               database='default') as conn:
    with conn.cursor() as cur:
 
        #Execute query
        cur.execute("select * from flight_delays_initial_load")
 
        #Fetch table results
        for i in cur.fetch():
            b.put(str(i[0]),{'dims:year': i[1],
                             'dims:carrier': i[2],
                             'dims:orig': i[3],
                             'dims:dest': i[4],
                             'measures:flights': i[5],
                             'measures:late': i[6],
                             'measures:cancelled': i[7],
                             'measures:distance': i[8]})
b.send()

which I can then run from the command-line like this:

[oracle@bigdatalite ~]$ python ./load_update_flight_delays.py

As I said, using this approach I could just as easily connect to a web service or read in data via Flume or Kafka, and I can delete rows as well as insert/update them and add any other logic (deletes are a one-liner - see the short sketch below). From my testing it’s not all that much faster than going via HiveQL and INSERT INTO TABLE … SELECT scripts (most probably because I’m still going into HBase indirectly, via the Thrift interface), but it does offer the possibility of direct inserts into HBase (and therefore Hive) from the source application without the intermediate step of writing files to disk.
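For completeness, this is what a delete looks like through the same Happybase connection; a minimal sketch, assuming the connection and table objects from the script above, with a hypothetical key value:

# remove every cell for the row with key '200500' (a made-up key value)
flight_delays_hbase_table.delete('200500')

# or remove just selected columns for that row
flight_delays_hbase_table.delete('200500', columns=['measures:late', 'measures:cancelled'])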

So to finish this short series, tomorrow I’ll look at how well these Hive-on-HBase tables, and the Impala table I created in the previous example, work when queried from OBIEE11g. Back tomorrow.

Categories: BI & Warehousing

Oracle Priority Support Infogram for 21-MAY-2015

Oracle Infogram - Thu, 2015-05-21 14:11

RDBMS
From DBA Kevlar: SQL ID Specific Performance Information.
PL/SQL
A tale of romantic notions and cold hard bonus cash: Dinner Courtesy of the Lady: My Most Successful PL/SQL Assignment, from Toolbox.com.
OEM
From the intriguingly named PETEWHODIDNOTTWEET.COM: INSTALLING ENTERPRISE MANAGER 12C FOR TESTING (also apparently the Pete whose shift key is stuck).
From the AMIS Oracle and Java Blog: Keeping track of your licenses with OEM12C – reports
Exalogic
Traffic Director 11.1.1.9 is now available, from Oracle Exalogic.
Data Warehousing
More on monitoring parallel  processing from the Data Warehouse Insider: PX In Memory, PX In Memory IMC?
Cloud Computing
DOCS and APEX integration through REST API, from Oracle Cloud for Developers.
OIM
Now Available! Oracle Identity Management 11gR2 PS3, from Identity Management.
GoldenGate
From Oracle DBA – Tips and Techniques: Oracle Goldengate on DBFS for RAC and Exadata
Oracle OpenStack
LIVE WEBINAR (May 28): How to Get Started with Oracle OpenStack for Oracle Linux, from the Oracle OpenStack Blog.
WebLogic
Creating and scaling Dynamic Clusters using wlst , from WebLogic Partner Community EMEA.
APEX
Differences between Oracle APEX 4.2 and 5.0, from get along with oracle and apex.
Solaris
From Oracle Solaris Crash Analysis Tool: Oracle Solaris Crash Analysis Tool 5.5 Release
EBS
From the Oracle E-Business Suite Support blog:
Receiving Errors When Validating Invoices? APP-PO-14144: PO_ACTIONS-065 CLOSE_AUTO
Webcast: Getting Started with Document Sequencing in Oracle Receivables
Webcast: A Diagnosis on OPM - ASCP Data Collection Entities
New and Improved Shipping Execution Analyzer!
The Procurement Accrual Accounting Analyzer has been improved!
Webcast: An Overview of Construction Estimates in R12 in EAM
From the Oracle E-Business Suite Technology blog:

Firefox ESR 38 Certified with EBS 12.2 and 11i

Fixing Super LOV in Universal Theme

Dimitri Gielis - Thu, 2015-05-21 14:08
When you migrate to APEX 5.0 and the Universal Theme you might see that some plugins are not behaving correctly anymore. In this post I'll discuss the Enkitec Modal LOV plugin.

When I ran the plugin in my APEX 5.0 app with the Universal Theme it looked like this:


There's too much space in the search bar and the close button is not showing up with an icon.

Here are the steps I took to fix it. First you need to download the css file of the style you use and the js file from the plugin in Shared Components. I use the smoothness.css style most of the time, so I'll use that as an example.

To fix the close icon, add !important to the png:

.ek-ml .ui-state-default .ui-icon {
background-image: url(ui-icons_888888_256x240.png) !important;
}

Note: you can do that for all those png references on lines 467 to 489.

To fix the height, add following css to smoothness.css:

.superlov-button-container {
  height:50px;
}

And finally in enkitec_modal_lov.min.js change the height of the searchContainer from a dynamic height (r) to 0px:
$searchContainer.css("height","0px")
Next upload those files again to the plugin.
When you run the plugin it should give you this result:

Now the bigger question is: do we still need that plugin? In APEX 5.0 there are native Modal Pages, so you could create an Interactive Report and set the page as a Modal Page. Next you can hook that up to a button or link and you've just built your own Modal LOV.
I still like to use the plugin at the moment (as it's just one item on the page), but it could use a refresh to make it look nicer and more in line with the Universal Theme.
I wonder what you think - would you build your own Modal LOV in APEX 5.0 or would you still prefer to use a plugin?
Categories: Development

Understanding SQL

Jonathan Lewis - Thu, 2015-05-21 11:12

From time to time someone publishes a query on the OTN database forum and asks how to make it go faster, and you look at it and think it’s a nice example to explain a couple of principles because it’s short, easy to understand, obvious what sort of things might be wrong, and easy to fix. Then, after you’ve made a couple of suggestions and explained a couple of ideas the provider simply fades into the distance and doesn’t tell you any more about the query, or whether they’ve taken advantage of your advice, or found some other way to address the problem.

Such a query, with its execution plan, appeared a couple of weeks ago:

UPDATE FACETS_CUSTOM.MMR_DTL
SET
	CAPITN_PRCS_IND = 2,
	FIL_RUN_DT = Current_fil_run_dt,
	ROW_UPDT_DT = dta_cltn_end_dttm
WHERE
	CAPITN_PRCS_IND = 5
AND	HSPC_IND ='Y'
AND	EXISTS (
		SELECT	1
		FROM	FACETS_STAGE.CRME_FUND_DTL_STG STG_CRME
		WHERE	STG_CRME.MBR_CK = MMR_DTL.MBRSHP_CK
		AND	MMR_DTL.PMT_MSA_STRT_DT BETWEEN STG_CRME.ERN_FROM_DT AND STG_CRME.ERN_THRU_DT
		AND	STG_CRME.FUND_ID IN ('AAB1', '1AA2', '1BA2', 'AAB2', '1AA3', '1BA3', '1B80', '1A80')
	)
AND	EXISTS (
		SELECT	1
		FROM	FACETS_CUSTOM.FCTS_TMS_MBRID_XWLK XWLK
		WHERE	XWLK.MBR_CK = MMR_DTL.MBRSHP_CK
		AND	MMR_DTL.PMT_MSA_STRT_DT BETWEEN XWLK.HSPC_EVNT_EFF_DT AND XWLK.HSPC_EVNT_TERM_DT
	)
;

-------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name                  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT              |                       |     1 |   148 | 12431   (2)| 00:02:30 |
|   1 |  UPDATE                       | MMR_DTL               |       |       |            |          |
|   2 |   NESTED LOOPS SEMI           |                       |     1 |   148 | 12431   (2)| 00:02:30 |
|*  3 |    HASH JOIN RIGHT SEMI       |                       |    49 |  5488 | 12375   (2)| 00:02:29 |
|   4 |     TABLE ACCESS FULL         | FCTS_TMS_MBRID_XWLK   |  6494 | 64940 |    24   (0)| 00:00:01 |
|*  5 |     TABLE ACCESS FULL         | MMR_DTL               |   304K|    29M| 12347   (2)| 00:02:29 |
|*  6 |    TABLE ACCESS BY INDEX ROWID| CRME_FUND_DTL_STG     |     1 |    36 |     5   (0)| 00:00:01 |
|*  7 |     INDEX RANGE SCAN          | IE1_CRME_FUND_DTL_STG |     8 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("XWLK"."MBR_CK"="MMR_DTL"."MBRSHP_CK")
       filter("XWLK"."HSPC_EVNT_EFF_DT"<=INTERNAL_FUNCTION("MMR_DTL"."PMT_MSA_STRT_DT") AND
              "XWLK"."HSPC_EVNT_TERM_DT">=INTERNAL_FUNCTION("MMR_DTL"."PMT_MSA_STRT_DT"))
   5 - filter("CAPITN_PRCS_IND"=5 AND "HSPC_IND"='Y')
   6 - filter(("STG_CRME"."FUND_ID"='1A80' OR "STG_CRME"."FUND_ID"='1AA2' OR
              "STG_CRME"."FUND_ID"='1AA3' OR "STG_CRME"."FUND_ID"='1B80' OR "STG_CRME"."FUND_ID"='1BA2' OR
              "STG_CRME"."FUND_ID"='1BA3' OR "STG_CRME"."FUND_ID"='AAB1' OR "STG_CRME"."FUND_ID"='AAB2') AND
              "STG_CRME"."ERN_FROM_DT"<=INTERNAL_FUNCTION("MMR_DTL"."PMT_MSA_STRT_DT") AND
              "STG_CRME"."ERN_THRU_DT">=INTERNAL_FUNCTION("MMR_DTL"."PMT_MSA_STRT_DT"))
   7 - access("STG_CRME"."MBR_CK"="MMR_DTL"."MBRSHP_CK")

The most informative bit of narrative that went with this query said:

“The table MMR_DTL doesnt have index on these columns CAPITN_PRCS_IND , HSPC_IND .. Since this is an update stmt which will update 85k records, In a dilema whether to index these columns or not .. And this table MMR_DTL is an ever growing table. Worried about the update performance. “

This was in response to an observation that there was a full tablescan on MMR_DTL at operation 5 despite the predicate “CAPITN_PRCS_IND”=5 AND “HSPC_IND”=’Y’. You’ll note that the predicted cardinality for that scan is 304K and the update statement is going to change CAPITN_PRCS_IND from the value 5 to the value 2 – so it’s not entirely unreasonable to be worried about the impact of creating an index that included the column capitn_prcs_ind.
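To make that trade-off concrete, one avenue a reader might weigh up (a hedged sketch of a general technique, with a made-up index name, not a recommendation from this posting) is a function-based index that only contains entries for the rows still waiting to be processed, so that rows drop out of the index once the update sets CAPITN_PRCS_IND to 2; the catch is that the statement would have to be rewritten to reference the same expression before the optimizer could use it:

create index mmr_dtl_capitn_pending_fbi on facets_custom.mmr_dtl(
        case when capitn_prcs_ind = 5 and hspc_ind = 'Y' then 1 end
);

-- the UPDATE would then need a matching predicate, e.g.
-- where (case when capitn_prcs_ind = 5 and hspc_ind = 'Y' then 1 end) = 1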

What more can we say about this query, given the limited information? Lots – unfortunately the owner of the query isn't giving anything else away.

I'm going to leave this note unfinished to give people a little chance to think about the clues in the request, the questions they might ask, reasons why there might be a performance problem, and strategies they might investigate; then I'll update the posting with a few ideas some time in the next 24 hours.

 

 

Sorry, something came up, I'll have to do the update tomorrow.

 


How to import SQL Plan Directives

Yann Neuhaus - Thu, 2015-05-21 10:01

Today I've presented SQL Plan Directives at the SOUG Romandie event. I had a question about the import/export of directives with Data Pump. The idea is that a lot of testing has been done on QA in order to validate the upgrade to 12c. A few directives had bad consequences (see Ludovico Caldara's blog post for an example), so those directives have been disabled. When going to production, they want to start with those directives disabled. Yes, they can be imported. We have to pack them into a staging table, export/import that table, and unpack them. It's similar to what we do with SQL Plan Baselines.

testcase: a PERMANENT SPD

I'm using the state of the demo in my presentation where I have a PERMANENT directive that I have disabled because I don't want to do too much dynamic sampling.

SQL> show user
USER is "DEMO"
SQL> select table_name,num_rows,last_analyzed from user_tables;

TABLE_NAME             NUM_ROWS LAST_ANA
-------------------- ---------- --------
STGTAB
DEMO_TABLE                 1000 17:35:51

SQL> select table_name,column_name,num_distinct,last_analyzed from user_tab_col_statistics;

TABLE_NAME           COLUMN_NAME  NUM_DISTINCT LAST_ANA
-------------------- ------------ ------------ --------
DEMO_TABLE           A                       2 17:35:51
DEMO_TABLE           B                       2 17:35:51
DEMO_TABLE           C                       2 17:35:51
DEMO_TABLE           D                       2 17:35:51
DEMO_TABLE           SYS_STSPJNMI            2 17:35:51

SQL> select directive_id,state,created,last_modified,last_used,enabled,extract(notes,'//internal_state/text()') internal_state from dba_sql_plan_directives where directive_id in( select directive_id from dba_sql_plan_dir_objects where owner=user ) order by created;

           DIRECTIVE_ID STATE      CREATED  LAST_MOD LAST_USE ENABLED INTERNAL_
----------------------- ---------- -------- -------- -------- ------- ---------
    9456295843023884415 USABLE     17:35:45 18:07:16 18:07:16 NO      PERMANENT

Pack directive

Data Pump can export/import the table, the extended statistics and the statistics, but the SQL Plan Directives are not included. We have to pack them into a staging table in order to export/import that table and unpack it at the destination site.

SQL> exec dbms_spd.create_stgtab_directive('STGTAB');

PL/SQL procedure successfully completed.

SQL> select dbms_spd.pack_stgtab_directive('STGTAB',user,9456295843023884415) from dual;

DBMS_SPD.PACK_STGTAB_DIRECTIVE('STGTAB',USER,9456295843023884415)
-----------------------------------------------------------------
                                                                1

DataPump

Here I'll import the DEMO table and the STGTAB, which contains the packed directive, through a loopback database link, and import them into another schema: DEMO_SITE2

SQL> create database link LOOPBACK connect to demo identified by demo using '&_CONNECT_IDENTIFIER';
Database link created.

SQL> host impdp demo/demo@&_CONNECT_IDENTIFIER network_link=LOOPBACK tables="DEMO.DEMO_TABLE,DEMO.STGTAB" remap_schema='DEMO:DEMO_&&1';

Import: Release 12.1.0.1.0 - Production on Thu May 21 18:07:42 2015

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "DEMO"."SYS_IMPORT_TABLE_01":  demo/********@//192.168.78.114/DEMO14 network_link=LOOPBACK tables=DEMO.DEMO_TABLE,DEMO.STGTAB remap_schema='DEMO:DEMO
_SITE2'
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 256 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . imported "DEMO_SITE2"."STGTAB"                            6 rows
. . imported "DEMO_SITE2"."DEMO_TABLE"                     1000 rows
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Job "DEMO"."SYS_IMPORT_TABLE_01" successfully completed at Thu May 21 18:08:18 2015 elapsed 0 00:00:37

Check what is imported:

SQL> show user
USER is "DEMO_SITE2"
SQL> select table_name,num_rows,last_analyzed from user_tables;

TABLE_NAME             NUM_ROWS LAST_ANA
-------------------- ---------- --------
DEMO_TABLE                 1000 17:35:51
STGTAB

SQL> select table_name,column_name,num_distinct,last_analyzed from user_tab_col_statistics;

TABLE_NAME           COLUMN_NAME  NUM_DISTINCT LAST_ANA
-------------------- ------------ ------------ --------
DEMO_TABLE           A                       2 17:35:51
DEMO_TABLE           B                       2 17:35:51
DEMO_TABLE           C                       2 17:35:51
DEMO_TABLE           D                       2 17:35:51
DEMO_TABLE           SYS_STSPJNMI            2 17:35:51

SQL> select directive_id,type,state,created,last_modified,last_used,enabled,notes from dba_sql_plan_directives where directive_id in( select directive_id from dba_sql_plan_dir_objects where owner=user ) order by created;

no rows selected

I have my statistics (columns and extended stats for the column group) but no directives. If I stop here, a new query will probably create a new SQL Plan Directive, which will become PERMANENT and will trigger Adaptive Dynamic Sampling. I want to avoid that and get my disabled directive from the previous environment.

Unpack directive

SQL> select dbms_spd.unpack_stgtab_directive('STGTAB',user,9456295843023884415) from dual;

DBMS_SPD.UNPACK_STGTAB_DIRECTIVE('STGTAB',USER,9456295843023884415)
-------------------------------------------------------------------
                                                                  0
Look at the returned number: no directive has been unpacked. Because I'm now in another schema, I have to update the owner. I don't know if there is a better way to do it, but here I update the owner column in STGTAB:
SQL> update stgtab set c1='DEMO_SITE2';

6 rows updated.

SQL> commit;

Commit complete.

Don't forget to commit: it doesn't work if you don't.
SQL> select dbms_spd.unpack_stgtab_directive('STGTAB',user,9456295843023884415) from dual;

DBMS_SPD.UNPACK_STGTAB_DIRECTIVE('STGTAB',USER,9456295843023884415)
-------------------------------------------------------------------
                                                                  1

SQL> select directive_id,state,created,last_modified,last_used,enabled,extract(notes,'//internal_state/text()') internal_state from dba_sql_plan_directives where directive_id in( select directive_id from dba_sql_plan_dir_objects where owner=user ) order by created;

           DIRECTIVE_ID STATE      CREATED  LAST_MOD LAST_USE ENABLED INTERNAL_
----------------------- ---------- -------- -------- -------- ------- ---------
   18397758907897483632 USABLE     18:09:12                   NO      PERMANENT

The directive is there. It's disabled, which is what I wanted. The creation date and last used have been reset. If you see a directive with a state different from NEW and without a LAST_USED, then it's an imported one. Note that the directive ID has changed.

Conclusion

When you have directives in HAS_STATS state, then because Data Pump imports the statistics by default (including extended statistics), you probably don't need to import the SQL Plan Directives. But if you have disabled some directives and want the same behaviour in another environment, you have to pack/import/unpack them.
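As a quick recap, the whole round trip shown above – keeping the names and the directive id from this demo – boils down to the following sequence:

-- on the source schema: create the staging table and pack the directive
SQL> exec dbms_spd.create_stgtab_directive('STGTAB');
SQL> select dbms_spd.pack_stgtab_directive('STGTAB',user,9456295843023884415) from dual;

-- export/import STGTAB (and the application tables) with Data Pump,
-- remapping the schema if needed

-- on the destination schema: fix the owner column, commit, then unpack
SQL> update stgtab set c1='DEMO_SITE2';
SQL> commit;
SQL> select dbms_spd.unpack_stgtab_directive('STGTAB',user,9456295843023884415) from dual;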

This morning's event was a great one. Thanks Ludovico for the picture.

Live demos are best! @FranckPachot pic.twitter.com/aJfBO9cX5C

— Ludovico Caldara (@ludodba) May 21, 2015

Oracle Identity & Access Management (IDAM) 11gR2 PS3 – 11.1.2.3 is now available

Online Apps DBA - Thu, 2015-05-21 09:35
  Oracle Identity & Access Management 11g R2 PS3 (11.1.2.3) is now available (released on 18th May 2015) Documentation for Identity & Access Management 11gR2 PS3 11.1.2.3.0 is here Download Oracle IdM 11gR2 PS3 (11.1.2.3) software from here     Products available as part of Identity & Access Management 11gR2 Ps3 (11.1.2.3) are Oracle Access […] The post Oracle Identity & Access Management (IDAM) 11gR2 PS3...

This is a content summary only. Visit my website http://onlineAppsDBA.com for full links, other content, and more!
Categories: APPS Blogs

Irrecoverable part III : a fix

Laurent Schneider - Thu, 2015-05-21 09:05

After Part I: the problem and Part II: two reports, here is Part III.

We backed up the archivelogs during a full backup and deleted them before the full backup completed.


RMAN> list backup of archivelog all;

List of Backup Sets
===================

BS Size   Type Elapsed  Completion
-- ------ ---- -------- ----------
15 4.00K  DISK 00:00:00   13:31:08
    BP Key: 15   Status: AVAILABLE
    Piece Name: /FULL/0fq7gc0s_1_1

  List of Archived Logs in backup set 15
  Seq LowSCN  LowTime  NextSCN NextTime
  --- ------- -------- ------- --------
  15  355533  13:29:55 355777  13:31:08

BS Size   Type Elapsed  Completion
-- ------ ---- -------- ----------
25 4.00K  DISK 00:00:00   13:31:26
    BP Key: 25   Status: AVAILABLE
    Piece Name: /ARCH/0pq7gc1e_1_1

  List of Archived Logs in backup set 25
  Seq LowSCN  LowTime  NextSCN NextTime
  --- ------- -------- ------- --------
  15  355533  13:29:55 355777  13:31:08

BS Size   Type Elapsed  Completion
-- ------ ---- -------- ----------
26 3.00K  DISK 00:00:00 13:31:26
  BP Key: 26   Status: AVAILABLE
  Piece Name: /ARCH/0qq7gc1e_1_1

  List of Archived Logs in backup set 26
  Seq LowSCN  LowTime  NextSCN NextTime
  --- ------- -------- ------- --------
  16  355777  13:31:08 355827  13:31:22

BS Size   Type Elapsed  Completion
-- ------ ---- -------- ----------
28 2.50K  DISK 00:00:00 13:31:28
    BP Key: 28   Status: AVAILABLE
    Piece Name: /FULL/0sq7gc1g_1_1

  List of Archived Logs in backup set 28
  Seq LowSCN  LowTime  NextSCN NextTime
  --- ------- -------- ------- --------
  17  355827  13:31:22 355863  13:31:28

Sequence 16 is missing in /FULL/ and is available in /ARCH/

Ok, let’s copy the missing pieces from ARCH to FULL


$ cd /
$ cp -p ARCH/* FULL/

Now let's try. I save my directory to an offline location (tape/USB/CD-ROM) and drop my database, including backups.


cd /
tar cvf /tmp/FULL.tar FULL

Let’s drop (it’s just a test)


RMAN> startup force mount dba 
RMAN> drop database including backups;
database dropped

Ok let’s get this USB stick back from the moon to my datacenter and see if I can restore …


cd /
tar xvf /tmp/FULL.tar 

Let’s do this !


RMAN> startup force nomount
RMAN> restore controlfile from '/FULL/c-2414975447-20150521-01';
RMAN> alter database mount;

Now I want RMAN to look for the other files in /FULL/ and also mark missing pieces as expired. The START WITH clause does the magic.

RMAN> catalog start with '/FULL/' noprompt;
RMAN> crosscheck backup;
RMAN> crosscheck copy;
RMAN> crosscheck archivelog all;
RMAN> restore database;
RMAN> recover database;
RMAN> alter database open resetlogs;

As @dbastorage said on twitter:
“if it was easy, we would not need DBAs! ”

@laurentsch and have a job to backup the before backup redo, that includes the delete… if it was easy, we would not need DBAs! :) 2/2

— Peter Herdman-Grant (@DBAStorage) May 20, 2015

AeroGear Germany tour 2015

Matthias Wessendorf - Thu, 2015-05-21 04:50

Over the last three days Sébastien Blanc and I have been touring through Germany to visit a few JUGs.

The talks

We had the same setup for every evening: first, Sebi talked about JBoss Tools and Forge and showed how to quickly create a Java EE based backend. Afterwards the audience saw how to create a good looking Apache Cordova mobile app, which he also generated using Forge! At the end the solution was also protected using Keycloak. Then I gave a talk about Push Notifications in general. During the talk I had various demos, like our AeroDoc server and iOS client, to demonstrate a rich push interaction between different clients and servers, using geolocation and user segmentation. I was also happy that I could demo some hot new stuff on our UI by showing off code from different pull requests.

The cities

We had three different cities on the agenda, and the start of the tour was Berlin. However, unfortunately, I forgot my power plug at home… :sob: But, arriving in Berlin, I could borrow one from Simon Willnauer. THANKS DUDE! :heart:

Berlin

The event took place at the VOTUM GmbH and it was a good start to our tour. We had a packed room and lots of questions during both talks, so we ended up talking a bit longer. Afterwards there was time to socialize with a drink or two. Here are some impressions from the evening.

Dortmund

After arriving in Dortmund, Hendrik Ebbers gave us a ride to the BVB training center. It was funny that one guy thought Sebi looks like the coach (Juergen Klopp) :joy: However, the real Juergen was a few hundred meters away, watching the team do their training. Before the event started we did some preparation at Hendrik's awesome home office. Here is a picture of me talking about the push server, showing the latest and greatest of a pending pull request from Lukáš Fryč. The talks went well, and while we enjoyed some pizza we had some good conversations with the attendees!

Stuttgart

On our way to the JUG Stuttgart we were affected by the strike, which turned out to be a very positive thing: we got an almost empty ICE train :v: This time the talks took place in the Stuttgart Red Hat office, and Heiko was already awaiting us at the S-Bahn station. After a little introduction by Heiko it was again Sebi's part before I took over, talking about push.

Summary

It was a great tour, and lots of questions during and after the talks showed we had the right content for the different events. I am already looking forward to seeing some of the attendees getting in touch with our AeroGear community for more debates! On the last evening Heiko, Sebi and I went for a few :beers: in Stuttgart. Traveling back home, I had another interaction with the strike, which was again very positive: they had to change the train and all seats were opened up, so I ended up sitting on a nice and comfortable seat in first class for free :stuck_out_tongue_winking_eye:


DBA_SQL_PLAN_DIRECTIVE.LAST_USED

Yann Neuhaus - Thu, 2015-05-21 02:30

If you have read Matching SQL Plan Directives and queries using it then you know how to use the '+metrics' format of dbms_xplan.

21:49:54 SQL> explain plan for select distinct * from DEMO_TABLE 
              where a=0 and b=0 and c=0 and d=0;
Explained.

21:50:01 SQL> select * 
              from table(dbms_xplan.display(format=>'basic +rows +note +metrics'));

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------
Plan hash value: 3823449216

-------------------------------------------------
| Id  | Operation          | Name       | Rows  |
-------------------------------------------------
|   0 | SELECT STATEMENT   |            |     1 |
|   1 |  SORT UNIQUE NOSORT|            |     1 |
|   2 |   TABLE ACCESS FULL| DEMO_TABLE |   500 |
-------------------------------------------------

Sql Plan Directive information:
-------------------------------
  Used directive ids:
    2183573658076085153
Note
-----
   - dynamic statistics used: dynamic sampling (level=2)
   - 1 Sql Plan Directive used for this statement
So, when I run this query with predicates on columns A,B,C,D I'm using the directive id 2183573658076085153. 'Using' means doing dynamic sampling in order to get good estimations, because the directive tells us that there is a misestimate when using only the static statistics. Then we can look at that directive:
21:50:11 SQL> select directive_id,state,created,last_modified,last_used 
              from dba_sql_plan_directives where directive_id=2183573658076085153;

           DIRECTIVE_ID STATE      CREATED  LAST_MODIFIED LAST_USED
----------------------- ---------- -------- ------------- -------------
    2183573658076085153 USABLE     21:41:50 21:41:55.0000 21:41:55.0000
Look at the timestamps. It's now 21:50, and the directive, which was created 9 minutes ago at 21:41:50, was last used 5 seconds after its creation, at 21:41:55.

Let's run the query now. I know that it will use the directive:

21:50:40 SQL> select distinct * from DEMO_TABLE where a=0 and b=0 and c=0 and d=0;

         A          B          C          D
---------- ---------- ---------- ----------
         0          0          0          0
Then I check to be sure that the directive has been used:
21:50:55 SQL> select * from table(dbms_xplan.display_cursor(format=>'basic +rows +note'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------
EXPLAINED SQL STATEMENT:
------------------------
select distinct * from DEMO_TABLE where a=0 and b=0 and c=0 and d=0

Plan hash value: 3823449216

-------------------------------------------------
| Id  | Operation          | Name       | Rows  |
-------------------------------------------------
|   0 | SELECT STATEMENT   |            |       |
|   1 |  SORT UNIQUE NOSORT|            |     1 |
|   2 |   TABLE ACCESS FULL| DEMO_TABLE |   500 |
-------------------------------------------------

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)
   - 1 Sql Plan Directive used for this statement

Yes: SPD used, we have dynamic sampling and accurate cardinality estimation.
The SPD has been used by the query that I ran at 21:50:40.

So I'll check the LAST_USED timestamp, after making sure that what has been modified in memory is written to the dictionary:

21:50:59 SQL> exec dbms_spd.flush_sql_plan_directive;
PL/SQL procedure successfully completed.

21:51:07 SQL> select directive_id,state,created,last_modified,last_used 
              from dba_sql_plan_directives where directive_id=2183573658076085153;

           DIRECTIVE_ID STATE      CREATED  LAST_MODIFIED LAST_USED
----------------------- ---------- -------- ------------- -------------
    2183573658076085153 USABLE     21:41:50 21:41:55.0000 21:41:55.0000
The LAST_USED has not been updated.

LAST_USED

This is not a bug. Updating the dictionary for all directives used during the previous 15 minutes (the automatic flush frequency) would be too expensive, so it is not done at each flush. The goal of LAST_USED is to manage SPD retention - dropping directives that have not been used for a while (53 weeks by default) - and that retention is defined in weeks. So LAST_USED is updated only when the new value is at least one week later than the previous LAST_USED. Not exactly one week, but 6.5 days.
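If you want to check or change that retention - a minimal sketch, assuming the SPD_RETENTION_WEEKS preference of DBMS_SPD, which defaults to 53 weeks - you can query and set the preference:

-- current retention, in weeks
SQL> select dbms_spd.get_prefs('SPD_RETENTION_WEEKS') from dual;

-- for example, purge unused directives after 10 weeks instead of 53
SQL> exec dbms_spd.set_prefs('SPD_RETENTION_WEEKS','10');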

6.5 days

In fact, when LAST_USED is updated, it adds 6.5 days to the current timestamp. We can see that in the underlying table:

SQL> alter session set nls_date_format='dd-mon-yyyy hh24_mi:ss';
Session altered.

SQL> select type,state,created,last_modified,last_used from sys.opt_directive$ where dir_id=2183573658076085153;
      TYPE      STATE CREATE  LAST_MODIFI LAST_USED
---------- ---------- ------- ----------- --------------------
         1          3 19-may- 19-may-2015 26-may-2015 09:41:26
Last used as displayed by DBA_SQL_PLAN_DIRECTIVES is 19-may-2015 at 21:41:26, but the internal table OPT_DIRECTIVE$ stores 26-may-2015 09:41:26, which is 6.5 days later. The view subtracts 6.5 days from it in order to expose it to us. But we must keep in mind that this timestamp will not be updated until 26-may-2015.

Conclusion: we cannot rely on LAST_USED when we want to know if a directive has been used recently. We must use explain plan for that. Note that this implementation is not new: it's exactly the same as the LAST_EXECUTED column in DBA_SQL_PLAN_BASELINES. It's an optimization, because tracking each usage would be too expensive.

what that in-memory term is about, and what not

Yann Neuhaus - Thu, 2015-05-21 02:10

Everybody is talking about in-memory databases these days. And everybody is talking about columnar storage for sets of data, because it can be a benefit for analytic queries. And a lot of people start mixing these terms, not realizing that they are not interchangeable.

 

Alfresco: video thumbnails and transformations

Yann Neuhaus - Thu, 2015-05-21 01:00


Alfresco supports, by default, the in-browser preview of some video formats like mp4, but doesn't support some others like mkv, avi or wmv. Even if Google Chrome can play mkv files, for example, when you use the action "View in Browser" (which is not the same thing as the preview), Chrome will not try to play the mkv file but will download it instead. That's why, if you upload an mp4 video into Alfresco, you will certainly be able to watch this video directly in your browser. In this blog, I will try to explain how to configure an external tool to take care of the thumbnail creation and transformations for your videos. To achieve that, we will install and configure ffmpeg. In addition to that, you can also very easily configure Alfresco to embed an external video player like "FlowPlayer" that would take care of playing all video formats directly, streaming from Alfresco rather than relying on your browser. Basically, this is done by replacing the preview page for some MIME types, but I will not describe it in detail here.


For this blog post, I will use /opt/alfresco-4.2.c as the $ALF_HOME environment variable: the folder where Alfresco has been installed. The current release of ffmpeg is version 2.6.3. This works at least with Alfresco v4.2.x and v5.x.


I. Installation of ffmpeg


The ffmpeg binaries/executables are available on the ffmpeg website. From this website, download the "Static" build for Windows, Linux or Mac, depending on the operating system on which Alfresco has been installed. For this example, I will use a Linux host (RedHat 6.x, 64-bit), but I can assure you that it works properly on Windows too. I have already installed ffmpeg with several versions of Alfresco, from v4.2.x to 5.x.

[alfresco ~]$ cd /opt
[alfresco /opt]$ wget http://johnvansick...elease-64bit-static.tar.xz
[alfresco /opt]$ tar -xJf ffmpeg-release-64bit-static.tar.xz
[alfresco /opt]$ chmod -R 755 ffmpeg-2.6.3-64bit-static
[alfresco /opt]$ chown -R alfresco:alfresco ffmpeg-2.6.3-64bit-static
[alfresco /opt]$ mv ffmpeg-2.6.3-64bit-static /opt/alfresco-4.2.c/ffmpeg


These commands will download, extract, change the permissions/ownership and rename the folder to something more friendly (at a more proper location too).
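Before touching the Alfresco configuration, it can be worth checking that the static binary actually runs on your host. For example (the test file paths below are just placeholders):

[alfresco /opt]$ # confirm the binary runs and prints its version/build options
[alfresco /opt]$ /opt/alfresco-4.2.c/ffmpeg/ffmpeg -version
[alfresco /opt]$ # grab one frame of a test video as a JPEG, which is roughly
[alfresco /opt]$ # what the thumbnail transformer will ask ffmpeg to do
[alfresco /opt]$ /opt/alfresco-4.2.c/ffmpeg/ffmpeg -i /tmp/sample.avi -ss 00:00:01 -vframes 1 /tmp/sample-thumb.jpg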


II. Configuration of Alfresco


Once ffmpeg has been installed, the next step is to configure Alfresco to use it. The first thing to do is to add some parameters in the well-known alfresco-global.properties file. Don't hesitate to customize these parameters, remove the lines that aren't needed, and so on.

[alfresco /opt]$ cd /opt/alfresco-4.2.c/tomcat/shared/classes

[alfresco /classes]$ cat alfresco-global.properties
### Begin of the file with your custom parameters ###
### E.g.: dir.root, db.driver, db.username, aso... ###

### FFMPEG executable location ###
ffmpeg.exe=/opt/alfresco-4.2.c/ffmpeg/ffmpeg

### Video Thumbnails parameters ###
# ffmpeg.thumbnail
content.transformer.ffmpeg.thumbnail.priority=50
content.transformer.ffmpeg.thumbnail.extensions.3g2.jpg.supported=true
content.transformer.ffmpeg.thumbnail.extensions.3gp.jpg.supported=true
content.transformer.ffmpeg.thumbnail.extensions.asf.jpg.supported=true
content.transformer.ffmpeg.thumbnail.extensions.avi.jpg.supported=true
content.transformer.ffmpeg.thumbnail.extensions.avx.jpg.supported=true
content.transformer.ffmpeg.thumbnail.extensions.flv.jpg.supported=true
content.transformer.ffmpeg.thumbnail.extensions.mov.jpg.supported=true
content.transformer.ffmpeg.thumbnail.extensions.movie.jpg.supported=true
content.transformer.ffmpeg.thumbnail.extensions.mp4.jpg.supported=true
content.transformer.ffmpeg.thumbnail.extensions.mpeg2.jpg.supported=true
content.transformer.ffmpeg.thumbnail.extensions.mpg.jpg.supported=true
content.transformer.ffmpeg.thumbnail.extensions.ogv.jpg.supported=true
content.transformer.ffmpeg.thumbnail.extensions.wmv.jpg.supported=true

### Video Transformations parameters ###
# ffmpeg.flv
content.transformer.ffmpeg.flv.priority=50
content.transformer.ffmpeg.flv.extensions.3g2.flv.supported=true
content.transformer.ffmpeg.flv.extensions.3gp.flv.supported=true
content.transformer.ffmpeg.flv.extensions.asf.flv.supported=true
content.transformer.ffmpeg.flv.extensions.avi.flv.supported=true
content.transformer.ffmpeg.flv.extensions.avx.flv.supported=true
content.transformer.ffmpeg.flv.extensions.mov.flv.supported=true
content.transformer.ffmpeg.flv.extensions.movie.flv.supported=true
content.transformer.ffmpeg.flv.extensions.mp4.flv.supported=true
content.transformer.ffmpeg.flv.extensions.mpeg2.flv.supported=true
content.transformer.ffmpeg.flv.extensions.mpg.flv.supported=true
content.transformer.ffmpeg.flv.extensions.ogv.flv.supported=true
content.transformer.ffmpeg.flv.extensions.wmv.flv.supported=true
# ffmpeg.mp4
content.transformer.ffmpeg.mp4.priority=50
content.transformer.ffmpeg.mp4.extensions.3g2.mp4.supported=true
content.transformer.ffmpeg.mp4.extensions.3gp.mp4.supported=true
content.transformer.ffmpeg.mp4.extensions.asf.mp4.supported=true
content.transformer.ffmpeg.mp4.extensions.avx.mp4.supported=true
content.transformer.ffmpeg.mp4.extensions.mov.mp4.supported=true
content.transformer.ffmpeg.mp4.extensions.movie.mp4.supported=true
content.transformer.ffmpeg.mp4.extensions.mpeg2.mp4.supported=true
content.transformer.ffmpeg.mp4.extensions.mpg.mp4.supported=true
content.transformer.ffmpeg.mp4.extensions.wmv.mp4.supported=true
content.transformer.avi.mp4.extensions.mpg.mp4.supported=true
content.transformer.ogv.mp4.extensions.wmv.mp4.supported=true
# ffmpeg.mp3
content.transformer.ffmpeg.mp3.priority=50
content.transformer.ffmpeg.mp3.extensions.aiff.mp3.supported=true
content.transformer.ffmpeg.mp3.extensions.au.mp3.supported=true
content.transformer.ffmpeg.mp3.extensions.m4a.mp3.supported=true
content.transformer.ffmpeg.mp3.extensions.oga.mp3.supported=true
content.transformer.ffmpeg.mp3.extensions.wav.mp3.supported=true


In the above configuration, the parameter "ffmpeg.exe=/opt/alfresco-4.2.c/ffmpeg/ffmpeg" is the location of the binary file named ffmpeg. If you install Alfresco and ffmpeg on a Windows host, then this parameter looks like: "ffmpeg.exe=C:/Alfresco-4.2.c/ffmpeg/bin/ffmpeg.exe"


Once this is done, you need to enable the thumbnail & transformation extensions in Alfresco and this is done using the following steps:

[alfresco /classes]$ cd /opt/alfresco-4.2.c/tomcat/shared/classes/alfresco/extension
[alfresco /extension]$ mv video-thumbnail-context.xml.sample video-thumb-context.xml
[alfresco /extension]$ mv video-transformation-context.xml.sample video-transf-context.xml


If you want, you can read through these two files to understand what Alfresco will do with ffmpeg; basically, they define the commands and options that Alfresco will pass to ffmpeg.


And that's it: Alfresco has been configured to use ffmpeg for thumbnail creation and transformations of your videos. For Alfresco to take these changes into account, simply restart the application server using the Alfresco service:

[alfresco ~]$ service alfresco restart


Or using Alfresco default scripts:

[alfresco ~]$ /opt/alfresco-4.2.c/alfresco.sh stop
[alfresco ~]$ /opt/alfresco-4.2.c/alfresco.sh start



To check that ffmpeg is working properly, open Alfresco Share and browse your repository to find a video in avi or wmv format: a thumbnail should now be present (if not, just refresh the page). You can also upload a new video; if a thumbnail is created for it, then ffmpeg is working!