Development

Hide certain objects on an APEX page

Dimitri Gielis - Sat, 2018-07-21 17:20
A few days ago I got a question on how to hide the title row from the Interactive Report Pivot view.
So the person didn't want to show the red area:


The solution to this problem is to add the following CSS to your page:


/* hide the third row (the title row) of the Interactive Report pivot table */
table.a-IRR-table--pivot tr:nth-child(3) {
    display: none;
}

The result is this - the title is gone:


This blog post is not really about giving the solution to the above problem; I find it more important to show you the process of getting to the answer. It comes down to finding the right elements on the page which you can manipulate with CSS or JavaScript. To hide something, you can either use CSS with display: none or a JavaScript function (or jQuery's hide()). The first thing to do is search for the element. You want to use your browser's Developer Tools for that: most of the time you can right-click on your page and choose Inspect Element. The browser will show the HTML behind what you see on the page.


In the above screenshot, I see that the row is a TR inside a table.
So the next step is to find a way to select that element. Typically you would use the id or class attribute and look that up. The TR in our case doesn't have either of those, so I went up a level in the hierarchy until I found a good selector. The table has a class a-IRR-table--pivot which we can use.
Once we have the selector, we want to get to the real element, so we navigate back down. Here you need to know a bit of JavaScript or CSS, or search the internet for how to do that. You can chain selectors, which drills down in the hierarchy again.
In our case, the TR is the third TR in the table, and CSS has a pseudo-class to select exactly that: nth-child, which is what I used above. A JavaScript alternative is sketched below.
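
If you prefer to do the same from JavaScript (for example in a Dynamic Action), the same selector works. This is a minimal sketch, assuming the pivot view is already rendered on the page:

// hide the third row of the Interactive Report pivot table
// (same selector as the CSS above)
var row = document.querySelector('table.a-IRR-table--pivot tr:nth-child(3)');
if (row) {
    row.style.display = 'none';
}

// or with jQuery, which ships with APEX:
apex.jQuery('table.a-IRR-table--pivot tr:nth-child(3)').hide();

Note that the CSS approach is usually preferable, as it applies before the report is rendered.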

If this is all new to you, learning about JavaScript and CSS selectors is a great start. W3Schools, for example, is a nice site to learn more about HTML, CSS, JavaScript, and the web in general.

Categories: Development

I'll be at APEX Meetup Munich: Thu 19 Jul 2018

Dimitri Gielis - Sun, 2018-07-15 06:14
Just a quick note that I'll do two presentations at the APEX Meetup in Munich on Thursday, July 19th, 2018.

In the first presentation I'll bring you to a virtual and augmented world, entirely built in Oracle Application Express (APEX). There are 30 Google Cardboards available to make the experience complete. Fun guaranteed! :)


At KScope I was also interviewed by Bob Rhubart about my talks there, one of which was the AR/VR presentation.


In my second presentation in Munich I'll show the upcoming version of APEX Office Print (AOP).
I'll show some features nobody has seen before :) With every major release of AOP I feel like this:


If you are in the Munich area I would love to meet you at the meetup.

Categories: Development

My top 5 APEX 18.1 Plugins

Dimitri Gielis - Sat, 2018-07-14 05:46
With every new version of Oracle Application Express (APEX) new features are added and the life of a developer is made even easier. If the feature set is not enough or you see you need to build the same functionality more often, you can always extend APEX with plug-ins.

There are six different types of plug-ins: dynamic action, item, region, process, authentication scheme, and authorization scheme.

Plug-ins are absolutely fantastic for extending the native functionality of APEX in a declarative way. The plug-in becomes a declarative option in the APEX Builder, with the [Plug-in] text next to it. In the next screenshot, you see the dynamic actions being extended by two plug-ins.


When searching for an APEX plug-in, I typically go to APEX World > Plug-ins. The nice thing about that site is that the list is maintained, so if a plug-in is not supported anymore it gets the status Deprecated.

!! And here lies the catch with using plug-ins. When you decide to use a plug-in in your project, you become responsible for it and need to make sure it's compatible with every release of Oracle APEX. Many plug-ins are open source and many plug-in developers maintain their plug-ins, but it's really important to understand that, in the end, you are responsible for what you put in your application. If the plug-in is not secure or it breaks in the next release of APEX, you need to find a solution. So use plug-ins with care and check, for example, how many likes a plug-in has and what the comments say about the plug-in or its author. Oracle is not reviewing or supporting the plug-ins !!

When I saw Travis' tweet, I thought I'd do a blog post on the top 5 plug-ins I use in almost every project.


Here we go:

1. Built with love using Oracle APEX

I'm proud to build applications with Oracle Application Express, and this plug-in makes that very clear :) At the bottom of the app, you will see this text:


Note that in Oracle APEX 18.1 this text is included by default, so you don't even need to add the plug-in. Nevertheless, I wanted to include it in this list as it should be there in every app, even the ones built before APEX 18.1 :)

2. Select2

When a select list (or drop-down) has many values, it takes too long to find the right value. Select2 makes it easy to search for values; it also supports lazy loading and multi-select.


3. APEX Office Print

APEX Office Print extends APEX so it becomes possible to export to native Excel files and generate documents in Word, Powerpoint, PDF, HTML and Text, all based on your own template. It has many more features; I blogged about some of them before.



4. Dropzone

APEX 18.1 has declarative multi-file upload, but still, I love the Dropzone plug-in developed by Daniel Hochleitner. You can drag multiple files from your desktop straight into your APEX app. Daniel is one of my favorite plug-in developers. When he releases something, you know it will be good.



5. Modal LOV

This is a newer plug-in and I haven't used it that much yet, but I'm sure I will. The nice thing about this item type plug-in is that it also supports Interactive Grid. Where Select2 stays within the page, Modal LOV opens a modal list of values (pop-up), which is great if you want to show multiple columns or need more context for the record you have to select.


There are many more plug-ins out there; most of them work on APEX 5.x and upwards. Pretius, for example, has some cool plug-ins too; I recently used their nested reports plug-in in a project. Another site where you can find plug-ins is APEX-Plugin.com.

Categories: Development

Automatically capture all errors and context in your APEX application

Dimitri Gielis - Sat, 2018-06-30 16:30
Let me start this post with a conversation between an end-user (Sarah) and a developer (Harry):

End-user: "Hey there, I'm receiving an error in the app."
Developer: "Oh, sorry to hear that. What is the message saying?"
End-user: "Unable to process row of table EBA_PROJ_STATUS_CATS.  ORA-02292: integrity constraint (XXX.SYS_C0090660) violated - child record found"
Developer: "Oh, what are you trying to do?"
End-user: "I'm trying to delete a category."
Developer: "Oh, most likely this category is in use, so you can't delete the category, you first need ..."
End-user: "Ehh?!"

You might ask yourself, what is wrong with this conversation?

The first problem is that the end-user gets an error which is hard to understand. She probably saw the error before and tried a few times before calling the developer (or support). Most likely Sarah has a tight deadline, and these errors don't really help her mood.
The other problem is that the developer was probably busy working on some complex logic and now gets interrupted. It takes a few minutes before Harry understands what Sarah is talking about. He needs to ask a few questions because he doesn't have much context about what Sarah is doing. He might ask her to send a screenshot of the error, and a few minutes later he receives this (app in APEX 5.1):

Harry is a smart cookie: he knows in which schema to look for that constraint name, and from there which table it's linked to. If Harry read my previous blog post on how to remotely see what Sarah was doing, he has some more context too.

If the application is running in APEX 18.1, it's a different story. The screenshot will look like this:

APEX 18.1 actually enhanced the default error message. The user gets fewer details and sees a debug id. With this debug id the developer can actually get more info in Your App > Utilities > Debug Messages:


You might also want to check this blog post by Joel Kallman on where to find more info when you receive an internal error with a debug id.

Although APEX 18.1 captures more info, there's a recommended way to deal with errors that gives you even more control.

In APEX you can define an Error Handling Function which will kick in every time an error occurs. You can define this function in the Application Definition:


When you look at the packaged applications that ship with Oracle Application Express (APEX), you find some examples. The above screenshot comes from P-Track.

The error handling function has this definition:

function apex_error_handling (p_error in apex_error.t_error )
  return apex_error.t_error_result

The example used in P-Track gives a good overview (read the comments in the package) of the different errors you want to capture:

function apex_error_handling (
    p_error in apex_error.t_error )
    return apex_error.t_error_result
is
    l_result          apex_error.t_error_result;
    l_constraint_name varchar2(255);
begin
    l_result := apex_error.init_error_result (
                    p_error => p_error );
    -- If it is an internal error raised by APEX, like an invalid statement or
    -- code which can not be executed, the error text might contain security sensitive
    -- information. To avoid this security problem we can rewrite the error to
    -- a generic error message and log the original error message for further
    -- investigation by the help desk.
    if p_error.is_internal_error then
        -- mask all errors that are not common runtime errors (Access Denied
        -- errors raised by application / page authorization and all errors
        -- regarding session and session state)
        if not p_error.is_common_runtime_error then
            add_error_log( p_error );
            -- Change the message to the generic error message which doesn't expose
            -- any sensitive information.
            l_result.message := 'An unexpected internal application error has occurred.';
            l_result.additional_info := null;
        end if;
    else
        -- Always show the error as inline error
        -- Note: If you have created manual tabular forms (using the package
        --       apex_item/htmldb_item in the SQL statement) you should still
        --       use "On error page" on those pages to avoid losing entered data
        l_result.display_location := case
                                       when l_result.display_location = apex_error.c_on_error_page then apex_error.c_inline_in_notification
                                       else l_result.display_location
                                     end;
        -- If it's a constraint violation like
        --
        --   -) ORA-00001: unique constraint violated
        --   -) ORA-02091: transaction rolled back (can hide a deferred constraint)
        --   -) ORA-02290: check constraint violated
        --   -) ORA-02291: integrity constraint violated - parent key not found
        --   -) ORA-02292: integrity constraint violated - child record found
        --
        -- we try to get a friendly error message from our constraint lookup configuration.
        -- If we don't find the constraint in our lookup table we fallback to
        -- the original ORA error message.
        if p_error.ora_sqlcode in (-1, -2091, -2290, -2291, -2292) then
            l_constraint_name := apex_error.extract_constraint_name (
                                     p_error => p_error );
            begin
                select message
                  into l_result.message
                  from eba_proj_error_lookup
                 where constraint_name = l_constraint_name;
            exception when no_data_found then null; -- not every constraint has to be in our lookup table
            end;
        end if;
        -- If an ORA error has been raised, for example a raise_application_error(-20xxx)
        -- in a table trigger or in a PL/SQL package called by a process and we
        -- haven't found the error in our lookup table, then we just want to see
        -- the actual error text and not the full error stack
        if p_error.ora_sqlcode is not null and l_result.message = p_error.message then
            l_result.message := apex_error.get_first_ora_error_text (
                                    p_error => p_error );
        end if;
        -- If no associated page item/tabular form column has been set, we can use
        -- apex_error.auto_set_associated_item to automatically guess the affected
        -- error field by examining the ORA error for constraint names or column names.
        if l_result.page_item_name is null and l_result.column_alias is null then
            apex_error.auto_set_associated_item (
                p_error        => p_error,
                p_error_result => l_result );
        end if;
    end if;
    return l_result;
end apex_error_handling;

With this error handling function defined, the error the user gets is shown as a notification message embedded in your app. You can also define a custom message: in the above package there's a lookup in an error lookup table, and when the constraint name isn't found there, it falls back to the original message.
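
The lookup table itself is nothing more than a mapping from constraint name to a friendly message. A minimal sketch of what such a table could look like (the name eba_proj_error_lookup comes from the P-Track code above; the column types and the example row are assumptions):

create table eba_proj_error_lookup (
    constraint_name varchar2(255) primary key,
    message         varchar2(4000) not null
);

-- example: translate the child-record-found error into plain language
insert into eba_proj_error_lookup (constraint_name, message)
values ('SYS_C0090660',
        'This category is in use and cannot be deleted. Remove the projects in this category first.');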


The real power comes when you combine the error handling function with a call that also logs session state information. Then you know exactly for which record the error was produced.

There are a couple of ways to include the session state:

Team Development

I typically include a feedback page in my apps. When the user logs feedback by clicking the feedback link, it's saved in Team Development. The really cool thing is that whenever feedback is logged, the session state of the items and some other info, like the browser being used at that moment, is automatically included. But you can also log feedback through an APEX API:

apex_util.submit_feedback (
    p_comment         => 'Unexpected Error',
    p_type            => 3,
    p_application_id  => v('APP_ID'),
    p_page_id         => v('APP_PAGE_ID'),
    p_email           => v('APP_USER'),
    p_label_01        => 'Session',
    p_attribute_01    => v('APP_SESSION'),
    p_label_02        => 'Language',
    p_attribute_02    => v('AI_LANGUAGE'),
    p_label_03        => 'Error ora_sqlcode',
    p_attribute_03    => p_error.ora_sqlcode,
    p_label_04        => 'Error message',
    p_attribute_04    => p_error.message,
    p_label_05        => 'UI Error message',
    p_attribute_05    => l_result.message
);


Logger 

Logger is a PL/SQL logging and debugging framework. If you don't know it yet, you should definitely check it out. In my opinion, Logger is the best way to instrument your PL/SQL code. Logger has many cool features, one of them is the ability to log your APEX items:

logger.log_apex_items('Debug Items from Error log');
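
That call takes a snapshot of all APEX item values. To give an idea of how it fits into the error handling function, here's a minimal sketch, assuming Logger is installed in the application schema (the scope name is just a convention):

-- inside apex_error_handling, e.g. where add_error_log is called
logger.log_error(
    p_text  => 'APEX error: ' || p_error.message,
    p_scope => 'apex_error_handling' );

-- capture the session state of all APEX items at the moment of the error
logger.log_apex_items('Session state at time of error');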
With the above methods, you know which record the end-user was looking at and what the context was. Note that you might find this information too by looking at their session, but it would take more time to figure things out.

Be pro-active

Now, to prevent this conversation from happening again, you can take it one step further and start logging and monitoring those errors. Whenever an error happens you can, for example, log it in your own error table or in your support ticket system, and send yourself an email or notification - see the sketch below.
Then instead of the end-user calling you, you call them and say "Hey, I saw you had some issues...".
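
A minimal sketch of what such logic could look like inside the add_error_log procedure referenced earlier (the table name and columns are hypothetical; apex_mail is the built-in APEX mail API):

procedure add_error_log (p_error in apex_error.t_error)
is
    pragma autonomous_transaction;  -- keep the log entry even when the main transaction rolls back
begin
    -- app_error_log is a hypothetical table in the application schema
    insert into app_error_log (logged_on, app_user, app_id, page_id, ora_sqlcode, message)
    values (systimestamp, v('APP_USER'), v('APP_ID'), v('APP_PAGE_ID'),
            p_error.ora_sqlcode, p_error.message);

    -- optionally notify yourself (addresses are placeholders)
    apex_mail.send(
        p_to   => 'support@example.com',
        p_from => 'app@example.com',
        p_subj => 'Error in app ' || v('APP_ID') || ', page ' || v('APP_PAGE_ID'),
        p_body => p_error.message );

    commit;
end add_error_log;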

By monitoring errors in your application, you can pro-actively take actions :)

Note that APEX itself also stores Application Errors. You find them under Monitor Activity > Application Errors:


The report gives the error and the session, so you can look further into what happened:


So, even when you don't have an error handling function in place, you can still start monitoring the errors that happen in your app. I know the readers of this blog are really smart, so you might not see any errors, but still, it might be worthwhile to check it once in a while :)

You can find another example of the error handling function in my Git account; it includes an example of logging to your own error table and sending an email.

Categories: Development

How to Export to Excel and Print to PDF in Oracle APEX? The answer...

Dimitri Gielis - Thu, 2018-06-28 16:46
Two questions that pop up a lot when I'm at a conference or doing consulting are:
  • How can I export my data from APEX to Excel?
  • How can I print to PDF? Or how can I get a document/report with my data?
The reason those questions keep coming up is that, although those features exist to a certain extent in APEX, what you actually want is not shipped with Oracle Application Express (APEX), at least not yet in Oracle APEX 18.1 and before.

Although the solution to both questions is the same, I'll go into more detail on the specific questions separately.

How can I export my data from APEX to Excel?

People typically want to export data from a Classic Report, Interactive Report, Interactive Grid or a combination of those to Excel.

What APEX provides out-of-the-box is the export to CSV format, which can be opened in Excel.

The biggest issue with CSV is that it's not a native Excel format. Depending on the settings of Excel (or rather, your OS globalization settings) the CSV may open incorrectly: instead of different columns, you see one big line. You also get an annoying message that some features will be lost as it's not a native Excel format.


You can customize the CSV separator, so the columns are recognized. But with a global application (users with different settings), it's still a pain. Maybe the biggest issue people have with CSV export is that it's just plain text, so the markup or customizations (sum, group by, ...) are lost.

You can enable the CSV export in the attributes section of the respective components:


When you have BI Publisher (BIP) set up and specified as Print Server in APEX, you have a few more options. In the Classic Report you find it in the Printing section - there's an option for Excel. In the Interactive Report there's an option for XLS; the Interactive Grid doesn't have an option.

BI Publisher is expensive and comes with a big infrastructure and maintenance overhead, so it's not an option for many APEX people. But even the companies that have it are looking at other solutions, because although you get a native Excel file, it's cumbersome to use, and BIP doesn't export your Interactive Report exactly as you see it on the screen, with the customizations you made.

So how do you get around those issues? There are some APEX plug-ins to export an Interactive Report and Grid as you see them on the screen. Pavel's plugin is probably the most popular one.
If you need to export one IR/IG at a time to a pre-defined Excel file, this might be an option for you. If you want to use your own Excel template, export multiple IR/IGs at the same time, or want more flexibility all around, read on...

The solution

APEX Office Print (AOP). The AOP plugin extends APEX so you can specify the Excel file you want to start from (your template) in combination with the different APEX reports (Classic Report, Interactive Report, Interactive Grid) and get the output in Excel (or other formats). AOP is really easy to use, yet flexible and full of features no other solution provides. I'll touch on a few aspects customers love.

Interactive Report/Grid to Excel with AOP - WYSIWYG (!)

This feature is what customers love about AOP and something you won't find anywhere else. You can print one or more Interactive Reports and Grids directly to Excel, exactly as you see them on the screen. So if the end-user added a break, some highlights or some computations, it's all known by AOP. Even the Group by and Pivot are no problem. The implementation is super simple: in Excel you define your template - a title, a logo, etc. Where you want to see the Interactive Report or Grid you specify {&interactive_1}, {&interactive_2}, and for the Interactive Grid you specify {&static_id&}. In the AOP APEX plugin you specify the template and the static ids of the Interactive Report / Grid regions, and that is it! AOP does the merge... when the special tags are found in the template, AOP generates the IR/IG. Not a screenshot - REAL table data! Here's an example with one Interactive Report:


In your Excel you can add multiple tags, on the same sheet and on different sheets... and this doesn't only work in Excel, but also in Word and PDF!

But there is even more... what if you look at the Interactive Report as a chart?
You got it... AOP even understands this. You can plot the table data with {&interactive} and, by using {$interactive}, it will generate the chart... and that is a native Office chart, so you can still change it in Excel!

Here's an example of the output generated by AOP with three interactive reports, one as a chart:


All the above goodies you can do through the AOP PL/SQL API too. Some people use this to schedule their reports and email them out on a daily basis, so they don't even have to go into APEX.

For me, the Interactive Report and Grid support is one of the killer features of AOP.

Advanced templates in Excel with AOP

AOP is really flexible in how you build your template. The templating engine supports hierarchical data, angular expressions, conditions, and blocks of data so you can show datasets next to each other; it supports HTML expressions too.

Here's an example of a template which loops over the orders and shows the products of each order. It contains a condition to show an "X" when the quantity is higher than 2, and an expression to calculate the price of the line (unit price * quantity).


The data source specified in the plugin is of type SQL. AOP supports the cursor technique in SQL to create hierarchical data:


And (a part of) the output looks like this:


I'm amazed by what people come up with in their templates to create really advanced Excel sheets. It's really up to your imagination... and a combination of the features of Excel.

Multiple sheets in one Excel file with AOP

We have one customer who basically dumps their entire database in Excel. Every table has its own sheet in Excel. You just need to put the right tags in the different sheets and you are done.

AOP also supports the dynamic generation of sheets in Excel, so you get for example one sheet per customer and on that sheet the orders of that customer. The template looks like this (the magic tag is {!customers}):


The output is this:


We built this feature a while back based on customer feedback.

Dynamic column generation in Excel with AOP

This is a new feature we have been working on for AOP 4.0. By using the {:tag} we can now generate columns dynamically too:


This might be useful if you want to pivot the data or see it in a different format. This feature is also available for Word tables. (Another way of pivoting is doing it in Oracle or in an Interactive Report.) This feature took us a long time to develop, but we think it's worth it.

I hope the above demonstrates why I believe APEX Office Print (AOP) is "THE" solution if you want to export your data from APEX (or the Oracle Database) into Excel.


Let's move on to the second question...

How can I print to PDF? Or how can I get a document/report with my data?

Oracle Application Express (APEX) has two integrated ways to print to PDF: either you use XSL-FO or you use BI Publisher. But the reason people still ask how to print to PDF is that the first is too hard to implement (XSL-FO) and the other (BI Publisher) is too expensive, too hard to maintain, and not user-friendly enough.

Again, APEX Office Print (AOP) is the way to go. AOP is so easy to use and so well integrated with APEX that most developers love to work with it. Based on a template you create in Word, Excel, Powerpoint, HTML or Text, you can output to PDF. In combination with the AOP plugin or PL/SQL API, it's easy to define where your data and template are, and AOP does the merge for you.

Building the template

It begins the same as with any print engine... You don't want to learn a new tool to build your template in; you want a fast result. So the way you get there with AOP is: use the AOP plugin, define your data source, and let AOP generate the template for you. AOP looks at your data and creates a starter template (in Word, Excel, HTML or Text) with the tags you can use based on your data, plus some explanation of how to use them.

Here's an example where AOP generates a Word template based on the SQL Query specified in the Data Source:



So now you have a template to start from. Next, you customize the template to your needs... or you can even let the business user customize it. The only thing to know is how to use the specific {tags}. As a developer, I always thought my time was better spent than on changing the logo on a template or rewording some sentences over and over again. With AOP that dream comes true: as a developer I can concentrate on my query (the data), while the business user creates the template, and sends the new version or uploads it straight into the app whenever changes are required.

When customers show me what they did with AOP - from templates for invoices, bills of materials and certificates to full-blown books - I'm really impressed by their creativity. If you can imagine it, you can probably do it :)

Here's the AOP plugin, where we specify where the customized Word template can be found (in Static Application Files) and set the output to PDF:


Features in AOP that people love

When you download APEX Office Print, it comes with a Sample app, which shows the features of AOP in action. Here's a screenshot of some of the Examples you find in the AOP Sample App:


As this blog post is getting long, I won't highlight all the features of AOP and why they rock so much, but I do want to pick out two features you probably won't find anywhere else.

Native Office Charts and JET Charts in PDF

AOP supports the creation of native Office charts, so you can even customize the charts further in Word. But sometimes people want to see exactly the chart they have on the screen, be it a JET chart, a Fusion chart, a Highchart or any other library... With AOP you can get those charts straight into your PDF! The only thing you have to do is specify the static id of the region and put {%region} in your template... AOP will screenshot what the user sees and replace the tag with a sharp image. So even when the customer removed a series from the legend, the PDF shows exactly that.



HTML content in PDF

At the APEX World conference, a customer showed their use case of APEX together with AOP. Before, they had to manage different Word documents and PDFs, which was hard: they had to update several documents every time, things got out of sync, and it was just a pain to deal with. So they replaced all of this with Oracle APEX and Rich Text Editors. They created a structured database so the information is stored once, and with APEX Office Print (AOP) they generate all the different documents (Word/PDF) they need.

AOP interprets the HTML when it sees an underscore in the tag, e.g. {_tag}; it translates that HTML into native Word styling. If a PDF is requested, the Word document is converted to PDF, so the PDF contains real bold text, real colors, etc.

Here's an example of how Rich Text is rendered to PDF.


AOP also understands when you use, for example, HTML expressions in your Classic or Interactive Report, or when you do some inline styling. It took us a very long time to develop this feature, but the feedback we get from our customer base made it worthwhile :)

So far I showed Word as the starting template for your PDF, but sometimes Powerpoint is a great start too, and not many people know that. In Powerpoint you can make pixel-perfect templates as well, and going to PDF is as easy as coming from Word.

In the upcoming release of AOP 4.0, we spent a lot of time improving our PDF feature set. We will introduce PDF split and merge, and the ability to prepend and append files to any of your documents.


Some last words

If you are interested in what APEX Office Print (AOP) is all about, I recommend sitting down and watching this 45-minute video I did at the APEX Connect conference. In that presentation I go from downloading and installing to using AOP, and I show many of its features live.



We at APEX R&D are committed to bringing the best possible print engine to APEX, one which makes your life easier. We find it important to listen to you and support you however we can; we really want you to be successful. So if you have feedback on ways we can help you even more, let us know - we care about you. We won't rest until everybody knows about our mission, and we want to stay "the" printing solution for APEX.

Sometimes I get emails from developers who tell me they have to do a comparison between the print engines for Oracle APEX, but that they love AOP. If you include some of the above features (IR/IG to PDF or Excel, JET charts, and HTML to PDF) in your requirements, you are guaranteed to end up with APEX Office Print; nothing else comes even close to those features :)

AOP's philosophy has been to be as integrated as possible in APEX, as easy as building APEX applications, yet flexible enough to build really advanced reports. We make printing and exporting of data in APEX easy.

If you read until here, you are amazing, now I rest my case :)
Categories: Development

Implementing Master/Detail in Oracle Visual Builder Cloud Service

Shay Shmeltzer - Wed, 2018-06-20 18:29

This is a quick demo that combines two techniques I showed in previous blogs - filtering lists, and accessing the value of a selected row in a table. Leveraging the two together, it's quite easy to create a page with two tables on it - one the parent and the other the child; once you select a record in the parent, the child table updates to show only the related child records.

Here is a quick demo:

The two steps we are doing are:

  • Create an action flow on the change of first-selected-row attribute of the table
  • In the flow use the assign variable function to set the filterCriterion of the child table to check for the value selected in the master

As you can see - quite simple.


Categories: Development

Error!?! What's going in APEX? The easiest way to Debug and Trace an Oracle APEX session

Dimitri Gielis - Wed, 2018-06-20 13:55
There are some days you just can't explain the behaviour of the APEX Builder or your own APEX application. Or do you recognize this sentence from your end-user? "Hey, it doesn't work..."

In Oracle APEX 5.1 and 18.1, here's how you start to see in the land of the blind :)

Logged in as a developer in APEX, go to Monitor Activity:


From there go to Active Sessions:



You will see all active sessions at that moment. Looking at the Session Id or Owner (User) you can identify the session easily:


Clicking on the session id shows the details: which page views have been done, which calls, the session state information and the browser they are using.

But even more interesting, you can set the Debug Level for that session :)


When the user requests a new page or action, you see a Debug ID of that request.


Clicking on the Debug ID, you see straight away all the debug info, and hopefully it gives you more insight into why something is not behaving as expected.



A real use case: custom APEX app

I had a really strange issue which I couldn't explain at first... an app that had been running for several years suddenly didn't show info in a classic report; it got "no data found". After logging out and back in, it would show the data in the report just fine. The user said it was not consistent: sometimes it worked, sometimes not... even worse, I couldn't reproduce the issue. So I told her to call me whenever it happened again.
One day she called, so I followed the steps above to set debug on for her session, and then I saw it... the issue was due to pagination. For a previous record she had paginated to the "second page", but the current record had no "second page". With the debug information I could see exactly why it behaved like that... APEX rewrote the query with rows > :first_row, which was set to 16, but that specific record had no more than 16 rows, so it showed no data found.
Once I figured that out, I could quickly fix the issue by resetting pagination on opening of the page.

Debug Levels

You can set different Debug Levels. Level 9 (= APEX Trace) gives you the most info, whereas debug level 1 only shows the errors and not much other info. I typically go with APEX Trace (level 9).

The different debug levels with the description:


Trace Mode

In case you want to go a step further, you can also set Trace Mode to SQL Trace.


Behind the scenes this runs: alter session set events '10046 trace name context forever, level 12';
To find out where the trace file is stored, go to SQL Workshop > SQL Scripts and run

SELECT VALUE FROM V$DIAG_INFO WHERE NAME = 'Diag Trace';

It will return the path of the trace files. In that directory you want to search for the filename that contains the APEX session id (2644211946422) and the time you ran the trace.


In Oracle SQL Developer you can then look at those trace files a bit more easily. You can also use TKPROF or other tools.
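
For example, a typical TKPROF run on the command line could look like this (the trace file name is illustrative):

tkprof cdb_ora_12345.trc trace_report.txt sys=no sort=exeela

Here sys=no leaves out the recursive SYS statements and sort=exeela sorts the statements by elapsed execution time, so the most expensive ones come first.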


When I really have performance issues and need to investigate further, I like to use Method R Workbench. The Profiler interprets the trace file(s) and gives an explanation of what's going on.


And with the different tools on the left, you can drill down in the files.


I'm definitely not a specialist in reading those trace files, but the above tools really help me understand them. When I'm really stuck I contact Cary Millsap - or, as I call him, Mr Trace - he's the father of those tools and knows trace files inside out :)

A second use case: APEX Builder

I was testing our APEX Office Print plugin in APEX 18.1 and for some reason APEX behaved differently than in earlier versions, but I didn't understand why. I followed the above method again to turn debug and trace on for my own session - so even when you are in the APEX Builder, you can see what APEX is doing behind the scenes.


Debugging and Tracing made easy

I hope this post helps you see the light when you are in the dark. May the force be with you :)

Categories: Development

Migrating Your Database to Oracle Cloud

Gerger Consulting - Thu, 2018-06-14 22:47
Oracle Database Cloud is increasingly becoming an attractive option to run databases. However, moving all our data to the cloud still represents an interesting problem. Attend the free webinar by Oracle ACE Director and OCM Kamran Aghayev and learn the eight ways you can migrate your databases to the Oracle Cloud. Register at this link.


About the Webinar

Are you planning to move your on-premise database to Oracle Cloud? Are you looking for the best way to achieve it?
In this session, Oracle Certified Master and ACE Director Kamran Aghayev will show you how to migrate your production database to the Oracle Public Cloud using various methods such as Data Pump, cross-platform transportable tablespaces with incremental backups, Data Guard, GoldenGate, etc.
During the presentation Kamran will present step-by-step guides for eight different practical migration scenarios that will help you migrate your database to the Oracle Cloud easily.

Categories: Development

Facebook, Google and Custom Authentication in the same Oracle APEX 18.1 app

Dimitri Gielis - Wed, 2018-06-06 15:37
Oracle APEX 18.1 has many new features, one of them is called Social Login.

On the World Cup 2018 Challenge you can see this new feature in action. The site allows you to sign up or log in with Facebook, Google, or your own email address.


Even nicer: if you register with your email but later decide to sign in with Google or Facebook, the site recognizes you as the same user, as long as the email address is the same.

To get the Social Login to work I had to do the following...

Facebook

To enable Facebook login in your own app, you first have to create an app on Facebook. Creating an application is straightforward - just follow the wizards and make sure you create a website app.


Google

To enable Google login in your own app, you first have to create a project on Google. Adrian did a really nice blog post which walks you through creating your project and setting up Google authentication in your APEX application.




To hook up Google and Facebook to our own APEX app, we have to let APEX know which credentials it should use, namely the info you find in the previous screenshots.

Web Credentials 

Go to App Builder > Workspace Utilities > All Workspace Utilities and click on the Web Credentials link

I added the Web Credentials for Facebook and Google. Web Credentials store the necessary info (Client ID = App ID and Client Secret = App Secret) of the OAuth2 authentication. OAuth2 is the standard most sites use these days to authenticate you as a user. Web Credentials are stored at Workspace level, so you can reuse them in all the APEX apps in the same workspace.


Authentication Scheme 

We need to create the different authentication schemes. The Custom Authentication is to authenticate with email; next we have FACEBOOK and GOOGLE (and Application Express Authentication, which is there by default but not used in this app).

Custom Authentication Scheme

I blogged before about Create a Custom Authentication and Authorization Scheme in Oracle APEX. The package I use in that blog post is pretty similar to the one of the World Cup app. In the Authentication Scheme, you define the authentication function. I also have a post-authentication procedure that sets some application items.



Facebook Authentication Scheme

Normally the authentication scheme for Facebook would look a bit different, as Oracle APEX has built-in Facebook authentication, but for that to work you need to load the SSL certificate in the Oracle wallet. On the platform the World Cup app is running on, the database is 12.1 and unfortunately there's a bug in the database with multi-site or wildcard certificates (which Facebook has). So I had to work around the issue, but I still used a new feature of APEX 18.1: instead of Facebook Authentication I used the Generic OAuth2 Provider.

This is how it looks:


As we are using the Generic OAuth2 Provider, we have to define the different OAuth URLs manually. When you look at my URLs they look a bit strange...

To get around the SSL issue I set up a reverse proxy in Apache which handles the SSL, so anytime the database does a call to http://apexrnd.localdomain it goes through the reverse proxy.
The reverse proxy in Apache is configured like this:
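
A minimal sketch of what such an entry could look like, assuming Apache with mod_proxy and mod_ssl enabled (ports, logging and other details omitted):

<VirtualHost *:80>
    ServerName apexrnd.localdomain

    # let Apache handle the SSL of the outgoing call
    SSLProxyEngine on

    # forward http://apexrnd.localdomain/graph.facebook.com/... to https://graph.facebook.com/...
    ProxyPass        /graph.facebook.com/ https://graph.facebook.com/
    ProxyPassReverse /graph.facebook.com/ https://graph.facebook.com/
</VirtualHost>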


Note that in Oracle DB 12.2 and above the SSL bug is not there, so you don't need the above. I've used this technique many times before when I didn't want to deal with the SSL certificates and configuring the Oracle wallet. Adrian did a post about APEX Social Sign-In without a wallet, which might be of interest if you are on Oracle XE, for example.

So what else is happening in the authentication scheme? You have to give the scope of what you want to get back from Facebook. In our case we use the email as username, and as additional attributes we also want the first name, last name and picture. It's really important to set those additional attributes; otherwise APEX won't pass the full JSON through, as it takes a shortcut when it just needs the email.

The User info Endpoint URL is special:
http://apexrnd.localdomain/graph.facebook.com/v2.10/me?fields=#USER_ATTRIBUTES#&access_token=#ACCESS_TOKEN#

Special thanks to Christian of the APEX Dev team; without his help I wouldn't have figured that one out. Thanks again, Christian!

The next big bit is the post_authenticate procedure, which contains the logic to map the Facebook user to the World Cup app user. If it finds the user, it sets some application items again, just like in the custom authentication; if it doesn't find the user (the first time somebody connects through Facebook), it creates a World Cup user. The most important part of that logic is getting the name and picture; here we parse the JSON the authentication scheme holds in memory:

apex_json.get_varchar2('first_name')
apex_json.get_varchar2('last_name')
apex_json.get_varchar2('picture.data.url')
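
To give an idea of how those calls fit together, here's a minimal sketch of what such a post-authentication procedure could look like (the wc_user table and the AI_USER_ID item are hypothetical names, not the actual World Cup app code):

procedure post_authenticate
is
    l_user_id number;
begin
    -- with social login the APEX username is the email address
    begin
        select id
          into l_user_id
          from wc_user
         where email = lower(apex_application.g_user);
    exception when no_data_found then
        -- first login through Facebook: create an app user, parsing the
        -- JSON the authentication scheme holds in memory
        -- (assumes id is an identity column)
        insert into wc_user (email, first_name, last_name, picture_url)
        values (lower(apex_application.g_user),
                apex_json.get_varchar2('first_name'),
                apex_json.get_varchar2('last_name'),
                apex_json.get_varchar2('picture.data.url'))
        returning id into l_user_id;
    end;

    -- set application items, just like in the custom authentication
    apex_util.set_session_state('AI_USER_ID', l_user_id);
end post_authenticate;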


And then the final bit to be careful with: in the authentication scheme, "Switch in Session" has to be set to "Enabled". This setting is the magic bit that lets your APEX application have multiple authentication schemes and use one or the other.


Google Authentication Scheme

The Google authentication is simpler than the Facebook one, as we don't need the certificate workaround: Oracle understands the Google certificate. So here I use the standard APEX 18.1 feature to authenticate against Google. The username attribute is again the email, and the "additional user attribute" is "profile", as that holds the name and picture of the person.


The rest of the authentication scheme is very similar to the Facebook one. Again, don't forget to set Switch in Session to Enabled.

Login buttons

To call the different authentication schemes on our login page we included different buttons:


The Login button is a normal Submit and will do the Custom Authentication, as that is the default authentication (marked Current in Shared Components > Authentication Schemes).

The Facebook button has a Request defined in the link: APEX_AUTHENTICATION=FACEBOOK. This is the way APEX lets you switch authentication schemes on the fly. Very cool! :)


The Google button is similar, but then the request is APEX_AUTHENTICATION=GOOGLE
(note that the name after the equal sign needs to match the name of your authentication scheme).
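
As a sketch of what such a button target could look like in the f?p URL syntax, where the request is the fourth position (the LOGIN page alias is illustrative):

f?p=&APP_ID.:LOGIN:&APP_SESSION.:APEX_AUTHENTICATION=GOOGLE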


I hope by me showing how the Social Authentication of Oracle APEX 18.1 was implemented in the World Cup 2018 Challenge, it will help you to do the same in your own APEX application.

I really love this new feature of APEX 18.1. The implementation is very elegant, user-friendly and flexible enough to handle most of the OAuth2 authentications out there. Note that Facebook and Google upgrade their user info APIs from time to time, so depending on when you read this, things might have changed. Facebook typically stays backward compatible for a long time, but know that the current implementation in APEX is for API v2.10, while the default Facebook API version is v3.0. As far as I experienced, the user info didn't change between those API versions. I'll do another blog post on how you can debug your authentication, as it might help you get other info than what I needed for the World Cup app. Feel free to add a comment if you have any questions.
Categories: Development

The World Cup 2018 Challenge is live... An app created 12 years ago to showcase the awesome Oracle APEX

Dimitri Gielis - Tue, 2018-06-05 10:39

Since 2006 it's a tradition... every two years we launch a site where you can bet on the games of the World Cup (or Euro Cup). This year you find the app at https://www.wc2018challenge.com

You can read more about the history and see how things looked over time, or you can look at other posts on this blog from the different years.

The initial goal of the app was to showcase what you can do with Oracle Application Express (APEX). Many companies have Excel sheets where they keep the scores of the games and some kind of ranking for their employees. When I saw such an Excel sheet in 2006, I thought: oh well, I can do this in APEX, and it would give us way more benefits... results straight away, no sending around of Excel sheets or merging of data, a much more attractive design with APEX, etc. From then on this app has lived its own life.

Every two years I updated the app with the latest and greatest of Oracle APEX at that time.

Today the site is built in Oracle APEX 18.1 and it showcases some of the new features.
The look and feel is completely upgraded. Instead of a custom theme, the site now uses Universal Theme. You might think it doesn't look like a typical APEX app, but it is! Just some minimal changes in CSS and a background image make the difference.

The other big change is the Social Authentication, which now uses the built-in capabilities of APEX 18.1 instead of the custom authentication scheme I used in previous years. You can authenticate with Google, Facebook and with your own email (custom).

Some other changes came with JET charts and some smaller enhancements that came with APEX 5.1 and 18.1.

Some people asked me how certain features were done, so I'll do some separate blog posts about how Universal Theme was adapted on the landing page, how Social Authentication was included, and what issues we hit along the way. If you wonder how anything else was done, I'm happy to write more posts to explain.

Finally, I would like to thank a few people who helped to make the site ready for this year: Erik, Eduardo, Miguel, Diego, Galan, and Theo, thanks so much!
Categories: Development

Creating Dependent/Cascading Select Lists with Visual Builder

Shay Shmeltzer - Fri, 2018-06-01 17:13

A common requirement in applications is to have dependent lists (also known as cascading lists) - meaning the value selected in one place influences the values that can be selected in another place. For example, when you select a state, the city list only shows cities in that state.

In the short demo video below, I'm showing you how to implement this cascading lists solution with the new Visual Builder Cloud Service.

The solution is quite simple

You catch the event of a value change in the first list, and in the action chain that is invoked you set a filterCriterion on the second list. (See this entry for a quick introduction to filterCriterion).

Since the list is connected to a ServiceDataProvider, there is no further action you need to take - the change to the SDP will be reflected in the UI component automatically.

Quick tips - make sure you reference the id of the column and that your operators are properly defined and enclosed in double quotes.
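
As a rough sketch of the shape such a filterCriterion takes when assigned in the action chain (the attribute name and page variable are illustrative, and the exact structure may differ per Visual Builder version):

{
  "op": "$and",
  "criteria": [
    {
      "op": "$eq",
      "attribute": "stateId",
      "value": "{{ $page.variables.selectedStateId }}"
    }
  ]
}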


Categories: Development

Safely Upgrading to Oracle APEX 18.1

Dimitri Gielis - Wed, 2018-05-30 05:37
Oracle Application Express (APEX) 18.1 has been out now for a couple of days.

I typically don't wait long before doing the upgrade, as every new release brings many new features I want to use. Also, if you want to stay on top of the game, you want to move as fast as you can. I typically start testing with the Early Adopter releases, and when apex.oracle.com gets updated I do more testing, but having it on your own system with applications that are used day in, day out is a different level.

So I thought to share how we update our environment in a safe way.

The first thing we do is put our maintenance pages on. We use an Apache reverse proxy in front of Apache Tomcat with ORDS, which is connected to the database. By specifying some ErrorDocuments, the maintenance pages are served the moment there's an error.

For example, you can add this to your httpd.conf:

ErrorDocument 404 https://s3.amazonaws.com/apexRnD/website/maintenance.html
ErrorDocument 500 https://s3.amazonaws.com/apexRnD/website/maintenance.html
ErrorDocument 503 https://s3.amazonaws.com/apexRnD/website/maintenance.html


When you update APEX you don't want any incoming connections, so we stop Apache Tomcat with ORDS. At that moment the reverse proxy gets an error, the ErrorDocument kicks in, and the maintenance page is served. This way, if people want to use the system, they know we are working on it.

We use an Oracle Database 12c container database with pluggable databases. We want to run different versions of APEX next to each other because we have to test APEX Office Print against all APEX releases. Our customers use different releases of Oracle APEX too, and when we do custom development we have to stick to their version, so we really need all supported APEX versions somewhere.

Our setup was like this before the APEX 18.1 upgrade:
- CDB: cdb
- PDB with APEX 4.2: apex42_pdb
- PDB with APEX 5.0: apex50_pdb
- PDB with APEX 5.1 (main - our most used one): apex_pdb

With every new major release of APEX we clone our main PDB and give the clone the name of the APEX release, so we keep a copy of the APEX release we were on.

The steps to clone a pluggable database in Oracle DB 12.1 (SQL*Plus or SQLcl):

alter pluggable database apex_pdb close immediate; 
alter pluggable database apex_pdb open read only; 
create pluggable database APEX51_PDB from APEX_PDB file_name_convert=('/u01/app/oracle/oradata/cdb/APEX_PDB/','/u01/app/oracle/oradata/cdb/APEX51_PDB/') PATH_PREFIX='/u01/app/oracle/oradata/cdb/APEX51_PDB'; 
alter pluggable database apex51_pdb open; 
alter pluggable database apex_pdb close immediate; 
alter pluggable database apex_pdb open;


After the above we have a situation like this:
- CDB: cdb
- PDB with APEX 4.2: apex42_pdb
- PDB with APEX 5.0: apex50_pdb
- PDB with APEX 5.1: apex51_pdb
- PDB with APEX 5.1: apex_pdb  - will be upgraded to APEX 18.1 (main - our most used one)

Note: if you use Transparent Data Encryption (TDE) you have to perform some additional steps.

The installation of APEX 18.1 on the database side is basically 5 steps:
1) download the software from OTN
2) unzip in /tmp folder and cd into the /tmp/apex directory
3) run SQLcl or SQL*Plus as sys as sysdba and connect to the apex_pdb container:
alter session set container=APEX_PDB;
4) run the apexins command
@apexins SYSAUX SYSAUX TEMP /i/

In my environment the script took about 23 minutes to complete:


Note: the APEX 18.1 installation runs in 3 phases; the wizard shows information and timings for each phase and, at the end, a global timing for the whole run. If you want less downtime you can run the phases separately - see the doc Maximizing Uptime During an Application Express Upgrade.

5) run the apex_rest_config command
@apex_rest_config.sql

The pluggable database is ready now and contains APEX 18.1.

During the APEX upgrade, as we already have downtime, we typically make use of that time to upgrade the other components in a typical Oracle APEX stack, namely the web server (e.g. Apache Tomcat) and ORDS (Oracle REST Data Services). Another advantage of going with a new version of your middleware is that your working Apache Tomcat and ORDS stay untouched, so in case you have to roll back there's nothing to do. Note that you can prepare most of the following commands beforehand.


Upgrading the Application (web) Server:

Apache Tomcat: unzip in your folder of choice.
That is basically all you have to do (on Linux) :)


ORDS: unzip in your folder of choice and cd into it.
Run: java -jar ords.war install advanced
and follow the wizard to install ORDS in APEX_PDB.
* make sure you use different config dirs for ORDS in order to run multiple versions of ORDS and APEX
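
Pointing the war file to a dedicated configuration directory before the install is one way to do that. A minimal sketch, assuming the ORDS configdir command and an illustrative path:

java -jar ords.war configdir /u01/ords181/config
java -jar ords.war install advanced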


Once done, copy the ords.war into /apache-tomcat-version/webapps
Next copy the images folder of the apex directory to /apache-tomcat-version/webapps:
cp -R /tmp/apex/images /apache-tomcat-version/webapps/i

Start Apache Tomcat:
cd bin 
./startup.sh

Restart your Apache Reverse Proxy (and optionally take out the ErrorDocuments)
/sbin/service httpd graceful

It sometimes happens that APEX isn't working the first time I run it.
Then I debug the connection and check the logs of the web server.

Another thing that often helps is running ORDS in standalone mode, as it gives clear messages, e.g.:

WARNING: *** jdbc.MaxLimit in configuration |apex|| is using a value of 10, this setting may not be sized adequately for a production environment ***
WARNING: *** jdbc.InitialLimit in configuration |apex|| is using a value of 3, this setting may not be sized adequately for a production environment ***
WARNING: The pool named: |apex|al| is invalid and will be ignored: The username or password for the connection pool named apex_al, are invalid, expired, or the account is locked
WARNING: The pool named: |apex|rt| is invalid and will be ignored: The username or password for the connection pool named apex_rt, are invalid, expired, or the account is locked

The above warnings remind me to change some ORDS parameters. Or I could look up my previous configuration and copy those parameters. The warnings also indicate our APEX_LISTENER user can't connect (apex_al), so we need to fix that by specifying the correct password. For apex_rt I forgot which user it was, but it's easy to find by navigating to the ORDS config folder and viewing the apex_rt file; it tells you the user in the file.

Now we should have APEX 18.1 up-and-running :)

We also want to access the previous versions of APEX, so I copy the older ords.war files to the new web server, renaming them to ords51.war and ords50.war, so the URLs to the different APEX versions become https://www.apexrnd.be/ords50/ and https://www.apexrnd.be/ords51/
https://www.apexrnd.be/ords/ is always the latest version of APEX.
The images folder of the older APEX version (5.1) is mapped to /i51/ (instead of /i/, which now belongs to APEX 18.1). In order to use a different image prefix, you need to run the following SQL in apex51_pdb:
SQL> @utilities/reset_image_prefix.sql


We upgraded our systems this weekend, the second day after 18.1 was released. We followed more or less the above procedure and things went fine. Make sure to test your own apps before doing the upgrade. Most of our apps were running just fine, but for some we had to replace older plug-ins with new versions, or remove the plug-ins and replace them with built-in functionality.

Note: there are many different ways of updating your system; it comes down to what works for you. What I share works for us, but, for example, if you can't afford downtime you probably want to work with standby databases and load balancers. Or if you work with virtual machines or Docker, it might be useful to clone the machine and test everything on the clone first.

Categories: Development

Top 10 Albums Meme

Greg Pavlik - Fri, 2018-05-25 21:27

I’ve been hit by a barrage of social media posts on people’s top 10 albums, so I thought I would take a look at what I have listened to the most in the last 5 years or so. I’m not claiming these are my favorites or “the best” albums recorded (in fact there are many better albums I enjoy). But I was somewhat surprised to find that I do return to the same albums over and over, so here’s the top 10, in no particular order.

1)Alina, Arvo Part

If you were going to stereotype and box in Part’s work, this would be a good album to use. It’s also amazing enough that it could run on a continuous loop forever and I’d be pretty happy with that.

2)Benedicta: Marian Chants from Norcia, Monks of Norcia

Yes, the music hasn’t changed much from the middle ages. And yes, these are actually monks singing, who somehow managed to top the Billboard charts. The term to use is sublime – this music is quintessentially music of peace and another album that bears repetition with ease.

3) Mi Sueno, Ibrahim Ferrer

I know the whole Buena Vista Social Club thing was trendy, but this music – Cuban bolero to be precise – is full of passion, charm, and romance: it’s music for human beings (which is harder and harder to find these days). This is at once a work of art and a testament to real life.

4) Dream River, Bill Callahan

I don’t even know what to categorize this music as: it’s not popular music, rock, easy listening, country or folk. But it has elements of most of those. Callahan’s baritone voice sounds like someone is speaking to you rather than singing. This album just gets better with the years of listening and it’s by far his best.

5) The Harrow and the Harvest, Gillian Welch

Appalachian roots, contemporary musical twists – I don’t know what they call this: alt-bluegrass? In any case, it’s Welch’s best album and a solid, if somewhat dark, listen.

6) In the Spur of the Moment, Steve Turre

Turre does his jazz trombone (no conch shells on this album – which I am happy about) along with Ray Charles on piano for the first third or so, later trending toward more Afro-Cuban jazz style. I know the complaint on this one is that it feels a bit passionless in parts, but it’s a hard mix not to feel good about.

7) Treasury of Russian Gypsy Songs, Marusia Georgevskaya and Sergei Krotkoff

I’ll admit that it sounds like Georgevskaya has smoked more than a few cigarettes. But this is timeless music, a timeless voice, from a timeless culture. Sophie Milman’s Ochi Chernye is sultry and seductive (she is really fantastic), but somehow I like Marusia’s better.

9) Skeleton Tree, Nick Cave

Nick Cave is uneven at best and often mediocre, but this album is distilled pain in poetic form and a major work of art. For some reason I listen to this end to end semi-regularly on my morning commute.

10) Old Crow Medicine Show, Old Crow Medicine Show

End to end, it just hits the right notes over and over again. From introspective to political to just plain fun, these guys made real music for real people at their peak. Things fell apart after Willie Watson, but this is an almost perfect collection of authentic songs.

When Screen Scraping became API calling – Gathering Oracle OpenWorld Session Catalog with ...

Shay Shmeltzer - Sun, 2018-05-20 03:16

A dataset with all sessions of the upcoming Oracle OpenWorld 2017 conference is nice to have – for experiments and demonstrations with many technologies. The session catalog is exposed at a website here.

With searching, filtering and scrolling, all available sessions can be inspected. If data is available in a browser, it can be retrieved programmatically and persisted locally in, for example, a JSON document. A typical approach for this is web scraping: having a server side program act like a browser, retrieve the HTML from the web site and query the data from the response. This process is described for example in this article – https://codeburst.io/an-introduction-to-web-scraping-with-node-js-1045b55c63f7 – for Node and the Cheerio library.

However, server side screen scraping of HTML will only be successful when the HTML is static. Dynamic HTML is constructed in the browser by executing JavaScript code that manipulates the browser DOM. If that is the mechanism behind a web site, server side scraping is at the very least considerably more complex (as it requires the server to emulate a modern web browser to a large degree). Selenium has been used in such cases – to provide a server side, programmatically accessible browser engine. Alternatively, screen scraping can also be performed inside the browser itself – as is supported for example by the Getsy library.

As you will find in this article – when server side scraping fails, client side scraping may be a much too complex solution. It is very well possible that the rich client web application is using a REST API that provides the data as a JSON document – an API that our server side program can also easily leverage. That turned out to be the case for the OOW 2017 website – so instead of complex HTML parsing and server side or even client side scraping, the challenge at hand resolves to nothing more than a little bit of REST calling. Read the complete article here.
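
To make the approach concrete, here is a minimal sketch (the endpoint and query parameters are hypothetical placeholders, not the real OOW catalog API): once the browser's developer tools reveal the XHR request behind the page, the same call can be replayed from a script and the JSON persisted locally.

#!/bin/bash
# Hypothetical endpoint - substitute the URL found in the browser's Network tab.
export CATALOG_URL=https://events.example.com/api/v1/search

# Page through the results and store each page as a JSON document.
curl -s "$CATALOG_URL?size=50&from=0" -o sessions_page1.json
curl -s "$CATALOG_URL?size=50&from=50" -o sessions_page2.json

# Quick sanity check (jq required):
jq '.' sessions_page1.json | head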

PaaS Partner Community

For regular information on business process management and integration become a member in the SOA & BPM Partner Community for registration please visit www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center.

Blog Twitter LinkedIn Facebook Wiki

Technorati Tags: SOA Community,Oracle SOA,Oracle BPM,OPN,Jürgen Kress

Categories: Development

Solve digital transformation challenges using Oracle Cloud

Shay Shmeltzer - Sun, 2018-05-20 03:15

 


Digital transformation is an omnipresent topic today, presenting many challenges as well as opportunities. Customers are asking how to deal with those challenges and how to take advantage of the opportunities. Frequently asked questions in this area are:

  • How can we modernize existing applications?
  • What are the key elements of a future-proof IT system architecture strategy?
  • How can the flexibility as well as the agility of the IT system landscape be ensured?

From our experience there is no single answer to these questions, since every customer has individual requirements and business needs. It is, however, necessary to find pragmatic solutions that build on existing best practices – there is no need to completely reinvent the wheel.

With our new poster “Four Pillars of Digitalization based on Oracle Cloud” (download it here), we try to deliver a set of harmonized reference models, evolved from our practical experience in conceiving modern, future-oriented solutions in the areas of modern application design, integrative architectures, modern infrastructure solutions and analytical architectures. The guiding principle that underpins our architectural thinking is: Design for Change. If you want to learn more, you can refer to our corresponding ebook (find the ebook here; only available in German at the moment).

The technological basis for modern application architectures today is usually cloud services, and the offerings of the different vendors are constantly growing. It is important to know which cloud services are the right ones to implement a specific use case. Our poster “Four Pillars of Digitalization based on Oracle Cloud” shows the respective cloud services of our strategic partner Oracle which can be used to address specific challenges in the area of digitalization. Get the poster here.

 

Developer Partner Community

For regular information become a member in the Developer Partner Community please visit: http://www.oracle.com/partners/goto/wls-emea ( OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

Blog Twitter LinkedIn Forum Wiki

Technorati Tags: PaaS,Cloud,Middleware Update,WebLogic, WebLogic Community,Oracle,OPN,Jürgen Kress

Categories: Development

Oracle API Platform Cloud Service Overview by Rolando Carrasco

Shay Shmeltzer - Sat, 2018-05-19 03:25


  • Oracle API Platform Cloud Services - API Design: the first video of a series showcasing the usage of Oracle API Platform Cloud Services.
  • Oracle API Platform Cloud Services - API Management (part 1 of 2): the second video of the series, showcasing the usage of the brand new Oracle API Platform CS; this is part one of API Management.
  • Oracle API Platform Cloud Services - API Management (part 2): the third video of the series; here we see the second part of the API Management functionality, focused on Documentation.
  • Oracle API Platform CS - How to create an app: the fourth video of the series, in which you will learn how to create an application.
  • Oracle API Platform Cloud Services - API Usage: the fifth video of the series, showcasing how to interact with the APIs deployed in APIPCS.

 

PaaS Partner Community

For regular information on business process management and integration become a member in the SOA & BPM Partner Community for registration please visit www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center.

Blog Twitter LinkedIn Facebook Wiki

Technorati Tags: SOA Community,Oracle SOA,Oracle BPM,OPN,Jürgen Kress

Categories: Development

Why are Universal Cloud Credit and Bring Your Own License a great opportunity for Oracle Partners?

Shay Shmeltzer - Sat, 2018-05-19 03:24

Oracle simplified buying and consuming PaaS and IaaS Cloud services. Customers can now purchase Universal Cloud Credits, which can be spent on any IaaS or PaaS service. Partners can start a PoC or project, e.g. with Application Container Cloud Service, and add additional services when required, e.g. Chatbot Cloud Service. The customer can use the Universal Cloud Credits for any available or even upcoming IaaS and PaaS services.

Thousands of customers use Oracle Fusion Middleware and Databases today. With Bring Your Own License they can easily move workloads to the cloud. As they already own the license, the customer pays only a small uplift for the service portion of PaaS. This is a major opportunity for Oracle partners to offer services to these customers.

To learn more about Universal Cloud Credits and Bring Your Own License, attend the free on-demand training here.

 

Developer Partner Community

For regular information become a member in the Developer Partner Community please visit: http://www.oracle.com/partners/goto/wls-emea ( OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

Blog Twitter LinkedIn Forum Wiki

Technorati Tags: PaaS,Cloud,Middleware Update,WebLogic, WebLogic Community,Oracle,OPN,Jürgen Kress

Categories: Development

Event Hub Cloud Service. Hello world

Shay Shmeltzer - Sat, 2018-05-19 00:46

A while back, I wrote a blog post about the Oracle Reference Architecture and the concepts of Schema on Read and Schema on Write. Schema on Read is well suited to a Data Lake, which may ingest any data as is, without any transformation, and preserve it for a long period of time.

At the same time you have two types of data - streaming and batch. Batch data could be log files or RDBMS archives; streaming data could be IoT or sensor events, or GoldenGate replication logs.

Apache Kafka is a very popular engine for acquiring streaming data. It has multiple advantages, like scalability, fault tolerance and high throughput. Unfortunately, Kafka is hard to manage. Fortunately, the Cloud simplifies many routine operations. Oracle has three options for deploying Kafka in the Cloud:

1) Big Data Cloud Service, where you get a full Cloudera cluster and can deploy Apache Kafka as part of CDH.

2) Event Hub Cloud Service Dedicated. Here you have to specify server shapes and some other parameters, but the rest is done by the Cloud automagically.

3) Event Hub Cloud Service. This service is fully managed by Oracle; you don't even need to specify any compute shapes. All you have to do is say how long you need to store data in the topic and how many partitions you need (partitions = performance).

Today, I'm going to tell you about the last option, the fully managed cloud service.

It's really easy to provision: just log in to your Cloud account and choose the "Event Hub" Cloud service.

After this, open the service console:

Next, click on "Create service":

Enter the parameters - the two key ones are Retention Period and Number of Partitions. The first defines how long you will store messages; the second defines performance for read and write operations.

Then click Next:

Confirm and wait a while (usually not more than a few minutes):

After a short while, you will be able to see the provisioned service:

 

 

Hello world flow.

Today I want to show the "Hello world" flow: how to produce (write) and consume (read) a message with Event Hub Cloud Service.

The flow is (step by step):

1) Obtain OAuth token

2) Produce message to a topic

3) Create consumer group

4) Subscribe to topic

5) Consume message

Now I'm going to show it in some detail.

OAuth and Authentication token (Step 1)

To deal with Event Hub Cloud Service you have to be familiar with the concepts of OAuth and OpenID. If you are not, you could watch the short video or go through this step-by-step tutorial.

In a couple of words, OAuth is a token-based authorization method (it tells what I may access) used to restrict access to resources.

One of the main ideas is to decouple the User (a real human - the Resource Owner) from the Application (the Client). The human knows the login and password, but the Client (Application) will not use them every time it needs to reach the Resource Server (which holds some info or content). Instead, the Application obtains an authorization token once and uses it for working with the Resource Server. This is the brief version; here you may find a more detailed explanation of what OAuth is.

Obtain Token for Event Hub Cloud Service client.

As you can understand, to get access to the Resource Server (read: Event Hub messages) you need to obtain an authorization token from the Authorization Server (read: IDCS). Here I'd like to show the step-by-step flow to obtain this token. I will start from the end and show the command (REST call) which you have to run to get the token:

#!/bin/bash
curl -k -X POST -u "$CLIENT_ID:$CLIENT_SECRET" \
  -d "grant_type=password&username=$THEUSERNAME&password=$THEPASSWORD&scope=$THESCOPE" \
  "$IDCS_URL/oauth2/v1/token" \
  -o access_token.json

As you can see, several parameters are required to obtain an OAuth token.

Let's take a look at where you can get them. Go to the service and click on the topic you want to work with; there you will find the IDCS Application - click on it:

After clicking on it, you will be redirected to the IDCS Application page. You can find most of the credentials here. Click on Configuration:

On this page you will right away find the Client ID and Client Secret (think of them as a login and password):

 

Scroll down and find the section called Resources:

Click on it

and you will find another two variables you need for the OAuth token - Scope and Primary Audience.

One more required parameter - the IDCS_URL - you may find in your browser:

Now you have almost everything you need except the login and password. These are your Oracle Cloud login and password (what you use when logging in to http://myservices.us.oraclecloud.com):

Now you have all the required credentials and are ready to write a script which automates all of this:

#!/bin/bash
export CLIENT_ID=7EA06D3A99D944A5ADCE6C64CCF5C2AC_APPID
export CLIENT_SECRET=0380f967-98d4-45e9-8f9a-45100f4638b2
export THEUSERNAME=john.dunbar
export THEPASSWORD=MyPassword
export SCOPE=/idcs-1d6cc7dae45b40a1b9ef42c7608b9afe-oehtest
export PRIMARY_AUDIENCE=https://7EA06D3A99D944A5ADCE6C64CCF5C2AC.uscom-central-1.oraclecloud.com:443
export THESCOPE=$PRIMARY_AUDIENCE$SCOPE
export IDCS_URL=https://idcs-1d6cc7dae45b40a1b9ef42c7608b9afe.identity.oraclecloud.com

curl -k -X POST -u "$CLIENT_ID:$CLIENT_SECRET" \
  -d "grant_type=password&username=$THEUSERNAME&password=$THEPASSWORD&scope=$THESCOPE" \
  "$IDCS_URL/oauth2/v1/token" \
  -o access_token.json

After running this script, you will have a new file called access_token.json. The access_token field is what you need:

$ cat access_token.json {"access_token":"eyJ4NXQjUzI1NiI6InVUMy1YczRNZVZUZFhGbXFQX19GMFJsYmtoQjdCbXJBc3FtV2V4U2NQM3MiLCJ4NXQiOiJhQ25HQUpFSFdZdU9tQWhUMWR1dmFBVmpmd0UiLCJraWQiOiJTSUdOSU5HX0tFWSIsImFsZyI6IlJTMjU2In0.eyJ1c2VyX3R6IjoiQW1lcmljYVwvQ2hpY2FnbyIsInN1YiI6ImpvaG4uZHVuYmFyIiwidXNlcl9sb2NhbGUiOiJlbiIsInVzZXJfZGlzcGxheW5hbWUiOiJKb2huIER1bmJhciIsInVzZXIudGVuYW50Lm5hbWUiOiJpZGNzLTFkNmNjN2RhZTQ1YjQwYTFiOWVmNDJjNzYwOGI5YWZlIiwic3ViX21hcHBpbmdhdHRyIjoidXNlck5hbWUiLCJpc3MiOiJodHRwczpcL1wvaWRlbnRpdHkub3JhY2xlY2xvdWQuY29tXC8iLCJ0b2tfdHlwZSI6IkFUIiwidXNlcl90ZW5hbnRuYW1lIjoiaWRjcy0xZDZjYzdkYWU0NWI0MGExYjllZjQyYzc2MDhiOWFmZSIsImNsaWVudF9pZCI6IjdFQTA2RDNBOTlEOTQ0QTVBRENFNkM2NENDRjVDMkFDX0FQUElEIiwiYXVkIjpbInVybjpvcGM6bGJhYXM6bG9naWNhbGd1aWQ9N0VBMDZEM0E5OUQ5NDRBNUFEQ0U2QzY0Q0NGNUMyQUMiLCJodHRwczpcL1wvN0VBMDZEM0E5OUQ5NDRBNUFEQ0U2QzY0Q0NGNUMyQUMudXNjb20tY2VudHJhbC0xLm9yYWNsZWNsb3VkLmNvbTo0NDMiXSwidXNlcl9pZCI6IjM1Yzk2YWUyNTZjOTRhNTQ5ZWU0NWUyMDJjZThlY2IxIiwic3ViX3R5cGUiOiJ1c2VyIiwic2NvcGUiOiJcL2lkY3MtMWQ2Y2M3ZGFlNDViNDBhMWI5ZWY0MmM3NjA4YjlhZmUtb2VodGVzdCIsImNsaWVudF90ZW5hbnRuYW1lIjoiaWRjcy0xZDZjYzdkYWU0NWI0MGExYjllZjQyYzc2MDhiOWFmZSIsInVzZXJfbGFuZyI6ImVuIiwiZXhwIjoxNTI3Mjk5NjUyLCJpYXQiOjE1MjY2OTQ4NTIsImNsaWVudF9ndWlkIjoiZGVjN2E4ZGRhM2I4NDA1MDgzMjE4NWQ1MzZkNDdjYTAiLCJjbGllbnRfbmFtZSI6Ik9FSENTX29laHRlc3QiLCJ0ZW5hbnQiOiJpZGNzLTFkNmNjN2RhZTQ1YjQwYTFiOWVmNDJjNzYwOGI5YWZlIiwianRpIjoiMDkwYWI4ZGYtNjA0NC00OWRlLWFjMTEtOGE5ODIzYTEyNjI5In0.aNDRIM5Gv_fx8EZ54u4AXVNG9B_F8MuyXjQR-vdyHDyRFxTefwlR3gRsnpf0GwHPSJfZb56wEwOVLraRXz1vPHc7Gzk97tdYZ-Mrv7NjoLoxqQj-uGxwAvU3m8_T3ilHthvQ4t9tXPB5o7xPII-BoWa-CF4QC8480ThrBwbl1emTDtEpR9-4z4mm1Ps-rJ9L3BItGXWzNZ6PiNdVbuxCQaboWMQXJM9bSgTmWbAYURwqoyeD9gMw2JkwgNMSmljRnJ_yGRv5KAsaRguqyV-x-lyE9PyW9SiG4rM47t-lY-okMxzchDm8nco84J5XlpKp98kMcg65Ql5Y3TVYGNhTEg","token_type":"Bearer","expires_in":604800}

Create a Linux variable for it:

#!/bin/bash
export TOKEN=`cat access_token.json | jq .access_token | sed 's/\"//g'`

Well, now we have the authorization token and may work with our Resource Server (Event Hub Cloud Service).

Note: you may also check the documentation on how to obtain an OAuth token.

Produce Messages (Write data) to Kafka (Step 2)

The first thing that we may want to do is produce messages (write data to a Kafka cluster). To make scripting easier, it's also better to use some environment variables for common resources. For this example, I'd recommend parametrizing the topic's endpoint, the topic name, the type of content to be accepted and the content type. The content type is completely up to the developer, but you have to consume (read) the same format as you produce (write). The key parameter to define is the REST endpoint. Go to PSM, click on the topic name and copy everything up to "restproxy":

You will also need the topic name, which you can take from the same window:

Now we can write a simple script to produce one message to Kafka:

#!/bin/bash
export OEHCS_ENDPOINT=https://oehtest-gse00014957.uscom-central-1.oraclecloud.com:443/restproxy
export TOPIC_NAME=idcs-1d6cc7dae45b40a1b9ef42c7608b9afe-oehtest
export CONTENT_TYPE=application/vnd.kafka.json.v2+json

curl -X POST \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: $CONTENT_TYPE" \
  --data '{"records":[{"value":{"foo":"bar"}}]}' \
  $OEHCS_ENDPOINT/topics/$TOPIC_NAME

If everything is fine, the Linux console will return something like:

{"offsets":[{"partition":1,"offset":8,"error_code":null,"error":null}],"key_schema_id":null,"value_schema_id":null}

Create Consumer Group (Step 3)

The first step to read data from OEHCS is to create a consumer group. We will reuse the environment variables from the previous step, but just in case I'll include them in this script:

#!/bin/bash
export OEHCS_ENDPOINT=https://oehtest-gse00014957.uscom-central-1.oraclecloud.com:443/restproxy
export CONTENT_TYPE=application/vnd.kafka.json.v2+json
export TOPIC_NAME=idcs-1d6cc7dae45b40a1b9ef42c7608b9afe-oehtest

curl -X POST \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: $CONTENT_TYPE" \
  --data '{"format": "json", "auto.offset.reset": "earliest"}' \
  $OEHCS_ENDPOINT/consumers/oehcs-consumer-group \
  -o consumer_group.json

This script will generate an output file containing variables that we will need to consume messages.
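
If you are curious what is inside, a quick look at the file shows the two fields the next steps rely on - the consumer instance_id and the base_uri (a sketch; the exact values will differ in your environment):

#!/bin/bash
# Pretty-print the response saved by the previous script (jq required).
jq . consumer_group.json
# Expect something shaped like:
# {
#   "instance_id": "...",
#   "base_uri": "https://.../restproxy/consumers/oehcs-consumer-group/instances/..."
# }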

Subscribe to a topic (Step 4)

Now you are ready to subscribe to this topic (export the environment variables if you didn't do so before):

#!/bin/bash
export BASE_URI=`cat consumer_group.json | jq .base_uri | sed 's/\"//g'`
export TOPIC_NAME=idcs-1d6cc7dae45b40a1b9ef42c7608b9afe-oehtest

curl -X POST \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: $CONTENT_TYPE" \
  -d "{\"topics\": [\"$TOPIC_NAME\"]}" \
  $BASE_URI/subscription

If everything is fine, this request will not return anything.

Consume (Read) messages (Step 5)

Finally, we approach the last step - consuming messages.

And again, it's quite a simple curl request:

#!/bin/bash
export BASE_URI=`cat consumer_group.json | jq .base_uri | sed 's/\"//g'`
export H_ACCEPT=application/vnd.kafka.json.v2+json

curl -X GET \
  -H "Authorization: Bearer $TOKEN" \
  -H "Accept: $H_ACCEPT" \
  $BASE_URI/records

If everything works as it's supposed to, you will have output like:

[{"topic":"idcs-1d6cc7dae45b40a1b9ef42c7608b9afe-oehtest","key":null,"value":{"foo":"bar"},"partition":1,"offset":17}]

Conclusion

Today we saw how easy it is to create a fully managed Kafka topic in Event Hub Cloud Service, and we also took our first steps with it - writing and reading messages. Kafka is a really popular message bus engine, but it's hard to manage. The Cloud simplifies this and allows customers to concentrate on the development of their applications.

Here are also some useful links:

1) If you are not familiar with REST APIs, I'd recommend going through this blog

2) There is an online tool which helps to validate your curl requests

3) Here you can find some useful examples of producing and consuming messages

4) If you are not familiar with OAuth, here is a nice tutorial which shows an end-to-end example

Categories: Development

Why Now Is the Time for ERP in the Cloud

Shay Shmeltzer - Fri, 2018-05-18 20:20

“The movement to cloud is an inevitable destination; this is how computing will evolve over the next several years.” So said Oracle CEO Mark Hurd at Oracle OpenWorld 2017. Based on the results of new research, that inevitability is here, now.

In our first ERP Trends Report, we surveyed more than 400 finance and IT leaders. We found that 76% of respondents said they either have plans for ERP in the cloud or have made the move already. They are recognizing that waiting puts them at a disadvantage; the time to make the move is now.

The majority of respondents cited economic factors as the reason they made the leap, and it’s easy to see why: Nucleus Research recently published a report finding that cloud delivers 3.2x the return on investment (ROI) of on-premises systems, while the total cost of ownership (TCO) is 52% lower.

But even more surprising were the benefits realized once our survey respondents got to the cloud. An astonishing 81% cited “Staying current on technology” as the main benefit of moving to cloud ERP. With a regular cadence of innovation delivered by the cloud, it is easier for companies to quickly incorporate game-changing technologies into everyday business processes—technologies like artificial intelligence, machine learning, the Internet of Things (IoT), blockchain and more. In the cloud, the risk of running their businesses on obsolete technology drops to zero. It’s the last upgrade they will ever need.

“One of the key value propositions in engaging with Oracle and implementing the cloud solutions has been the value of keeping current with technology and technological developments,” said Mick Murray, CFO of Blue Shield of California. “In addition to robotics, we’re looking at machine learning and artificial intelligence, and how do we apply that across the enterprise.”

As new capabilities are rolled out, cloud subscribers like Blue Shield can take advantage of them immediately. This gives them the agility to be both responsive and predictive. Uncertainty is the new normal in business and managing amid uncertainty is a must. It’s no longer enough to be quick-to-change; competitive companies must also have reliable insight into how potential future scenarios could impact performance.

So, what does that mean in terms of daily operations? Basically, it means people using knowledge to make good decisions in a fast, productive, and highly automated manner at all levels of the business. Cloud systems provide the data integration and ongoing technology refresh to incorporate best practices and technology advances.

The cloud also makes it easier to integrate external sources of valuable, contextual knowledge that helps improve the accuracy of data models. This is important considering the scope of threats to sustainable operations for businesses with large, global footprints. Political, environmental, and economic factors across multiple regions could impact business, such as limited travel capabilities slowing down delivery of key supplies.

Business uncertainty is everywhere, and organizations must be able to say, “What is our plan if X happens? What is our plan if X, Y, and Z happen, but W doesn’t?” And this insight must come quickly. Business moves too fast for reports to take days to compile.

ERP Replacement Effort Is Not What It Used to Be

One final stone on the scale in favor of ERP cloud is that migrating does not have to be painful. Don’t let memories of past onsite replacements haunt you. With the right products and the right expertise behind them, cloud migrations happen quickly, cause minimal business disruption, and don’t require intense user training.

For example, Blue Shield of California had set aside $600,000 on change management for the adoption of cloud; in the end, they barely spent anything. Change adoption, they reported, happened quickly and seamlessly.

Considering the benefits for cost savings, elimination of technology obsolescence, and ease of adopting emerging technologies, it is becoming harder to justify waiting on a migration to cloud ERP. Disruption is not an issue, and the long-term cost savings are substantial. Most importantly, modernizing ERP is an opportunity to modernize the business and embed an ever-refreshing technology infrastructure that enables higher performance on multiple levels.

 

Categories: Development

7 Machine Learning Best Practices

Shay Shmeltzer - Fri, 2018-05-18 20:11

Netflix’s famous algorithm challenge awarded a million dollars to the best algorithm for predicting user ratings for films. But did you know that the winning algorithm was never implemented into a functional model?

Netflix reported that the results of the algorithm just didn’t seem to justify the engineering effort needed to bring them to a production environment. That’s one of the big problems with machine learning.

At your company, you can create the most elegant machine learning model anyone has ever seen. It just won’t matter if you never deploy and operationalize it. That's no easy feat, which is why we're presenting you with seven machine learning best practices.

Download your free ebook, "Demystifying Machine Learning"

At the most recent Data and Analytics Summit, we caught up with Charlie Berger, Senior Director of Product Management for Data Mining and Advanced Analytics, to find out more. This article is based on what he had to say.

Putting your model into practice might take longer than you think. A TDWI report found that 28% of respondents took three to five months to put their model into operational use. And almost 15% needed longer than nine months.

Graph on Machine Learning Operational Use

So what can you do to start deploying your machine learning faster?

We’ve laid out our tips here:

1. Don’t Forget to Actually Get Started

In the following points, we’re going to give you a list of different ways to ensure your machine learning models are used in the best way. But we’re starting out with the most important point of all.

The truth is that at this point in machine learning, many people never get started at all. This happens for many reasons. The technology is complicated, the buy-in perhaps isn’t there, or people are just trying too hard to get everything e-x-a-c-t-l-y right. So here’s Charlie’s recommendation:

Get started, even if you know that you’ll have to rebuild the model once a month. The learning you gain from this will be invaluable.

2. Start with a Business Problem Statement and Establish the Right Success Metrics

Starting with a business problem is a common machine learning best practice. But it’s common precisely because it’s so essential and yet many people de-prioritize it.

Think about this quote, “If I had an hour to solve a problem, I’d spend 55 minutes thinking about the problem and 5 minutes thinking about solutions.”

Now be sure that you’re applying it to your machine learning scenarios. Below, we have a list of poorly defined problem statements and examples of ways to define them in a more specific way.

Machine Learning Problem Statements

Think about what your definition of profitability is. For example, we recently talked to a nationwide chain of fast-casual restaurants that wanted to look at increasing their soft drink sales. In that case, we had to carefully consider the implications of defining the basket. Is the transaction a single meal, or six meals for a family? This matters because it affects how you will display the results. You’ll have to think about how to approach the problem and ultimately operationalize it.

Beyond establishing success metrics, you need to establish the right ones. Metrics will help you establish progress, but does improving the metric actually improve the end user experience? Your traditional accuracy measures might encompass precision and squared error, but if you’re building a model for airline price optimization, those don’t matter if your cost per purchase and overall purchases aren’t going up.

3. Don’t Move Your Data – Move the Algorithms

The Achilles heel in predictive modeling is that it’s a 2-step process. First you build the model, generally on sample data that can run in numbers ranging from the hundreds to the millions. And then, once the predictive model is built, data scientists have to apply it. However, much of that data resides in a database somewhere.

Let’s say you want data on all of the people in the US. There are 360 million people in the US—where does that data reside? Probably in a database somewhere.

Where does your predictive model reside?

What usually happens is that people will take all of their data out of database so they can run their equations with their model. Then they’ll have to import the results back into the database to make those predictions. And that process takes hours and hours and days and days, thus reducing the efficacy of the models you’ve built.

However, running your equations inside the database has significant advantages. Running the equations through the kernel of the database takes a few seconds, versus the hours it would take to export your data. The database can do all of your math and build the model inside the database too. This means one world for the data scientist and the database administrator.

By keeping your data within your database and Hadoop or object storage, you can build models and score within the database, and use R packages with data-parallel invocations. This allows you to eliminate data duplication and separate analytical servers (by not moving data), and to score models, embed data prep, build models, and prepare data in just hours.

4. Assemble the Right Data

As James Taylor with Neil Raden wrote in Smart Enough Systems, cataloging everything you have and deciding what data is important is the wrong way to go about things. The right way is to work backward from the solution, define the problem explicitly, and map out the data needed to populate the investigation and models.

And then, it’s time for some collaboration with other teams.

Machine Learning Collaboration Teams

Here’s where you can potentially start to get bogged down. So we will refer to point number 1, which says, “Don’t forget to actually get started.” At the same time, assembling the right data is very important to your success.

For you to figure out the right data to use to populate your investigation and models, you will want to talk to people in the three major areas of business domain, information technology, and data analysts.

Business domain—these are the people who know the business.

  • Marketing and sales
  • Customer service
  • Operations

Information technology—the people who have access to data.

  • Database administrators

Data analysts—the people who know the data and how to analyze it.

  • Statisticians
  • Data miners
  • Data scientists

You need their active participation. Without it, you’ll get comments like:

  • These leads are no good
  • That data is old
  • This model isn’t accurate enough
  • Why didn’t you use this data?

You’ve heard it all before.

5. Create New Derived Variables

You may think, I have all this data already at my fingertips. What more do I need?

But creating new derived variables can help you gain much more insightful information. For example, you might be trying to predict the amount of newspapers and magazines sold the next day. Here’s the information you already have:

  • Brick-and-mortar store or kiosk
  • Sell lottery tickets?
  • Amount of the current lottery prize

Sure, you can make a guess based off that information. But if you’re able to first compare the amount of the current lottery prize versus the typical prize amounts, and then compare that derived variable against the variables you already have, you’ll have a much more accurate answer.

6. Consider the Issues and Test Before Launch

Ideally, you should be able to A/B test with two or more models when you start out. Not only will you know whether you’re doing it right, but you’ll also be able to feel more confident in the results.

But going further than thorough testing, you should also have a plan in place for when things go wrong. For example, your metrics start dropping. There are several things that will go into this. You’ll need an alert of some sort to ensure that this can be looked into ASAP. And when a VP comes into your office wanting to know what happened, you’re going to have to explain what happened to someone who likely doesn’t have an engineering background.

Then of course, there are the issues you need to plan for before launch. Complying with regulations is one of them. For example, let’s say you’re applying for an auto loan and are denied credit. Under the new regulations of GDPR, you have the right to know why. Of course, one of the problems with machine learning is that it can seem like a black box and even the engineers/data scientists can’t say why certain decisions have been made. However, certain companies will help you by ensuring your algorithms provide prediction details.

7. Deploy and Automate Enterprise-Wide

Once you deploy, it’s best to go beyond the data analyst or data scientist.

What we mean by that is, always, always think about how you can distribute predictions and actionable insights throughout the enterprise. It’s where the data is and when it’s available that makes it valuable; not the fact that it exists. You don’t want to be the one sitting in the ivory tower, occasionally sprinkling insights. You want to be everywhere, with everyone asking for more insights—in short, you want to make sure you’re indispensable and extremely valuable.

Given that we all only have so much time, it’s easiest if you can automate this. Create dashboards. Incorporate these insights into enterprise applications. See if you can become a part of customer touch points, like an ATM recognizing that a customer regularly withdraws $100 every Friday night and likes $500 after every payday.

Conclusion

Here are the core ingredients of good machine learning. You need good data, or you’re nowhere. You need to put it somewhere, like a database or object storage. You need deep knowledge of the data and what to do with it, whether that’s creating new derived variables or choosing the right algorithms to make use of them. Then you need to actually put them to work, get great insights, and spread those across the organization.

The hardest part of this is launching your machine learning project. We hope that by creating this article, we’ve helped you out with the steps to success. If you have any other questions or you’d like to see our machine learning software, feel free to contact us.

You can also refer back to some of the articles we’ve created on machine learning best practices and challenges concerning that. Or, download your free ebook, "Demystifying Machine Learning."

 

Categories: Development
