Feed aggregator

Loading more than 45 columns into a table using wizard - Oracle apex 5.0.4

Tom Kyte - Wed, 2017-08-30 10:06
I was wondering if there's any workaround for uploading data into the tables using the apex wizard when there are more than 45 columns. I am using Apex 5.0.4, Oracle 12 C database. I was going through the documentation where it lists that number of c...
Categories: DBA Blogs

NVL - invalid number

Tom Kyte - Wed, 2017-08-30 10:06
Hi Tom, Can you explain why NVL has to process expr2 when expr1 is not null? I see the error is not occurring when used in a WHERE condition. Error:- UPDATE mytable SET number_col = nvl(1234,'dummy') where number_col is null or select...
Categories: DBA Blogs

Why do I get 'ORA-04043: object dba_data_files does not exist' on my Oracle DataGuard 11.2.0.1.0 physical standby when I run 'desc dba_data_files'?

Tom Kyte - Wed, 2017-08-30 10:06
Hi, When I query the description of the tables 'dba_data_files' and 'dba_temp_files' with the SQL commands 'desc dba_data_files' and 'desc dba_temp_files' on my Oracle DataGuard 11.2.0.1.0 physical standby, there is an error, as follows: <code> SYS@standby>...
Categories: DBA Blogs

Performance Tuning

Tom Kyte - Wed, 2017-08-30 10:06
Hi Team/Steven. We have one table "A" with 100000 records. Based on table "A" we have a sequence id; we are using that sequence id and fetching from two more tables, like: insert into gtt_table select * from b join c where B.id=c.id and b.seq_id=rec.seq_id; ...
Categories: DBA Blogs

get_tab2.sql - Free Tool to show Privileges on an Object Updated

Pete Finnigan - Wed, 2017-08-30 10:06
I have a core set of PL/SQL scripts that I use when conducting Oracle security work on customer sites. Most of these have been available on this website for many years. One of these is my script get_tab2.sql which shows grants....[Read More]

Posted by Pete On 30/08/17 At 12:11 PM

Categories: Security Blogs

ADF Client Side Validation with JavaScript

Andrejus Baranovski - Wed, 2017-08-30 09:25
In my previous post I explained how to use a JS client side formatter for ADF Faces input components - ADF Goes Client Side - UI Performance Boost with JavaScript. The same principle can be applied to client side validation. Most likely you are not going to implement complex validation rules on the client side, but for simple checks it will be perfect. There will be no roundtrip to the server, which improves the user experience.

A client side validation check runs instantly when the value is changed, and there is no need to set Auto Submit = true to send the value to the server. This rule checks that the value is within a certain range; similar simple checks can be implemented on the client side (I'm sure you will find similar rules in your project):


If the value is in range, the client side formatter is applied as expected:


The custom JSF converter is registered in faces-config.xml and can be added to the input field either manually or through the wizard:


The converter is referenced by its ID, which is defined in faces-config.xml:


Definition in faces-config.xml:
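The registration entry itself is standard JSF; as an illustration (the converter ID and class name below are assumptions, not the ones from the sample app), it could look like this:

<converter>
  <converter-id>maths.range.converter</converter-id>
  <converter-class>com.example.view.converter.RangeConverter</converter-class>
</converter>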


The custom converter class defines the range values; these properties will be consumed by the JS logic:


The range check is done in JS. If validation fails, we display a JSF error message. Everything happens on the client side, without a request to the server:
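As an illustration only (a sketch under assumptions, not the exact code from the sample application), such a check could be written against the Trinidad client side message API, with hypothetical range bounds:

// hypothetical client side range check, invoked from the converter's getAsObject
function checkValueInRange(value) {
  var numericValue = parseFloat(value);
  if (isNaN(numericValue) || numericValue < 0 || numericValue > 1000) {
    // build a client side faces message and throw it as a converter error;
    // the message is shown immediately, without a request to the server
    var msg = new TrFacesMessage(
      "Value out of range",
      "Please enter a value between 0 and 1000",
      TrFacesMessage.SEVERITY_ERROR);
    throw new TrConverterException(msg);
  }
  return numericValue;
}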


Download sample application with client side validation logic - ADFFormattingApp_v2.zip.

Oracle Big Data Cloud services for all - Insights built to last

Big data remains a topic everyone seems to be talking about, but still many wonder "what is big data really?" How is it changing the way researchers at companies, non-profits,...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Serverless Computing – Function as a Service (FaaS) – with Azure Functions – first small steps with a Node/JavaScript function

Amis Blog - Wed, 2017-08-30 03:59

If your application does not have internal state – and it handles peak loads of requests at some times while doing no work at all at others – why should there be one or even more instances of the application (plus container and/or server) continuously and dedicatedly up and running for it? For peak loads, a single instance is nowhere near enough. For times without any traffic, even a single instance is too much – and yet you pay for it.

Serverless computing – brought to prominence with AWS Lambda – is an answer to this. It is defined on Wikipedia as a “cloud execution model” in which “the cloud provider dynamically manages the allocation of machine resources”. The subscriber to the cloud service provides the code to execute and specifies the events that should trigger execution. The cloud provider takes care of running that code whenever the event occurs. Pricing is based on the combination of the resources used (memory, possibly CPU) and the time it takes to execute the function. No compute node is permanently associated with the function, and any function execution instance can run on a different virtual server (so it is not really serverless in a strict sense – a server is used for running the function, but it can be a different server with each execution). Of course, function instances can still have and share state by using a cache or backend data store of some kind.

The Serverless Function model can be used for processing events (a very common use case) but also for handling HTTP requests and therefore for implementing REST APIs or even stateless web applications. Implementation languages for serverless functions differ a little across cloud providers. Common runtimes are Node, Python, Java and C#. Several cloud vendors provide a form of Serverless Computing – AWS with Lambda, Microsoft with Azure Functions, Google with Google Cloud Functions and IBM with BlueMix FaaS (Function as a Service). Oracle announced Oracle [Cloud] Functions at Oracle OpenWorld 2016 (Oracle Functions – Serverless architecture on the Oracle PaaS Cloud) and is expected to actually release the service (including support for orchestration of distributed serverless functions) around Oracle OpenWorld 2017 (October 2017) – see for example the list of sessions at OOW2017 on Serverless.

Note: monitoring the execution of the functions, collecting run time metrics and doing debugging on issues can be a little challenging. Special care should be taken when writing the functions – as for example there is no log file written on the server on which the code executes.

In this article, I briefly show an example of working with Serverless Computing using Azure Functions.

Steps for implementing a Function:

  • arrange Azure cloud account
  • create Function App as context for Functions
  • create Function
  • trigger Function – cause the events that trigger the Function.
  • inspect the result from the function
  • monitor the function execution

Taking an existing Azure Cloud Account, the first step is to create a Function App in your Azure subscription – as a context to create individual functions in (“You must have a function app to host the execution of your functions. A function app lets you group functions as a logic unit for easier management, deployment, and sharing of resources.”).

image

I will not discuss the details for this step – they are fairly trivial (see for example this instruction: https://docs.microsoft.com/en-us/azure/azure-functions/functions-create-first-azure-function#create-a-function-app)

Quick Overview of Steps

Navigate into the function app:

image

Click on plus icon to create a new Function:

image

Click on goto quickstart for the easiest way in

image

Select scenario WebHook + API; select JavaScript as the language. Note: the JavaScript runtime environment is Node 6.5 at the time of writing (August 2017).

Click on Create this function.

image

The function is created – with a name I cannot influence

image

When the function was created, two files were created: index.js and function.json. We can inspect these files by clicking on the View Files tab:

image 

The function.json file is a configuration file where we specify generic meta-data about the function.
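To give an idea (reproduced from memory, so the generated template may differ slightly in detail), the two files look roughly like this:

index.js (approximate):

module.exports = function (context, req) {
    // context gives access to the bindings and to logging; req is the incoming HTTP request
    context.log('JavaScript HTTP trigger function processed a request.');
    if (req.query.name || (req.body && req.body.name)) {
        // respond with a greeting when a name was passed in the query string or body
        context.res = { body: "Hello " + (req.query.name || req.body.name) };
    } else {
        context.res = { status: 400, body: "Please pass a name on the query string or in the request body" };
    }
    // signal that the function has finished
    context.done();
};

function.json (approximate) – it wires an incoming HTTP trigger to req and the HTTP response to res:

{
  "disabled": false,
  "bindings": [
    { "authLevel": "function", "type": "httpTrigger", "direction": "in", "name": "req" },
    { "type": "http", "direction": "out", "name": "res" }
  ]
}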

The integration tab shows the triggering event(s) for this function – configured for HTTP requests.

image

The manage tab allows us to define environment variables to pass into the function runtime execution environment:

image

The Monitor tab allows us to monitor executions of the Function and the logging they produce:

image

Return to the main tab with the function definition. Make a small change in the template code – to make it my own function; then click on Save & Run to store the modified definition and make a test call to the Function:

image

The result of the test call is shown on the right, as well as in the logging tab at the bottom of the page:

image

To invoke the function outside the Azure Cloud environment, click on Get Function URL.

image

Click on the icon to copy the URL to the clipboard.

Open a browser, paste the URL and add the name query parameter:
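The URL has roughly this shape (the function app name, function name and key below are placeholders):

https://<function_app_name>.azurewebsites.net/api/<function_name>?code=<function_key>&name=John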

image

In Postman we can also make a test call:

image

Both these calls are from my laptop, without any special connection to the Azure Cloud. You can make that same call from your environment. The function is triggerable from anywhere – and when an HTTP request for the function is received, Azure assigns it a runtime environment in which to execute the JavaScript code. Pretty cool.

The logging shows the additional instances of the function:

image

From within the function, we can write output to the logging. All function execution instances write to the same pile of logging, from within their own execution environments:
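For example, a single extra line in index.js is enough (the message text is just an example of mine):

context.log('composing a greeting for ' + (req.query.name || req.body.name));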

image

Now Save & Run again – and see the log line written during the function execution:

image

Functions lets you define the threshold trace level for writing to the console, which makes it easy to control the way traces are written to the console from your functions. You can set the trace-level threshold for logging in the host.json file, or turn it off.
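A minimal tracing entry in host.json could look as follows – a sketch based on the v1 host.json schema, so verify it against the documentation listed below:

{
  "tracing": {
    "consoleLevel": "verbose"
  }
}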

The Monitor tab provides an overview of all executions of the function, including the not so happy ones (I made a few coding mistakes that I did not share). For each instance, the specific logging and execution details are available:

image

 

Debug Console and Package Management

At the URL https://<function_app_name>.scm.azurewebsites.net we can access a management/development console where we can perform advanced operations regarding application deployment and configuration:

image

The CMD console looks like this:

image

NPM packages and Node modules can be added to a JavaScript Function. See for details: https://docs.microsoft.com/en-us/azure/azure-functions/functions-reference-node#node-version-and-package-management

A not so obvious feature of the CMD console is the ability to drag files from my local Windows machine into the browser – such as the package.json shown in this figure:

image

Note: You should define a package.json file at the root of your function app. Defining the file lets all functions in the app share the same cached packages, which gives the best performance. If a version conflict arises, you can resolve it by adding a package.json file in the folder of a specific function.
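A minimal package.json at the root of the function app might look like this (the dependency is just an example):

{
  "name": "my-function-app",
  "version": "1.0.0",
  "dependencies": {
    "lodash": "^4.17.4"
  }
}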

Conclusion

Creating a JavaScript (Node) Function in Azure Functions is pretty straightforward. The steps are logical, the environment reacts intuitively and smoothly. Good fun working with this.

I am looking forward to Oracle’s Cloud service for serverless computing – to see if it provides a similarly good experience, and perhaps even more. More on that next month, I hope.

Next steps for me: triggering Azure Functions from events other than HTTP requests, and leveraging NPM packages from my Function. Perhaps also trying out Visual Studio as the development and local testing environment for Azure Functions.

 

Resources

FAQ on AWS Lambda – https://aws.amazon.com/lambda/faqs/

Wikipedia on Serverless Computing – https://en.wikipedia.org/wiki/Serverless_computing

Oracle announced Oracle [Cloud] Functions at Oracle OpenWorld 2016  – Oracle Functions – Serverless architecture on the Oracle PaaS Cloud

Sessions at Oracle OpenWorld 2017 on Serverless Computing (i.e. Oracle Functions) –  list of session at OOW2017 on Serverless

Azure Functions – Create your first Function – https://docs.microsoft.com/en-us/azure/azure-functions/functions-create-first-azure-function 

Azure Functions Documentation – https://docs.microsoft.com/en-us/azure/azure-functions/index 

Azure Functions HTTP and webhook bindings – https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-http-webhook

Azure Functions JavaScript developer guide – https://docs.microsoft.com/en-us/azure/azure-functions/functions-reference-node

How to update function app files – package.json, project.json, host.json – https://docs.microsoft.com/en-us/azure/azure-functions/functions-reference#fileupdate

The post Serverless Computing – Function as a Service (FaaS) – with Azure Functions – first small steps with a Node/JavaScript function appeared first on AMIS Oracle and Java Blog.

Ruby and Ruby on Rails Recognized by Oracle Technology Network

Christopher Jones - Wed, 2017-08-30 01:28

I'm really pleased that two key contributors to the Ruby and Ruby on Rails communities for Oracle Database have been honored.

Takehiro Kubo and Yasuo Honda have both become Oracle ACEs. The Oracle Technology Network's Oracle ACE program describes itself as recognizing Oracle evangelists and enthusiasts. Both Takehiro and Yasuo fit this description exactly.

To me, Takehiro Kubo is most visible in his work on the ruby-oci8 gem. Ruby-oci8 is a Ruby interface for Oracle Database. Takehiro created ruby-oci8 and is actively maintaining it and helping users. He also willingly contributes his knowledge to Oracle Database interfaces for other programming languages, helping both maintainers and users. An eager adopter of new technology, he is currently developing ruby-odpi, a rewrite of the interface that is based on the strengths of Oracle's new ODPI-C wrapper.

Most Oracle Ruby and JRuby developers use the Ruby on Rails web application framework. Here Yasuo Honda is the key person. He has been the lead maintainer of the Ruby on Rails Oracle ActiveRecord Adapter for some years now. He has nurtured an active community of users and contributors, keeping up with both framework and library improvements. He has contributed enormously to its status as a very popular development environment. He freely contributes his knowledge and expertise.

Both Takehiro and Yasuo personify the ideal open source maintainers. They are able to create useful, beautiful software components that other people want to use. They take their roles seriously, and have shown long term commitment to their projects' successes.

Congratulations!

Presenting at UKOUG Tech17 Conference in December (Here Come The Warm Jets)

Richard Foote - Tue, 2017-08-29 19:39
I presented at the UKOUG Tech14 conference in Liverpool and thoroughly enjoyed the experience. It was one of the better User Group conferences I’ve attended and winning both the Best New Oracle Speaker and Best Oracle Speaker awards made that bit more special. So it’s with real excitement that I have the opportunity to again […]
Categories: DBA Blogs

Single Submitter Support in Oracle Scheduler Integration

Anthony Shorten - Tue, 2017-08-29 17:47

The Oracle Scheduler integration was released for Oracle Utilities Application Framework to provide an interface to the DBMS_SCHEDULER package in the Oracle Database. 

By default, when submitting a multi-threaded job where the thread_limit is set to a number greater than 1 and the thread_number on the submission is set to zero (to spawn threads), the interface submits each thread individually, one after the other. For a large number of threads, this may lead to a high level of lock contention on the Batch Control table. To resolve this issue we have enhanced the interface with a new single-submitter feature that reduces the lock contention.

To use this facility you can either use a new command line override:

OUAF_BATCH.Submit_Job(
...
single_submitter => true,
...
)

Or it can be set via the Set_Option facility (globally or on individual jobs). For example, for a global scope:

OUAF_BATCH.Set_Option(scope => 'GLOBAL', name => 'single_submitter', value => true);

The default for this facility is set to false (for backward compatibility). If the value is set to true, you cannot restart an individual thread until all running threads have ended.

This patch is available from My Oracle Support for a number of releases:

  • Release 4.2.0.3.0 – Patch 24299479
  • Release 4.3.0.1.0 – Patch 26440254
  • Release 4.3.0.2.0 – Patch 26452535
  • Release 4.3.0.3.0 – Patch 26452546
  • Release 4.3.0.4.0 – Patch 26452556

 

How to get the Number of Days between two dates, subject to some conditions

Tom Kyte - Tue, 2017-08-29 15:46
Team, Please find below the Create Table stmt <code> CREATE TABLE ST_TA_STOP_INTEREST(LOAN_CODE NUMBER, TRANSACTION_DATE DATE,EVENT_ID VARCHAR2(5)); </code> Please find below the insert stmts for the above created table: <code> SET DEFIN...
Categories: DBA Blogs

Can't we take a backup of a datafile using expdp?

Tom Kyte - Tue, 2017-08-29 15:46
Hi, can we not take a backup of a datafile using expdp and impdp? Can we take a backup of a schema in RMAN?
Categories: DBA Blogs

Pro*C - DML on double variable

Tom Kyte - Tue, 2017-08-29 15:46
Hi Tom, While inserting a double variable into a database column it gets rounded off. The database column is NUMBER(12,2). When I tried inserting 2146327346.47, it was inserted as 2146327346.47. Similarly, 2152186221.53 was inserted as 2152186222.00 and...
Categories: DBA Blogs

How to create constraints and indexes for very large table/Running out of TEMP space

Tom Kyte - Tue, 2017-08-29 15:46
I am rebuilding a database on a new server and I have reloaded the tables via datapump exp/imp. I have one particular table that has a range partition by year and has roughly 20+ billion rows of data. When I try to add the constraints and indexes I...
Categories: DBA Blogs

Finding the correct value based on a date range.

Tom Kyte - Tue, 2017-08-29 15:46
I have 3 tables one with payroll information (payroll), one of pay periods (erpaydates) and one of retirement contribution percents (planeecontrpct). I am trying to find a way to select what the employee's contribution percent (planeecontrpct.contrpc...
Categories: DBA Blogs

What Are NULL pname entries in v$process?

Pete Finnigan - Tue, 2017-08-29 15:46
I got a message on LinkedIn today from Jijo, who asked why, when he queries v$process, some of the PNAME column values are NULL. I have a simple script, vproc.sql, that I have used when analysing databases for many years....[Read More]

Posted by Pete On 29/08/17 At 02:35 PM

Categories: Security Blogs

Create a Custom Authentication and Authorization Scheme in Oracle APEX

Dimitri Gielis - Tue, 2017-08-29 13:32
This post is part of a series of posts: From idea to app or how I do an Oracle APEX project anno 2017

Previously, when creating the APEX application based on a Blueprint, we used the standard APEX Authentication, which means the management of users is done in Oracle Application Express itself. As we created our own user table, we want to manage our users there and have a custom authentication scheme for our multiplication application.

In this post I will show you exactly how I built the custom authentication scheme for my Oracle APEX application you see at mtable.online.

I first decided which kind of login page I wanted. A few years ago I blogged about pimping the login page, which is what I use on the APEX Office Print login page. This type of page is a more traditional way of building a login page in Oracle APEX. For the registration page and forgot password page we used the same techniques.


For the multiplication table project I decided to do it a bit differently. I started from the landing page and built the login mechanism into that page. The register and forgot password functions are also on that same page, unlike what we did for APEX Office Print, where we used different pages for forgot password, register and login.

Here's what my page looks like in the Oracle APEX App Builder:


There are a few regions to help with the layout (Top, Left, Right). In the Right region, there are 3 sub-regions: Login, Register, Forgot Password, which will show one at a time. Dynamic Actions (Show/Hide) control which region is seen.

From the front-end this is what it looks like.
When a button is clicked, an APEX process fires, but all the logic is defined in a package.
I typically call the package that handles the authentication [project trigram]_AUTH_PKG. It contains not only the authentication logic, but also the registration, reset password and authorization logic.

The specification looks like this:

And the body like this:

I typically use dbms_crypto to generate (hash) the passwords, but as that package is not supported on Oracle Exadata Express at the time of writing, I use another SHA256 PL/SQL implementation.
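As an illustration only (a sketch under assumptions, not the actual package from the application), a minimal custom_authenticate function could look like this, assuming a USERS table with USERNAME and PASSWORD_HASH columns and a hash_password helper that wraps the SHA256 implementation:

-- hypothetical sketch inside the package body; table, column and helper names are assumptions
function custom_authenticate (
  p_username in varchar2,
  p_password in varchar2
) return boolean
is
  l_stored_hash users.password_hash%type;
begin
  select password_hash
    into l_stored_hash
    from users
   where upper(username) = upper(p_username);
  -- compare the stored hash with the hash of the supplied password
  return l_stored_hash = hash_password(p_username, p_password);
exception
  when no_data_found then
    return false;
end custom_authenticate;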

I'm not going into too much detail on the logic in the PL/SQL package. I hope it's kinda self explanatory, but if you have any question, feel free to ask in the comments field.

Now we will focus on creating a Custom Authentication Scheme in APEX.

Go to App Builder > Shared Components > Authentication Schemes and hit the Create button to add a new one:


Enter the custom_authenticate function from the package we created earlier:


By default the new authentication scheme will be made current, so make sure you have some data in your tables, otherwise you won't be able to log in.

Next, I typically add some post-authentication logic to fill some Application Items.
Edit the Custom Authentication scheme and add the code and post_auth as in this picture:
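For reference, a post-authentication procedure along these lines can fill the items (the application item and column names here are assumptions, not necessarily the ones used in the app):

-- hypothetical sketch inside the package body; application item and column names are assumptions
procedure post_auth
is
  l_user_id users.user_id%type;
begin
  select user_id
    into l_user_id
    from users
   where upper(username) = upper(v('APP_USER'));
  apex_util.set_session_state('AP_USER_ID', l_user_id);
exception
  when no_data_found then
    null;
end post_auth;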


We have now made our application accessible to people by defining our own custom authentication scheme.

Next, we want to define which rights you have in the application. To do this, we will create two Authorization Schemes, one for a normal user and one for an administrator.

In our package we already included a function with the logic. Every user has a role assigned, and depending on the role, the user is either a normal user or an administrator. An administrator can do everything a normal user can do, but can also access the administrator section where we maintain our application.
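A matching authorization function in the package can be as simple as this sketch (the application item name and role value are assumptions); the Authorization Scheme then calls it as a PL/SQL Function Returning Boolean:

-- hypothetical sketch; returns true only for users with the administrator role
function is_admin return boolean
is
begin
  return v('AP_ROLE') = 'ADMIN';
end is_admin;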

Blueprint actually already created our Authorization scheme for administrators, but we will adapt it to use our package. Go to Shared Components > Authorization Schemes and modify like this:


I hope it gives you all the components to build your own custom authentication and authorization schemes.

I also recorded a video which goes into more detail on the entire process of signing up, resetting a forgotten password and logging in, as well as the different authorization schemes and code being used.

Categories: Development

August 2017 Update to E-Business Suite Technology Codelevel Checker (ETCC)

Steven Chan - Tue, 2017-08-29 12:07

The E-Business Suite Technology Codelevel Checker (ETCC) tool helps you identify application or database tier overlay patches that need to be applied to your Oracle E-Business Suite Release 12.2 system. ETCC maps missing overlay patches to the default corresponding Database Patch Set Update (PSU) patches, and displays them in a patch recommendation summary.

What’s New

ETCC has been updated to include bug fixes and patching combinations for the recommended versions of the following updates:

  • July 2017 Database 12.1.0.2 PSU and Proactive Bundle Patch
  • July 2017 Database 11.2.0.4 PSU and Engineered Systems Patch

Obtaining ETCC

We recommend always using the latest version of ETCC, as new bugfixes will not be checked by older versions of the utility. The latest version of the ETCC tool can be downloaded via Patch 17537119 from My Oracle Support.

References

Related Articles

Categories: APPS Blogs

Partner Webcast – Oracle IoT Cloud Service: Getting Started

Eight zettabytes of data are coming from tens of millions of devices. Where are you going to put the data, and what are you going to do with it? The Internet of Things (IoT) is all about integrating and...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Subscribe to Oracle FAQ aggregator