Feed aggregator

Oracle Instant Client 12.2 now has SQL*Loader and Data Pump

Christopher Jones - Mon, 2017-08-21 19:17

This is a guest post by Jim Stenoish, Senior Director of Software Development, whose products include all tools used for the high-speed movement, transformation, and loading of data and metadata for Oracle Database.

Oracle Instant Client 12.2 now includes SQL*Loader as well as the Data Pump command line utilities expdp and impdp, and the traditional Export/Import utilities exp and imp. Previously, these tools were only available with a 'full' client installation, or in a database installation. Being in Instant Client allows users to access these tools with a smaller footprint on disk and with a much quicker install than for a full client. The OTN license is also more convenient for many users.

Now that these tools are part of Instant Client, it's easy to load data from a file on a system that doesn't already have Oracle software installed.

Installing the Tools

You can install the tools by unzipping the Oracle Instant Client 'Basic' (or 'Basic Light') package and the 'Tools' package to the same location. See the Instant Client home page for download links for each platform.

Follow the platform-specific instructions given on the download page. For example, on Linux set LD_LIBRARY_PATH and PATH to the Instant Client directory. On Windows set PATH to the Instant Client directory and also install the Microsoft Visual Studio 2013 Redistributable.
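
For example, a minimal Linux setup might look like the following sketch; the zip file names, the /opt/oracle location and the instantclient_12_2 directory are illustrative and depend on the exact packages you download:

# unzip the Basic (or Basic Light) and Tools packages into the same location
unzip instantclient-basic-linux.x64-12.2.0.1.0.zip -d /opt/oracle
unzip instantclient-tools-linux.x64-12.2.0.1.0.zip -d /opt/oracle

# make the client libraries and the tools (sqlldr, expdp, impdp, exp, imp) findable
export LD_LIBRARY_PATH=/opt/oracle/instantclient_12_2:$LD_LIBRARY_PATH
export PATH=/opt/oracle/instantclient_12_2:$PATH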

Using the Tools

SQL*Loader allows you to load data from files, such as comma-separated value (CSV) files, into database tables.
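
As a rough sketch (the emp table, CSV file and connect string below are hypothetical, not from this announcement), a control file plus a sqlldr invocation could look like this:

# create a minimal SQL*Loader control file for a CSV load (hypothetical table and columns)
cat > emp.ctl <<'EOF'
LOAD DATA
INFILE 'emp.csv'
INTO TABLE emp
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(empno, ename, sal)
EOF

# run the load with the Instant Client sqlldr
sqlldr scott@orclpdb1 control=emp.ctl log=emp.log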

The expdp and impdp utilities allow you to move metadata and data between Oracle databases. The expdp utility unloads metadata and data from the database into Data Pump files on the database server. The impdp utility recreates the metadata defined in a Data Pump file and loads the table data stored in those files on the database server. It can also extract metadata and data over a database link (no files involved) from one database to another.
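
For instance (the schema, directory object, database link and connect strings are placeholders of my own, not from the post), a schema export and a network-mode import could look like:

# export the HR schema to a dump file in a server-side directory object
expdp hr@srcpdb directory=DATA_PUMP_DIR dumpfile=hr.dmp logfile=hr_exp.log schemas=hr

# import the same schema into another database over a database link, with no dump file involved
impdp hr@tgtpdb network_link=src_db_link schemas=hr logfile=hr_imp.log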

Documentation and examples for the tools are in the Database Utilities manual.

If you have questions about SQL*Loader or Data Pump you can ask them on OTN.

Free Webinar by Craig Shallahamer on Oracle Performance

Gerger Consulting - Mon, 2017-08-21 14:28
Attend the free webinar presented by Oracle ACE Director Craig Shallahamer and learn how to choose the best strategy when analyzing an Oracle Database performance problem.

About The Webinar. 
The Oracle Database provides amazing application performance opportunities. But sometimes there are still unhappy users. So, what is your best analysis strategy? As Oracle DBAs we have two fundamental strategies: time-based and active session sample-based (ASH). Knowing their differences and how to exploit them gives you the optimization edge. In this webinar, I introduce each strategy, highlight how they work, contrast their differences and demonstrate each strategy live. This webinar will teach you how to begin your analysis in the best direction.
Takeaways and value:
  • You will know the differences and advantages between an ASH and AWR (time-based) analysis strategy.
  • You will know which analysis is best in any given situation.
  • You will learn how to use an AWR report to analyze performance and derive multiple solutions.

Categories: Development

Oracle Code Returns to San Francisco - Oct 3, 2017

OTN TechBlog - Mon, 2017-08-21 11:43

The Oracle Code event series began its journey around the globe in San Francisco on March 1, 2017. On October 3, 2017 -- 7 months and 300,000 coders later -- Oracle Code returns to San Francisco to complete its circumnavigation.

This 21st event in the series will include the kind of developer-led keynotes, sessions, and labs that have made the series so successful. 

Even better, this very special event takes place at Moscone West during Oracle OpenWorld and JavaOne. But you can keep your credit card in your pocket. Registration for Oracle Code 21 is free, and does not require registration for OOW or JavaOne! 

Of course, those attending OOW or JavaOne are encouraged to attend Oracle Code.

Click here for registration information and a complete list of Oracle Code keynotes, sessions, and labs.


Get a Taste: Content from Previous Oracle Code Events

Video: JavaScript Toolkit for Enterprise Applications | Andreas Koop

OTN TechBlog - Mon, 2017-08-21 11:14

Modern web application development with JavaScript has become popular – even for enterprise applications. But when starting a new project developers can feel overwhelmed by the number of newly available JavaScript libraries and build tools from which to choose. In this technical session from the Oracle Code Online Full Stack Web Development track, Andreas Koop shows you how to build JavaScript applications with ease, using Oracle JET. He'll cover typical enterprise requirements like stable build tool chain, ready-to-use components, accessibility, built-in i18n, responsive features, and performance optimizations.

Also from the Oracle Code Online Full Stack Development track: additional resources.

 

Change partition column

Tom Kyte - Mon, 2017-08-21 06:06
Hi, We have an existing table, and this table is partitioned by range interval. Now I would like to change the column it is referring to. Can I do that? E.g.: CREATE TABLE partition_test ( PROD_ID NUMBER NOT NULL, DATE_SK ...
Categories: DBA Blogs

Adding a column with a default value to a table

Tom Kyte - Mon, 2017-08-21 06:06
Tom, I have a doubt about adding columns with default values to a table. Recently, I stumbled upon an Arup Nanda article on it where he states the following: "When a user selects the column for an existing record, Oracle gets the fact about the...
Categories: DBA Blogs

Incorrect time in Database

Tom Kyte - Mon, 2017-08-21 06:06
We have 5 Oracle 12c 12.1.0.2.0 databases running on 2 nodes. On one of the databases, on both nodes, sysdate returns as below: select sysdate from dual; SYSDATE -------------------- 18-AUG-2017 * On all other 4 databases, when sysdate was...
Categories: DBA Blogs

Data warehouse performance

Tom Kyte - Mon, 2017-08-21 06:06
Hi Tom! We have a problem with database (warehouse) performance. Our DBA said that the main reason is some company software that creates and deletes about 20000 tables per day and its impact on the data dictionary. Can that be true? And what we can do...
Categories: DBA Blogs

Package compilation speed

Tom Kyte - Mon, 2017-08-21 06:06
Hi All, I have a PL/SQL package body (~50k lines) that used to take a few minutes to compile in Oracle 11g & 12.1. The package consists of mostly insert statements to some tables, plus about half a dozen stored procedures. Recently we upgraded our databa...
Categories: DBA Blogs

Explanation of LRU, MRU and MFL, TFL

Tom Kyte - Mon, 2017-08-21 06:06
Dear Tom, I have basic knowledge of what MFL, TFL, PFL and hash buckets are. It would be great if you can explain further in depth. But here I want to know how LRU and MRU are used/mingled with MFL, TFL. I have gone through many links of Oracle but una...
Categories: DBA Blogs

Get trace file from server to client

Yann Neuhaus - Sun, 2017-08-20 15:20

The old way to get a user dump trace file, for sql_trace (10046), Optimizer compilation trace (10053), lock trace (10704), Optimizer execution trace (10507),… is to go to the server trace directory. But if you don’t have access to the server (as in the ☁) the modern (12cR2) way is to select from V$DIAG_TRACE_FILE_CONTENTS. Before everybody is on 12.2, I’m sharing here a sqlplus script that I have used for a long time to get the trace file to the client.
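
For reference, a minimal sketch of the 12.2 approach might look like this (my own illustration; verify the column names on your version):

-- 12cR2 and later: read a trace file directly from the database
-- (replace &trace_file_name with the base name of the trace file you want)
select payload
from v$diag_trace_file_contents
where trace_filename = '&trace_file_name';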

This is a script to run with sqlplus, passing as a parameter the name of the trace file you want to write to (because the original name on the server is not very meaningful). I close all cursors to get all info such as STAT lines. I get the trace file name from v$diag_info. If a directory object exists for the trace directory I use it. If not, I create one, named UDUMP, which I’ll remove at the end. Then the file is read by utl_file and spooled on the client through dbms_output. At the end, I remove the trace file from the server.

Here is the script – comments welcome.

set serveroutput on feedback off verify off termout off linesize 1024 trimspool on echo off
spool &1..trc
declare
fd utl_file.file_type;
line varchar2(1024);
l_tracename varchar2(512);
l_directory_path all_directories.directory_path%TYPE;
l_directory_name all_directories.directory_name%TYPE;
l_directory_created boolean;
procedure t (msg in varchar2) is begin dbms_output.put_line(msg); end;
begin
/* use old parameter _cached_open_cursors to close all open cursors */
for r in (select 1 from v$session_cursor_cache where count>0) loop
dbms_session.set_close_cached_open_cursors(true);
rollback;
commit;
dbms_session.set_close_cached_open_cursors(false);
end loop;
/* get trace directory and trace file name */
select value into l_directory_path from v$diag_info where name='Diag Trace';
select substr(replace(value,l_directory_path,''),2) into l_tracename from v$diag_info where name='Default Trace File';
/* get directory name for it, or try to create it */
l_directory_created:=false;
begin
select directory_name into l_directory_name from all_directories where directory_path = l_directory_path and rownum=1;
exception
when no_data_found then
begin
l_directory_name:='UDUMP';
execute immediate 'create directory '||l_directory_name||' as '''||l_directory_path||'''';
l_directory_created:=true;
exception when others then
raise_application_error(-20000,'You must CREATE DIRECTORY '||l_directory_name||' as ''' ||l_directory_path||'''; or be granted CREATE DIRECTORY.');
end;
end;
/* opens the trace file */
begin
fd:=utl_file.fopen(l_directory_name,l_tracename,'R');
exception when others then
raise_application_error(-20001,'Impossible to open file: '||l_tracename||' in '||l_directory_name||' ( '||l_directory_path||' )');
end;
/* read the trace file and prints it */
begin
loop
begin
utl_file.get_line(fd,line);
dbms_output.put_line(line);
exception
when no_data_found then exit;
when others then
dbms_output.put_line('!!! error while reading file '||l_tracename||' in '||l_directory_name||' ( '||l_directory_path||' )');
end;
end loop;
/* close and remove the file from the server */
utl_file.fclose(fd);
utl_file.fremove(l_directory_name,l_tracename);
exception
when others then
raise_application_error(-20002,'Impossible to remove: '||l_tracename||' in '||l_directory_name||' ( '||l_directory_path||' )');
end;
begin
/* drop directory if created */
if l_directory_created then execute immediate 'drop directory '||l_directory_name; end if;
exception
when others then
raise_application_error(-20002,'Impossible to remove directory: '||l_directory_name||' ( '||l_directory_path||' )'||sqlerrm);
end;
end;
/
spool off
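
As a usage sketch (assuming the script above is saved as get_trace.sql; the name my_trace is arbitrary), a traced session can fetch its own trace file like this:

SQL> alter session set sql_trace=true;
SQL> select count(*) from dual;
SQL> alter session set sql_trace=false;
SQL> @get_trace my_trace

The spool &1..trc at the top of the script then writes the trace to my_trace.trc in the client's current directory.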

 

This article Get trace file from server to client appeared first on Blog dbi services.

CoreOS: How to login via ssh (default password?!)

Dietrich Schroff - Sun, 2017-08-20 14:25
After installing CoreOS you may want to log in via ssh (because of wrong keyboard localization inside your VirtualBox/VMware console and no client tools installed: copy/paste not working, etc.).
If you ask your preferred search engine, you will get many answers saying that there is no default password...
(Of course! The idea behind containers and container schedulers is that everything works automatically, and typing passwords does not fit this mantra.)

But no worries, only 4 steps are needed to get a login via ssh:

  1. on your laptop
    1. $ ssh-keygen
      Generating public/private rsa key pair.
      Enter file in which to save the key (/home/schroff/.ssh/id_rsa):
      Enter passphrase (empty for no passphrase):
      Enter same passphrase again:
      Your identification has been saved in /home/schroff/.ssh/id_rsa.
      Your public key has been saved in /home/schroff/.ssh/id_rsa.pub.
      The key fingerprint is:
      SHA256:xxxxxxxxxxxxxxxxxxxxxxxxxxx schroff@zerberus
      The key's randomart image is:
      +---[RSA 2048]----+
      xxxxxx
      +----[SHA256]-----+
    2. cp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys
  2. inside the coreos vm
    1.  scp schroff@<laptop-ip>:~/.ssh/authorized_keys /home/core/.ssh
      schroff@xxxxxxx's password:
      authorized_keys                                               100%  398     0.4KB/s   00:00   
  3. on your laptop
    1. ssh core@<coreos-vm-ip>
After executing step 3.1, the login into the CoreOS machine works without a password, BUT ONLY up to the next reboot, because the home directory is a tmpfs filesystem, which loses its content with each reboot...
(Also, you have to clean up your known_hosts file, because the machine starts up with a new ssh host key.)
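
For that known_hosts cleanup on the laptop, something like this should do (the host is a placeholder):

# remove the stale host key of the CoreOS VM after it rebooted with a new ssh host key
ssh-keygen -R <coreos-vm-ip-or-hostname>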

Postgres vs. Oracle access paths VIII – Index Scan and Filter

Yann Neuhaus - Sun, 2017-08-20 07:40

In the previous post we have seen a nice optimization to lower the consequences of bad correlation between the index and the table physical order: a bitmap, which may include false positives and then requires a ‘recheck’ of the condition, but with the goal to read each page only once. Now we are back to the well-clustered table where we have seen two possible access paths: IndexOnlyScan when all columns we need are in the index, and IndexScan when we select additional columns. Here is a case in the middle: the index does not have all the columns required by the select, but can eliminate all rows.

The table created is:

create table demo1 as select generate_series n , 1 a , lpad('x',1000,'x') x from generate_series(1,10000);
SELECT 10000
create unique index demo1_n on demo1(n);
CREATE INDEX
vacuum demo1;
VACUUM

Index Only Scan and Filter

I use only one column (N), which is indexed, in the SELECT clause and the WHERE clause. And this WHERE clause is silly: in addition to the n<=1000 I've used in the previous post to focus on 10% of rows, I add a condition which is always false: mod(n,100)=1000

explain (analyze,verbose,costs,buffers) select n from demo1 where n<=1000 and mod(n,100)=1000 ;
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------
Index Only Scan using demo1_n on public.demo1 (cost=0.29..38.78 rows=5 width=4) (actual time=0.276..0.276 rows=0 loops=1)
Output: n
Index Cond: (demo1.n <= 1000)
Filter: (mod(demo1.n, 100) = 1000)
Rows Removed by Filter: 1000
Heap Fetches: 0
Buffers: shared hit=5
Planning time: 0.454 ms
Execution time: 0.291 ms

Index Only Scan is used here because no other columns are used. The n<=1000 is the access condition (Index Cond.) doing a range scan on the index structure. The mod(n,100)=1000 is a filter predicate which is applied to the result of the index access (Filter) and we have additional information that the 1000 rows selected by the access predicate have been filtered out (Rows Removed by Filter). During the execution, 5 index buffers have been read for the range scan (branches + leaves). Because I vacuumed and there have been no changes since, the visibility map knows that all rows can be displayed and there are no blocks to read from the table (Heap Fetches: 0).

Now I’ll select another column in order to see an Index Scan. We have seen in the previous post that the huge cost of index access is the access to the table. Filtering most of the rows from the index entries is the most common recommendation to optimize a query. And my example here is running the extreme case: a predicate on the indexed column removes all rows.

Index Scan and Filter

I’ve just changed the ‘select n’ to ‘select a’:


explain (analyze,verbose,costs,buffers) select a from demo1 where n<=1000 and mod(n,100)=1000 ;
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------
Index Scan using demo1_n on public.demo1 (cost=0.29..184.78 rows=5 width=4) (actual time=0.427..0.427 rows=0 loops=1)
Output: a
Index Cond: (demo1.n <= 1000)
Filter: (mod(demo1.n, 100) = 1000)
Rows Removed by Filter: 1000
Buffers: shared hit=147
Planning time: 0.434 ms
Execution time: 0.440 ms

I can understand that the cost is higher. The optimizer may not know that mod(n,100) will never be equal to 1000. Estimating 5 rows, as in the previous case, is ok for me. We see different Output (different SELECT clause) but same information about Index Cond, Filter, and Rows Removed (same WHERE clause). The estimation part looks good.

However, there’s something that I can’t understand. At execution time, we know that all rows can be removed before going to the table. We go to the table to get the value of A, but all rows were filtered out from the index. At least that was the case with the Index Only Scan, and we know that the filter condition gets all the values it needs from the index.

However, 147 blocks were read here. We have seen that the index scan reads 5 index pages, and then we can guess that 142 table pages have been read, exactly 10% of the pages from my correlated table. It seems that all rows have been read from the table before being filtered out. The Index Scan being one operation, the filter occurs at the end only. This is only my guess and I hope to get comments about that.

Oracle

With Oracle, the first query, selecting only indexed columns is an INDEX RANGE SCAN similar to the Postgres one.

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID fj36y2vph9u8f, child number 0
-------------------------------------
select /*+ */ n from demo1 where n<=1000 and mod(n,100)=1000
---------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers |
---------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 4 (100)| 0 |00:00:00.01 | 3 |
|* 1 | INDEX RANGE SCAN| DEMO1_N | 1 | 10 | 4 (0)| 0 |00:00:00.01 | 3 |
---------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("N"<=1000)
filter(MOD("N",100)=1000)
Column Projection Information (identified by operation id):
-----------------------------------------------------------
1 - "N"[NUMBER,22]

Oracle does not know either that the filter predicate mod(n,100)=1000 eliminates all rows and estimates this kind of predicate to 10% of rows (a generic value) after the access predicate returning 10% (this one is calculated from statistics). 3 blocks were read: index branch + leaves.

Reading an additional column from the table does not change this INDEX RANGE SCAN operation but just adds one step to go to the table:

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 1rpmvq3jj8hgq, child number 0
-------------------------------------
select /*+ */ a from demo1 where n<=1000 and mod(n,100)=1000
----------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers |
----------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 6 (100)| 0 |00:00:00.01 | 3 |
| 1 | TABLE ACCESS BY INDEX ROWID BATCHED| DEMO1 | 1 | 10 | 6 (0)| 0 |00:00:00.01 | 3 |
|* 2 | INDEX RANGE SCAN | DEMO1_N | 1 | 10 | 4 (0)| 0 |00:00:00.01 | 3 |
----------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("N"<=1000)
filter(MOD("N",100)=1000)
Column Projection Information (identified by operation id):
-----------------------------------------------------------
1 - "A"[NUMBER,22] 2 - "DEMO1".ROWID[ROWID,10]

With two operations, the filter removes the rows from the output of the index range scan on line 2, and we then have to go to the table only for the rows that remain. There are no additional buffer reads on step 1 when no rows remain. With Oracle, we build indexes to optimize the access predicates and we add columns to optimize the filter predicates. We can go further by adding all projections and avoid the access to the table completely, but that is not always needed. If we can apply all WHERE clause filters on the indexed columns, then the access to the table remains proportional to the result. And the end-user usually accepts a longer response time for a long result. And index access response time is proportional to the result.
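
As an illustration of that idea (my own sketch, not from the original post), an index covering the access predicate, the filter expression and the projection for this query could be created like this:

-- n serves the access predicate, mod(n,100) the filter predicate,
-- and a the projection, so the table does not have to be visited at all
create index demo1_n_cover on demo1(n, mod(n,100), a);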

The decomposition into two operations is also convenient to see which column projections are done for the index result and which for the table result. Here the only output of the index range scan at line 2 is the ROWID, and the output of the table access at line 1 is the column we select. So, we have two operations here. We have seen that INDEX RANGE SCAN can run alone. And we will see in the next post that the TABLE ACCESS BY INDEX ROWID can also run alone.

So what?

I hope that Postgres experts will comment about the need to read the table pages even when we can filter all rows from the index scan. We can do something similar by re-writing the query so that we can see that the access to the table is never executed:

explain (analyze,verbose,costs,buffers) select a from demo1 where n in (select n from demo1 where n<=1000 and mod(n,100)=1000 ) ;
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------
Nested Loop (cost=0.57..76.35 rows=5 width=4) (actual time=0.285..0.285 rows=0 loops=1)
Output: demo1.a
Buffers: shared hit=5
-> Index Only Scan using demo1_n on public.demo1 demo1_1 (cost=0.29..34.78 rows=5 width=4) (actual time=0.284..0.284 rows=0 loops=1)
Output: demo1_1.n
Index Cond: (demo1_1.n <= 1000)
Filter: (mod(demo1_1.n, 100) = 1000)
Rows Removed by Filter: 1000
Heap Fetches: 0
Buffers: shared hit=5
-> Index Scan using demo1_n on public.demo1 (cost=0.29..8.30 rows=1 width=8) (never executed)
Output: demo1.n, demo1.a, demo1.x
Index Cond: (demo1.n = demo1_1.n)

But this involves a join, and join methods will deserve another series of blog posts. The next one on access paths will show the TABLE ACCESS BY INDEX ROWID equivalent, Tid Scan. Then I’ll have covered all access paths.

 

This article Postgres vs. Oracle access paths VIII – Index Scan and Filter appeared first on Blog dbi services.

Oracle JET – Filtering Rows in Table with Multiselect and Search Field Filters

Amis Blog - Sun, 2017-08-20 04:33

[Screenshot: a Departments table filtered by a Location multiselect and a search field]

A common requirement in any web application: allow the user to quickly drill down to records of interest by specifying relevant filters. The figure above shows two ways of setting filters: by selecting from the [limited number of] existing values in a certain column – here Location – and by specifying a search string whose value should occur in the records to be displayed after filtering.

Oracle JET is a toolkit that supports development of rich web applications. With Oracle JET too this filtering feature is a common requirement. In this article I take a brief look at how to:

  • create the multi select element and how to populate it with data from the Location attribute of the records in the table
  • handle a (de)selection event in the multi select – leading to filtering of the records shown in the table
  • create the search field and intercept changes in the search field
  • handle resetting the search field
  • invoking the REST API when the search field has changed

I am not claiming to present the best possible way to implement this functionality. I am not fluent enough in JET to make such a claim, and I have seen too many different implementations in Oracle documentation, blog articles, tutorials etc. to be able to point out the one approach that stands out (for the current JET release). However, the implementation I demonstrate here seems good enough as a starting point.

The HRM module is a tab I have added to the Work Better demo application. It has its own ViewModel (hrm.js) and its own HTML view (hrm.html). I have implemented a very simple REST API in Node (http://host:port/departments?name=)  that provides the departments in a JSON document.

Sources are in this Gist: https://gist.github.com/lucasjellema/e133e5e769c13ba8507a3bee0ebc30d1

Starting Point

The starting point in this article is a simple JET application with a tab that contains a  table that displays Department records retrieved from a REST API. The implementation of this application is not very special and is not the topic of this article.


The objective of this article is to show how to add the capability to filter the records in this table – first by selecting the locations for which departments should be shown, using a multiselect widget. The filtering takes place on the client, against the set of departments retrieved from the backend service. The second step adds filtering by name using a search field. This level of filtering is performed by the server that exposes the REST API.

 

Create and Populate the Multiselect Element for Locations

The multiselect element in this case is the Oracle JET ojSelect component (see the cookbook). The element shows a dropdown list of options that can be selected, displays the currently selected options, and allows selected options to be deselected.


The HTML used to add the multiselect component to the page is shown here:

<label for="selectLocation">Locations</label>
<select id="selectLocation" data-bind="ojComponent: { component: 'ojSelect' , options: locationOptions, multiple: true , optionChange:optionChangedHandler, rootAttributes: {style:'max-width:20em'}}">  
</select>

The options attribute references the locationOptions property of the ViewModel that returns the select(able) option values – more on that later. The attribute multiple is set to true to allow multiple values to be selected and the optionChange attribute references the optionChangedHandler, a function in the ViewModel that handles option change events that are published whenever options are selected or deselected.

When the Departments have been fetched from the REST API, the locationOptions are populated by identifying the unique values for the Location attribute in all Department records. Subsequently, all locations are set as selected values on the select component – as we started out with an unfiltered set of departments. The function handleDepartmentsFetch is called whenever fresh data has been fetched from the REST API.

// values for the locations shown in the multiselect
self.locationOptions = ko.observableArray([]);

self.handleDepartmentsFetch = function (collection) {
    var locationData = new Set();
    //collect distinct locations and add to locationData array 
    var locations = collection.pluck('Location'); // get all values for Location attribute
    // distill distinct values
    var locationData = new Set(locations.filter((elem, index, arr) => arr.indexOf(elem) === index));

    // rebuild locationOptions
    self.locationOptions.removeAll();

    var uniqueLocationsArray = [];
    for (let location of locationData) {
        uniqueLocationsArray.push({ 'value': location, 'label': location });
    }
    ko.utils.arrayPushAll(self.locationOptions(), uniqueLocationsArray);
    // tell the observers that this observable array has been updated
    // (as result, the Multiselect UI component will be refreshed)
    self.locationOptions.valueHasMutated();
    
    // set the selected locations on the select component based on all distinct locations available
    $("#selectLocation").ojSelect({ "value": Array.from(locationData) });
};

I did not succeed in setting the selected values on the select component by updating an observable array backing the value attribute of the ojSelect component. As a workaround, I now manipulate the ojSelect component programmatically via a jQuery selection ($("#selectLocation")).

 

Handle a (de)selection event in the multi select

When the user changes the set of selected values in the Locations multiselect, we want the set of departments shown in the table to be updated – narrowed down or expanded, depending on whether a location was removed or added to the selected items.

The ojSelect component has the optionChange attribute that in this case references the function optionChangedHandler. This function inspects the type of option change (does data.option equal "value"?) and then invokes the function prepareFilteredDepartmentsCollection, passing the self.deppies collection that was initialized during the fetch from the REST API. That function clones the collection of all departments fetched from the REST API and subsequently filters it based on the selectedLocations.

// returns an array of the values of the currently selected options in select component with id selectLocation
self.getCurrentlySelectedLocations = function () {
    return $("#selectLocation").ojSelect("option", "value");
}

self.optionChangedHandler = function (event, data) {
    if (data.option == "value") {
        // REFILTER the data in self.DeptCol into the collection backing the table
        self.prepareFilteredDepartmentsCollection(self.deppies, getCurrentlySelectedLocations());
    }
}


// prepare (possibly filtered) set of departments and set data source for table
self.prepareFilteredDepartmentsCollection = function (collection, selectedLocations) {
    if (collection) {
        // prepare filteredDepartmentsCollection
        var filteredDepartmentsCollection = collection.clone();

        var selectedLocationsSet = new Set(selectedLocations);
        var toFilter = [];
        // find all models in the collection that do not comply with the selected locations
        for (var i = 0; i < filteredDepartmentsCollection.size(); i++) {
            var deptModel = filteredDepartmentsCollection.at(i);
            if (!selectedLocationsSet.has(deptModel.attributes.Location)) {
                toFilter.push(deptModel)
            }
        }
        // remove all departments that do not qualify according to the locations that are (not) selected
        filteredDepartmentsCollection.remove(toFilter);

        // update data source with fresh data and inform any observers of data source (such as the table component)
        self.dataSource(
            new oj.CollectionTableDataSource(filteredDepartmentsCollection));
        self.dataSource.valueHasMutated();
    }// if (collection)
}

When the collection of filtered departments is created, the self.dataSource is refreshed with a new CollectionTableDataSource. With the call to self.dataSource.valueHasMutated(), we explicitly trigger subscribers to the dataSource – the Table component.

 

Create the search field and Intercept Changes in the Search Field

The search field is simply an inputText element with some decoration. Associated with the search field is a button to reset (clear) the search field.


The HTML code for these elements is:


<div class="oj-flex-item oj-sm-8 ">
    
<div class="oj-flex-item" style="max-width: 400px; white-space: nowrap">
        <input aria-label="search box" placeholder="search" data-bind="value: nameSearch, valueUpdate: 'afterkeydown' , ojComponent: {component: 'ojInputText', rootAttributes:{'style':'max-width:100%;'} }" />
        
<div id="searchIcon" class="demo-icon-sprite demo-icon-search demo-search-position"></div>

        <button id="clearButton" data-bind="click: clearClick, ojComponent: { component: 'ojButton', label: 'Clear', display: 'icons', chroming: 'half', icons: {start:'oj-fwk-icon oj-fwk-icon-cross03'}}">
        </button> 
        </div>

</div>

The search field is bound to nameSearch, an observable in the ViewModel. When the user edits the contents of the search field, the observable is updated and any subscribers are triggered. One such subscriber is the function self.search() – a computed Knockout function that depends on nameSearch. When the function is triggered – by a change in the value of nameSearch – it checks whether the search string consists of three or more characters and, if so, triggers a new fetch of departments from the REST API by calling the function fetchDepartments().

// bound to search field
self.nameSearch = ko.observable('');

// this computed function is implicitly subscribed to self.nameSearch; any changes in the search field will trigger this function
self.search = ko.computed(function () {
    var searchString = self.nameSearch();
    if (searchString.length > 2) {
        self.fetchDepartments();
    }
})

function getDepartmentsURL(operation, collection, options) {
    var url = dataAPI_URL + "?name=" + self.nameSearch();
    return url;
};

Function getDepartmentsURL() is invoked just prior to fetching the Departments. It returns the URL to use for fetching from the REST API. This function will add a query parameter to the URL with the value of the nameSearch observable.

 

Handle Resetting the Search Field

The Clear button – shown in the previous HTML snippet – is associated with a click event handler: function clearClick. This function resets the nameSearch observable and explicitly declares its value updated – in order to trigger subscribers to the nameSearch observable. One such subscriber is the function self.search(), which will be triggered by this and will go ahead with refetching the departments from the REST API.

// event handler for reset button (for search field)
self.clearClick = function (data, event) {
    self.nameSearch('');
    self.searchDepartments();
    return true;
}

The REST API

The REST API is implemented with Node and Express. It is extremely simple; initially it just returns the contents of a static file (departments.json) with department records. It is slightly extended to handle the name query parameter, to only return selected departments. Note that this implementation is not the most efficient. For the purpose of this article, it will do the job.


var express = require('express');
var app = express();
var locations = ['AMSTERDAM', 'ZOETERMEER', 'NIEUWEGEIN', 'MAASTRICHT'];
var departments = JSON.parse(require('fs').readFileSync('./departments.json', 'utf8'));

// add a location to each record
for (var i = 0; i < departments.length; i++) {
    departments[i].location = locations[Math.floor(Math.random() * locations.length)];
}

app.get('/departments', function (req, res) {
    //process
    var nameFilter = req.query.name; // read query parameter name (/departments?name=VALUE)
    // filter departments by the name filter
    res.send(
        departments.filter(function (department, index, departments) {
            return !nameFilter || department.DEPARTMENT_NAME.toLowerCase().indexOf(nameFilter) > -1;
        })
    ); //using send to stringify and set content-type
});
Complete Source Code GIST

Putting all source code together: see the Gist linked under Resources below.

Resources

Sources for this article in GitHub Gist: https://gist.github.com/lucasjellema/e133e5e769c13ba8507a3bee0ebc30d1

JET Cookbook on Multiselect: http://www.oracle.com/webfolder/technetwork/jet/jetCookbook.html?component=select&demo=multi

JET Cookbook on Table and Filtering: http://www.oracle.com/webfolder/technetwork/jet/jetCookbook.html?component=table&demo=filteringTable

Blog post by Andrejus Baranovskis – Oracle JET Executing Dynamic ADF BC REST URL

JET Documentation on Collection and its API Documentation

Knockout Documentation on Computed Observables and Observables

JavaScript Gist on removing duplicates from an array: https://gist.github.com/telekosmos/3b62a31a5c43f40849bb

JavaScript Filter, Map and Reduce on Arrays: https://danmartensen.svbtle.com/javascripts-map-reduce-and-filter

Oracle JET Cookbook – Recipe on Filtering

The post Oracle JET – Filtering Rows in Table with Multiselect and Search Field Filters appeared first on AMIS Oracle and Java Blog.

performance impact of CLOBs in table

Tom Kyte - Sat, 2017-08-19 17:46
Hi, If I create a table of 20 columns and 2 of them are CLOB datatype, it seems that selection from that table is very slow even though I am not selecting the CLOB data. Is that possible? Any example to check this.
Categories: DBA Blogs

performance tuning

Tom Kyte - Fri, 2017-08-18 23:26
Hi, my question is about performance tuning. I faced an issue where most of the sessions were waiting on enq: TM contention (around 80%+ of the time was for this wait event) and the rest was enq: TX row lock contention blocking sessions in my ASH report ...
Categories: DBA Blogs

How to eliminate TABLE ACCESS FULL

Tom Kyte - Fri, 2017-08-18 23:26
> I am running following query : select hh.cash_receipt_id, hh.gl_date, hh.amount, hh.status, hh.account_code_combination_id, h...
Categories: DBA Blogs

SQL Plan Management (SPM) 12c - Evolve Procedure stuck on some SQLs

Tom Kyte - Fri, 2017-08-18 23:26
Hi Tom, we introduced SPM in 11g and have now migrated to 12c (where SPM evolve runs during the maintenance window). Our evolve procedure gets stuck, however, on certain statements, so that after 4 hrs (the duration of the maintenance window) the SPM EVOLVE gets interrupted...
Categories: DBA Blogs

Character conversion from one character set to another

Tom Kyte - Fri, 2017-08-18 23:26
How do I know whether utf16 to utf8 character conversion is allowed using the convert function, and similarly F7DEC to US7ASCII, etc.?
Categories: DBA Blogs

Unsupported type PL/SQL datatype in dot net program

Tom Kyte - Fri, 2017-08-18 23:26
Hi there, I would like to ask a question regarding upgrading the Oracle version in the future. Last time we did an upgrade from 11g to 12c and I found there is an error of 'Unsupported PL/SQL datatype' in our .NET program. But somehow, we managed to...
Categories: DBA Blogs
