Feed aggregator

Oracle SOA Suite 12c: Tokens - To avoid substituting values: define global token variables

Dietrich Schroff - Sat, 2018-03-24 09:15
Inside the Enterprise Manager (URL: http://yourhost:7001/em) you can configure SOA Suite profiles:

Choose "Token Configurations"


To list the tokens click on "Modify Configuration file":

To add a token, click on the green plus:

 Do not forget to save your changes:


 
To use the bulk append you have to create a token file, choose your file "text.xml" and click on "Append". A rough sketch of what such a file might contain follows below.
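The element and attribute names in this sketch are assumptions for illustration only and are not taken from the original post; the values 2000 and weblogic are the fragments still visible from the original file listing:

<?xml version="1.0" encoding="UTF-8"?>
<!-- hypothetical token file: check the SOA Suite 12c documentation for the exact schema -->
<tokens>
    <token name="DbPort" value="2000"/>
    <token name="DbAdminUser" value="weblogic"/>
</tokens>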


database performance by dblink

Tom Kyte - Fri, 2018-03-23 11:46
Hi Team, we are facing an issue with a SQL query which is using a dblink; most of the wait events observed are sql*net message from dblink. From the remote database we have figured out the plan and it seems to be good, but we are not able to get that ...
Categories: DBA Blogs

Oracle and MS SQL server data transfers

Tom Kyte - Fri, 2018-03-23 11:46
Hi Tom, Currently our company is Implementing an ERP system based on Oracle (12c - 12.2.0.1). We already have a product testing system that has data in a SQL Server database (MS SQL Server 2008 R2 64 bit - 10.50.2500.0). We need to Establish a com...
Categories: DBA Blogs

PostgreSQL – logical replication with pglogical

Yann Neuhaus - Fri, 2018-03-23 11:31

Although PostgreSQL 10 integrates native logical replication (have a look here or here), it is always interesting to be aware of alternative solutions to the available in-core features.
One of those is called pglogical. It’s a Postgres extension developed by 2ndQuadrant, a major contributor to PostgreSQL development.
The goal of this blog post is to discover how to install pglogical and configure it in order to generate simple logical replication cases.

Installation

To get the extension we should start with RPM repository installation :
postgres@ppas02:/u01/app/postgres/product/10/db_1/ [PG10] sudo yum install http://packages.2ndquadrant.com/pglogical/yum-repo-rpms/pglogical-rhel-1.0-3.noarch.rpm
Loaded plugins: fastestmirror
pglogical-rhel-1.0-3.noarch.rpm | 8.3 kB 00:00:00
Examining /var/tmp/yum-root-oH8huK/pglogical-rhel-1.0-3.noarch.rpm: pglogical-rhel-1.0-3.noarch
Marking /var/tmp/yum-root-oH8huK/pglogical-rhel-1.0-3.noarch.rpm to be installed
...
...
...
Installed:
pglogical-rhel.noarch 0:1.0-3


Complete!
postgres@ppas02:/u01/app/postgres/product/10/db_1/ [PG10]

Once installed, we can add the extension inside our Postgres instance :
postgres=# CREATE EXTENSION pglogical;
2017-12-18 16:24:39.079 CET [4327] ERROR: pglogical is not in shared_preload_libraries
2017-12-18 16:24:39.079 CET [4327] STATEMENT: CREATE EXTENSION pglogical;
ERROR: pglogical is not in shared_preload_libraries
postgres=#

Oops… the pglogical library must be loaded when the cluster starts :
postgres=# alter system set shared_preload_libraries = 'pglogical';
ALTER SYSTEM
postgres=#

Restart the cluster to take it into account:
postgres@ppas01:/u01/app/postgres/product/10/db_1/share/ [PG10] pg_ctl -D /u02/pgdata/PG10/ stop
waiting for server to shut down....2017-12-18 16:28:03.895 CET [4447] LOG: received fast shutdown request
2017-12-18 16:28:03.902 CET [4447] LOG: aborting any active transactions
2017-12-18 16:28:03.923 CET [4447] LOG: worker process: logical replication launcher (PID 4455) exited with exit code 1
2017-12-18 16:28:03.923 CET [4449] LOG: shutting down
2017-12-18 16:28:03.940 CET [4447] LOG: database system is shut down
done
server stopped
postgres@ppas01:/u01/app/postgres/product/10/db_1/share/ [PG10] pg_ctl -D /u02/pgdata/PG10/ start
waiting for server to start....2017-12-18 16:28:10.405 CET [4654] LOG: listening on IPv4 address "0.0.0.0", port 5420
2017-12-18 16:28:10.405 CET [4654] LOG: listening on IPv6 address "::", port 5420
2017-12-18 16:28:10.407 CET [4654] LOG: listening on Unix socket "/tmp/.s.PGSQL.5420"
2017-12-18 16:28:10.416 CET [4655] LOG: database system was shut down at 2017-12-18 16:28:03 CET
2017-12-18 16:28:10.426 CET [4654] LOG: database system is ready to accept connections
2017-12-18 16:28:10.430 CET [4661] LOG: starting pglogical supervisor
2017-12-18 16:28:10.435 CET [4663] LOG: manager worker [4663] at slot 0 generation 1 detaching cleanly
2017-12-18 16:28:10.439 CET [4664] LOG: manager worker [4664] at slot 0 generation 2 detaching cleanly
2017-12-18 16:28:10.444 CET [4665] LOG: manager worker [4665] at slot 0 generation 3 detaching cleanly
done
server started
postgres@ppas01:/u01/app/postgres/product/10/db_1/share/ [PG10]

Once the cluster is restarted with the library, we may notice a new OS process:
postgres@ppas01:/u01/app/postgres/product/10/db_1/share/ [PG10] ps -ef | grep pglogical
postgres 5001 4994 0 16:33 ? 00:00:00 postgres: bgworker: pglogical supervisor

We should now be able to create the extension :
postgres=# CREATE EXTENSION pglogical;
CREATE EXTENSION
postgres=#
postgres=# \dx+ pglogical
Objects in extension "pglogical"
Object description
------------------------------------------------------------------------------------------
function pglogical.alter_node_add_interface(name,name,text)
function pglogical.alter_node_drop_interface(name,name)
function pglogical.alter_replication_set(name,boolean,boolean,boolean,boolean)
function pglogical.alter_subscription_add_replication_set(name,name)
function pglogical.alter_subscription_disable(name,boolean)
function pglogical.alter_subscription_enable(name,boolean)
function pglogical.alter_subscription_interface(name,name)
function pglogical.alter_subscription_remove_replication_set(name,name)
function pglogical.alter_subscription_resynchronize_table(name,regclass,boolean)
function pglogical.alter_subscription_synchronize(name,boolean)
function pglogical.create_node(name,text)
function pglogical.create_replication_set(name,boolean,boolean,boolean,boolean)
function pglogical.create_subscription(name,text,text[],boolean,boolean,text[],interval)
function pglogical.drop_node(name,boolean)
function pglogical.drop_replication_set(name,boolean)
function pglogical.drop_subscription(name,boolean)
function pglogical.pglogical_gen_slot_name(name,name,name)
function pglogical.pglogical_max_proto_version()
function pglogical.pglogical_min_proto_version()
function pglogical.pglogical_node_info()
function pglogical.pglogical_version()
function pglogical.pglogical_version_num()
function pglogical.queue_truncate()
function pglogical.replicate_ddl_command(text,text[])
function pglogical.replication_set_add_all_sequences(name,text[],boolean)
function pglogical.replication_set_add_all_tables(name,text[],boolean)
function pglogical.replication_set_add_sequence(name,regclass,boolean)
function pglogical.replication_set_add_table(name,regclass,boolean,text[],text)
function pglogical.replication_set_remove_sequence(name,regclass)
function pglogical.replication_set_remove_table(name,regclass)
function pglogical.show_repset_table_info(regclass,text[])
function pglogical.show_subscription_status(name)
function pglogical.show_subscription_table(name,regclass)
function pglogical.synchronize_sequence(regclass)
function pglogical.table_data_filtered(anyelement,regclass,text[])
function pglogical.wait_slot_confirm_lsn(name,pg_lsn)
table pglogical.depend
table pglogical.local_node
table pglogical.local_sync_status
table pglogical.node
table pglogical.node_interface
table pglogical.queue
table pglogical.replication_set
table pglogical.replication_set_seq
table pglogical.replication_set_table
table pglogical.sequence_state
table pglogical.subscription
view pglogical.tables
(48 rows)

The wal_level parameter must be set to ‘logical’ for logical replication :
postgres=# show wal_level;
wal_level
-----------
logical
(1 row)
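If it is not set to 'logical' yet, it can be changed like this (not shown in the original post; a restart of the cluster is required afterwards):
postgres=# alter system set wal_level = 'logical';
ALTER SYSTEM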

And the pg_hba file must contain an entry to allow connections for replication purposes.
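An entry along these lines could be used, for example (the network range, user and authentication method are assumptions and have to be adapted to your environment):
host    replication     postgres        192.168.22.0/24        md5
host    postgres        postgres        192.168.22.0/24        md5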
Important: all the steps we have seen until now must be done on the standby side as well.

What’s next ?

Logical replication with Postgres is based on several components :
– Provider : the primary node/cluster
– Subscriber : the standby node/cluster, on which data will be replicated
– Replication set : collection of tables you want to replicate
– Subscription : runs the replication, based on the replication set

We’ll start by creating the provider on the primary side :
postgres=# select pglogical.create_node (node_name := 'provider1', dsn := 'host=192.168.22.37 port=5420 dbname=postgres');
create_node
-------------
2976894835
(1 row)
postgres=# select * from pglogical.node_interface;
if_id | if_name | if_nodeid | if_dsn
------------+-----------+------------+-----------------------------------------------
2402836775 | provider1 | 2976894835 | host=192.168.22.37 port=5420 dbname=postgres
(1 row)

Notice again a new OS process :
postgres@ppas01:/home/postgres/ [PG10] ps -ef | grep pglogical
postgres 1796 1788 0 15:13 ? 00:00:00 postgres: bgworker: pglogical supervisor
postgres 4566 1788 0 16:05 ? 00:00:00 postgres: bgworker: pglogical manager 13211

On the standby side, we create the subscriber :
postgres=# select pglogical.create_node(node_name := 'subscriber1', dsn := 'host=192.168.22.38 port=5420 dbname=postgres');
create_node
-------------
330520249
(1 row)
postgres=# select * from pglogical.node_interface;
if_id | if_name | if_nodeid | if_dsn
------------+-------------+-----------+----------------------------------------------
2049915666 | subscriber1 | 330520249 | host=192.168.22.38 port=5420 dbname=postgres
2402836775 | provider1 | 2976894835 | host=192.168.22.37 port=5420 dbname=postgres
(1 row)

What about replication sets? There are 3 existing by default, with a different DML replication behavior for each:
postgres=# select * from pglogical.replication_set;
set_id | set_nodeid | set_name | replicate_insert | replicate_update | replicate_delete | replicate_truncate
------------+------------+---------------------+------------------+------------------+------------------+--------------------
290045701 | 2976894835 | default | t | t | t | t
3461245231 | 2976894835 | default_insert_only | t | f | f | t
2465411902 | 2976894835 | ddl_sql | t | f | f | f

We can easily add our own :
postgres=# select pglogical.create_replication_set('my_set', true, true, true, true);
create_replication_set
------------------------
1521346531

To start the replication we have to create a subscription (using the replication set we created) from the standby side :
postgres=# select pglogical.create_subscription(subscription_name := 'subscription1', provider_dsn := 'host=192.168.22.37 port=5420 dbname=postgres', replication_sets := '{my_set}');
create_subscription
---------------------
1763399739
(1 row)


postgres=# select subscription_name, status FROM pglogical.show_subscription_status();
subscription_name | status
-------------------+-------------
subscription1 | replicating

Have a look in the log file… replication started with the correct provider and subscription :
2018-02-02 15:16:14.234 CET [5488] LOG: starting apply for subscription subscription1
2018-02-02 15:17:54.547 CET [5502] LOG: starting pglogical database manager for database postgres

postgres@ppas02:/u02/pgdata/PG10/ [PG10] ps -ef | grep pglogical
postgres 3113 3105 0 10:01 ? 00:00:00 postgres: bgworker: pglogical supervisor
postgres 4587 3105 0 13:47 ? 00:00:00 postgres: bgworker: pglogical manager 13211
postgres 5060 3105 0 15:06 ? 00:00:00 postgres: bgworker: pglogical apply 13211:1763399739

The last step is to add the tables to the replication set we created:
postgres=# select pglogical.replication_set_add_all_tables('my_set', ARRAY['public']);
replication_set_add_all_tables
--------------------------------
t
(1 row)

The insert/update/delete operations run against the primary server are now replicated to the standby one.
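As a quick sanity check (a hypothetical example, not part of the original post; the table must exist with the same structure on both nodes and be part of the replication set), a row inserted on the provider should show up on the subscriber shortly afterwards:
-- on the provider
postgres=# insert into my_table values (1, 'replicated row');
INSERT 0 1
-- on the subscriber
postgres=# select * from my_table;
 id |      val
----+----------------
  1 | replicated row
(1 row)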

 

This article PostgreSQL – logical replication with pglogical appeared first on Blog dbi services.

Updating the partition key now works in PostgreSQL 11

Yann Neuhaus - Fri, 2018-03-23 11:00

In the last post about partitioning improvements in PostgreSQL 11 we talked about unique indexes and primary keys on partitioned tables. Both did not work in PostgreSQL 10 but now do in PostgreSQL 11. Another operation that did not work in PostgreSQL 10 and does now in PostgreSQL 11 is: Updating the partition key, which means that a row will move from one partition to another. Let's have a look at that.

We’ll use the same little list partitioned table as in the last post and start with PostgreSQL 10:

postgres=# select version();
                                                          version
----------------------------------------------------------------------------------------------------------------------------
 PostgreSQL 10.0 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-16), 64-bit
(1 row)

postgres=# create table part ( a int, list varchar(5) ) partition by list (list);
CREATE TABLE
postgres=# create table part_1 partition of part for values in ('beer');
CREATE TABLE
postgres=# create table part_2 partition of part for values in ('wine');
CREATE TABLE
postgres=# \d+ part
                                          Table "public.part"
 Column |         Type         | Collation | Nullable | Default | Storage  | Stats target | Description 
--------+----------------------+-----------+----------+---------+----------+--------------+-------------
 a      | integer              |           |          |         | plain    |              | 
 list   | character varying(5) |           |          |         | extended |              | 
Partition key: LIST (list)
Partitions: part_1 FOR VALUES IN ('beer'),
            part_2 FOR VALUES IN ('wine')

Insert some data:

postgres=# insert into part (a,list) select i,'beer' from generate_series(1,10) i;
INSERT 0 10
postgres=# insert into part (a,list) select i,'wine' from generate_series(11,20) i;
INSERT 0 10

So we have ten beers and ten wines.

postgres=# select count(*) from only part_1;
 count 
-------
    10
(1 row)

postgres=# select count(*) from only part_2;
 count 
-------
    10
(1 row)

What happens if we update the partition key because we like more beer than wine in PostgreSQL 10?

postgres=# update part set list = 'beer' where a = 15;
ERROR:  new row for relation "part_2" violates partition constraint
DETAIL:  Failing row contains (15, beer).

Not so good. Let's try the same in PostgreSQL 11:

postgres=# select version();
                                                  version                                                   
------------------------------------------------------------------------------------------------------------
 PostgreSQL 11devel on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-16), 64-bit
(1 row)

postgres=# create table part ( a int, list varchar(5) ) partition by list (list);
CREATE TABLE
postgres=# create table part_1 partition of part for values in ('beer');
CREATE TABLE
postgres=# create table part_2 partition of part for values in ('wine');
CREATE TABLE
postgres=# insert into part (a,list) select i,'beer' from generate_series(1,10) i;
INSERT 0 10
postgres=# insert into part (a,list) select i,'wine' from generate_series(11,20) i;
INSERT 0 10
postgres=# update part set list = 'beer' where a = 15;
UPDATE 1

Excellent, just works.
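Counting the rows per partition afterwards (not shown in the original post, but this is the expected outcome) confirms that the row physically moved from part_2 to part_1:

postgres=# select count(*) from only part_1;
 count 
-------
    11
(1 row)

postgres=# select count(*) from only part_2;
 count 
-------
     9
(1 row)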

 

This article Updating the partition key now works in PostgreSQL 11 appeared first on Blog dbi services.

How We Integrated this Website with the Mindbreeze InSpire Enterprise Search Appliance

As many of you know Fishbowl is a Mindbreeze Certified Partner and search appliance reseller. A core component of our company culture is using the same tools and technologies we implement for our customers. For that reason, and to give readers like you a chance to try out Mindbreeze in action, we have implemented Mindbreeze search here on fishbowlsolutions.com. Read on to learn more about the benefits and details of this integration.

Indexing Our Site

The first step in our Mindbreeze integration project was to configure Mindbreeze to crawl our website using the out of the box web crawler. We decided to split the content into two groups, blog posts and everything else, in order to distinctly configure how blog post content would be indexed. Mindbreeze allows the configuration of one or more crawler instances, so we created two crawlers with separate follow and do-not-follow patterns to index each content group.

Next we configured the extraction of content from the site. By default the crawler will crawl the entire contents of a page, but Mindbreeze can optionally restrict content indexing to a specific DIV or section. That way, words contained in your navigation or footer won’t be indexed for every page. For example, Fishbowl’s footer currently includes the word “Mindbreeze”, but when site users search for “Mindbreeze” we don’t want to return every page on the site—only those actually related to Mindbreeze. For customers already leveraging google-on google-off tags for this purpose (a feature from the Google Search Appliance), Mindbreeze can interpret those tags. We have a few spots on our blog where this was used to restrict the indexing of blog sidebars and other non-content elements within a page template.

We also configured metadata extraction from within the blog posts themselves. This was done by telling Mindbreeze (via XPath selectors) where in the DOM the blog post author, category, and feature image data could be located. Again, this was all accomplished without altering the structure of the site itself or requiring additional work on the part of our site’s contributors. If you have standard HTML tags within your pages, Mindbreeze will index these automatically.

Entity Recognition

As part of the index setup, we configured entity recognition to parse our pages (both blog and non-blog) for the names of the five key technologies Fishbowl works with. This was done using the Mindbreeze entity extraction feature. Each of the five possible values was mapped to a metadata field called Technology. Like the metadata extraction, the entities were extracted without having to change anything about the structure of our site or templates.

Query Expansion

Between the time when a user enters their query and the time the search engine computes relevant results, there is a critical piece in the search process often referred to as query expansion. Query expansion describes various ways in which the words the user types can be expanded upon or “understood” by the search engine in order to more accurately represent the original intent and locate the right content. One way queries can be expanded for better search is through the use of synonyms. Synonyms can be used to set related terms equal to one another, make abbreviations equal to their full meanings, or set legacy terminology as synonymous with current nomenclature. Mindbreeze query expansion is used on this site to expand queries such as “Jobs” to include “Careers” and the legacy product name “UCM” to search for the new name, “WebCenter Content”. Mindbreeze also includes default stemming and spelling expansions to allow users to find content even if their query doesn’t exactly match our site’s data. For example, stemming allows users to search for “orders” and get results containing “order”, “ordered”, and “ordering”. It means users don’t have to know whether a word was in past tense, plural, or singular, in order to find what they need.

Relevancy and Result Boosting

Relevancy boosting allows administrators to further tune result ranking (also called biasing) based on factors such as metadata values, URL patterns, or date. These relevancy adjustments can be applied to specific sites, so that each audience sees what is most relevant to them. Relevancy is configured through the Mindbreeze Management Center without requiring custom development. On our site, the number of blog posts far outweighs the number of product pages; when someone searches for a product (such as Mindbreeze), we want the first result to be the main Mindbreeze product page. To ensure the main product pages (which may be older and contain fewer words than our latest blog posts) remain on the top, we can use Mindbreeze boosting to either increase the relevancy of product pages or decrease the relevancy of blog posts. All things being equal, it is better to down-boost less relevant content than to up-boost relevant content. We added a rule to reduce the relevancy of all blog post content by a factor of 0.75. We also boost our featured results by a factor of 10 to ensure they appear on top when relevant. In addition to manual tuning, Mindbreeze automatically monitors and analyzes click patterns to learn from user behavior and improve relevancy automatically over time.

Creating the Search Results Page

The search results page used on this site was created using the Mindbreeze Search App Designer. This builder provides a drag-and-drop interface for creating modular, mobile-friendly, search applications. Mindbreeze also provides a JSON API for fully custom search page development.

Our search app combines a list-style results widget and three filter widgets to limit the results based on Technology, Blog Post Category, and Blog Post Author. The filter widgets available within the builder are determined by the metadata available via the indexing configuration described earlier.

To personalize our search app, we made several modifications to the mustache templates which control the rendering of the various widgets. For example, we only show dates on blog posts and include the “blog post” callouts next to blog post titles.

Once the structure of the search app was complete, we were able to use the export snippet functionality to copy the search app code from the Mindbreeze Management Center and embed that into a div within our site. In order to make the Mindbreeze search app match the look and feel of the rest of the website, we added a custom CSS file which overrides some of the standard Mindbreeze CSS within the search app.

Search Box Integration & Suggestions

To integrate Mindbreeze with our existing website’s search box, we modified the search input in the site header to direct search form submissions to the new Mindbreeze search results page. Since we are using WordPress, this involved modifying the header.php file within our site’s child theme. We also added a call to the Mindbreeze Suggest API, displayed using jQuery autocomplete, in order to provide search suggestions as you type. Most WCM systems have template files which can be modified to integrate Mindbreeze search into existing site headers. Our customers have similar integrations within Adobe Experience Manager and Oracle WebCenter Portal, to name a few.

As a note for those familiar with WordPress, we could have customized the search.php template to include the Mindbreeze Export Snippet instead of creating a new search results page. We wanted to let our contributors edit the heading and call-to-action sections of our search results page without coding, so we built the search results into a standard WordPress page. This also allowed us to keep the core WordPress search page intact for comparison purposes (we are in the search business after all). From a technical perspective, either approach would have worked.

Closing Comments

We wanted to share the details about our integration to give anyone using or considering Mindbreeze an in-depth look at a real working search integration. The architecture and approach we took here can be applied to other platforms, both internally and externally facing, including SharePoint, Oracle WebCenter, or Liferay. Use the search box at the top of the page to try it for yourself. If you have any questions about Mindbreeze search integration options, please contact us.

Time running out on your GSA?

Our expert team knows both GSA and Mindbreeze. We’ll help you understand your options and design a migration plan to fit your needs.

The post How We Integrated this Website with the Mindbreeze InSpire Enterprise Search Appliance appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

Oracle Integration Cloud Tips & Tricks: Work-around for no Script Activity

Jan Kettenis - Fri, 2018-03-23 09:03
Neither Oracle Process Cloud Service (PCS) nor the Process Builder in Oracle Integration Cloud (OIC) has a Script activity like there is in the (on-premises) BPM Suite. In the BPM Suite you can use a Script activity for data mappings as well as Groovy. That OIC does not support Groovy is by design, as the idea is to keep it as simple as possible. However, missing the data mapping feature of the Script activity can make things more complex than they need to be. Fortunately, a data mapping activity is on the road-map of some next version of OIC. Until then you can make use of the work-around below.
There can be several reasons why you may want to have an activity just for mapping data, among them:
  • Readability of the process model, making it clear which data is set where in the process.
  • Data mapping is conditional, making it too complex or impossible to do it in the Input or Output mapping of (for example) a Service activity.
  • A conditional mapping before a Gateway.
  • Iterative development, requiring (temporary) "hard-coding".
The work-around is to use a Rule activity which uses an input and output parameter of the type of the data object you want to map the data to.

As such, the Rule activity is deprecated as it is superseded by the Decision activity, but as long as it is there (and a Mapping activity is not) we can make good use of it.

Below is an example. It concerns a Process that is being used in a Dynamic Process application to set up some case meta data. The case meta data is stored and checked for duplicates. The Store Meta Data activity is in draft mode because I'm developing it iteratively. One of the elements of the meta data is a startDate, which I want to set to the creationDate predefined variable.






I cannot do the mapping to the startDate in the Start event, because it is not available there. But even if it were, for reasons of clarity I would like to have it clearly visible in the process model.




I therefore created a Rule activity which uses an input and output argument, both of the MetaData business type.


 




I can do all mappings on the Input and Output Data Association tabs, so I do not actually have to implement a rule. The result will be that the input is mapped to the output 1:1. But for more complex use cases you can actually implement rules as well.





The run-time result is as shown in the next picture.


Hash Partitioning in PostgreSQL 11

Yann Neuhaus - Fri, 2018-03-23 07:45

PostgreSQL 10 comes with partitioning support, but hash partitioning was not supported. PostgreSQL 11 will support hash partitioning.
In the documentation we can read:
The table is partitioned by specifying a modulus and a remainder for each partition. Each partition will hold the rows for which the hash value of the partition key divided by the specified modulus will produce the specified remainder.
For this demonstration let’s create a table partitioned by hash:

drop table dept;
create table dept (id  int primary key) partition by hash(id) ; 

Now let’s create for example 10 partitions

create table dept_1 partition of dept FOR VALUES WITH (MODULUS 10, REMAINDER 0);
create table dept_2 partition of dept FOR VALUES WITH (MODULUS 10, REMAINDER 1);
create table dept_3 partition of dept FOR VALUES WITH (MODULUS 10, REMAINDER 2);
create table dept_4 partition of dept FOR VALUES WITH (MODULUS 10, REMAINDER 3);
create table dept_5 partition of dept FOR VALUES WITH (MODULUS 10, REMAINDER 4);
create table dept_6 partition of dept FOR VALUES WITH (MODULUS 10, REMAINDER 5);
create table dept_7 partition of dept FOR VALUES WITH (MODULUS 10, REMAINDER 6);
create table dept_8 partition of dept FOR VALUES WITH (MODULUS 10, REMAINDER 7);
create table dept_9 partition of dept FOR VALUES WITH (MODULUS 10, REMAINDER 8);
create table dept_10 partition of dept FOR VALUES WITH (MODULUS 10, REMAINDER 9);

We can verify that partitions are created using the \d+ command

(postgres@[local]:5432) [test] > \d+ dept
                                   Table "public.dept"
 Column |  Type   | Collation | Nullable | Default | Storage | Stats target | De
scription
--------+---------+-----------+----------+---------+---------+--------------+---
----------
 id     | integer |           | not null |         | plain   |              |
Partition key: HASH (id)
Indexes:
    "dept_pkey" PRIMARY KEY, btree (id)
Partitions: dept_1 FOR VALUES WITH (modulus 10, remainder 0),
            dept_10 FOR VALUES WITH (modulus 10, remainder 9),
            dept_2 FOR VALUES WITH (modulus 10, remainder 1),
            dept_3 FOR VALUES WITH (modulus 10, remainder 2),
            dept_4 FOR VALUES WITH (modulus 10, remainder 3),
            dept_5 FOR VALUES WITH (modulus 10, remainder 4),
            dept_6 FOR VALUES WITH (modulus 10, remainder 5),
            dept_7 FOR VALUES WITH (modulus 10, remainder 6),
            dept_8 FOR VALUES WITH (modulus 10, remainder 7),
            dept_9 FOR VALUES WITH (modulus 10, remainder 8)

(postgres@[local]:5432) [test] >

Now let’s insert some rows in the table dept

(postgres@[local]:5432) [test] > insert into dept (select generate_series(0,200000));
INSERT 0 200001
(postgres@[local]:5432) [test] >

We can verify that the rows are not in the base table:

(postgres@[local]:5432) [test] > select count(*) from  only dept ;
 count
-------
     0
(1 row)

(postgres@[local]:5432) [test] >

But the rows are in the partitions:

(postgres@[local]:5432) [test] > select count(*) from  dept ;
 count
--------
 200001
(1 row)

What we can also observe is that the rows are uniformly distributed among the partitions. This distribution is automatically done by the hash algorithm.

(postgres@[local]:5432) [test] > select count(*) from  only dept_1 ;
 count
-------
 19982
(1 row)
                                   
(postgres@[local]:5432) [test] > select count(*) from  only dept_2 ;
 count
-------
 20199
(1 row)

(postgres@[local]:5432) [test] > select count(*) from  only dept_3 ;
 count
-------
 19770
(1 row)

(postgres@[local]:5432) [test] > select count(*) from  only dept_5 ;
 count
-------
 20068
(1 row)

(postgres@[local]:5432) [test] >
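Two small additions that are not part of the original post but can be handy here: the per-partition distribution can also be checked with a single query, and in PostgreSQL 11 a point query should be pruned down to a single partition at planning time (the exact plan output depends on version and settings, so it is not reproduced here):

(postgres@[local]:5432) [test] > select tableoid::regclass as partition, count(*) from dept group by 1 order by 1;

(postgres@[local]:5432) [test] > explain (costs off) select * from dept where id = 42;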
 

This article Hash Partitioning in PostgreSQL 11 appeared first on Blog dbi services.

New Era of Digital Business - Partner Webcast Series

IT professionals have no shortage of hot technologies and trends they’re exploring, from chatbots and artificial intelligence to cloud-based infrastructure and platforms to microservices and...

We share our skills to maximize your revenue!
Categories: DBA Blogs

What is the maximum in list size in PostgreSQL?

Yann Neuhaus - Fri, 2018-03-23 04:53

Yesterday, while being at a customer, an interesting question popped up: What is the maximum number of in-list values in PostgreSQL? I couldn’t answer, although I never read anywhere that there is a limit. The following is for fun only and I am not saying that creating huge in-lists is a good idea. Let’s go.

The version I tested is PostgreSQL 10:

postgres=# select version(), now();
                                                  version                                                   |              now              
------------------------------------------------------------------------------------------------------------+-------------------------------
 PostgreSQL 10.0 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-16), 64-bit | 2018-03-21 18:29:50.269369+01
(1 row)

postgres=# create table t1 ( a varchar(10));
CREATE TABLE

We will use a very simple bash script to build the in list and execute the result in PostgreSQL:

postgres@pgbox:/home/postgres/ [PG10] cat t.sh 
#!/bin/bash
count=$1

statement='select * from t1 where a in ('

for (( i=1; i<=$count; i++ ))
do  
    if [ "${i}" -lt "${count}" ]; then
        statement="${statement} '${i}',"
    elif [ "${i}" == "${count}" ]; then
        statement="${statement} '${i}');"
    fi
done

psql -c "${statement}" postgres

Let's start with 100:

postgres@pgbox:/home/postgres/ [PG10] ./t.sh 100
 a 
---
(0 rows)

Time: 0.983 ms

1000:

postgres@pgbox:/home/postgres/ [PG10] ./t.sh 1000
 a 
---
(0 rows)

Time: 1.525 ms

10000:

postgres@pgbox:/home/postgres/ [PG10] ./t.sh 10000
 a 
---
(0 rows)

Time: 11.549 ms

… takes much longer (because of the bash script, which fully occupies my virtual core) and then fails:

./t.sh: line 15: /u01/app/postgres/product/10/db_0/bin/psql: Argument list too long

So there is at least a limit with psql. Let's try by creating a SQL script and executing that instead:

postgres@pgbox:/home/postgres/ [PG10] cat t.sh 
#!/bin/bash
count=$1

echo 'select * from t1 where a in (' > ttt.sql

for (( i=1; i<=$count; i++ ))
do  
    if [ "${i}" -lt "${count}" ]; then
        echo  "'${i}'," >> ttt.sql
    elif [ "${i}" == "${count}" ]; then
        echo "'${i}');" >> ttt.sql
    fi
done

psql -f ttt.sql postgres

This way of stringing together the statement is much more efficient than building the list by concatenating everything into one variable. Does it still work?

postgres@pgbox:/home/postgres/ [PG10] ./t.sh 100000
 a 
---
(0 rows)

Time: 155.986 ms

Not a problem, one more:

postgres@pgbox:/home/postgres/ [PG10] ./t.sh 1000000
 a 
---
(0 rows)

Time: 14211.766 ms (00:14.212)

Still works. So now we could say: let's stop, who in the world will pass one million values into an in-list? On the other hand, let's have fun and double it:

postgres@pgbox:/home/postgres/ [PG10] ./t.sh 2000000
 a 
---
(0 rows)

Time: 3993.091 ms (00:03.993)

One more:

postgres@pgbox:/home/postgres/ [PG10] ./t.sh 3000000
psql:ttt.sql:3000001: ERROR:  out of memory
DETAIL:  Failed on request of size 524288.
Time: 3026.925 ms (00:03.027)

Ok, now I am hitting some limits, but probably not those of PostgreSQL. I’ll test further when I have more time for that :)

 

This article What is the maximum in list size in PostgreSQL? appeared first on Blog dbi services.

An "Awesome" List of Resources

Hemant K Chitale - Fri, 2018-03-23 03:13
Here's an "Awesome" List of Resources 

https://github.com/sindresorhus/awesome

.
.
.
 
Categories: DBA Blogs

How to Optimize design a 1000+ columns in a Oracle DB

Tom Kyte - Thu, 2018-03-22 17:26
Hi, We need to store 5-minute interval data, with additional attributes, which will be around 1000+ columns. How to design this optimally for a Fact Table? Do we have any columnar design in Oracle, such as a vertical split? Please provide your ex...
Categories: DBA Blogs

Java 10 released: java with some enhancements for running inside docker

Dietrich Schroff - Thu, 2018-03-22 16:03
After the release of Java 9 in October 2017 with its new features, Oracle released Java 10.
A short summary of the new features can be found
at https://blogs.oracle.com/java-platform-group/introducing-java-se-10
or you can take a look at the release notes:
http://www.oracle.com/technetwork/java/javase/10-relnote-issues-4108729.html#NewFeature

My favourites are:
  • JEP 307 Parallel Full GC for G1: Improves G1 worst-case latencies by making the full GC parallel. The G1 garbage collector is designed to avoid full collections, but when the concurrent collections can't reclaim memory fast enough a fall-back full GC will occur. The old implementation of the full GC for G1 used a single-threaded mark-sweep-compact algorithm. With JEP 307 the full GC has been parallelized and now uses the same number of parallel worker threads as the young and mixed collections.
and the docker enhancements:
  • JDK-8146115 Improve docker container detection and resource configuration usage
The JVM has been modified to be aware that it is running in a Docker container and will extract container specific configuration information instead of querying the operating system. The information being extracted is the number of CPUs and total memory that have been allocated to the container. The total number of CPUs available to the Java process is calculated from any specified cpu sets, cpu shares or cpu quotas. This support is only available on Linux-based platforms. This new support is enabled by default and can be disabled in the command line with the JVM option:
-XX:-UseContainerSupport
In addition, this change adds a JVM option that provides the ability to specify the number of CPUs that the JVM will use:
-XX:ActiveProcessorCount=count
This count overrides any other automatic CPU detection logic in the JVM.
  • JDK-8186248 Allow more flexibility in selecting Heap % of available RAM
Three new JVM options have been added to allow Docker container users to gain more fine grained control over the amount of system memory that will be used for the Java Heap:
-XX:InitialRAMPercentage
-XX:MaxRAMPercentage
-XX:MinRAMPercentage
These options replace the deprecated Fraction forms (-XX:InitialRAMFraction, -XX:MaxRAMFraction, and -XX:MinRAMFraction). A usage sketch follows after this list.
  • JDK-8179498 attach in Linux should be relative to /proc/pid/root and namespace aware
This bug fix corrects the attach mechanism when trying to attach from a host process to a Java process that is running in a Docker container.
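A hypothetical usage sketch of the new memory options (the image name and the resource limits are assumptions, not taken from the release notes): run Java 10 in a memory-limited container and cap the heap at 75% of the container memory.

# run Java 10 inside a container limited to 512 MB / 1 CPU; the JVM sizes its heap
# to at most 75% of that memory, and -XshowSettings:vm prints the resulting values
docker run --rm --memory=512m --cpus=1 openjdk:10 \
  java -XX:MaxRAMPercentage=75 -XshowSettings:vm -version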

Oracle VM Server x86: Discovering a server/node + setting up a repository, server pool and networking

Dietrich Schroff - Thu, 2018-03-22 15:32
Let's put the things together:
The Oracle VM Manager has to discover the node. Inside the web GUI (https://yourhost:7002/ovm/console) click on the right icon:

Then give your password and the IP(s):

And then you can see the node details (if the connection does not fail):

The next step is to create a repository. To achieve this, you have to add an additional hard disk on the Oracle VM server/node, without mounting it on the server/node.
Then discover the local storage:
After "rescan physical disks" it is possible to create a repository:







Still something missing: a server pool:



 Attention: Uncheck the "Clustered Server Pool" option:





And last but not least: networking...

Check that your network is available for "Virtual Machine".

In the next posting I will show the creation of a virtual machine...

ADF Declarative Component Example

Andrejus Baranovski - Thu, 2018-03-22 12:48
ADF Declarative Component support is a popular ADF framework feature, but in this post I would like to explain it from a slightly different angle. I will show how to pass ADF binding and Java bean objects into the component through properties; in cases when the component must show data from ADF bindings, such an approach can offer robustness and simplify component development.

This is the component implemented in the sample app - the choice list renders data from an ADF LOV and the button calls a Java bean method to print the selected LOV item value (retrieved from ADF bindings):


JDeveloper provides wizard to create initial structure for declarative component:


This is the ADF declarative component; it is rendered from our own tag. There are two properties. The list binding is assigned with the LOV binding object instance and the bean property with a Java bean instance defined in backing bean scope. In this way, we pass objects directly into the component:


LOV binding is defined in target page definition file, where component is consumed:


The bean is defined in the same project where the page which consumes the ADF declarative component is created. We need to define the component property type to match the bean type; for that reason, we must create a class interface in the component library and implement it in the target project:


The component and main projects can be in the same JDEV application; we can use JDEV working sets to navigate between projects (when running the main project, we don't want to run the component project; the component project is deployed and reused through an ADF JAR library):


Bean interface is defined inside component:


The property for the list binding is defined with the JUCtrlListBinding type, which allows passing the binding instance directly to the component. The same goes for the bean instance, using an interface to define the bean instance type, which will be assigned from the page where the component is used:


Declarative component is based on combination of ADF Faces components:


Download sample application from GitHub repository.

Oracle EMEA Big Data, Analytics & AI Partner Community Forum

Oracle EMEA Big Data, Analytics & AI Partner Community Forum Autonomous, Adaptive, Intelligent Dear Valued Oracle Partner, We are inviting you to the Oracle...

We share our skills to maximize your revenue!
Categories: DBA Blogs

More Than 60% of Drug Safety Experts Plan to Use AI to Improve the Speed and Security of Adverse Event Case Processing

Oracle Press Releases - Thu, 2018-03-22 07:00
Press Release
More Than 60% of Drug Safety Experts Plan to Use AI to Improve the Speed and Security of Adverse Event Case Processing Research reveals the life sciences industry is adopting new technologies to automate safety processes and drive down costs, while staying compliant

Redwood Shores, Calif.—Mar 22, 2018

New research conducted in partnership with Informa Engage/Pharma Intelligence reveals that 62% of drug safety experts have implemented or plan to implement Artificial Intelligence (AI) to improve adverse event processing. The findings point to an industry-wide push for a faster, more accurate, and more secure approach to pharmacovigilance using new technologies.

The success of pharmacovigilance efforts hinges on the ability to spot risks early, manage them effectively, and comply with increasingly complex regulations. Drug safety teams view AI as a means to achieve this, and have already begun to explore its potential:

  • 27% have or plan to implement AI for quality assurance
  • 23% have or plan to implement AI for follow-up processing
  • 19% have or plan to implement AI for faster reporting

Andrea Charles, Editor, Custom Content, Informa Pharma Intelligence said, "Safety teams are under immense pressure to work quickly and accurately, but their biggest concerns are patient safety and the protection of their data. They need a robust approach to pharmacovigilance that allows them to work faster while maintaining a high standard of accuracy and security, which is why they are replacing and enhancing their systems with advanced technologies that are fit for the job.”

The research also reveals an industry-wide shift towards cloud-based systems to improve security. Almost 60% of respondents already have safety solutions in the cloud or are planning to move there within the next two years.

Bruce Palsulich, VP of safety product strategy for Oracle Health Sciences, said, “With the increase in adverse events reported, and the flat growth in resources to manage safety case processing, pharmacovigilance teams are under extreme pressure to do more with less. Fortunately, adverse event processing is becoming faster and smarter with the help of AI and the cloud. Both technologies are helping drug safety experts to improve quality and accuracy in the handling of the data they work with, and drive down their reporting costs. The time is ripe for wider adoption, and it’s encouraging to see the industry embrace new ways of working that will benefit them and the public.”

For this study, data was gathered from 153 professionals (director level and above) with titles including regulatory affairs, risk management, head of pharmacovigilance and adverse event case reporting, within pharmaceutical companies, CROs and sponsors from around the globe (North America, Europe, Asia Pacific, South America, Middle East and Africa). The data was gathered from November 2017 to December 2017.

On 22 March 2018 at 11:00 a.m. ET, Oracle Health Sciences and Pharma Intelligence will host a webcast to explore these trends in detail. Experts from both organizations will outline the forces behind the uptake of AI and cloud-based systems in the drug safety sector, and share insight into how these technologies will improve pharmacovigilance efforts. Click here for more information and to register for the live webcast.

Contact Info
Valerie Beaudett
Oracle
+1 650.400.7833
valerie.beaudett@oracle.com
Phebe Shi
Burson Marsteller for Oracle
+1 415.591.4032
phebe.shi@bm.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe, and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

About Oracle Health Sciences

Oracle Health Sciences breaks down barriers and opens new pathways to unify people and processes, helping to bring new drugs to market faster. As the number one vendor in Life Sciences (IDC, 2017) and the number one provider of eClinical solutions (Everest Group, 2017), powered by the number one data management technology in the world (Gartner, 2017), Oracle Health Sciences is trusted by 29 of the top 30 pharma, 10 of the top 10 biotech, and 10 of the top 10 CROs for clinical trial and safety management around the globe.

About Pharma Intelligence

Informa Pharma Intelligence powers a full suite of analysis products - Datamonitor Healthcare, Sitetrove, Trialtrove, Pharmaprojects, Medtrack, Biomedtracker, Scrip, Pink Sheet and In Vivo – to deliver the data needed by the pharmaceutical and biomedical industry to make decisions and create real-world opportunities for growth. With more than 500 analysts keeping their fingers on the pulse of the industry, no key disease, clinical trial, drug approval or R&D project goes uncovered, thanks to the breadth and depth of data available to customers. For more information visit pharmaintelligence.informa.com.

About Informa Engage

Informa Engage is the marketing research arm for Informa Pharma Intelligence. We connect marketers with B2B decision makers. By combining unrivaled reach, deep knowledge of specialist markets and sophisticated marketing, we engage buyers – activating real results for marketers.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Valerie Beaudett

  • +1 650.400.7833

Phebe Shi

  • +1 415.591.4032

Oracle Opens State-of-the-Art Cloud Campus in Austin, Texas

Oracle Press Releases - Thu, 2018-03-22 07:00
Press Release
Oracle Opens State-of-the-Art Cloud Campus in Austin, Texas

Redwood Shores, Calif.—Mar 22, 2018

Oracle unveiled its new Austin, Texas campus as a sales innovation customer experience hub to support increasing demand for Oracle Cloud and attract top talent to service the business.

“Oracle is expanding in Austin to attract, hire and train the best talent to support the unprecedented growth of our cloud business,” said Oracle Chief Executive Officer Mark Hurd.  “This campus will help enable our customers to accelerate their Oracle Cloud adoption and drive success.”

The Austin complex features a 560,000-square-foot facility and spans 40 acres of Lady Bird Lake waterfront property that could ultimately support a workforce of up to 10,000. The campus also includes the adjacent 295-unit Azul apartment building for employees and the general public.

The new campus will feature a variety of design elements and amenities including:

  • Dining: Full service restaurant, Starbucks, food truck court
  • Health: Fitness center, beach volleyball and basketball courts, flag football field, B-cycle station, access to Lady Bird Lake hike and bike trail
  • Business: Training center, conference center, tech bar, innovation lab, interactive customer demo rooms
  • Collaboration Spaces: Game rooms, landscaped balconies, terraces and outdoor areas
  • Culture: Themed floors highlighting aspects of Austin’s culture including local art, music and murals.

“As a hub of innovation, Oracle’s new campus will be right at home in Austin, a city where good ideas become real,” said Austin Mayor Steve Adler.

Texas Governor Greg Abbott said, “On behalf of the state of Texas, I want to congratulate Oracle on the opening of their new campus. Texas is a growing hub for tech jobs and innovation, and I am pleased to see Oracle grow their investment here. Together, we will elevate the Lone Star State to new heights and expand economic opportunities for all Texans.”

The new campus will be a center of sales innovation. It will largely house cloud sales professionals, including “Oracle Digital Class Of,” a program that hires new university graduates and provides them significant resources and training, and the ability to develop a successful career path. “Oracle Digital Class Of” recruits from top colleges across the country. Previously focused on sales and technical resources, the program is expanding to encompass customer success and consulting. Recent graduates have chosen to join Oracle because of its immersive training programs located in desirable locations such as Austin.

Oracle’s Austin campus is the launch pad for the company’s Next Generation Contact Center (NGCC), which is built on the Oracle Sales Cloud platform and applications running on Oracle’s IaaS. NGCC enhances the customer experience by eliminating administrative tasks and providing real-time account intelligence. Set for company-wide deployment, NGCC provides Oracle sales representatives with tailored, modern tools and resources aimed to deliver a dynamic, efficient and successful sales process.

And, taking a new approach to selling and customer success, the Oracle Cloud Solution Hub will also be based at the Austin campus, as well as three other locations throughout the country. The hubs are designed to build and showcase innovation projects with and for customers. These hubs are technical focused with engineers to demonstrate next-generation cloud solutions for customers, including artificial intelligence, virtual reality, intelligent bots and more.

Additionally, the company recently announced that Oracle Startup Cloud Accelerator’s first U.S. residential program will be located in Austin, which will promote cloud innovation, as well as provide mentoring and Oracle Cloud technology to local startups.

We are proud to deepen our roots in the Austin community through volunteering and other community events. In FY 2017, more than 1,200 Oracle Volunteers donated nearly 3,000 hours of their time to local organizations, including Austin Pets Alive!, Keep Austin Beautiful, Central Texas Food Bank, and Girlstart.

Contact Info
Jessica Moore
Oracle Corporate Communications
650.506.3297
jessica.moore@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Jessica Moore

  • 650.506.3297

Oracle Recognizes HR Pioneers in Annual Awards

Oracle Press Releases - Thu, 2018-03-22 07:00
Press Release
Oracle Recognizes HR Pioneers in Annual Awards Oracle HCM Cloud Rubies Awards celebrate organizations using modern cloud technologies to drive business success and improve employee engagement and happiness

Oracle HCM World, Dallas, TX—Mar 22, 2018

To celebrate the success organizations are achieving by modernizing HR processes in the cloud, Oracle today announced the winners of the Oracle HCM Cloud Rubies Award. The awards recognize HR professionals who are using Oracle Human Capital Management (HCM) Cloud to modernize business processes and improve the employee experience. The 2018 winners include Caesars Entertainment Corporation, Creighton University, Cummins Inc., Emerson, Macy’s, NSF International, PTC, and Reckitt Benckiser.

“The Oracle HCM Cloud Rubies Awards recognize HR professionals that have successfully transformed the HR function and related business processes,” said Emily He, Senior Vice President, Human Capital Management Cloud Business Group, Oracle. “This year’s winners truly are industry pioneers who have fully leveraged Oracle HCM Cloud solutions to enhance employee engagement and deliver outstanding results.”

Spanning eight categories, the global Oracle HCM Cloud Rubies Awards are peer-nominated awards that recognize HR professionals that have used technology to increase efficiencies, enhance the employee experience and deliver tangible business results. Winners include:

  • Game Changer Award: Terri Brown, Vice President, HR Business Processes and Solutions, Macy’s
  • Happily Ever After Award: Wayne Overla, Global HRIS Manager, NSF International
  • Talent Show Award: Mike Whitesell, HRIS Manager, Integrated Talent Management, Cummins Inc.
  • It’s All in the Numbers Award: Fabio Fukuda, HRIS Director, Global Integrated Talent Management, Cummins Inc.
  • Time is On My Side Award: Jim Rhodes, Vice President, Human Resources Information Systems, Emerson
  • Express Lane Award: Kathy Cullen-Cote, Corporate Vice President, Human Resources, PTC
  • Dynamic Duo Award: Michael O’Brien, Vice President, Corporate HR Services, Caesars Entertainment Corporation
  • Keep it in the Family Award: Molly Billings, Senior Director, Human Resources, Creighton University

For additional information on these winners, please visit the Oracle HCM Cloud Rubies Awards website.

Contact Info
Evelyn Tam
Oracle
650.506.5936
evelyn.tam@oracle.com
About Oracle
The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle, please visit us at oracle.com.
 
Trademarks
Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.
Talk to a Press Contact

Evelyn Tam

  • 650.506.5936

SQLDeveloper: User Defined Extensions and ForeignKey query revised

Darwin IT - Thu, 2018-03-22 02:42
It was such fun: yesterday I wrote a small article on creating a query on Foreign Keys referring to a certain table. A post with content that I have made up dozens of times in my Oracle career. And right away I got 2 good comments. One was on the blog itself.

And of course Anonymous is absolutely right. So I added 'U' as a constraint type option.

The other comment was from my much appreciated colleague Erik. He brought this to another level, by pointing out to me how to add this as a User Defined Extension in SQL Developer.

I must say I was already quite pleased with the Snippets in SQLDeveloper. So I already added the query as a snippet:
But the tip of Erik is much cooler.
He referred to a tip by Sue Harper that explains this (What, it's been in there since 2007?!).

Now what to do? First create an XML file, for instance referred_by_fks.xml, with the following content:
<items>
<item type="editor" node="TableNode" vertical="true">
<title><![CDATA[FK References]]></title>
<query>
<sql>
<![CDATA[select fk.owner,
fk.table_name,
fk.constraint_name,
fk.status
from all_constraints fk
join all_constraints rpk on rpk.constraint_name = fk.r_constraint_name
where fk.constraint_type='R'
and rpk.constraint_type in('P','U')
and rpk.table_name = :OBJECT_NAME
and rpk.owner = :OBJECT_OWNER
order by fk.table_name, fk.constraint_name;]]>
</sql>
</query>
</item>
</items>

Note that I updated my query a bit.


Then to add the extension to SQL Developer:
  • Open the prefereces via: Tools > Preferences
  • Navigate to Database > User Defined Extensions
  • Click "Add Row" button
  • In Type choose "EDITOR", Location is where you saved the xml file above
  • Click "Ok" then restart SQL Developer

Now, if you click on a table in the navigator, you will have an extra tab on your table editor:

Cool stuff! And it's been there for ages!
