Feed aggregator

Fragmentation

Tom Kyte - 9 hours 44 min ago
Can tables which only experience inserts and updates and not delete be fragmented.
Categories: DBA Blogs

Join with where clause

Tom Kyte - 9 hours 44 min ago
Hi there, I have an example below which I'm not sure how oracle execute the where clause. Select a.id, b.column_A, b.column_B, b.column_C from A left join B on a.id = b.id where b.column_C = 'Yes' My question is that do oracle execute the...
Categories: DBA Blogs

Are results from SELECT with no order by and with ROWNUM predicate used in WHERE condition stable?

Tom Kyte - 9 hours 44 min ago
Consider the following simple scenario: PREPARATIONS: 1) Let's say we have a table USERS with one COLUMN NAME: create table USERS(NAME varchar(100)); 2) Let's put some values there: insert into USERS(name) values('User1'); insert into USERS...
Categories: DBA Blogs

difference of explain plan, autotrace and tkprof

Tom Kyte - 9 hours 44 min ago
I have confusion about explain plan, autotrace and tkprof. When explain plain showing to us the prediction about how optimizer will work to produce result and tkprof show the real thing that happen, so how about autotrace. In autotrace there are two ...
Categories: DBA Blogs

About BLOB and BFILE

Tom Kyte - 9 hours 44 min ago
Hi Tom. We are designing an office automation application and we want to store image of letters and documents.Our organization is very large (it has 100 departments) and We have to use replications. Now we want to know is BLOB a proper type for sto...
Categories: DBA Blogs

Getting RMAN-05502 when trying to duplicate from backup

Tom Kyte - 9 hours 44 min ago
I have very little experience with Oracle. We are a SQL Server shop with one legacy Oracle 11.2 database that serves an obscure but critical function. As part of our testing routine, this database needs to be copied from our production instance into ...
Categories: DBA Blogs

nVision Performance Tuning: 2. Effects of Performance Options

David Kurtz - 14 hours 14 min ago
This blog post is part of a series that discusses how to get optimal performance from PeopleSoft nVision reporting as used in General Ledger.

In this post, I examine the effect of different settings for the tree performance options on how the SQL is generated.  It is common to see different performance options in use on different trees in the same SQL query.  It is important to be able to look at a piece of SQL generated by nVision and work out which performance options have been set on which trees.
  • Access Method: join to tree selector –v- use literal values
  • Tree Selectors: static –v- dynamic
  • Selector Options: single values –v- inequalities/between
Access Method
This option allows you to control how the tree selector table is combined with the ledger table.  The choices of interest are whether to join the tree selector, or whether to use literal values where feasible.
Join
Here, two tree selectors are joined to the ledger table:
SELECT L4.TREE_NODE_NUM,SUM(A.POSTED_TOTAL_AMT)
FROM PS_LEDGER A, PSTREESELECT10 L4, … PSTREESELECT10 L
WHERE A.LEDGER='ACTUALS'
AND A.FISCAL_YEAR=2017

AND L.SELECTOR_NUM=73130
AND A.CHARTFIELD1=L.RANGE_FROM_10
AND A.CURRENCY_CD='GBP'
GROUP BY L4.TREE_NODE_NUM
It is not feasible to use literal values for L4 because the data is grouped by L4.TREE_NODE_NUM; however, it is feasible to use literal values for L because no column from that table appears in either the SELECT or GROUP BY clause.
Literal Values
When 'use literal values' is selected, nVision replaces the join to the tree selector table with a set of literal values generated from the contents of the tree selector table.
In this example, the join to the CHARTFIELD1 tree selector has been replaced with a series of literal value predicates, one for every selected tree leaf: a simple equality condition for single-value leaves, and a BETWEEN condition for ranged leaves.
SELECT L2.TREE_NODE_NUM,SUM(A.POSTED_TOTAL_AMT)
FROM PS_LEDGER A, PSTREESELECT10 L2
WHERE A.LEDGER='ACTUALS'
AND A.FISCAL_YEAR=2017
AND A.ACCOUNTING_PERIOD=1

AND L2.SELECTOR_NUM=101142
AND A.ACCOUNT=L2.RANGE_FROM_10
AND (A.CHARTFIELD1='0070700'
OR A.CHARTFIELD1='0078999'

OR A.CHARTFIELD1 BETWEEN '0091100' AND '0091101'
OR A.CHARTFIELD1='0150204')
AND A.CURRENCY_CD='GBP'
GROUP BY L2.TREE_NODE_NUM
Recommendations
In general, the literal values option is very beneficial:
  • It simplifies the queries generated by nVision by removing a table from the FROM clause, leaving the optimizer with fewer choices when determining the execution plan.
  • Data is filtered on the scan of the ledger table rather than when it is joined to the tree selector.  On an engineered system these predicates can be pushed down to the storage cells.
However, there are some disadvantages:
  • As the number of leaves on a tree increases, so the number of literal predicates in the query increases, and so does the time taken to parse the SQL.  As a rough guideline, I would say that the parse time starts to become significant as the tree exceeds 2000 leaves, and you might be better off using the Join option.
  • Also, as the number of literal predicates increases, the time taken to evaluate them increases.  This applies to both conventional and engineered systems.
  • Some PeopleSoft customers have found it generally beneficial to the system to raise the value of OPTIMIZER_DYNAMIC_SAMPLING from the default of 2 to 4 so that Oracle collects dynamic statistics if there are expressions in where clause predicates or multiple predicates on the same table (see Database SQL Tuning Guide -> Managing Optimizer Statistics).  However, this can significantly increase the parse overhead of nVision SQL using many predicates.  If necessary, this parameter can be reset at session level for nVision with a trigger (a hedged sketch of such a trigger follows this list).
  • The nVision client also has to do more work to generate the SQL.
  • Literal values increase the variability of the SQL generated: reports using different tree nodes will generate SQL with different numbers of predicates, and there is effectively no chance that any two statements will be similar enough to use any of Oracle's plan stability technologies (SQL baselines, SQL profiles, SQL plan directives).
  • In theory, cursor sharing could reduce the parse overhead, but my experience is that the optimizer does not correctly cost the predicates, resulting in different execution plans with poorer performance.
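For illustration only, here is a minimal sketch of such a logon trigger.  It assumes that nVision sessions can be recognised by a client program name beginning 'PSNVS' and simply resets OPTIMIZER_DYNAMIC_SAMPLING back to the default of 2 for those sessions; the program name pattern and the target value are assumptions to be adapted to your environment, and the trigger owner needs a direct grant on V$SESSION.
CREATE OR REPLACE TRIGGER nvision_dynamic_sampling
AFTER LOGON ON DATABASE
DECLARE
  l_program VARCHAR2(64 CHAR);
BEGIN
  SELECT program INTO l_program
  FROM   v$session
  WHERE  sid = SYS_CONTEXT('USERENV','SID');
  IF UPPER(l_program) LIKE 'PSNVS%' THEN  -- assumed nVision client program name
    EXECUTE IMMEDIATE 'ALTER SESSION SET optimizer_dynamic_sampling = 2';
  END IF;
EXCEPTION
  WHEN OTHERS THEN NULL;  -- never block a logon because of this trigger
END;
/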
Tree Selectors
This option allows you to choose whether a tree selector is managed in a static or dynamic manner.  As you will see, the Tree Selectors performance option is tied in with the Selector Options performance option.
Static
Where a tree uses a static selector, the entire tree is extracted into the tree selector.  There will be no tree node number criteria in the extract statement.
INSERT INTO PSTREESELECT10
(SELECTOR_NUM,TREE_NODE_NUM,RANGE_FROM_10,RANGE_TO_10)
SELECT 78722,L.TREE_NODE_NUM, SUBSTR(L.RANGE_FROM,1,10),SUBSTR(L.RANGE_TO,1,10)
FROM PSTREELEAF L
WHERE L.SETID='GLOBE'
AND L.SETCNTRLVALUE=' '
AND L.TREE_NAME='FUNCTION'
AND L.EFFDT=TO_DATE('1901-01-01','YYYY-MM-DD')
Note that the tree leaf was not joined to the underlying detail table.  Ranged leaves become ranged selectors.  This is because there is no mechanism to determine when the detail table has changed and thus when the static selector would be out of date and need to be refreshed.  This is why single value joins cannot be performed with static selectors.
Tree node number criteria will appear in the nVision queries to identify the required selectors.
SELECT L4.TREE_NODE_NUM,SUM(A.POSTED_TOTAL_AMT) 
FROM PS_LEDGER A, PSTREESELECT10 L4, PSTREESELECT10 L2, PSTREESELECT10 L
WHERE A.LEDGER='ACTUALS'
AND A.FISCAL_YEAR=2017
AND A.ACCOUNTING_PERIOD=1
AND L4.SELECTOR_NUM=89052
AND A.CHARTFIELD3>= L4.RANGE_FROM_10
AND A.CHARTFIELD3 <= L4.RANGE_TO_10
AND L4.TREE_NODE_NUM BETWEEN 1000000000 AND 2000000000
A control row is maintained on the table PSTREESELCTL.  This row indicates that a particular selector is static, and the version number is compared to the version number of PSTREEDEFN to determine whether the selector is valid, or whether the tree has changed.  If the tree has changed it will have a higher version number, and the selector will be extracted again.  The selector number from the query can be looked up on this table to identify the tree.
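A hedged sketch of how that lookup might be done follows; the exact key columns of PSTREESELCTL are assumptions based on the description above, so verify them against your own system catalog.
SELECT c.tree_name, c.selector_num, c.version AS selector_version, d.version AS tree_version
FROM   pstreeselctl c
,      pstreedefn   d
WHERE  c.selector_num = 89052          -- selector number taken from the nVision SQL
AND    d.tree_name    = c.tree_name;   -- the join may also need SETID, SETCNTRLVALUE and EFFDT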
Dynamic
Dynamic selectors are built on-the-fly by nVision as the report runs.  They select just the rows from the tree that are required by the report.
INSERT INTO PSTREESELECT10
(SELECTOR_NUM,TREE_NODE_NUM,RANGE_FROM_10,RANGE_TO_10)
SELECT DISTINCT 108090,L.TREE_NODE_NUM,D.ACCOUNT,D.ACCOUNT FROM
PS_GL_ACCOUNT_TBL D, PSTREELEAF L
WHERE L.SETID='GLOBE' AND L.SETCNTRLVALUE=' '
AND L.TREE_NAME='ACCOUNT'
AND L.EFFDT=TO_DATE('1901-01-01','YYYY-MM-DD')
AND L.RANGE_FROM<>L.RANGE_TO
AND D.ACCOUNT BETWEEN L.RANGE_FROM AND L.RANGE_TO
AND D.SETID='GLOBE' AND
(L.TREE_NODE_NUM BETWEEN 789473386 AND 793372019
OR L.TREE_NODE_NUM BETWEEN 810915873 AND 812865189

OR L.TREE_NODE_NUM BETWEEN 1089668204 AND 1095516154
OR L.TREE_NODE_NUM BETWEEN 1113060008 AND 1115009324)
All the tree node number predicates are found in the query that populates the selector table and not in the query that generates the report.
SELECT L4.TREE_NODE_NUM,SUM(A.POSTED_TOTAL_AMT) 
FROM PS_LEDGER A, PSTREESELECT10 L4, PSTREESELECT10 L2, PSTREESELECT10 L
WHERE A.LEDGER='S_USMGT'
AND A.FISCAL_YEAR=2017
AND A.ACCOUNTING_PERIOD=1

AND L.SELECTOR_NUM=96774
AND A.CHARTFIELD1=L.RANGE_FROM_10 AND
A.CURRENCY_CD='USD' GROUP BY L4.TREE_NODE_NUM
nVision will delete dynamic selectors after they have been used.  However, if the report crashes or times out, rows will be left in the selector table, and this debris can build up over time.  So there is a requirement to manage these tables.
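As a rough illustration, selectors that have no control row on PSTREESELCTL are dynamic, and any of them that do not belong to a currently running report are debris.  The join column below is an assumption, as is the idea of using PSTREESELCTL to distinguish static from dynamic selectors.
SELECT   s.selector_num, COUNT(*) AS num_rows
FROM     pstreeselect10 s
WHERE    NOT EXISTS (
           SELECT 'x'
           FROM   pstreeselctl c
           WHERE  c.selector_num = s.selector_num)
GROUP BY s.selector_num
ORDER BY s.selector_num;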
Note that in this example when the selector was extracted, the tree leaf table was joined to the underlying table on which the tree was based (in this case GL_ACCOUNT_TBL).  This occurs when the single value join option is selected.  The tree selector contains a row for each value rather than for each tree leaf.  This will result in larger tree selectors where ranged leaves are used.
PeopleSoft do not provide any way to identify the tree used to create a dynamic selector.  From the SQL query, all we can see is the field to which it was joined.  The above example is a tree related to CHARTFIELD1.
Recommendations
Although static selectors are easier to manage, the decision to use dynamic selectors is driven by the need to use single value joins.
  • It is important to maintain up-to-date statistics on the selector tables.  As new selectors are extracted, a series of ever-increasing selector numbers is allocated from a sequence maintained on the table PSTREESELNUM.   If the statistics are not maintained, the selector numbers will exceed the high value recorded in the column statistics for column SELECTOR_NUM, and the database will not correctly calculate the number of rows expected, resulting in inappropriate execution plans (a minimal example of gathering these statistics follows this list).
  • It is easier to manage optimizer statistics on static selectors because the PSTREESELCTL table records when they have been updated.  In fact, it is possible to put a trigger on that table to maintain statistics.  However, static selectors imply range predicates that bring other problems that I discuss in the next section.
  • Using compound triggers on the selector tables, it is possible to maintain a log of selectors, maintain statistics on them, and purge selectors left behind by nVision reports that have not cleaned up after themselves.  This approach can also be used to identify the trees used in dynamic selectors.  I will describe this solution later.
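A minimal sketch of refreshing statistics on a selector table after new selectors have been extracted; SYSADM is assumed to be the PeopleSoft owning schema.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'SYSADM',          -- assumed PeopleSoft owning schema
    tabname => 'PSTREESELECT10',
    force   => TRUE);             -- gather even if the table statistics are locked
END;
/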
Recommendation: Prior to Oracle 11g, I recommended using static selectors and ranged joins; with careful management of statistics on the tree selectors and indexing of the ledger tables, good results can be obtained.  The tree selectors are Cartesian-joined together before index lookups are performed on the ledger tables.
However, this approach is generally not effective for queries that reference more than 3 tree selectors, nor for very large trees.  The size of the Cartesian product is the product of the number of rows extracted from each tree selector.  For example, three selectors with 100 rows each will result in 1 million lookups on the ledger table.  As the size of the Cartesian product grows, the number of index look-ups also grows.  There comes a point when it is better to remove the least selective tree from the Cartesian product and hash join it to the ledger after the index look-ups.  Striking the correct balance can be difficult, and leads to using different performance options in different reports.
From Oracle 11g, and especially on engineered systems, I now prefer dynamic selectors because they permit the use of single value joins that can use Bloom filters and Extended Statistics.  However, where smaller trees that do not significantly cut down the query result set are in use, ranged joins may still be preferable, in which case so might static selectors.
Selector Options
This performance option controls how the tree selector is joined to the ledger table.  The choice is between single value equality joins and inequality (ranged) joins.
Ranged Joins
There are two forms of ranged joins, either using the BETWEEN keyword or using a pair of inequalities.  There is no difference to Oracle because the optimizer rewrites BETWEEN as a pair of inequalities.
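For example, these two predicates are equivalent to the optimizer:
A.CHARTFIELD3 BETWEEN L4.RANGE_FROM_10 AND L4.RANGE_TO_10
A.CHARTFIELD3 >= L4.RANGE_FROM_10 AND A.CHARTFIELD3 <= L4.RANGE_TO_10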
NB: If you specify single value joins on a static selector, you still get an inequality ranged join because only the tree leaves are extracted to the selector.  nVision still does this even if all of the leaves in the tree are single values and not ranges.
SELECT L4.TREE_NODE_NUM,SUM(A.POSTED_TOTAL_AMT) 
FROM PS_LEDGER A, PSTREESELECT10 L4, PSTREESELECT10 L2, PSTREESELECT10 L
WHERE A.LEDGER='ACTUALS'
AND A.FISCAL_YEAR=2017
AND A.ACCOUNTING_PERIOD=1
AND L4.SELECTOR_NUM=89052
AND A.CHARTFIELD3>= L4.RANGE_FROM_10
AND A.CHARTFIELD3 <= L4.RANGE_TO_10

AND A.CURRENCY_CD='GBP'
GROUP BY L4.TREE_NODE_NUM
Single Value Joins
If single value joins are selected, nVision generates an equality join between the tree selector and the ledger table.  The tree selector must be dynamic, and nVision extracts all the intermediate values between the tree node ranges on the underlying detail table.
SELECT L4.TREE_NODE_NUM,SUM(A.POSTED_TOTAL_AMT) 
FROM PS_LEDGER A, PSTREESELECT10 L4, PSTREESELECT10 L2, PSTREESELECT10 L
WHERE A.LEDGER='ACTUALS'
AND A.FISCAL_YEAR=2017
AND A.ACCOUNTING_PERIOD=1

AND L.SELECTOR_NUM=96774
AND A.CHARTFIELD1=L.RANGE_FROM_10
AND A.CURRENCY_CD='GBP'
GROUP BY L4.TREE_NODE_NUM
Recommendations
In general, I would recommend using single value joins and therefore also dynamic selectors.  The main advantage is that they enable two particular optimisations in Oracle.
  • Bloom filters only work with equality predicates, and therefore only with single value joins.  They enable a filter on one column of one table to be roughly mapped to another column of another table upon which there is an equality join.  This filters data earlier in a query, cutting down the size of the eventual join operation.  It is a rough match in that it may return some false positives to the subsequent hash operation, but it will not omit any results.  The hash join then does an exact match of the rows to produce the correct result set.  On an Engineered Oracle system this filter can also be pushed down to the storage cells. 
  • This is an example of the kind of execution plan that we should see when a Bloom filter is used.  A filter has been created on PSTREESELECT10 at line 3, and it is applied to PS_LEDGER at line 6.
-------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
-------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 12876 (100)| | | |
| 1 | HASH GROUP BY | | 501 | 35571 | 12876 (42)| 00:00:01 | | |
| 2 | HASH JOIN | | 975K| 66M| 12822 (41)| 00:00:01 | | |
| 3 | JOIN FILTER CREATE | :BF0000 | 2577 | 59271 | 7 (0)| 00:00:01 | | |
| 4 | PARTITION RANGE SINGLE | | 2577 | 59271 | 7 (0)| 00:00:01 | 87968 | 87968 |
| 5 | INDEX FAST FULL SCAN | PS_PSTREESELECT10 | 2577 | 59271 | 7 (0)| 00:00:01 | 87968 | 87968 |
| 6 | JOIN FILTER USE | :BF0000 | 1715K| 78M| 12804 (41)| 00:00:01 | | |
| 7 | PARTITION RANGE ITERATOR | | 1715K| 78M| 12804 (41)| 00:00:01 | 27 | 34 |
| 8 | MAT_VIEW REWRITE ACCESS STORAGE FULL| PS_LEDGER | 1715K| 78M| 12804 (41)| 00:00:01 | 27 | 34 |
-------------------------------------------------------------------------------------------------------------------------------
  • Extended statistics (i.e. on groups of columns) also only work with equality predicates, so single value join queries can benefit from them (see the sketch below).
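A hedged sketch of creating a column group on the ledger table and then gathering statistics; the owning schema and the choice of columns are assumptions based on the predicates in the examples above.
SELECT DBMS_STATS.CREATE_EXTENDED_STATS(
         ownname   => 'SYSADM',                     -- assumed owning schema
         tabname   => 'PS_LEDGER',
         extension => '(CHARTFIELD1,CURRENCY_CD)')  -- assumed column group
FROM   dual;

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SYSADM', tabname => 'PS_LEDGER');
END;
/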
There are drawbacks to be overcome.
  • Single value joins require the use of dynamic selectors.  That, in turn, requires management of statistics on the dynamic selectors and of the debris left in the tree selector tables by failed reports that have not cleaned up after themselves.
  • Single value joins can result in larger tree selector tables with many more rows than tree leaves.  For large trees or trees with many values in leaf ranges, resulting in more than about 2000 rows on the tree selector tables, it can be advantageous to change the Access Method performance option from using literal values to join because the parse overhead becomes significant.

Submitted two talks to Collaborate 18

Bobby Durrett's DBA Blog - Fri, 2017-10-20 19:52

I submitted my two talks to Collaborate 18 through IOUG. These are the same two that I submitted to RMOUG Training Days. Hopefully one of the talks will get accepted at one of the two conferences but I don’t know. The nice thing is that they are both professional development talks and a user group meeting might be more open to that sort of talk than a vendor (i.e. Oracle OpenWorld) conference. But, there is a lot of competition out there so we will see. Hopefully I will get to put some of my new Toastmaster skills to work at a conference early next year. 🙂

Bobby

Categories: DBA Blogs

ORA-04091: table is mutating, trigger/function may not see it

Tom Kyte - Fri, 2017-10-20 14:26
Hi, I am getting "ORA-04091: table is mutating, trigger/function may not see it" in my trigger. Please find the test case details as follows: <code>create table test(bug_number number, subject varchar2(50),ANALYZED_BY varchar2(50)); insert...
Categories: DBA Blogs

Neat and optimized code

Tom Kyte - Fri, 2017-10-20 14:26
I need the logic to derive the direction and line hour direction is Inbound if either one of org and dest or both falls under the jrny_in list and the rt list direction is outbound if either one of org and dest or both falls under the jrny_out ...
Categories: DBA Blogs

Performance - Index tablespaces with block size greater than data tablespace ?

Tom Kyte - Fri, 2017-10-20 14:26
Hi, I would like to know if it would it be good practice to create index tablespaces with block size larger than the data tablespace? Recently, a performance consultant company guided us to maintain indexes on tablespace with block size of 16K,...
Categories: DBA Blogs

Change a DBMS_SCHEDULER job's next run date

Tom Kyte - Fri, 2017-10-20 14:26
I have a job scheduled via DBMS_SCHEDULER. It is scheduled to run weekly. It has been running fine for some time now. All I want to do is alter the next_run_date. I cannot find any way to do this in the documentation. I can alter the original start d...
Categories: DBA Blogs

Dynamically Create Database Link

Tom Kyte - Fri, 2017-10-20 14:26
I have several scripts that are hard wired when creating a database link and all works well. The time has come though to take them to the next level and expand the user community. To do that the CREATE DATABASE LINK statement needs to be dynamicall...
Categories: DBA Blogs

Receive http requests with more than 32k

Tom Kyte - Fri, 2017-10-20 14:26
Hi Tom, on more question :) I'm using the utl_http package to exchange xml-data between different oracle databases. Until now i've requested the mod_plsql (until 11g) or the apache (from 12c) on the other database, posted the data with utl_http....
Categories: DBA Blogs

audsid

Tom Kyte - Fri, 2017-10-20 14:26
I am trying to track down the source (ie. machine and program) of a SQL that is erroring out with an ORA-00600. When the session produces a trace file, it identifies the session using the SID, serial# combination : *** SESSION ID:(1346.55751) 2017-10...
Categories: DBA Blogs

error ORA-01422 in select query

Tom Kyte - Fri, 2017-10-20 14:26
Hi, I am facing "ORA-01422: exact fetch returns more than requested number of rows" in my plsql procedure. Here are the details of test I am trying: create table cust_bug_metadata(product_id number, component varchar2(50),sub_component varchar...
Categories: DBA Blogs

PostgreSQL Index Suggestion With Powa

Yann Neuhaus - Fri, 2017-10-20 09:21

A while ago my colleague Daniel wrote a blog post about POWA. In a nice article he showed how this tool can be used to monitor PostgreSQL.
In this article I am going to show how this powerful tool can help by suggesting indexes that can optimize our queries.
I am using PostgreSQL 9.6:

[root@pgservertools extension]# yum install postgresql96-server.x86_64
[root@pgservertools extension]# yum install postgresql96-contrib.x86_64

And then I initialize a cluster:

[root@pgservertools extension]# /usr/pgsql-9.6/bin/postgresql96-setup initdb
Initializing database ... OK

POWA requires the following extensions:
pg_qualstats: gathers statistics on predicates found in WHERE clauses and JOIN clauses
pg_stat_kcache: gathers statistics about real reads and writes done by the filesystem layer
hypopg: adds hypothetical indexes to PostgreSQL; this extension can be used to see whether PostgreSQL would use an index or not
btree_gist: provides GiST index operator classes that implement B-tree equivalent behavior for various data types
powa_web: provides access to POWA via a web browser

Note that the following packages were installed to resolve dependencies during the installation of these extensions.

yum install python-backports-ssl_match_hostname.noarch
rpm -ivh python-tornado-2.2.1-8.el7.noarch.rpm

And then the extensions are installed using yum:

yum install powa_96.x86_64 pg_qualstats96.x86_64 pg_stat_kcache96.x86_64 hypopg_96.x86_64 powa_96-web.x86_64

After the installation the postgresql.conf is modified to load the extensions

[root@pgservertools data]# grep shared_preload_libraries postgresql.conf | grep -v ^#
shared_preload_libraries = 'pg_stat_statements,powa,pg_stat_kcache,pg_qualstats' # (change requires restart)
[root@pgservertools data]#

And then restart PostgreSQL:

[root@pgservertools data]# systemctl restart postgresql-9.6.service

For POWA configuration, the first step is to create a user for powa

postgres=# CREATE ROLE powa SUPERUSER LOGIN PASSWORD 'root';
CREATE ROLE

and the repository database we will use.

postgres=# create database powa;
CREATE DATABASE

The extensions must be created in the repository database and in all databases we want to monitor

postgres=# \c powa
powa=# CREATE EXTENSION pg_stat_statements;
CREATE EXTENSION
powa=# CREATE EXTENSION btree_gist;
CREATE EXTENSION
powa=# CREATE EXTENSION powa;
CREATE EXTENSION
powa=# CREATE EXTENSION pg_qualstats;
CREATE EXTENSION
powa=# CREATE EXTENSION pg_stat_kcache;
CREATE EXTENSION
powa=# CREATE EXTENSION hypopg;
CREATE EXTENSION

We can verify that extensions are loaded in the database using

powa=# \dx
List of installed extensions
Name | Version | Schema | Description
--------------------+---------+------------+-----------------------------------------------------------
btree_gist | 1.2 | public | support for indexing common datatypes in GiST
hypopg | 1.1.0 | public | Hypothetical indexes for PostgreSQL
pg_qualstats | 1.0.2 | public | An extension collecting statistics about quals
pg_stat_kcache | 2.0.3 | public | Kernel statistics gathering
pg_stat_statements | 1.4 | public | track execution statistics of all SQL statements executed
plpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language
powa | 3.1.1 | public | PostgreSQL Workload Analyser-core
(7 rows)
powa=#

Now let’s create a database named mydb for our tests and create all the extensions inside it.

[postgres@pgservertools ~]$ psql
psql (9.6.5)
Type "help" for help.
postgres=# create database mydb;
CREATE DATABASE
postgres=#
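The CREATE EXTENSION statements for mydb are not shown in the transcript above; they mirror the ones run in the powa database, something like:

postgres=# \c mydb
mydb=# CREATE EXTENSION pg_stat_statements;
mydb=# CREATE EXTENSION btree_gist;
mydb=# CREATE EXTENSION powa;
mydb=# CREATE EXTENSION pg_qualstats;
mydb=# CREATE EXTENSION pg_stat_kcache;
mydb=# CREATE EXTENSION hypopg;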

Let’s again verify the extensions in the mydb database:

mydb=# \dx
List of installed extensions
Name | Version | Schema | Description
--------------------+---------+------------+-----------------------------------------------------------
btree_gist | 1.2 | public | support for indexing common datatypes in GiST
hypopg | 1.1.0 | public | Hypothetical indexes for PostgreSQL
pg_qualstats | 1.0.2 | public | An extension collecting statistics about quals
pg_stat_kcache | 2.0.3 | public | Kernel statistics gathering
pg_stat_statements | 1.4 | public | track execution statistics of all SQL statements executed
plpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language
powa | 3.1.1 | public | PostgreSQL Workload Analyser-core
(7 rows)
mydb=#

In the mydb database we create a table mytab and insert some rows into it.
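The table definition and row count shown below suggest something like the following; this is only a sketch consistent with that output, not necessarily the exact statements used.

mydb=# create table mytab (id integer, val text);
mydb=# insert into mytab select i, 'line ' || i from generate_series(1,100000000) i;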

mydb=# \d mytab
Table "public.mytab"
Column | Type | Modifiers
--------+---------+-----------
id | integer |
val | text |
mydb=# select count(*) from mytab;
count
-----------
100000000
(1 row)

The last step is to configure the powa-web configuration file. Below is our file

[root@pgservertools etc]# pwd
/etc
[root@pgservertools etc]# cat powa-web.conf
servers={
'main': {
'host': 'localhost',
'port': '5432',
'database': 'powa',
'query': {'client_encoding': 'utf8'}
}
}
cookie_secret="secret"
[root@pgservertools etc]#

And then powa-web can be started with the following command:

[root@pgservertools etc]# powa-web &
[1] 5600
[root@pgservertools etc]# [I 171006 13:54:42 powa-web:12] Starting powa-web on http://0.0.0.0:8888

We can now log in at http://localhost:8888/ with the powa user we created.
[Screenshot: powa-web login page]

And then we can choose the mydb database to monitor it.
[Screenshot: selecting the mydb database in powa-web]

Now let’s run some queries. As my load is very low I set my pg_qualstats.sample_rate=1 in the postgresql.conf file (thanks to Julien Rouhaud)

[postgres@pgservertools data]$ grep pg_qualstats.sample_rate postgresql.conf
pg_qualstats.sample_rate = 1


mydb=# select * from mytab where id in (75,25,2014,589);
id | val
------+-----------
25 | line 25
75 | line 75
589 | line 589
2014 | line 2014
(4 rows)

Time: 9472.525 ms
mydb=#

Using the Index suggestions tab, we click on Optimize the database. We can see that an index creation is recommended, along with the potential gain.
[Screenshots: powa-web index suggestion and estimated potential gain]
Note that the hypopg extension is used to see whether PostgreSQL would use the suggested index or not. Let’s see how this extension works. Hypothetical indexes are useful for knowing whether a specific index could increase the performance of a query; they cost nothing to create because they do not physically exist.
Let’s create a virtual index in the mydb database:

mydb=# select * from hypopg_create_index('create index on mytab (id)');
indexrelid | indexname
------------+-----------------------
55799 | btree_mytab_id
(1 row)
mydb=#

We can verify the existence of the virtual index with:

mydb=# SELECT * FROM hypopg_list_indexes();
indexrelid | indexname | nspname | relname | amname
------------+-----------------------+---------+---------+--------
55799 | btree_mytab_id | public | mytab | btree
(1 row)

Using EXPLAIN, we can see that PostgreSQL would use the index:

mydb=# explain select * from mytab where id in (75,25,2014,589);
QUERY PLAN
-------------------------------------------------------------------------------------
Index Scan using btree_mytab_id on mytab (cost=0.07..20.34 rows=4 width=17)
Index Cond: (id = ANY ('{75,25,2014,589}'::integer[]))
(2 rows)

Just note that EXPLAIN ANALYZE will not use the virtual index, because it actually executes the statement.
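So running EXPLAIN ANALYZE on the same statement would fall back to scanning mytab. Once finished, the hypothetical indexes can be removed with the hypopg functions (hypopg_drop_index for a single index, hypopg_reset for all of them); the call below assumes the index OID returned earlier.

mydb=# explain analyze select * from mytab where id in (75,25,2014,589);
mydb=# select * from hypopg_drop_index(55799);
mydb=# select * from hypopg_reset();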
Conclusion
In this article we have seen how POWA can help optimize a PostgreSQL database.

References: https://pgxn.org/dist/hypopg/; http://powa.readthedocs.io/en/latest/


The post PostgreSQL Index Suggestion With Powa appeared first on Blog dbi services.

Extendable Lookups vs Lookups

Anthony Shorten - Thu, 2017-10-19 20:32

The Oracle Utilities Application Framework avoids hardcoding of values for maintenance, multi-lingual and configuration purposes. One of the features that supports this requirement is the Lookup object, which lists the valid values for a field (and associated related values such as the description/override description and the java code name for SDK use). Lookups can be exclusively owned by the product (where you can only change the override description and not add any additional values) or can be customized, where you can add new values. You are also free to use F1-GetLookupDescription to get the description for a lookup value in any query zone, business service, business object (though you can do this on the element definition directly) or script.

There is a maintenance function to maintain Lookups. For example:

Example Lookup

The Lookup object is ideal for simple fields with valid values, but if you need to add additional elements, the Lookup object cannot be extended. For this reason the concept of an Extendable Lookup was introduced. It allows implementations to build complex configurations similar to a lookup and to introduce extended features for their custom configuration settings. To use Extendable Lookup, the following is typically done:

  • You create a Business Object based upon the F1-EXT LKUP Maintenance Object. You can define the structure you want to configure for the lookup. There are numerous examples of this in the base product that you can use to get ideas for what you might need to support. It is highly recommended to use UI Hints on the BO Schema to build your user interface for the lookup.
  • You can refer to the Extendable Lookup using the F1-GetExtLookUpVal common business service that can return up to five attributes from your Extendable Lookup (if you need more you can develop your own call to directly return the values - like calling the BO directly).

Here are some delivered examples of Extendable Lookups:

Example Extendable Lookups

Extendable Lookup is very powerful where you not only want to put valid values in a list but also want to configure additional settings to influence the outcomes of your custom code. It is recommended to use Extendable Lookup instead of Lookup when the requirements for the valid value configuration go beyond what Lookup can record.

For more information on both Lookups and Extendable Lookups, refer to the online documentation for further advice.

Allocation memory for varchar2 index

Tom Kyte - Thu, 2017-10-19 20:06
Hello! I have read much information about allocation memory for varchar2 as pl\sql variable and column of table. So now we have memory problem (assign awr report) with a big report. Can u explain please: 1) type StringTable is table of varchar2(320...
Categories: DBA Blogs

Alternative for SQLPlusW

Tom Kyte - Thu, 2017-10-19 20:06
Hi All, I've been reading a lot on the internet and am searching for a replacement for sqlplusw.exe. The problem is that I've written a lot of scripts and with sqlplusw.exe you could set a path by reading in an sqlfile. The solutions I've seen sofar...
Categories: DBA Blogs
