Feed aggregator

Tablespace sizing for datawarehouse

Tom Kyte - Sun, 2018-10-07 23:26
Hello Team, my question is a bit similar to https://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:228413960506 but I will need further clarification. I am loading data on monthly partitioned tables from flat files using sqlloader ev...
Categories: DBA Blogs

Locally partitioned index rebuild issues

Tom Kyte - Sun, 2018-10-07 23:26
Hi, I have a huge partitioned table with 1 Billion rows, for some reason we dropped the index and were re-creating the index when due to a Support issue we found out that we had duplicate rows. So we created the index in disabled mode and then ...
Categories: DBA Blogs

Insert data in a new table

Tom Kyte - Sun, 2018-10-07 23:26
I have a question table having 1 record. create table q_text_t (q_id number,q_text varchar2(100)); insert into q_text_t values(1,'What is the capital of India?'); I have another answer table having 4 answers to the corresponding question. creat...
Categories: DBA Blogs

Minimum of a Date column

Tom Kyte - Sun, 2018-10-07 05:06
Hello Tom, I am working on a current project wherein the requirement to calculate a certain column in a certain table is as under. The base table is this: create table main_data (from_value varchar2(10), to_value varchar2(10), act...
Categories: DBA Blogs

Compound Trigger and Global Variables

Tom Kyte - Sun, 2018-10-07 05:06
Hi! To avoid mutating exception, I'm using a compound trigger, filling an array and then I intend to loop through this array and do my thing. The problem is that the global variable loses its contents when I enter "after statement" if (and only if)...
Categories: DBA Blogs

Oracle Offline Persistence Toolkit - Submitting Client Changes

Andrejus Baranovski - Sat, 2018-10-06 20:30
One of the key topics related to the Oracle Offline Persistence Toolkit is submitting client changes to the backend when a data conflict exists. If the data was updated on the backend while the client was offline and the client wants to submit his changes, we inform him about the conflict and ask what he really wants to do. If the client chooses to submit his changes anyway, we should push them to the backend with the latest change indicator.

There is a special case: the client updates the same data multiple times while offline. During online sync we need to make sure the change indicator is retrieved in the after-sync listener and applied in the before-sync listener, so that subsequent requests execute correctly. Check my previous post about the before request sync listener - Oracle Offline Persistence Toolkit - Before Request Sync Listener.

Example - let's update a record and submit the change to the backend:


Assume another user is offline and updates the same record:


The user updates the same record again before going online. Now there will be two requests in the sync queue:


Once the client goes online, sync is executed and we get a conflict for the first request (the same row was already updated by another user). At this moment the after-sync listener gets the conflict info and caches the latest change indicator value returned from the backend. If the user decides to apply his changes, the failed request is removed, a new request is constructed with the latest change indicator value received from the backend, and this request is inserted into the sync queue:


If the same record was updated multiple times, the second request will fail too, because it hasn't yet been updated with the latest change indicator:


Assuming the user decides to apply the changes from the second request too, we update the request with the latest change indicator and submit it for sync. In the after-sync listener, the change indicator value stored in the local cache is updated.

Successful sync with change indicator = 296:


The new change indicator value is retrieved in the after-sync listener and applied in the before-sync listener for the second request, which updates the same data row:


Here is the code which allows the user to apply his changes to the backend. We remove the failed request, update it, and create a new request in the sync queue, resuming the sync process:


Download sample code for the described use case from my GitHub repository.

Java: Slow java with server.policy enabled - how to fix this issue

Dietrich Schroff - Sat, 2018-10-06 14:35
If you use the Java security manager to harden your Java processes, you have to add the following JVM options:
-Djava.security.manager
-Djava.security.policy=server.policy
Create a server.policy file (you can use jdkXXX/jre/lib/security/java.policy as a template) and add the following line:
permission java.net.SocketPermission "localhost:*", "listen, accept, connect, resolve";
Now create a small Java program which listens on a port (like this example).

If you send a message with netcat
nc -u localhost 9876
everything is fine.
Now send a message from a remote host. This does not work - as expected.

Try it again with the following network trace running (capturing all DNS packets):
tcpdump -i any port 53
Cool. For each connect, a DNS lookup is done.
This could be a problem for high-performance systems, or for systems which have no running/reachable DNS servers. In the latter case all requests will be sent to localhost:53 and of course localhost will not give any answer. (This is not quite true: there will be an "ICMP port unreachable", but no DNS answer.)
If you now add a line with "*:*" to allow the connection, the server.policy file contains the following lines:
permission java.net.SocketPermission "*:*", "listen, accept, connect, resolve";
permission java.net.SocketPermission "localhost:*", "listen, accept, connect, resolve"; 
Hmmm. The connection is allowed, but there are still DNS requests happening. The problem is that "*:*" is checked after "localhost:*", because Java reads this file from bottom to top - so if you write it this way:
permission java.net.SocketPermission "localhost:*", "listen, accept, connect, resolve";
permission java.net.SocketPermission "*:*", "listen, accept, connect, resolve";
there are no DNS requests happening anymore.

If you still see DNS requests, take a look at this file:
YourJDK/jre/lib/security/java.policy
There are some entries with java.net.SocketPermission like:
permission java.net.SocketPermission "localhost:0", "listen";
Because Java checks this file first, you have to remove such lines to get rid of the DNS requests.

If you do not need DNS, you can remove dns from /etc/nsswitch.conf. But then no domain lookup will succeed on this machine anymore...

Inserting with WITH FUNCTION Select is giving error

Tom Kyte - Sat, 2018-10-06 10:46
Hi Ask Tom Team, Below select query is working fine for me, but am not able to insert the result using insert statement. Please provide me some suggestions. WITH FUNCTION T11 (P_A1 VARCHAR2) RETURN NUMBER IS BEGIN IF P_A1 = ...
Categories: DBA Blogs

Is the Oracle regular expression supporting this character extraction?

Tom Kyte - Sat, 2018-10-06 10:46
Hi Oracle SQL experts, I am using Oracle regular expression to deal with some characters. My db is 12c. I have a string every line like this input like this 1234adhefd#123 345bheufs15# ... output will be from the first alpha lette...
Categories: DBA Blogs

SGA Management

Tom Kyte - Sat, 2018-10-06 10:46
Hi Tom, I would like to know how Oracle internally manages it when an end user tries to extract data which is more than the SGA. For example, if our SGA is 7 GB and the user query is about 20 GB of data, how is it internally managed? As far as I know, ser...
Categories: DBA Blogs

Moving datafile

Tom Kyte - Sat, 2018-10-06 10:46
Hi there in our production database, we have a tablespace called TESTDB and this tablespace has 2 DATAFILEs. The locations for these 2 datafiles are D:\ORADATA\TESTDB\TESTDB01.DBF D:\ORADATA\TESTDB\TESTDB02.DBF Recently I have added a new datafi...
Categories: DBA Blogs

How to move contents data from one datafile to another?

Tom Kyte - Sat, 2018-10-06 10:46
Hi Tom, Thanks for your asktom website. A tablespace consists of several data files. My purpose is to cleanly move the contents of one data file to another one and then drop that empty file from the tablespace. So I can reduce the data file nu...
Categories: DBA Blogs

Lock wait timeout

Tom Kyte - Fri, 2018-10-05 16:26
How to set lock wait timeout in Oracle. We are executing insert/update/delete from java applications. Sometimes due to long running transactions or slowness lock acquired by one transaction on particular row gets hit by another transaction and blocki...
Categories: DBA Blogs

More archives during impdp, what is the exact reason and what will happen internally

Tom Kyte - Fri, 2018-10-05 16:26
Why will more archives be generated during import (impdp) activity? I would like to know what happens internally due to which more archives are being generated. Is it only for import activity or will it generate more redo for export as well? Many...
Categories: DBA Blogs

Query on dba_hist_active_sess_history query taking too long

Tom Kyte - Fri, 2018-10-05 16:26
Hi, I'm using the below query to fetch details from dba_hist_active_sess_history which matches a specific wait event occurring at a specific hour of the day within the last 90 days: select USER_ID, PROGRAM, MACHINE from dba_hist_active_sess_his...
Categories: DBA Blogs

Excessive archive log generation during data load

Tom Kyte - Fri, 2018-10-05 16:26
Hi Tom, I am encountering a situation related to data loading and excessive archive log generation. I am using Oracle 8.1.6 under Solaris 7. I insert twice a week about 1 million rows into a table whose columns are of number datatype. The loa...
Categories: DBA Blogs

Working with REST POST and Other Operations in Visual Builder

Shay Shmeltzer - Fri, 2018-10-05 12:37

One of the strong features of Visual Builder Cloud Service is the ability to consume any REST service very easily. I have a video that shows you how to work with REST services in a completely declarative way, but that video doesn't show you what happens behind the scenes when you work with the quick starts. In addition, that video only shows the GET method, and several threads on our community's discussion forum asked for help working with the other REST operations.

The demo video aims to give you a better insight into working with REST operations, showing how to:

  • Add service endpoints for various REST operations
  • Create a GET form manually for retrieving single records
  • Create a POST form manually
    • Create type for the request and response parameters
    • Create variables based on the types
    • Call the POST operation passing a variable as body
  • Get the returned values from the POST to show in a page or notifications

A couple of notes:

In the video I use the free REST testing platform at https://jsonplaceholder.typicode.com

While I do everything here manually, you should be able to use the quick starts to create a "create" form and map it to the POST operation, as long as you marked the specific entry as a "create" entry like I did in the demo.

If the concepts above, such as types, variables, and action chains, are new to you, I would highly recommend watching this video on the VBCS Architecture and Building Blocks; it will help you better understand what VBCS is all about.
Categories: Development

Canada Alliance 2018

Jim Marion - Fri, 2018-10-05 10:02

Calling all Canadian Higher Education and Government customers! Canada Alliance is next month and boasts a great lineup of speakers and sessions. JSMPROS will host two pre-conference workshops Monday prior to the main conference. Please bring a laptop if you wish to participate. Please note: space is limited.

  • Configure, Don't Customize! PeopleSoft Page and Field Configurator Monday, November 12 from 10:00 AM–12:30 PM in Coast Hotel: Acadia Room
  • Advanced Query Monday, November 12 from 1:30 PM–4:00 PM in Coast Hotel: Acadia Room

For further details, please visit the Canada Alliance 2018 Workshop page. I look forward to seeing you soon!

Walking through the Zürich ZOUG Event – September the 18th

Yann Neuhaus - Fri, 2018-10-05 09:38

What a nice and interesting new experience… my first ZOUG event… a great opportunity to meet some great people and hear some great sessions. I had the chance to attend Markus Michalewicz's sessions. Markus is Senior Director, Database HA and Scalability Product Management at Oracle, and was the special guest of this event.

https://soug.ch/events/soug-day-september-2018/

The introduction session was given by Markus. He presented the different HA solutions as a lead-in to MAA. Oracle Maximum Availability Architecture (MAA) is, from my understanding, more a service delivered by Oracle to help customers find the solution that best fits their constraints, at the lowest cost and complexity.

I was really looking forward to the next session, from Robert Bialek of Trivadis, about Oracle database service high availability with Data Guard. Bialek gave a nice presentation of Data Guard, explaining how it works and providing some good tips on how it should be configured.

The best session was certainly the next one, given by my colleague Clemens Bleile, Oracle Technology Leader at dbi. What great experience he shared from his past years as one of the managers of the Oracle Support Performance team EMEA. Clemens talked about SQLTXPLAIN, the performance troubleshooting tool: its history and its future. He also presented the SQLT tool.

The last session I followed was chaired by Markus. The subject was the autonomous database, and all the automatic features that came along with the recent Oracle releases. Will this enable databases to manage themselves? The future will let us know. :-)

Thanks to dbi management for giving me the opportunity to join this ZOUG event!


The article Walking through the Zürich ZOUG Event – September the 18th appeared first on Blog dbi services.

Join Cardinality – 2

Jonathan Lewis - Fri, 2018-10-05 09:37

In the previous note I posted about Join Cardinality I described a method for calculating the figure that the optimizer would give for the special case where you had a query that:

  • joined two tables
  • used a single column to join on equality
  • had no nulls in the join columns
  • had a perfect frequency histogram on the columns at the two ends of the join
  • had no filter predicates associated with either table

The method simply said: “Match up rows from the two frequency histograms, multiply the corresponding frequencies” and I supplied a simple SQL statement that would read and report the two sets of histogram data, doing the arithmetic and reporting the final cardinality for you. In an update I also added an adjustment needed in 11g (or, you might say, removed in 12c) where gaps in the histograms were replaced by “ghost rows” with a frequency that was half the lowest frequency in the histogram.
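
For anyone who wants to experiment, here's a minimal sketch of that style of query - my own reconstruction, not the original script. It assumes two tables t1 and t2 joined on column n1, each with a simple frequency histogram, and derives each bucket's frequency from the cumulative endpoint_number before summing the matched products:

rem
rem     Sketch only: frequency histograms store cumulative counts in
rem     endpoint_number, so a bucket's frequency is the difference
rem     from the previous endpoint.
rem

with f1 as (
        select
                endpoint_value          val,
                endpoint_number
                  - lag(endpoint_number,1,0)
                      over (order by endpoint_number)   frequency
        from    user_tab_histograms
        where   table_name  = 'T1'
        and     column_name = 'N1'
),
f2 as (
        select
                endpoint_value          val,
                endpoint_number
                  - lag(endpoint_number,1,0)
                      over (order by endpoint_number)   frequency
        from    user_tab_histograms
        where   table_name  = 'T2'
        and     column_name = 'N1'
)
select
        sum(f1.frequency * f2.frequency)        join_cardinality
from
        f1, f2
where
        f2.val = f1.val
;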

This is a nice place to start as the idea is very simple, and it’s likely that extensions of the basic idea will be used in all the other cases we have to consider. There are 25 possibilities that could need separate testing – though only 16 of them ought to be relevant from 12c onwards. Oracle allows for four kinds of histograms – in order of how precisely they describe the data they are:

  • Frequency – with a perfect description of the data
  • Top-N (a.k.a. Top-Frequency) – which describes all but a tiny fraction (ca. one bucket’s worth) of data perfectly
  • Hybrid – which can (but doesn’t usually, by default) describe up to 2,048 popular values perfectly and gives an approximate distribution for the rest
  • Height-balanced – which can (but doesn’t usually, by default) describe at most 1,024 popular values with some scope for misinformation.

Finally, of course, we have the general case of no histogram, using only 4 numbers (low value, high value, number of rows, number of distinct values) to give a rough picture of the data – and the need for histograms appears, of course, when the data doesn’t look anything like an even distribution of values between the low and high with close to “number of rows”/”number of distinct values” for each value.

So there are 5 possible statistical descriptions for the data in a column – which means there are 5 * 5 = 25 possible options to consider when we join two columns, or 4 * 4 = 16 if we label height-balanced histograms as obsolete and ignore them (which would be a pity because Chinar has done some very nice work explaining them).

Of course, once we’ve worked out a single-column equijoin between two tables there are plenty more options to consider:  multi-column joins, joins involving range-based predicates, joins involving more than 2 tables, and queries which (as so often happens) have predicates which aren’t involved in the joins.

For the moment I'm going to stick to the simplest case – two tables, one column, equality – and comment on the effects of filter predicates. It seems to be very straightforward, as I'll demonstrate with a new model:

rem
rem     Script:         freq_hist_join_03.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Oct 2018
rem

execute dbms_random.seed(0)

create table t1(
        id      number(8,0),
        n0040   number(4,0),
        n0090   number(4,0),
        n0190   number(4,0),
        n0990   number(4,0),
        n1      number(4,0)
)
;

create table t2(
        id      number(8,0),
        n0050   number(4,0),
        n0110   number(4,0),
        n0230   number(4,0),
        n1150   number(4,0),
        n1      number(4,0)
)
;

insert into t1
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                                  id,
        mod(rownum,   40) + 1                   n0040,
        mod(rownum,   90) + 1                   n0090,
        mod(rownum,  190) + 1                   n0190,
        mod(rownum,  990) + 1                   n0990,
        trunc(30 * abs(dbms_random.normal))     n1
from
        generator       v1,
        generator       v2
where
        rownum <= 1e5 -- > comment to avoid WordPress format issue
;

insert into t2
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                                  id,
        mod(rownum,   50) + 1                   n0050,
        mod(rownum,  110) + 1                   n0110,
        mod(rownum,  230) + 1                   n0230,
        mod(rownum, 1150) + 1                   n1150,
        trunc(30 * abs(dbms_random.normal))     n1
from
        generator       v1,
        generator       v2
where
        rownum <= 1e5 -- > comment to avoid WordPress format issue
;

begin
        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'T1',
                method_opt  => 'for all columns size 1 for columns n1 size 254'
        );
        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'T2',
                method_opt  => 'for all columns size 1 for columns n1 size 254'
        );
end;
/

You’ll notice that in this script I’ve created empty tables and then populated them. This is because of an anomaly that appeared in 18.3 when I used “create as select”, and this approach should allow the results from 18.3 to be an exact match for 12c. You don’t need to pay much attention to the Nxxx columns; they were there so I could experiment with a few variations in the selectivity of filter predicates.

Given the purpose of the demonstration I’ve gathered histograms on the column I’m going to use to join the tables (called n1 in this case), and here are the summary results:


TABLE_NAME           COLUMN_NAME          HISTOGRAM       NUM_DISTINCT NUM_BUCKETS
-------------------- -------------------- --------------- ------------ -----------
T1                   N1                   FREQUENCY                119         119
T2                   N1                   FREQUENCY                124         124

     VALUE    T1_FREQ    T2_FREQ      PRODUCT
---------- ---------- ---------- ------------
         0       2488       2619    6,516,072
         1       2693       2599    6,999,107
         2       2635       2685    7,074,975
         3       2636       2654    6,995,944
...
       113          1          3            3
       115          1          2            2
       116          4          3           12
       117          1          1            1
       120          1          2            2
                                 ------------
sum                               188,114,543

We’ve got frequency histograms, and we can see that they don’t have a perfect overlap. I haven’t printed every single line from the cardinality query, just enough to show you the extreme skew, a few gaps, and the total. So here are three queries with execution plans:


set serveroutput off

alter session set statistics_level = all;
alter session set events '10053 trace name context forever';

select
        count(*)
from
        t1, t2
where
        t1.n1 = t2.n1
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

select
        count(*)
from
        t1, t2
where
        t1.n1 = t2.n1
and     t1.n0990 = 20
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));


select
        count(*)
from
        t1, t2
where
        t1.n1 = t2.n1
and     t1.n0990 = 20
and     t2.n1150 = 25
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

I’ve queried the pure join – the count was exactly the 188,114,543 predicted by the cardinality query, of course – then I’ve applied a filter to one table, then to both tables. The first filter n0990 = 20 will (given the mod(,990) definition) identify one row in 990 from the original 100,000 in t1; the second filter n1150 = 25 will identify one row in 1,150 from t2. That’s filtering down to 101 rows and 87 rows respectively from the two tables. So what do we see in the plans?


-----------------------------------------------------------------------------------------------------------------
| Id  | Operation           | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |      1 |        |      1 |00:00:23.47 |     748 |       |       |          |
|   1 |  SORT AGGREGATE     |      |      1 |      1 |      1 |00:00:23.47 |     748 |       |       |          |
|*  2 |   HASH JOIN         |      |      1 |    188M|    188M|00:00:23.36 |     748 |  6556K|  3619K| 8839K (0)|
|   3 |    TABLE ACCESS FULL| T1   |      1 |    100K|    100K|00:00:00.01 |     374 |       |       |          |
|   4 |    TABLE ACCESS FULL| T2   |      1 |    100K|    100K|00:00:00.01 |     374 |       |       |          |
-----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T1"."N1"="T2"."N1")



-----------------------------------------------------------------------------------------------------------------
| Id  | Operation           | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |      1 |        |      1 |00:00:00.02 |     748 |       |       |          |
|   1 |  SORT AGGREGATE     |      |      1 |      1 |      1 |00:00:00.02 |     748 |       |       |          |
|*  2 |   HASH JOIN         |      |      1 |    190K|    200K|00:00:00.02 |     748 |  2715K|  2715K| 1647K (0)|
|*  3 |    TABLE ACCESS FULL| T1   |      1 |    101 |    101 |00:00:00.01 |     374 |       |       |          |
|   4 |    TABLE ACCESS FULL| T2   |      1 |    100K|    100K|00:00:00.01 |     374 |       |       |          |
-----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T1"."N1"="T2"."N1")
   3 - filter("T1"."N0990"=20)



-----------------------------------------------------------------------------------------------------------------
| Id  | Operation           | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |      1 |        |      1 |00:00:00.01 |     748 |       |       |          |
|   1 |  SORT AGGREGATE     |      |      1 |      1 |      1 |00:00:00.01 |     748 |       |       |          |
|*  2 |   HASH JOIN         |      |      1 |    165 |    165 |00:00:00.01 |     748 |  2715K|  2715K| 1678K (0)|
|*  3 |    TABLE ACCESS FULL| T2   |      1 |     87 |     87 |00:00:00.01 |     374 |       |       |          |
|*  4 |    TABLE ACCESS FULL| T1   |      1 |    101 |    101 |00:00:00.01 |     374 |       |       |          |
-----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T1"."N1"="T2"."N1")
   3 - filter("T2"."N1150"=25)
   4 - filter("T1"."N0990"=20)


The first execution plan shows an estimate of 188M rows – but we’ll have to check the trace file to confirm whether that’s only an approximate match to our calculation, or whether it’s an exact match. So here’s the relevant pair of lines:


Join Card:  188114543.000000 = outer (100000.000000) * inner (100000.000000) * sel (0.018811)
Join Card - Rounded: 188114543 Computed: 188114543.000000

Yes, the cardinality calculation and the execution plan estimates match perfectly. But there are a couple of interesting things to note. First, Oracle seems to be deriving the cardinality by multiplying the individual cardinalities of the two tables with a figure it calls “sel” – the thing that Chinar Aliyev has labelled Jsel, the “Join Selectivity”. Secondly, Oracle can’t do arithmetic (or, removing tongue from cheek, the value it reports for the join selectivity is given to only 6 decimal places but stored to far more). What is the Join Selectivity, though? It’s the figure we derived from the cardinality SQL divided by the cardinality of the cartesian join of the two tables – i.e. 188,114,543 / (100,000 * 100,000).
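
A quick sanity check of that division, using the figures reported above:

select
        188114543 / (100000 * 100000)   jsel
from
        dual
;

-- 0.0188114543  (the trace file reports sel to only 6 decimal places: 0.018811)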

With the clue from the first trace file, can we work out why the second and third plans show 190K and 165 rows respectively? How about this – multiply the filtered cardinalities of the two separate tables, then multiply the result by the join selectivity:

  • 1a)   n0990 = 20 gives us 1 row in every 990:    100,000 / 990 = 101.010101…    (echoing the rounded execution plan estimate).
  • 1b)   (100,000 / 990) * 100,000 * 0.0188114543 = 190,014.69898989…    (which is in the ballpark of the plan and needs confirmation from the trace file).

  • 2a)   n1150 = 25 gives us 1 row in every 1,150:    100,000 / 1,150 = 86.9565217…    (echoing the rounded execution plan estimate).
  • 2b)   (100,000 / 990) * (100,000 / 1,150) * 0.0188114543 = 165.2301651…    (echoing the rounded execution plan estimate).
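
As a quick cross-check of the arithmetic above (a sketch using the Jsel value we derived; round() is there only to keep the output tidy):

select
        round( 100000        * 100000          * 0.0188114543, 6)      pure_join,
        round((100000/990)   * 100000          * 0.0188114543, 6)      filter_t1,
        round((100000/990)   * (100000/1150)   * 0.0188114543, 6)      filter_both
from
        dual
;

-- expected: 188114543, 190014.689899 and 165.230165 respectively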

Cross-checking against extracts from the 10053 trace files:


Join Card:  190014.689899 = outer (101.010101) * inner (100000.000000) * sel (0.018811)
Join Card - Rounded: 190015 Computed: 190014.689899

Join Card:  165.230165 = outer (86.956522) * inner (101.010101) * sel (0.018811)
Join Card - Rounded: 165 Computed: 165.230165

Conclusion.

Remembering that we’re still looking at very simple examples with perfect frequency histograms: it looks as if we can work out a “Join Selectivity” (Jsel) – the selectivity of a “pure” unfiltered join of the two tables – by querying the histogram data, and then calculate cardinalities for simple two-table equi-joins by multiplying together the individual (filtered) table cardinality estimates and scaling by the Join Selectivity.

Acknowledgements

Most of this work is based on a document written by Chinar Aliyev in 2016 and presented at the Hotsos Symposium the same year. I am most grateful to him for responding to a recent post of mine and getting me interested in spending some time to get re-acquainted with the topic. His original document is a 35 page pdf file, so there’s plenty more material to work through, experiment with, and write about.
