RE: Fastest way to count exact number of rows in a very large table

From: <ahmed.fikri_at_t-online.de>
Date: Mon, 5 Oct 2020 18:20:45 +0200 (CEST)
Message-ID: <1601914845273.482324.7b01ac83bbaa86f9b3b9a250b18787bb53645291_at_spica.telekom.de>



Hi Liz,  

we have that behind us too, using XTTS (from 11.2 to 12.1, 16 TB in about four days). I think the critical point was the export of the metadata (we have a huge number of (sub)partitions), and Data Pump is buggy in 11.2 (we installed several patches).
Could you please share how many objects your DB contains, and how long the metadata export took?

Best regards
Ahmed      

-----Original Message-----
Subject: RE: Fastest way to count exact number of rows in a very large table
Date: 2020-10-05T17:41:57+0200
From: "Reen, Elizabeth" <dmarc-noreply_at_freelists.org>
To: "andysayer_at_gmail.com" <andysayer_at_gmail.com>, "ahmed.fikri_at_t-online.de" <ahmed.fikri_at_t-online.de>

              We just completed such a transition. We kept the Oracle version the same so we could see the impact of Linux. Transportable tablespaces was how we did it. We were able to move a 17 terabyte database in under 10 hours.    

Liz    

From: [External] oracle-l-bounce_at_freelists.org <oracle-l-bounce_at_freelists.org> On Behalf Of [External] Andy Sayer
Sent: Friday, October 2, 2020 3:09 PM
To: ahmed.fikri_at_t-online.de
Cc: list, oracle; ramukam1983_at_gmail.com
Subject: Re: Fastest way to count exact number of rows in a very large table

Just because a table has the same number of rows, it doesn’t mean it has the same data. With 108 billion rows, your data is going to be changing quickly; to get accurate counts at the right point in time, you’re going to end up keeping your application offline for a window before and after your migration.

What you need to do is determine where you expect data to go missing and work out a way to check.  

This will depend on how you’re doing your migration. I would suggest you use Cross-Platform Transportable Tablespaces (Doc ID 371556.1), as that would allow you to do a physical import and just convert the files to the right endianness. This starts by making sure all data has been written to your data files (so they can be made read-only on the source system). As you’re working with the physical data files rather than the logical data (rows in tables), the only way you’re going to lose rows is by corrupting your files. You can check for corruption using RMAN once you’ve imported the converted files. No need to count all your rows, and no need to hope that that’s all you need to compare.
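As a rough sketch of that check (TS_DATA is a placeholder tablespace name, not one from this thread): after plugging in the converted files on the target, RMAN can validate the blocks, and any findings land in V$DATABASE_BLOCK_CORRUPTION:

 RMAN> VALIDATE TABLESPACE ts_data;

 SQL> select * from v$database_block_corruption;

No rows from the second query means RMAN found no corrupt blocks in the validated files.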

Hope that helps,

Andy      

On Fri, 2 Oct 2020 at 19:38, ahmed.fikri_at_t-online.de <ahmed.fikri_at_t-online.de> wrote:

 Hi Ashoke,         

 could you send the execution plan of the query too? I think there is no general approach for that; it depends on several factors: whether the table has indexes (normal/bitmap) and, if it does, the size of the table compared to the size of the existing index. But generally parallel processing should help.
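 A minimal sketch of the parallel approach, against the test_data table from the original message (the degree of 16 is only an illustrative value to tune against your I/O capacity):

  SQL> select /*+ full(t) parallel(t, 16) */ count(*) from test_data t;

 If a non-null column has a much smaller index, a parallel fast full scan of that index may be cheaper still than scanning the table.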

 Best regards   

 Ahmed                     

 -----Original Message-----

 Subject: Fastest way to count exact number of rows in a very large table

 Date: 2020-10-02T19:45:19+0200

 From: "Ashoke Mandal" <ramukam1983_at_gmail.com>

 To: "ORACLE-L" <oracle-l_at_freelists.org>

 Dear All,

 I have a table with 108 billion rows and am migrating this database from Oracle 11g on Solaris to Oracle 12c on Linux.

 After the migration I need to compare the row count of this table in both the source DB and the destination DB. It takes over two hours to get the row count from this table.

 SQL> select to_char(count(*), '999,999,999,999') from test_data;

 TO_CHAR(COUNT(*)
 ----------------
  108,424,262,144

 Elapsed: 02:22:46.18

 Could you please suggest some tips to get the row count faster, so that it reduces the cut-over downtime.

 Thanks,   

 Ashoke               



--
http://www.freelists.org/webpage/oracle-l
Received on Mon Oct 05 2020 - 18:20:45 CEST