Feed aggregator

Search for string containing letters

Tom Kyte - Mon, 2016-11-28 14:06
Hi, I'm trying to create a SQL statement with a condition in the WHERE clause that will deliver all results that start with 3 letters. The column type is varchar, and all entries either look like 'ABC123' or 'A12345'. I'm not looking to retu...
Categories: DBA Blogs

Query runs really fast on 1st attempt, but then slows down considerably in subsequent runs

Tom Kyte - Mon, 2016-11-28 14:06
Hi Tom and Team, I have a SELECT query that runs fine if submitted the 1st time (takes about 3 seconds), but if I submit it right after again, it could take anywhere from 22 to 37 seconds (on a pretty consistent basis). If I wait a few hours (or ...
Categories: DBA Blogs

Oracle 12c – Finding the DBID – The last resort

Yann Neuhaus - Mon, 2016-11-28 07:46

The DBID is a very important identifier for Oracle databases. It is an internal, uniquely generated number that differentiates databases. Oracle creates this number automatically as soon as you create the database.

During normal operation, it is quite easy to find your DBID. Whenever you start your RMAN session, it displays the DBID.

oracle@oel001:/home/oracle/ [OCM121] rman target /

Recovery Manager: Release 12.1.0.2.0 - Production on Mon Nov 28 10:32:47 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: OCM121 (DBID=3827054096)

RMAN>

Or you can simply select it from the v$database view.

SQL> select DBID from v$database;

DBID
----------
3827054096

But what happens if you run into a restore/recovery scenario where you have lost your database? In the NOMOUNT state, it is not possible to retrieve the DBID from v$database.

SQL> select DBID from v$database;
select DBID from v$database
                 *
ERROR at line 1:
ORA-01507: database not mounted

You can take a look into the alert.log or any other trace file in your DIAG destination, but you will not find a DBID there.

So, if the only things you have left are your RMAN catalog and your datafile copies plus archivelogs in the FRA, then you need to know the DBID before you can restore/recover your database correctly.

There are three possibilities to get your DBID:

  • You could check your RMAN backup log files, if you have set up logging for your backup jobs
  • You could connect to your RMAN catalog and query the “DB” table of the catalog owner. Be careful: there might be more than one entry for your DB name, and then it might become difficult to pick the correct one. In my example, there is only one entry:
    SQL> select * from db;
    
        DB_KEY      DB_ID REG_DB_UNIQUE_NAME             CURR_DBINC_KEY S
    ---------- ---------- ------------------------------ -------------- -
             1 3827054096 OCM121                                      2 N
  • And as the last resort, you can start the instance in NOMOUNT state (either with a backup pfile or with the RMAN dummy instance), and afterwards you can dump out the header of your datafile copies in the FRA

Dumping out the first block is usually enough, and besides that, you are not limited to the SYSTEM datafile. You can use any of your datafile copies in the FRA (like SYSAUX, USERS and so on) to dump out the block, as shown in the following examples:

-- Dump the first block from the SYSTEM datafile
SQL> alter session set tracefile_identifier = dbid_system;
Session altered.

SQL> alter system dump datafile '+fra/OCM121/datafile/SYSTEM.457.926419155' block min 1 block max 1;
System altered.

oracle@oel001:/u00/app/oracle/diag/rdbms/ocm121/OCM121/trace/ [OCM121] cat OCM121_ora_6459_DBID_SYSTEM.trc | grep "Db ID"
        Db ID=3827054096=0xe41c3610, Db Name='OCM121'

-- Dump the first block from the SYSAUX datafile		
SQL> alter session set tracefile_identifier = dbid_sysaux;
Session altered.

SQL> alter system dump datafile '+fra/OCM121/datafile/SYSAUX.354.926417851' block min 1 block max 1;
System altered.

oracle@oel001:/u00/app/oracle/diag/rdbms/ocm121/OCM121/trace/ [OCM121] cat OCM121_ora_7035_DBID_SYSAUX.trc | grep "Db ID"
        Db ID=3827054096=0xe41c3610, Db Name='OCM121'

-- Dump the first block from the USERS datafile
SQL> alter session set tracefile_identifier = dbid_users;
Session altered.

SQL> alter system dump datafile '+fra/OCM121/datafile/USERS.533.926419511' block min 1 block max 1;
System altered.

oracle@oel001:/u00/app/oracle/diag/rdbms/ocm121/OCM121/trace/ [OCM121] cat OCM121_ora_7064_DBID_USERS.trc | grep "Db ID"
        Db ID=3827054096=0xe41c3610, Db Name='OCM121'

As soon as you have your DBID, the rest is straightforward. Connect to your target and RMAN catalog, set the DBID, and then run your restore and recovery scripts.

rman target sys/manager catalog rman/rman@rman
set dbid=3827054096
run {
restore spfile from autobackup;
}

run {
restore controlfile ....
}

run {
restore database ....
recover database ....
}
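For completeness, here is a minimal sketch of how such a restore could be scripted end to end. The DBID and connection strings are taken from the example above; everything else (the order of operations, the autobackup usage) is a hypothetical illustration, not the exact commands from the original post, so adapt it to your environment.

#!/bin/bash
# Hypothetical end-to-end restore sketch -- adapt DBID, credentials and options.
export ORACLE_SID=OCM121

rman target sys/manager catalog rman/rman@rman <<'EOF'
set dbid=3827054096;
startup force nomount;
run { restore spfile from autobackup; }
# restart with the restored spfile
startup force nomount;
run { restore controlfile from autobackup; }
alter database mount;
run {
  restore database;
  # add an UNTIL clause here if you need a specific point in time
  recover database;
}
alter database open resetlogs;
EOF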
Conclusion

Don’t forget to save your DBID with your RMAN backup jobs somewhere. Recovering a database at 3 o’clock in the morning with a missing DBID might become a nightmare.
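One simple way to do that is to write the DBID into the log of every RMAN backup job. The sketch below is only an illustration: the log location is a made-up example, and the query itself is plain SQL against v$database.

#!/bin/bash
# Hypothetical sketch: record the DBID in the backup log before the RMAN run.
export ORACLE_SID=OCM121
LOG=/u00/app/oracle/admin/${ORACLE_SID}/log/backup_$(date +%Y%m%d_%H%M%S).log

# Record the DBID and the database name at the top of the log file.
sqlplus -s "/ as sysdba" >> "${LOG}" <<'EOF'
set heading off feedback off
select 'DBID of '||name||' = '||to_char(dbid) from v$database;
EOF

# The RMAN output appended below also repeats the DBID right after
# "connected to target database".
rman target / log=${LOG} append <<'EOF'
backup database plus archivelog;
EOF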
Cheers,
William

 

 

The article Oracle 12c – Finding the DBID – The last resort appeared first on Blog dbi services.

Red Samurai - Oracle Cloud Customer (DBaaS, JCS, DevCS)

Andrejus Baranovski - Mon, 2016-11-28 05:55
We understand the importance of Cloud services and have decided to move our internal development infrastructure (ADF and JET) into Oracle Cloud. As of today we are Oracle Cloud customers and users of the following services:

1. Oracle Database Cloud Service

2. Oracle Java Cloud Service

3. Oracle Developer Cloud Service

Exciting times ahead. Expect more interesting topics about Oracle Cloud and ADF/JET.

11 Tips to Improve Focus and Concentration at Work

Complete IT Professional - Mon, 2016-11-28 05:00
Do you often find yourself not focusing at work? Or, you try to focus, but something else comes along and distracts you? I know how this feels. I’ve listed eleven ways to remove distractions and increase focus in this article. I’ve been able to improve my focus by removing some of the distractions I have […]
Categories: Development

A Guide to the Oracle DROP TABLE Statement to Delete Tables in SQL

Complete IT Professional - Mon, 2016-11-28 05:00
How can you delete a table in SQL? Learn how to do this by using the DROP TABLE statement in this article. How Can I Delete a Table in Oracle SQL? To delete a table in Oracle SQL (or any SQL for that matter), you run a statement called DROP TABLE. It’s called DROP because […]
Categories: Development

Documentum story – Download failed with ‘Exceeded stated content-length of 63000 bytes’

Yann Neuhaus - Mon, 2016-11-28 02:00

At one of our customers, we were in the middle of migrating some docbases from 6.7 to 7.2. A few days after the migration, we started seeing some failures/errors during simple downloads of documents from D2 4.5. The migration had been done by some EMC colleagues using the EMC EMA migration tool. The strange thing here is that these download failures only applied to a few documents, far from the majority, and only when opening the document using “View Native Content”. In addition to that, it appeared that the issue only affected migrated documents; it wasn’t happening for new ones.

 

This is an example of the error message we were able to see in the D2 4.5 log files:

2016-07-04 12:00:20 [ERROR] [[ACTIVE] ExecuteThread: '326' for queue: 'weblogic.kernel.Default (self-tuning)'] - c.e.d.d.s.D2HttpServlet[                    ] : Download failed
java.net.ProtocolException: Exceeded stated content-length of: '63000' bytes
        at weblogic.servlet.internal.ServletOutputStreamImpl.checkCL(ServletOutputStreamImpl.java:217) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.servlet.internal.ServletOutputStreamImpl.write(ServletOutputStreamImpl.java:162) [weblogic.server.merged.jar:12.1.3.0.0]
        at com.emc.d2fs.dctm.servlets.ServletUtil.download(ServletUtil.java:375) [D2FS4DCTM-API-4.5.0.jar:na]
        at com.emc.d2fs.dctm.servlets.ServletUtil.download(ServletUtil.java:280) [D2FS4DCTM-API-4.5.0.jar:na]
        at com.emc.d2fs.dctm.servlets.download.Download.processRequest(Download.java:132) [D2FS4DCTM-WEB-4.5.0.jar:na]
        at com.emc.d2fs.dctm.servlets.D2HttpServlet.execute(D2HttpServlet.java:242) [D2FS4DCTM-API-4.5.0.jar:na]
        at com.emc.d2fs.dctm.servlets.D2HttpServlet.doGetAndPost(D2HttpServlet.java:498) [D2FS4DCTM-API-4.5.0.jar:na]
        at com.emc.d2fs.dctm.servlets.D2HttpServlet.doGet(D2HttpServlet.java:115) [D2FS4DCTM-API-4.5.0.jar:na]
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:731) [weblogic.server.merged.jar:12.1.3.0.0]
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:844) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:280) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:254) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:136) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:346) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:25) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:79) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:79) [weblogic.server.merged.jar:12.1.3.0.0]
        at com.emc.x3.portal.server.filters.HttpHeaderFilter.doFilter(HttpHeaderFilter.java:77) [_wl_cls_gen.jar:na]
        at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:79) [weblogic.server.merged.jar:12.1.3.0.0]
        at com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:66) [guice-servlet-3.0.jar:na]
        at org.apache.shiro.web.servlet.ProxiedFilterChain.doFilter(ProxiedFilterChain.java:61) [shiro-web-1.1.0.jar:na]
        at org.apache.shiro.web.servlet.AdviceFilter.executeChain(AdviceFilter.java:108) [shiro-web-1.1.0.jar:na]
        at com.company.d2.auth.NonSSOAuthenticationFilter.executeChain(NonSSOAuthenticationFilter.java:21) [_wl_cls_gen.jar:na]
        at org.apache.shiro.web.servlet.AdviceFilter.doFilterInternal(AdviceFilter.java:137) [shiro-web-1.1.0.jar:na]
        at org.apache.shiro.web.servlet.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:81) [shiro-web-1.1.0.jar:na]
        at org.apache.shiro.web.servlet.ProxiedFilterChain.doFilter(ProxiedFilterChain.java:66) [shiro-web-1.1.0.jar:na]
        at org.apache.shiro.web.servlet.AbstractShiroFilter.executeChain(AbstractShiroFilter.java:359) [shiro-web-1.1.0.jar:na]
        at org.apache.shiro.web.servlet.AbstractShiroFilter$1.call(AbstractShiroFilter.java:275) [shiro-web-1.1.0.jar:na]
        at org.apache.shiro.subject.support.SubjectCallable.doCall(SubjectCallable.java:90) [shiro-core-1.1.0.jar:1.1.0]
        at org.apache.shiro.subject.support.SubjectCallable.call(SubjectCallable.java:83) [shiro-core-1.1.0.jar:1.1.0]
        at org.apache.shiro.subject.support.DelegatingSubject.execute(DelegatingSubject.java:344) [shiro-core-1.1.0.jar:1.1.0]
        at org.apache.shiro.web.servlet.AbstractShiroFilter.doFilterInternal(AbstractShiroFilter.java:272) [shiro-web-1.1.0.jar:na]
        at org.apache.shiro.web.servlet.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:81) [shiro-web-1.1.0.jar:na]
        at com.google.inject.servlet.FilterDefinition.doFilter(FilterDefinition.java:163) [guice-servlet-3.0.jar:na]
        at com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:58) [guice-servlet-3.0.jar:na]
        at com.google.inject.servlet.FilterDefinition.doFilter(FilterDefinition.java:168) [guice-servlet-3.0.jar:na]
        at com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:58) [guice-servlet-3.0.jar:na]
        at com.planetj.servlet.filter.compression.CompressingFilter.doFilter(CompressingFilter.java:270) [pjl-comp-filter-1.7.jar:na]
        at com.google.inject.servlet.FilterDefinition.doFilter(FilterDefinition.java:163) [guice-servlet-3.0.jar:na]
        at com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:58) [guice-servlet-3.0.jar:na]
        at com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:118) [guice-servlet-3.0.jar:na]
        at com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:113) [guice-servlet-3.0.jar:na]
        at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:79) [weblogic.server.merged.jar:12.1.3.0.0]
        at com.emc.x3.portal.server.filters.X3SessionTimeoutFilter.doFilter(X3SessionTimeoutFilter.java:34) [_wl_cls_gen.jar:na]
        at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:79) [weblogic.server.merged.jar:12.1.3.0.0]
        at com.planetj.servlet.filter.compression.CompressingFilter.doFilter(CompressingFilter.java:270) [pjl-comp-filter-1.7.jar:na]
        at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:79) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.wrapRun(WebAppServletContext.java:3436) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3402) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:120) [com.oracle.css.weblogic.security.wls_7.1.0.0.jar:CSS 7.1 0.0]
        at weblogic.servlet.provider.WlsSubjectHandle.run(WlsSubjectHandle.java:57) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.servlet.internal.WebAppServletContext.doSecuredExecute(WebAppServletContext.java:2285) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2201) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2179) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1572) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.servlet.provider.ContainerSupportProviderImpl$WlsRequestExecutor.run(ContainerSupportProviderImpl.java:255) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.work.ExecuteThread.execute(ExecuteThread.java:311) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.work.ExecuteThread.run(ExecuteThread.java:263) [weblogic.server.merged.jar:12.1.3.0.0]

 

So the error in this case is “Exceeded stated content-length of: ‘63000’ bytes”. Hmm, what does that mean? Well, it is not really clear (who said not clear at all?)… So we checked several documents for which the download failed (using iAPI dumps) and the only common points we were able to find for these documents were the following:

  • They all had an r_full_content_size of 0
  • They all had an r_content_size bigger than 63,000

 

The issue only appeared for objects which were assigned an r_full_content_size of 0 during the migration. We tried to set this property to 0 on a document for which the download was working, in order to try to reproduce the issue, but that didn’t change anything: the download was still working properly.

 

So here is some background regarding this attribute. The expected behavior is that it contains a real value (obviously). If the file in question is smaller than 2 GB, then r_full_content_size will have the same value as r_content_size, which is of course the size of the file in bytes. If the file is bigger than 2 GB, then the r_content_size field is too small to hold the real size in bytes, and therefore the real size is only available in the r_full_content_size field. Now, r_full_content_size is the attribute read by D2 when using the “View Native Content” action, while other actions like “View” or “Edit” behave like older WDK clients and read the r_content_size attribute instead. So yes, there is a difference in behavior between the actions that perform a download, and that’s the reason why we only had this issue with the “View Native Content” action!

 

Unfortunately, as you probably already understood from the previous paragraph, r_content_size and r_full_content_size aren’t of the same type (Integer vs. Double), and therefore you can’t simply execute one single DQL statement that sets r_full_content_size equal to r_content_size: you would get a DM_QUERY_E_UP_BAD_ATTR_TYPES error. So you will have to do things a little bit more slowly.

 

The first thing to do is obviously to gather a list of all documents that need to be updated, with their r_object_id and the value of their r_content_size (the r_full_content_size doesn’t really matter since you only gather affected documents, so this value is always 0):

> SELECT r_object_id, r_content_size FROM dm_document WHERE r_full_content_size='0' AND r_content_size>'63000';
r_object_id         r_content_size
------------------- -------------------
090f446780034513    289326
090f446780034534    225602
090f446780034536    212700
090f446780034540    336916
090f446780034559    269019
090f446780034572    196252
090f446780034574    261094
090f44678003459a    232887
...                 ...

 

Then a first solution would be to manually go through the list and execute one DQL query for each document, setting r_full_content_size to whatever the r_content_size of that document is. For the first document listed above, that would be the following DQL:

> UPDATE dm_document objects SET r_full_content_size='289326' WHERE r_object_id='090f446780034513';

 

Note 1: In case you want, at some point, to restore a previous version of a document for example, then you will most probably need to use “FROM dm_document(ALL)” instead of “FROM dm_document”. But be aware that non-current versions are immutable and can therefore not be updated using a simple DQL. You will need to remove the immutable flag, update the object and then restore the flag, so that’s a little bit more tricky ;)

Note 2: In case you have a few documents bigger than 2 GB, the r_content_size will not reflect the real value and therefore setting r_full_content_size to that value isn’t correct… I wasn’t able to test that since our customer didn’t have any document bigger than 2 GB, but you should most probably be able to use instead the full_content_size that is stored on the dmr_content object of the document… Just like dm_document, the dmr_content object has two fields that you should be able to use to find the correct size: content_size (which should reflect r_content_size) and full_content_size (which should reflect r_full_content_size). If that doesn’t help, then a last solution would be to export and re-import all documents bigger than 2 GB…

 

Ok, so updating all objects one by one is possible, but it is really boring, so a second solution – and probably a better one – is to use a script to prepare the list of DQL queries to be executed. Once you have the r_object_id and r_content_size of all affected documents, you can put them in a CSV file (copy/paste into Excel and save as CSV, for example) and write a small script (bash, for example) that generates one DQL query per document; that’s really simple, and if you have thousands of affected documents it will only take you a few minutes to write the script and a second or two to generate thousands of DQL queries (see the sketch below). Then you can put all these commands in a single file that can be executed against the docbase on the Content Server. That’s a better solution, but actually the simplest solution you can ever find will always be to use dqMan (or any similar tool). Indeed dqMan has a dedicated feature that allows you to execute a “template DQL” on any list of objects returned by a specific query. Therefore you don’t need any bash scripting if you are using dqMan, and that does the job in a few seconds.
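A minimal sketch of such a generator script could look like this. The file names and the two-column CSV layout (r_object_id,r_content_size, no header) are assumptions made up for the example.

#!/bin/bash
# Hypothetical sketch: generate one DQL UPDATE statement per affected document.
# Input : affected_docs.csv, lines like "090f446780034513,289326" (no header)
# Output: fix_full_content_size.dql, to be executed against the docbase
#         (if you run it through idql, append a "go" line after each statement).

INPUT=affected_docs.csv
OUTPUT=fix_full_content_size.dql

> "${OUTPUT}"
while IFS=',' read -r object_id content_size; do
  echo "UPDATE dm_document objects SET r_full_content_size='${content_size}' WHERE r_object_id='${object_id}';" >> "${OUTPUT}"
done < "${INPUT}"

echo "$(wc -l < "${OUTPUT}") DQL statements written to ${OUTPUT}"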

 

A last solution would be to go directly to the database and execute SQL queries to set r_full_content_size equal to r_content_size, BUT I would NOT recommend doing that unless you have a very good knowledge of the Documentum Content Model and absolutely know what you are doing and what you are messing with ;).

 

See you soon!

 

The article Documentum story – Download failed with ‘Exceeded stated content-length of 63000 bytes’ appeared first on Blog dbi services.

Will EBS Work with Microsoft Unified Update Platform?

Steven Chan - Mon, 2016-11-28 02:00

Microsoft is changing their packaging of major Windows 10 updates from full operating system downloads to smaller, incremental downloads.  This new delivery method is called the Unified Update Platform (UUP).

The new UUP delivery method will provide only the Windows 10 patches that have been released since the last time a desktop was updated.  Microsoft currently expects to switch to this new delivery method after the Windows 10 Creators Update is released in early 2017.

Will this work with EBS?

Yes.  Windows 10 is certified with all current EBS releases, including EBS 12.1 and 12.2.  This new delivery method is not expected to change that.

Microsoft emphasizes that nothing changes with Windows 10 itself -- this only modifies the way that Win10 patches are delivered:

It’s important to note that with UUP, nothing will look or behave differently on the surface, UUP is all underlying platform and service optimization that happens behind the scenes.


The preceding is intended to outline our general product direction.  It is intended for information purposes only, and may not be incorporated into any contract.   It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions.  The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle.

Categories: APPS Blogs

My Sangam 16 presentation: Policy Based Cluster Management In Oracle 12c

Oracle in Action - Mon, 2016-11-28 00:23


Thanks to all those who attended my session on “Policy Based Cluster Management In Oracle 12c” during Sangam16. I have uploaded my presentation here.

Your comments and feedback are always welcome.




Copyright © ORACLE IN ACTION [My Sangam 16 presentation: Policy Based Cluster Management In Oracle 12c], All Right Reserved. 2016.

The post My Sangam 16 presentation: Policy Based Cluster Management In Oracle 12c appeared first on ORACLE IN ACTION.

Categories: DBA Blogs

FULL TABLE SCAN

Tom Kyte - Sun, 2016-11-27 19:46
What are the things that result in FULL TABLE SCAN?? Thanks in advance...
Categories: DBA Blogs

About archiving status of redo logs

Tom Kyte - Sun, 2016-11-27 19:46
Does archive of redo logs get completed when STATUS of redo log is active ? and what happens when STATUS of redo log is inactive in background ? Scenario that i tried Three groups of Redo log size 50mb each and Archiving is Enabled and insert...
Categories: DBA Blogs

need to access hana columnar table data in Oracle 11g through remote source

Tom Kyte - Sun, 2016-11-27 19:46
what will be the performance impact, if I remote source hana tables to Oracle 11g for real-time data read and update some oracle master table. as Oracle is OLTP and hana is used as data warehouse. Can you pls suggest, the performance impact due to ...
Categories: DBA Blogs

Can I do it with PostgreSQL? – 4 – External tables

Yann Neuhaus - Sun, 2016-11-27 12:59

In the last posts of this series we talked about restore points, how you could do things that would require the dual table in Oracle, and how you can make use of tablespaces in PostgreSQL. In this post we’ll look at what my colleague Clemens thinks is one of the greatest features in Oracle: can you do external tables in PostgreSQL?

The easy answer is: yes, of course you can. And you can do it in various ways. To start with we’ll need a sample file we can load data from. For the test here we’ll use this one. Note that this file uses Windows line endings, which you’ll need to convert to Unix style if you are working on Linux like me. You can use vi to do this, or a one-liner on the shell as sketched below.
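If you prefer the command line over vi, a simple sed one-liner (or dos2unix, if it is installed) does the job; this is just a convenience sketch, not part of the original post:

# Strip the Windows carriage returns from the sample CSV before loading it.
# (dos2unix FL_insurance_sample.csv would do the same job, if installed.)
sed -i 's/\r$//' FL_insurance_sample.csv

# Quick check: "file" should no longer report CRLF line terminators.
file FL_insurance_sample.csv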

Once you extracted the file the content looks like this:

postgres@pgbox:/home/postgres/ [PG961] head -2 FL_insurance_sample.csv
policyID,statecode,county,eq_site_limit,hu_site_limit,fl_site_limit,fr_site_limit,tiv_2011,tiv_2012,eq_site_deductible,hu_site_deductible,fl_site_deductible,fr_site_deductible,point_latitude,point_longitude,line,construction,point_granularity
119736,FL,CLAY COUNTY,498960,498960,498960,498960,498960,792148.9,0,9979.2,0,0,30.102261,-81.711777,Residential,Masonry,1

So, we have a total of 18 columns and 36634 rows to test with. Should be fine :)

How can we bring that into PostgreSQL? Clemens talked about SQL*Loader in his post. There is a similar project for PostgreSQL called pg_bulkload which we’ll not be talking about. We will look at two options you can use to load data from files into PostgreSQL which are available by default:

  1. copy
  2. file_fdw

What we need first, no matter which option we go with, is the definition of the table. These are the columns we need:

postgres@pgbox:/home/postgres/ [PG961] head -1 FL_insurance_sample.csv | sed 's/,/,\n/g'
policyID,
statecode,
county,
eq_site_limit,
hu_site_limit,
fl_site_limit,
fr_site_limit,
tiv_2011,
tiv_2012,
eq_site_deductible,
hu_site_deductible,
fl_site_deductible,
fr_site_deductible,
point_latitude,
point_longitude,
line,
construction,
point_granularity

So the create table statement will look something like this:

(postgres@[local]:5439) [postgres] > create table exttab ( policyID int,
                                                           statecode varchar(2),
                                                           county varchar(50),
                                                           eq_site_limit numeric,
                                                           hu_site_limit numeric,
                                                           fl_site_limit numeric,
                                                           fr_site_limit numeric,
                                                           tiv_2011 numeric,
                                                           tiv_2012 numeric,
                                                           eq_site_deductible numeric,
                                                           hu_site_deductible numeric,
                                                           fl_site_deductible numeric,
                                                           fr_site_deductible numeric,
                                                           point_latitude numeric,
                                                           point_longitude numeric,
                                                           line varchar(50),
                                                           construction varchar(50),
                                                           point_granularity int);
CREATE TABLE

Now that we have the table we can use copy to load the data:

(postgres@[local]:5439) [postgres] > copy exttab from '/home/postgres/FL_insurance_sample.csv' with csv header;
COPY 36634
(postgres@[local]:5439) [postgres] > select count(*) from exttab;
 count 
-------
 36634
(1 row)

Quite fast. But there is a downside to this approach: copy loads the rows exactly as they are in the file. As Clemens mentions in his post, one of the benefits of external tables in Oracle is that you can access the file via standard SQL and do transformations before the data arrives in the database. Can you do the same with PostgreSQL? Yes, if you use the file_fdw foreign data wrapper.

The file_fdw is available by default:

(postgres@[local]:5439) [postgres] > create extension file_fdw;
CREATE EXTENSION
Time: 442.777 ms
(postgres@[local]:5439) [postgres] > \dx
                        List of installed extensions
   Name   | Version |   Schema   |                Description                
----------+---------+------------+-------------------------------------------
 file_fdw | 1.0     | public     | foreign-data wrapper for flat file access
 plpgsql  | 1.0     | pg_catalog | PL/pgSQL procedural language

(postgres@[local]:5439) [postgres] > create server srv_file_fdw foreign data wrapper file_fdw;
CREATE SERVER
(postgres@[local]:5439) [postgres] > create foreign table exttab2  ( policyID int,
                                statecode varchar(2),
                                county varchar(50),
                                eq_site_limit numeric,     
                                hu_site_limit numeric,     
                                fl_site_limit numeric,     
                                fr_site_limit numeric,     
                                tiv_2011 numeric,          
                                tiv_2012 numeric,          
                                eq_site_deductible numeric,
                                hu_site_deductible numeric,
                                fl_site_deductible numeric,
                                fr_site_deductible numeric,
                                point_latitude numeric,    
                                point_longitude numeric,   
                                line varchar(50),          
                                construction varchar(50),  
                                point_granularity int)     
server srv_file_fdw options ( filename '/home/postgres/FL_insurance_sample.csv', format 'csv', header 'true' );
CREATE FOREIGN TABLE

(postgres@[local]:5439) [postgres] > select count(*) from exttab2;
 count 
-------
 36634
(1 row)

From now on you can work with the file by accessing it using standard SQL, and all the options you have with SQL are available. Very much the same as Clemens states in his post: “Because external tables can be accessed through SQL, you have all the possibilities SQL queries offer: parallelism, difficult joins with internal or other external tables and of course all complex operations SQL allows. ETL became much easier using external tables, because it allowed to process data through SQL joins and filters already before it was loaded in the database.”
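As a quick illustration of that point (this sketch is mine, not from the original post), you can filter and transform the rows while moving them from the foreign table into the regular table; the WHERE clause and the rounding are arbitrary examples:

# Load only the Residential policies from CLAY COUNTY into the regular table,
# rounding tiv_2012 on the way in -- the filtering happens before the data
# is stored in exttab, which is exactly what external tables are good for.
psql -p 5439 -d postgres <<'EOF'
truncate table exttab;

insert into exttab
select policyID, statecode, county,
       eq_site_limit, hu_site_limit, fl_site_limit, fr_site_limit,
       tiv_2011, round(tiv_2012, 0),
       eq_site_deductible, hu_site_deductible, fl_site_deductible, fr_site_deductible,
       point_latitude, point_longitude,
       line, construction, point_granularity
  from exttab2
 where county = 'CLAY COUNTY'
   and line = 'Residential';

select count(*) from exttab;
EOF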

 

The article Can I do it with PostgreSQL? – 4 – External tables appeared first on Blog dbi services.

Exadata migration

Syed Jaffar - Sun, 2016-11-27 12:21
Had a wonderful Sangam16 conference in India, and received much applause for the two presentations delivered: Oracle 12c multitenancy and Exadata migration best practices.

After a very short trip to India, life became business as usual again, and busy. I was fully occupied with multiple assignments: a two-day Oracle EBS database health check assessment at a client, GI/RDBMS/PSU deployments on an Oracle Sun Super Cluster M7, Exadata configuration preparation, and the migration of 9 databases to Exadata during the weekend.

Over the last weekend, we (my colleague and I) were pretty busy with the migration of 9 databases to Exadata. There were a few challenges, and we learned a few new things too. I would like to discuss a couple of scenarios that were interesting:

One of the databases had corrupted blocks, and the expdp kept failing with ORA-01555: snapshot too old: rollback segment number with name "" too small. Our initial thoughts were tuning undo_retention, increasing the undo tablespace, setting an event, etc. Unfortunately, none of the workarounds helped. We then came across a MOS note which explains that an ORA-01555 with "" (no rollback segment name) is probably due to corrupted blocks. After applying the solution explained in the note, we managed to export/import the database successfully. My colleague has blogged about the scenario at his blog: http://bit.ly/2fBOxm7

Another database is running on Windows x86 64-bit and is full of LOBs; hence the Data Pump export (expdp) took significant time, as an NFS filesystem was used to store the dump file. We then thought of doing a direct RMAN restore from source to target, as Windows x86 64-bit and Linux x86 64-bit use the same (Little) Endian format. As per one of the MOS notes, we could also do a Data Guard setup and an RMAN restore. However, RMAN recovery would fail with ORA-600, as cross-platform redo conversion isn't possible. We are now thinking of taking a (consistent) cold backup and doing a complete restore with the resetlogs option.

Stay tuned for more updates on this.



The real cost of an exception

Tom Kyte - Sun, 2016-11-27 01:26
Hello, Tom. I my project we have a package with a pretty standard structure, I think. It has public procedures calling in turn private procedures, calling innermost procedures eventually. Simplifying, something like this: -------------------------...
Categories: DBA Blogs

Bad cardinality in join with column with skewed data

Tom Kyte - Sun, 2016-11-27 01:26
Hi guys. I have a problem with the estimation of the cardinality of a skewed column The distribution of the data is as follows: <code>select m.m_pricelist_id, count(*) from m_pricelist_version m group by m.m_pricelist_id 2 3 ;</code>...
Categories: DBA Blogs

Another Thanksgiving Email

Tim Tow - Sat, 2016-11-26 16:06
Here is another email we got Wednesday from a Dodeca Excel Add-In for Essbase customer.  This customer has a lot of VBA macros running some automation with Essbase and asked for some assistance.  We told them it was as easy as replacing the Essbase function declarations file with our Dodeca Add-In function declarations file, and then setting the variables that contain the location of the Dodeca-Essbase server.  In other words, replace this file:

With this file:





Easy, right? Here is the email:

I hadn't had a chance to test this massive file out with the latest version of the add-in yet, but was more than pleasantly surprised when I replaced the add-in code for the Dodeca wrapper and the only thing I had to do was change the connection to our new server and the file was live! It's truly drop and go! Thanks so much!!!!



Categories: BI & Warehousing

ADF 12.2.1.1 New Feature - Masonry Layout Custom Size Dashboard

Andrejus Baranovski - Sat, 2016-11-26 13:44
ADF 12.2.1.1 and 12.2.1.2 come with an improvement for the Masonry Layout component. Now we can define custom sizes for tiles - see 9.4.1 How to Use a masonryLayout Component. This helps to define larger tiles and organize the dashboard layout in a more accurate way.

I have uploaded a working demo app on GitHub; you can download it directly from the repository or browse through the code - ADFAltaApp. I will be using this app for the ADF Bindings webinar - Live Webinar - Master Class - ADF Bindings Explained.

To access the Masonry Layout dashboard with custom tile sizes, go to the Employees section and open the Profile tab. You should see a dashboard layout like this (one tile 2x4, one tile 4x2 and two tiles 2x2). All four tiles are defined with a custom size:


Masonry Layout is responsive out of the box. On a narrow screen, tiles will be re-arranged automatically:


Custom tiles for Masonry Layout are defined through CSS. You should create a custom style class and set it on the Masonry Layout component. I have defined a custom style class - RedSamuraiDashboard:


Each tile group with a custom size must be defined in CSS separately. Width and height should be proportional: if you define 250px for size 2, then the size for 4 must be 500px:


Masonry Layout tiles are assigned a style class which defines their size:


You could have an ADF region inside a tile; it renders perfectly well:

Online Redo Log Switching from RMAN Backup

Michael Dinh - Sat, 2016-11-26 11:28

I was troubleshooting backup from standby databases and encountered an oddity which I wanted to verify.

Backing Up Archived Redo Logs with RMAN

Before beginning the backup, RMAN switches out of the current redo log group and archives all online redo logs that have not yet been archived, up to and including the redo log group that was current when you run the BACKUP command with any of the following clauses (see the sketch after the list):
PLUS ARCHIVELOG
ARCHIVELOG ALL
ARCHIVELOG FROM …
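As a minimal illustration (mine, not part of the original post), any of the following commands will trigger that log switch; you would normally pick just one of them:

# Each of these BACKUP forms makes RMAN switch the current online redo log
# and archive everything up to and including it before the backup starts.
rman target / <<'EOF'
backup database plus archivelog;
backup archivelog all;
backup archivelog from time 'sysdate-1';
EOF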

 

 


Use REST API to get the data

Dylan's BI Notes - Sat, 2016-11-26 10:37
We now see more and more data available via REST APIs. Here are some of the data sources I worked with earlier: ServiceNow Table API, REST API for Oracle Service Cloud. Here is a checklist for enabling a data source for BI: 1. How does the service perform the authentication? In the case of […]
Categories: BI & Warehousing
