Re: SQL*Loader: Field termination
Date: 14 Jan 2002 02:04:55 -0800
Message-ID: <f191bfd4.0201140204.4929b628_at_posting.google.com>
"Henry Hayden" <gumzilla_at_voicenet.com> wrote in message news:<mzV%7.51$Ld.8946_at_news2.voicenet.com>...
> Could you be a little more specific about the problems caused by the "." on
> the end of the timestamp field?
> In the control file, are you using the parameter OPTIONALLY ENCLOSED BY in
> the control file? What is the data type of the timestamp field? It seems
> to me that if you used the switches FIELDS TERMINATED BY "," OPTIONALLY
> ENCLOSED BY '"' (terminated by a comma, strings enclosed by double quotes)
> the problem- whatever it is- might go away.
>
> In addition, you might want to (or maybe you have to) specify the field as a
> DATE field and supply a valid NLS_DATE_FORMAT mask. For example:
>
> timestamp DATE(20) "DD-MON-YYYY HH24:MI:SS"
>
> If you do not supply a field width, SQL*Ldr will take it from the supplied
> mask.
>
> Let us know if this helps, or if there is still a problem.
>
>
> Henry
>
> "Russell Smith" <rsmith_at_serck-controls.co.uk> wrote in message
> news:f191bfd4.0201100342.c98246f_at_posting.google.com...
> > Hi,
> > I have been importing a bunch of CSVs into Oracle using
> > SQL*Loader, and am having some problems.
> >
> > Basically, the timestamp field was set to a fixed length when the
> > CSV was generated, so sometimes I get a date like this:
> >
> > 24-MAY-1998 00:00:10.
> >
> > The . on the end is causing errors.
> > I need to know how to do an OPTIONALLY TERMINATED BY (I have just
> > made this up, but it is what I need to do).
> >
> > Any ideas?
> >
> > Russ
> >
> > P.S. Unfortunately, the CSVs can't be regenerated properly.
Right, the CSVs were generated from another database, and the program
that did it has a bug that truncates the time (there should be
milliseconds too); this is why there is occasionally a . at the end.
I am not using OPTIONALLY ENCLOSED BY. SQL*Loader rejects all the
records with a . on the end. One current (but ugly) workaround is to
process the data once, then re-process the bad file, looking for a .
on the end.
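That "look for a . on the end" step can be sketched as a small pre-processing pass over the CSV before loading (a sketch only; the column position and helper names are invented, not from the post):

```python
TS_COLUMN = 2  # assumed position of the timestamp field; adjust to the real CSV


def strip_trailing_dot(value):
    # Drop the single trailing "." left behind where the milliseconds
    # were truncated, e.g. "24-MAY-1998 00:00:10." becomes "24-MAY-1998 00:00:10".
    return value[:-1] if value.endswith(".") else value


def clean_rows(rows, ts_index=TS_COLUMN):
    # Yield each CSV row with the timestamp column cleaned; rows is any
    # iterable of lists, e.g. what csv.reader produces.
    for row in rows:
        if len(row) > ts_index:
            row[ts_index] = strip_trailing_dot(row[ts_index])
        yield row
```

Running the raw file through this once before SQL*Loader would avoid the second pass over the bad file.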
Any other ideas?
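One way to avoid pre-processing entirely is to strip the dot inside the control file itself, since SQL*Loader lets you apply a SQL expression to a field via ":fieldname". A sketch only; the file, table, and column names here are invented, and it assumes the timestamps look like "24-MAY-1998 00:00:10":

```
LOAD DATA
INFILE 'export.csv'
APPEND
INTO TABLE readings
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(
  reading_id,
  ts  "TO_DATE(RTRIM(:ts, '.'), 'DD-MON-YYYY HH24:MI:SS')"
)
```

RTRIM removes the trailing '.' when it is present and is a no-op otherwise, so both the clean rows and the truncated rows should load in a single pass.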
Received on Mon Jan 14 2002 - 11:04:55 CET