Feed aggregator

View Syntax isn't working

Tom Kyte - Thu, 2016-10-06 04:06
Tom, I have an existing view that works fine. It is used to compare rows in two tables and to display which table has records that the other doesn't. I've been asked to add one more column to this view from a new table. For one reason or ano...
Categories: DBA Blogs

Temporary tables (comparison with SQL Server)

Tom Kyte - Thu, 2016-10-06 04:06
We are migrating some Stored Procedures from Sybase 11 to Oracle 8i. Most of them use temporal tables like: SELECT x, y, z INTO #Temp FROM some_table ... -OR- CREATE TABLE #Temp ( .... ) Where #Temp is a Temporal Table in a temporal area wh...
Categories: DBA Blogs

How to size the log_buffer

Tom Kyte - Thu, 2016-10-06 04:06
Hello In looking into how to tune log_buffer, I came across two different warnings. One claims that if the log buffer is too small, we will get a significant amount of "log buffer space." The solution is to "consider making the log buffer bigger...
Categories: DBA Blogs

Running a Cassandra cluster in a single server

Yann Neuhaus - Thu, 2016-10-06 02:25

This blog post is about configuring Apache Cassandra to run a cluster of 3 instances on a single host. C* is not really designed to run multiple instances on one server, but there are many cases and reasons why you might do it anyway. The principal one is testing.

Context

The objective is to have a Cassandra ring of 3 instances on 1 host. For that, we will use:

  • apache-cassandra-3.0.9 (community version)
  • Separate disks to segregate binaries, data and logs: each will be on its own mount point
  • Multiple IP interfaces, one per instance (virtual interfaces in our case)
  • Instance names: csd1, csd2, csd3
  • Cluster name: dbitest

Binary installation

Download the binaries from the Apache website, http://www.apache.org/dyn/closer.lua/cassandra/3.0.9/apache-cassandra-3.0.9-bin.tar.gz. In this post we are going to install the 3.0.9 release.

wget http://www.pirbot.com/mirrors/apache/cassandra/3.0.9/apache-cassandra-3.0.9-bin.tar.gz

Extract it:

tar -zxvf apache-cassandra-3.0.9-bin.tar.gz

Place the software into the product directory:

cp -r apache-cassandra-3.0.9 /u00/app/cassandra/product/

 

Instances configuration

First, we will create the admin directory for each instance. The admin directory contains the following sub folders:

  • /u00/app/cassandra/admin/csd$i
    • /backup: soft link to the backup mount point
    • /dump: soft link to the dump mount point
    • /etc: Cassandra configuration files, such as cassandra.yaml, logback.xml and cassandra-env.sh
    • /log: soft link to the log directory on the log mount point
    • /pid: contains the PID of the C* process

To create these directories, use the following loop.

for i in 1 2 3 ;
do mkdir -p /u00/app/cassandra/admin/csd$i &&
mkdir /u00/app/cassandra/admin/csd$i/backup &&
mkdir /u00/app/cassandra/admin/csd$i/dump &&
mkdir /u00/app/cassandra/admin/csd$i/etc &&
mkdir /u00/app/cassandra/admin/csd$i/log &&
mkdir /u00/app/cassandra/admin/csd$i/pid &&
mkdir /u00/app/cassandra/admin/csd$i/product;
done
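
Note that the loop above creates plain directories; if backup, dump and log are really meant to be soft links to dedicated mount points, create them as links instead. A minimal sketch for csd1, with hypothetical mount points /u02/backup, /u03/dump and /u04/log (skip the corresponding mkdir lines in that case, and adapt the paths to your own layout):

ln -s /u02/backup/csd1 /u00/app/cassandra/admin/csd1/backup
ln -s /u03/dump/csd1 /u00/app/cassandra/admin/csd1/dump
ln -s /u04/log/csd1 /u00/app/cassandra/admin/csd1/log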

Next, copy the C* configuration files into each admin/csd$i/etc directory. Every instance has its own configuration files.

for i in 1 2 3;
do cp -r /u00/app/cassandra/admin/csd$i/product/apache-cassandra-3.0.9/conf/* /u00/app/cassandra/admin/csd$i/etc/;
done

Now, create a data directory for each of the three instances.

for i in 1 2 3;
do
sudo mkdir -p /u01/cassandradata/csd$i &&
sudo chown -R cassandra:cassandra /u01/cassandradata/csd$i;
done

Configure virtual network interfaces

Each C* instance needs its own listen_address and rpc_address, so you must create additional virtual network interfaces.

sudo ifconfig enp0s3:0 192.168.56.105
sudo ifconfig enp0s3:1 192.168.56.106

You then have 3 interfaces (the default plus 2 virtual ones) for your C* instances.

enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.56.104  …

enp0s3:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.56.105  …

enp0s3:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.56.106  …

enp0s8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        …

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
…

Now that all the required files are in place, we can change the three main configuration files to set up the instances (csd1, csd2 and csd3).

cassandra-env.sh:

Use a different “JMX_PORT” for each node. For instance: 7199 for csd1, 7299 for csd2 and 7399 for csd3.

JMX binds to the localhost IP by default, and the default is fine here.
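
As a sketch, the line to adjust in each instance's copy of cassandra-env.sh (the variable already exists in the stock file; only the value changes per instance) would look like this:

# admin/csd1/etc/cassandra-env.sh
JMX_PORT="7199"
# admin/csd2/etc/cassandra-env.sh
JMX_PORT="7299"
# admin/csd3/etc/cassandra-env.sh
JMX_PORT="7399"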

cassandra.yaml

The central configuration file of Apache Cassandra. Change the following parameters (a sample for csd1 follows the list):

  • cluster_name: ‘dbitest’ — all instances must have the same cluster name
  • commitlog_directory: /u01/cassandradata/csd$i/commitlog
  • data_file_directories: /u01/cassandradata/csd$i/data
  • saved_caches_directory: /u01/cassandradata/csd$i/saved_caches
  • listen_address: the instance's own IP (192.168.56.104 for csd1, 192.168.56.105 for csd2, 192.168.56.106 for csd3)
  • rpc_address: the instance's own IP, as above
  • seeds: the list of IP addresses of the seed nodes
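
As a sketch, the relevant lines in /u00/app/cassandra/admin/csd1/etc/cassandra.yaml would then look roughly like this for csd1 (csd2 and csd3 use their own IPs and data paths; here 192.168.56.104 is assumed to be the seed node):

cluster_name: 'dbitest'
commitlog_directory: /u01/cassandradata/csd1/commitlog
data_file_directories:
    - /u01/cassandradata/csd1/data
saved_caches_directory: /u01/cassandradata/csd1/saved_caches
listen_address: 192.168.56.104
rpc_address: 192.168.56.104
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "192.168.56.104"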

logback.xml

The configuration file for C* logs. Make the following changes:

<file>${cassandra.logdir}/system.log</file> to <file>/u00/app/cassandra/admin/csd$i/log/system.log</file>

<fileNamePattern>${cassandra.logdir}/system.log.%i.zip</fileNamePattern> to 
<fileNamePattern>/u00/app/cassandra/admin/csd$i/log/system.log.%i.zip</fileNamePattern>

<file>${cassandra.logdir}/debug.log</file> to <file>/u00/app/cassandra/admin/csd$i/log/debug.log</file>

<fileNamePattern>${cassandra.logdir}/debug.log.%i.zip</fileNamePattern> to
<fileNamePattern>/u00/app/cassandra/admin/csd$i/log/debug.log.%i.zip</fileNamePattern>

Make these changes for all three instances.
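
One way to apply these substitutions to all three copies at once is a small sed loop (a sketch, assuming the stock logback.xml and the per-instance log directories used above):

for i in 1 2 3;
do sed -i "s|\${cassandra.logdir}|/u00/app/cassandra/admin/csd$i/log|g" /u00/app/cassandra/admin/csd$i/etc/logback.xml;
done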

Starting the nodes

Before starting each node, you must set the environment variables for that node.

export CASSANDRA_HOME=/u00/app/cassandra/product/apache-cassandra-3.0.9
export CASSANDRA_CONF=/u00/app/cassandra/admin/csd$i/etc/

After setting the variables for a node, you can start Apache Cassandra from the command line:

$CASSANDRA_HOME/bin/cassandra -f

Execute the same command in a separate shell for each node. Do not forget to set the environment variables $CASSANDRA_HOME and $CASSANDRA_CONF in each shell.
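
For example, to start csd1 (repeat in two more shells for csd2 and csd3, changing the instance number):

export CASSANDRA_HOME=/u00/app/cassandra/product/apache-cassandra-3.0.9
export CASSANDRA_CONF=/u00/app/cassandra/admin/csd1/etc/
$CASSANDRA_HOME/bin/cassandra -f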

Cluster status

Verify the cluster status with the nodetool utility. Check that all nodes are up and running.

[cassandra@test bin]$ ./nodetool status
 Datacenter: datacenter1
 =======================
 Status=Up/Down
 |/ State=Normal/Leaving/Joining/Moving
 --  Address         Load       Tokens       Owns (effective)  Host ID                               Rack
 UN  192.168.56.104  263.42 KB  256          64.5%             cddee7f4-c0d5-4cba-9ddc-b773a08a9245  rack1
 UN  192.168.56.105  212.19 KB  256          67.5%             19cd17d2-1aeb-48e5-9299-7a0282c2e92e  rack1
 UN  192.168.56.106  147.54 KB  256          68.0%             f65d5b92-ed3b-4f68-b93d-72e6162eafca  rack1
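
Since every instance listens on its own JMX port, nodetool can also be pointed at a single node; for example, to query csd2 (assuming the IPs and JMX ports chosen earlier):

./nodetool -h 192.168.56.105 -p 7299 status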

 

Enjoy ;-)

 

This article, Running a Cassandra cluster in a single server, appeared first on Blog dbi services.

Updated: Minimizing EBS 12.2.6 Upgrade Downtimes

Steven Chan - Thu, 2016-10-06 02:05

We have updated our recommendations for minimizing downtimes when upgrading to EBS 12.2.6.

Written by Jim Machin, a senior member of our Applications Performance group, this white paper contains a wealth of detailed technical recommendations for reducing the amount of time needed for an EBS 12.2.6 upgrade. 

[Figure: Events vs. sessions bottlenecks diagram]

It covers:

  • Technical preparation and planning activities
  • Diagnostics for monitoring and troubleshooting performance issues
  • Common solutions to performance issues
  • Oracle Applications Tablespace Model (OATM) considerations
  • Methods for purging old data
  • Gathering statistics
  • Detailed recommendations for improving resource management, sizing, and workloads

This white paper is updated regularly with new tips and recommendations based upon our Application Performance group's work with actual EBS 12.2 customers and their Service Requests.  All EBS 12.2 sysadmins should monitor this white paper regularly.

Categories: APPS Blogs

Links for 2016-10-05 [del.icio.us]

Categories: DBA Blogs

Build your Applications Faster with Application Builder Cloud Service

In today's IT business, development teams are often busy with requests for enhancements and updates. This creates a need for business users to build their own solutions without relying on developers. For...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Here’s why Oracle Cloud Apps rocked it at OpenWorld 2016

Linda Fishman Hoyle - Wed, 2016-10-05 13:31

A Guest Post by Oracle Sr. Director Deb Hamilton, Cloud Applications, Outbound Product Management (pictured left)

 For Oracle and our SaaS customers, OpenWorld 2016 was a time to celebrate our mutual success. We are leading and growing the SaaS market with more than 12,500 overall SaaS customers, 10x more ERP customers than Workday, and 2.5x more new HCM customers in FY16 than Workday.

We are grateful for the many customers who shared their success stories in sessions and on panels. This vast customer presence did not go unnoticed by the analysts, influencers, and press. Many commented on the strong customer advocacy at this year’s OpenWorld, and how effectively customers articulated their reasons for choosing and moving forward with Oracle Cloud.

We deliver innovation fast

Our executives reiterated our expectation that 100 percent of our customers will move to the cloud. The driving force behind this tremendous shift to the cloud is our ability to deliver modern and superior applications, faster than we ever could before. With every SaaS customer on the same release, we can get precise and immediate feedback through usage and performance data.

In addition, customers are sharing best practices and product insights with each other. In fact, customers are pushing us to deliver even more innovation faster. Once users fully understand how the cloud model works, they cannot get enough of the “good stuff” and keep pushing the envelope.

We’re becoming a service-centered company

Not only has the way we design products changed, the customer experience has changed. We are making a radical transformation from a product-centered to a service-centered company. Our modern ownership experience promises zero-hassle buying, accelerated adoption, and unlimited growth. Just see for yourself the strides we have made.

What hasn’t changed is the breadth of our offering. Customers can choose from a complete suite of business applications for CX, ERP, HCM, SCM, EPM, and data. For existing customers, we provide prebuilt adapters back to our existing on-premises applications so that their businesses remain connected.

Our apps are unique

What is fundamentally unique about our SaaS applications is that they’re data-driven, leveraging both internal and external data. We’ve also added more ways to personalize these applications without interfering with upgrades. In addition, our level of investment in security is hard to match, making our SaaS applications more secure than most on-premises deployments.

And there’s more: New smart software solutions

And, as a cherry on top, at this OpenWorld we announced a brand new category of applications, our Adaptive Intelligent applications. These are unique in the industry. They combine mounds of data and expertise of our highly trained and skilled data scientists, courtesy of Oracle Data Cloud, with complex data parsing algorithms, and our broad suite of SaaS applications to deliver insightful actions and offers. Examples include optimized payment terms, personalized commerce offers, best fit candidates, best value freight, optimized working capital and balanced costs, and many, many other next best actions and offers. Learn more about our Adaptive Intelligent apps.

Stupid Stuff I’ve Done : The Clone Wars

Tim Hall - Wed, 2016-10-05 13:29

Following on from the theme of yesterday’s confession about my screw-ups with pluggable databases comes one about cloning.

For some systems we use cloning (RMAN DUPLICATE) as a method of refreshing Dev and Test environments from Live. The process of doing a backup-based or active duplicate is not that complicated and typically you do it once, script it and you don’t have to worry too much about it again.

Yesterday we had to refresh a test environment from live, so we used our trusty clone script, but it failed with the following message.

RMAN-06054: media recovery requesting unknown archived log for thread 1 with sequence ???? and starting SCN of ????

Strange! The message is pretty clear. It is expecting an archived redo log that doesn’t exist. If this were a backup-based clone I would suspect one of the archived redo logs had not been copied across, but this was an active duplicate, so it automatically transports everything from the target to the auxiliary instance. How could it lose something?

I knew I had seen this before, but no matter how much I searched my website, private work notes, MOS or Uncle Google, I couldn’t find the solution. Everything pointed to a missing archived redo log, but that couldn’t be happening on an active duplicate, right? Wrong!

After much denial, wailing and gnashing of teeth I remembered the archived redo log backups! During the clone operation an archived redo log backup had kicked in on the target database and deleted all the logs that had been backed up. By the time the active duplicate tried to grab them they were gone and hence the error…
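
For context, the sort of scheduled job that causes this is an ordinary archived log backup that deletes its input once backed up; something generic like this running mid-clone is enough to make the logs disappear before the duplicate reads them (a generic illustration, not the actual script involved):

rman target / <<EOF
BACKUP ARCHIVELOG ALL DELETE INPUT;
EOF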

Notes to self:

  • When you see something like this happen, write a note about it! Don’t think you will remember it next time. You won’t, or you will waste loads of time before you do remember it! How many times do you have to tell yourself, “If it’s not written down it doesn’t exist!”
  • The error message is trying to tell you the answer. Don’t try and intellectualise your way round it. It’s probably not a bug that is causing some spurious misreporting of the “real problem”. You probably don’t know better than the error trapping. Think less about why the error message might be wrong and think more about what could have caused the situation that resulted in the error, as it is reported.
  • Don’t write a blog post confessing how rubbish you are at your job or people will find out and… Doh!

Cheers

Tim…

Stupid Stuff I’ve Done : The Clone Wars was first posted on October 5, 2016 at 7:29 pm.

October 11, 2016: KPN Netherlands—Oracle Sales Cloud Customer Forum

Linda Fishman Hoyle - Wed, 2016-10-05 11:23

Join us for an Oracle Sales Cloud Customer Reference Forum on Tuesday, October 11, 2016.

KPN is the largest telecom and IT service provider in the Netherlands. Irma van Mil, Unit Manager Sales & Service, KPN, will discuss how Oracle Sales Cloud continues to contribute to KPN’s strategy of “strengthen, simplify, and grow.”

Register now to attend the live Forum on October 11, 2016, at 4:00 p.m. CET.

Fabric … Simple #GoldenGate Administration over SSH

DBASolved - Wed, 2016-10-05 10:45

 

For a while now, I’ve been looking at different approaches for automating some of my GoldenGate monitoring tasks. Yes, there are tools out there that provide a wide variety of ways to interact with and monitor Oracle GoldenGate. However, the problem with Oracle GoldenGate is that you have at least two parts of the equation when interacting with or monitoring an environment, i.e. source and target. If you are lucky enough to have a multi-master configuration, then you have to deal with multiple target sites. In the end, administration and keeping track of what is going on becomes a bit of a time-consuming process.

I’ve looked at different tools to try to solve this problem, mostly in Perl since I write a lot of that from time to time. Getting Perl to script network commands like SSH was time-consuming and cumbersome, which is when I started to look at Python. Initially I wasn’t sold on Python, but after spending some time getting familiar with the language I actually like it; it was still missing the SSH functionality I wanted, though, until recently.

As I was browsing the inter-web over the last week or so, I came across a blog post that compared deployment tools such as Chef, Puppet, Ansible, Fabric and a few others. The website provided pros and cons for each of the deployment options they were reviewing. Take a look and see what you may like to use for deployment options in your environments.

Out of the tools the website talked about, Fabric piqued my interest because it is a Python-based tool for streamlining SSH application deployments. I figured that if I could deploy applications with Fabric, I should be able to do some basic monitoring with it as well, which sent me down the path of solving basic Oracle GoldenGate monitoring from a single command line. Yeah, sounds like a good plan!

After spending some time figuring out how to install Fabric and what exactly a “fabfile” is, I wrote my first monitoring script for Fabric!
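
For reference, installing Fabric is a single pip command; note that the fabric.api module used below belongs to the Fabric 1.x line, so if you install it today you would want to pin the version (a sketch):

pip install "fabric<2"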

What exactly does this monitoring script look like? Below is the monitoring script, also known as a “fabfile”. This script is written in Python and sets up the environment and the functions to be called.

from fabric.api import *

# Hosts the tasks will run against
env.hosts = [
'fred.acme.com',
'wilma.acme.com'
]

# Credentials used to log in to each host
env.user = "oracle"
env.password = "********"  # never store a real password in clear text

# Run "info all" through GGSCI from the GoldenGate home on the remote host
def info_all():
        with cd("$OGG_HOME"):
                run("echo info all | ./ggsci")

# Generic task that wraps info_all()
def ggstatus():
        info_all()

The environment being monitored consists of my test servers, which I use for many of my tests (Fred and Wilma). I’m telling Fabric to use the “oracle” userid and password to log in to each server. In this environment I have all my oracle passwords set the same, so I only have to provide it once. After the environment is set, I define the functions that I want to perform. The first function, info_all(), logs into GGSCI and provides the output of the info all command. The second function is just a generic function that calls the first one.
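
If you would rather not keep a password in the fabfile at all, Fabric (through paramiko) will normally fall back to SSH key or agent authentication when no password is set; a sketch of setting that up for the oracle account on both hosts:

ssh-keygen -t rsa -b 4096          # generate a key pair if you do not already have one
ssh-copy-id oracle@fred.acme.com
ssh-copy-id oracle@wilma.acme.com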

With the “fabfile” created, I can now run a single command to access both servers (Fred & Wilma) and check on the status of the Oracle GoldenGate processes running on these servers. The command that I run is:

$ fab ggstatus -f ./fab_gg.py

This command runs the Fabric executable “fab” followed by the function in the “fabfile” I want to execute. The “-f ./fab_gg.py” option points to the “fabfile” to use during the execution. Upon execution, output is spooled to STDOUT, giving me the status of the Oracle GoldenGate environment on each server I’ve requested.

AMAC02P37LYG3QC:python bobby.l.curtis$ fab ggstatus -f ./fab_gg.py
[fred.acme.com] Executing task 'ggstatus'
[fred.acme.com] run: echo info all | ./ggsci
[fred.acme.com] out: The Oracle base has been set to /u01/app/oracle
[fred.acme.com] out: ====================================
[fred.acme.com] out: ORACLE_SID=src12c
[fred.acme.com] out: ORACLE_BASE=/u01/app/oracle
[fred.acme.com] out: ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1
[fred.acme.com] out: OGG_HOME=/u01/app/oracle/product/12.2.0/oggcore_1
[fred.acme.com] out: JAVA_HOME=/home/oracle/Downloads/jdk1.8.0_71
[fred.acme.com] out: OGGSTUDIO_HOME=/u01/app/oracle/product/oggstudio/oggstudio
[fred.acme.com] out: OGGSTUDIO_HOME1=/u01/app/oracle/product/oggstudio/12.2.1.1.0/oggstudio
[fred.acme.com] out: ====================================
[fred.acme.com] out:
[fred.acme.com] out: Oracle GoldenGate Command Interpreter for Oracle
[fred.acme.com] out: Version 12.2.0.1.0 OGGCORE_12.2.0.1.0_PLATFORMS_151101.1925.2_FBO
[fred.acme.com] out: Linux, x64, 64bit (optimized), Oracle 12c on Nov 11 2015 03:53:23
[fred.acme.com] out: Operating system character set identified as UTF-8.
[fred.acme.com] out:
[fred.acme.com] out: Copyright (C) 1995, 2015, Oracle and/or its affiliates. All rights reserved.
[fred.acme.com] out:
[fred.acme.com] out:
[fred.acme.com] out:
[fred.acme.com] out: GGSCI (fred.acme.com) 1>
[fred.acme.com] out: Program     Status      Group       Lag at Chkpt  Time Since Chkpt
[fred.acme.com] out:
[fred.acme.com] out: MANAGER     RUNNING
[fred.acme.com] out: JAGENT      RUNNING
[fred.acme.com] out: EXTRACT     RUNNING     EGG12C      00:00:09      00:00:02
[fred.acme.com] out: Description 'Integrated Extract'
[fred.acme.com] out: EXTRACT     RUNNING     PGG12C      00:00:00      00:00:02
[fred.acme.com] out: Description 'Data Pump'
[fred.acme.com] out:
[fred.acme.com] out:
[fred.acme.com] out: GGSCI (fred.acme.com) 2>
[wilma.acme.com] Executing task 'ggstatus'
[wilma.acme.com] run: echo info all | ./ggsci
[wilma.acme.com] out: The Oracle base has been set to /opt/app/oracle
[wilma.acme.com] out: ====================================
[wilma.acme.com] out: ORACLE_SID=rmt12c
[wilma.acme.com] out: ORACLE_BASE=/opt/app/oracle
[wilma.acme.com] out: ORACLE_HOME=/opt/app/oracle/product/12.1.0.2/dbhome_1
[wilma.acme.com] out: OGG_HOME=/opt/app/oracle/product/12.2.0.0/oggcore_1
[wilma.acme.com] out: JAVA_HOME=/home/oracle/Downloads/jdk1.8.0_71
[wilma.acme.com] out: ODI_HOME=/opt/app/oracle/product/12.2.1.1.0/odi_home1
[wilma.acme.com] out: ====================================
[wilma.acme.com] out:
[wilma.acme.com] out: Oracle GoldenGate Command Interpreter for Oracle
[wilma.acme.com] out: Version 12.2.0.1.0 OGGCORE_12.2.0.1.0_PLATFORMS_151101.1925.2_FBO
[wilma.acme.com] out: Linux, x64, 64bit (optimized), Oracle 12c on Nov 11 2015 03:53:23
[wilma.acme.com] out: Operating system character set identified as UTF-8.
[wilma.acme.com] out:
[wilma.acme.com] out: Copyright (C) 1995, 2015, Oracle and/or its affiliates. All rights reserved.
[wilma.acme.com] out:
[wilma.acme.com] out:
[wilma.acme.com] out:
[wilma.acme.com] out: GGSCI (wilma.acme.com) 1>
[wilma.acme.com] out: Program     Status      Group       Lag at Chkpt  Time Since Chkpt
[wilma.acme.com] out:
[wilma.acme.com] out: MANAGER     RUNNING
[wilma.acme.com] out: JAGENT      RUNNING
[wilma.acme.com] out: REPLICAT    RUNNING     RGG12C      00:00:00      00:00:03
[wilma.acme.com] out:
[wilma.acme.com] out:
[wilma.acme.com] out: GGSCI (wilma.acme.com) 2>

Done.
Disconnecting from wilma.acme.com... done.
Disconnecting from fred.acme.com... done.

As you can tell, I get scrolling output from both servers showing that each server was logged into and the directory was changed to $OGG_HOME. An “info all” command was then executed against GGSCI and the output returned. This makes for a quick and easy way to get the current status of all Oracle GoldenGate processes in an environment.

With this being an introduction to Fabric, I’m looking forward to seeing what else I can do with it. I’ll keep everyone posted on additional things I may do with it. But for now, I encourage you to take a look at it and see if you can simplify some of your administration tasks with it.

Enjoy!!

about.me: http://about.me/dbasolved


Filed under: Dev Ops, Golden Gate
Categories: DBA Blogs

SQL Loader- Multibyte character error

Tom Kyte - Wed, 2016-10-05 09:46
Hi Team, When I am trying to load an UTF8 characterset datafile using sqlldr, it fails with Multibyte character error. Steps to reproduce the issue: 1. Create table. create table LoaderTest( rec_no number, rec_id_no nvarchar2(30), col...
Categories: DBA Blogs

How to export .xls file with table data as a attachment from Plsql block

Tom Kyte - Wed, 2016-10-05 09:46
We are using below code for excel property p_file_name:='demo.xls'; UTL_TCP. write_line (v_tcp_connection, 'Content-Type: application/ms-excel;'); ----- 2ND BODY PART. UTL_TCP. write...
Categories: DBA Blogs

Get dates between given periods

Tom Kyte - Wed, 2016-10-05 09:46
I have table tt with columns: skey number, start date, end date Start date and end date will always be month end dates. data will be 1 31/01/2016 31/03/2016 2 29/02/2016 31/03/2016 3 29/02/2016 30/04/2...
Categories: DBA Blogs

SQL to display colours for values

Tom Kyte - Wed, 2016-10-05 09:46
Hi, create table test code number(5)); insert into text values (1); insert into text values (1); insert into text values (2); insert into text values (2); insert into text values (3); insert into text values (4); insert into text values (4...
Categories: DBA Blogs

MATCH_RECOGNIZE and matching to multiple pattern

Tom Kyte - Wed, 2016-10-05 09:46
Hi! I've been playing with MATCH_RECOGNIZE feature in 12c and I've encountered situation which I'm not able to explain myself - so I'd like to ask you for support. Please find the script here: https://livesql.oracle.com/apex/livesql/s/dyhcdwiuu...
Categories: DBA Blogs

Displaying data in timetable grid

Tom Kyte - Wed, 2016-10-05 09:46
Hi Guys Really hope you can help with an issue that I have. I use the code below to output the days of the current week (stored in a table called dates) <code>SELECT to_char(NEXT_DAY(SYSDATE,'MON') + case when to_char(SYSDATE,'D') in (6,7) t...
Categories: DBA Blogs

Scroll to Right

Tom Kyte - Wed, 2016-10-05 09:46
Hi Chris & Connor it is not a question. While reading ask tom's old question answers there are a lot of scroll to right. Very difficult to read conversation that is scrolling to long right. I stop reading when is see right scroll. Can you f...
Categories: DBA Blogs

SQL and/or PL/SQL in ODI vs in the Database

Tom Kyte - Wed, 2016-10-05 09:46
This is sort of a #ThickDB question. I can put functions, procedures, etc. in ODI for it to execute on the database or I can put the functions, procedures, etc. in the database and have ODI execute them when appropriate. I've tried this both ways and...
Categories: DBA Blogs
