DBA Blogs

Able to create a table even when the tablespace is offline, but unable to insert data

Tom Kyte - Wed, 2017-05-17 19:46
Hello Tom, I took one tablespace offline in order to drop it. But as a test case I created a table in that offline tablespace, and I was shocked that I was able to create it. But I am unable to insert rows; inserting throws an error that the tablespace is offline, unable t...
Categories: DBA Blogs

Oracle Messaging

Tom Kyte - Wed, 2017-05-17 19:46
A little background: Today, we have several tables in our DB2 LUW environment where after update, insert, and delete triggers are defined. When these triggers are fired, an MQ Message is formatted and then sent to an IBM MQ message queue via an MQS...
Categories: DBA Blogs

Data crawled by a Python program cannot be imported into Oracle correctly

Tom Kyte - Wed, 2017-05-17 19:46
I crawled the web using a Python program to load some tables into Oracle. Because of the different table structures, the contents of the fields were confused. Now, how do I clean the data and structure it into the table I need? T...
Categories: DBA Blogs

History of a cell

Tom Kyte - Wed, 2017-05-17 19:46
Hello, the purpose of my task is to track the modifications on a table. More precisely, I want to know if a cell has been updated or a new row inserted. I know I can use an audit table to do that, but it is not convenient because with an audit table...
Categories: DBA Blogs

How indexes are handled by reference partitioning

Tom Kyte - Wed, 2017-05-17 19:46
when we have parent child tables partitioned by reference something like <code> CREATE TABLE orders ( order_id NUMBER(12), order_date date, order_mode VARCHAR2(8), customer_id NUMBER(6), ...
Categories: DBA Blogs

filesystem_like_logging is not in ddl of table

Tom Kyte - Wed, 2017-05-17 19:46
Hi, A table with blob column and filesystem_like_logging feature is created using following script: <code>create table contracts_sec_fs ( contract_id number(12), contract_name varchar2(80), file_size numb...
Categories: DBA Blogs

Finished reading the Snowflake database documentation

Bobby Durrett's DBA Blog - Tue, 2017-05-16 13:36

I just finished reading the Snowflake database documentation and I thought I would blog about my impressions. I have not spent a lot of time using Snowflake, so I cannot draw from experience with the product. But I read all the online documentation, so I think I can summarize some of the major themes that I noticed. Also, I have read a lot of Oracle database documentation in the past, so I can compare Oracle’s documentation to Snowflake’s. Prior to reading the online documentation I also read their journal article, which has a lot of great technical details about the internals. So, I intend to include things from both the article and the documentation.

My observations fall into these major categories:

  • Use of Amazon Web Services instead of dedicated hardware for data warehouse
  • Ease of use for people who are not database administrators
  • Use of S3 for storage with partitioning and columnar storage
  • Lots of documentation focused on loading and unloading data
  • Lots of things about storing and using JSON in the database
  • Limited implementation of standard SQL – no CREATE INDEX
  • Computer science terms – from the development team instead of marketing?
  • Role of database administrator – understand architecture and tuning implications
  • Focus on reporting usage of resources for billing
  • JavaScript or SQL stored functions
  • Down side to parallelism – high consumption of resources
  • High end developers but still a new product
  • Maybe specialized purpose – not general purpose database

First let me say that it is very cool how the Snowflake engineers designed a data warehouse database from scratch using Amazon Web Services (AWS). I have worked on a couple of different generations of Exadata as well as HP’s Neoview data warehouse appliance and a large Oracle RAC-based data warehouse, so I have experience with big, expensive, on-site data warehouse hardware. But Snowflake is all in Amazon’s cloud, so you don’t have to shell out a lot of money up front to use it. It makes me think that I should try to come up with some clever use of AWS to get rich and famous. All of the hardware is there waiting to be exploited, and you can start small and then add hardware as needed. So, to me Snowflake is a very neat example of some smart people making use of the cloud. Here is a pretty good page about Snowflake’s architecture and AWS: url

You would not think that I would be happy about a database product that does not need database administrators, since I have been an Oracle database administrator for over 20 years. But it is interesting how Snowflake takes some tasks that DBAs would do on an on-site Oracle database and makes them easy enough for a less technical person to do. There is no software to install because Snowflake is web-based. Creating a database is a matter of pointing and clicking in their easy-to-use web interface. Non-technical users can spin up a group of virtual machines with enormous CPU and memory capacity in minutes. You do not set up backup and recovery. Snowflake comes with a couple of built-in recovery methods that are automatically available. Also, I think that some of the redundancy built into AWS helps with recovery. So, you don’t have Oracle DBA tasks like installing database software, creating databases, choosing hardware, setting memory parameters, or doing RMAN and Data Pump backups. So, my impression is that they did a good job making Snowflake easier to manage. Here is a document about their built-in backup and recovery: url
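
To show what I mean about ease of provisioning, here is a sketch of creating a database and a compute cluster entirely from SQL, based on my reading of the docs. The names (`sales_db`, `report_wh`) are made up, and the size and timeout values are just illustrative:

```sql
-- Create a database and a "warehouse" (Snowflake's term for a compute
-- cluster) with plain SQL statements instead of a hardware project.
CREATE DATABASE sales_db;

CREATE WAREHOUSE report_wh
  WAREHOUSE_SIZE = 'MEDIUM'   -- pick a size instead of buying hardware
  AUTO_SUSPEND   = 300        -- suspend after 5 idle minutes to stop billing
  AUTO_RESUME    = TRUE;      -- wake up automatically on the next query
```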

Now I get to the first negative about Snowflake. It stores the data in AWS’s S3 storage in small partitions and a columnar data format. I first saw this in the journal article, and the documentation reinforced the impression: url1, url2. I’ve used S3 just enough to upload a small file to it and load the data into Snowflake. I think of S3 as AWS’s form of shared storage. But I keep thinking that S3 is too slow for database storage. I’m used to solid state disk storage with 1 millisecond reads, and 200 microsecond reads across a SAN network from a storage device with a large cache of high-speed memory. Maybe S3 is faster than I think, but I would expect locally attached SSD, or SSD over a SAN with a big cache, to be faster. Snowflake seems to get around this problem by having SSD and memory caches in its compute nodes. They call clusters of compute nodes warehouses, which I think is confusing terminology. But from the limited query testing I did and from the reading, I think that Snowflake gets around S3’s slowness with caching. Caching is great for a read-only system. But what about a system with a lot of small transactions? I’ve seen Snowflake do very well with some queries against some large data sets. But I wonder what the downside is to their use of S3. Also, Snowflake stores the data in columnar format, which might not work well for lots of small insert, update, and delete transactions.

I thought it was odd that, out of its relatively small documentation set, Snowflake devotes so much to loading and unloading data. I have read a lot of Oracle documentation. I read the 12c concepts manual and several other manuals while studying for my OCP 12c certification. So I know that, compared to Oracle’s documentation, Snowflake’s is small. But I kept seeing one thing after another about how to load data. Here are some pages: url1, url2. I assume that their data load/unload statements are not part of the SQL standard, so maybe they de-emphasized documenting normal SQL constructs and focused on their custom syntax. Also, they can’t make any money until their customers get their data loaded, so maybe loading data is a business priority for Snowflake. I’ve uploaded a small amount of data, so I’m a little familiar with how it works. But, generally, the data movement docs are pretty hard to follow. It is kind of strange: the web interface is so nice and easy to use, but the upload and download syntax seems clumsy. Maybe that is why they have so much documentation about it?
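
To give a flavor of the load syntax the docs spend so much time on, here is a sketch of a stage-based load as I understand it. The stage, file, and table names are all made up:

```sql
-- Upload a local file to an internal stage (PUT is run from the
-- SnowSQL client), then bulk load the staged file into a table.
PUT file:///tmp/orders.csv @my_stage;

COPY INTO orders
  FROM @my_stage/orders.csv
  FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1);
```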

Snowflake also seems to have a disproportionate amount of documentation about using JSON in the database. Is this a SQL database or not? I’m sure that there are Oracle manuals about using JSON, and of course there are other databases that combine SQL and JSON, but out of the relatively small Snowflake documentation set there was a fair amount about JSON. At least, that is my impression reading through the docs. Maybe they have customers with a lot of JSON data from various web sources, and they want to load it straight into the database instead of extracting information and putting it into normal SQL tables. Here is an example JSON doc page: url
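
From what I read, the JSON support centers on a VARIANT column type that you can query with path notation. A sketch, with made-up table and field names:

```sql
-- Store raw JSON in a VARIANT column and query it directly.
CREATE TABLE events (v VARIANT);

INSERT INTO events
  SELECT PARSE_JSON('{"user": {"name": "alice"}, "items": [1, 2, 3]}');

SELECT v:user.name::STRING AS user_name,   -- drill into nested objects
       f.value::NUMBER     AS item
FROM events,
     LATERAL FLATTEN(input => v:items) f;  -- explode the array into rows
```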

Snowflake seems to have based their product on the SQL standard, but they did not fully implement it. For one thing, there is no CREATE INDEX statement. Uggh. The lack of indexes reminds me strongly of Exadata. When we first got on Exadata, they recommended dropping your indexes and using Smart Scans instead. But it isn’t hard to build a query on Exadata that runs much faster with a simple index. If you are looking up a single row with a unique key, a standard btree index with a sequential, i.e. non-parallel, query is pretty fast. The lack of CREATE INDEX, combined with the use of S3 and the columnar organization of the data, makes me think that Snowflake would not be great for record-at-a-time queries and updates. Of course, an Oracle database excels at record-at-a-time processing, so I can’t help thinking that Snowflake won’t replace Oracle with its current architecture. Here is a page listing all the things that you can create, not including index: url
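
Here is the kind of single-row lookup I have in mind, sketched with a hypothetical `orders` table:

```sql
-- Oracle: a unique btree index makes a single-row lookup nearly instant.
CREATE UNIQUE INDEX orders_id_ix ON orders (order_id);
SELECT * FROM orders WHERE order_id = 12345;  -- index unique scan

-- Snowflake: there is no CREATE INDEX, so the same lookup relies on
-- micro-partition pruning at best, or falls back to scanning the table.
SELECT * FROM orders WHERE order_id = 12345;
```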

Snowflake sprinkled their article and documentation with computer science terms. I don’t recall seeing these types of things in Oracle’s documentation. For example, they have a fair amount of documentation about HyperLogLog. What in the world? HyperLogLog is an algorithm for estimating the number of distinct values in a large table without reading every row. I guess Oracle has various algorithms under the covers to estimate cardinality, but they don’t spell out the computer science term for it. At least that’s my impression. And the point of this blog post is to give my impressions, not to present some rigorous proof through extensive testing. As a reader of Oracle manuals, I just got a different feel from Snowflake’s documentation. Maybe a little more technical in its presentation than Oracle’s. It seems that Snowflake has some very high-end software engineers with a lot of specialized computer science knowledge, and maybe some of that leaks out into the documentation. As another example, their random function references the name of the underlying algorithm: url. Contrast this with Oracle’s doc: url. Oracle just tells you how to use it; Snowflake tells you the algorithm name. Maybe Snowflake wants to impress us with their computer science knowledge?
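
As I understand the docs, the HyperLogLog machinery surfaces as an approximate distinct-count function. A sketch, with a made-up table:

```sql
-- Exact distinct count: reads and hashes every value.
SELECT COUNT(DISTINCT customer_id) FROM orders;

-- HyperLogLog-based estimate: much cheaper on large tables,
-- at the cost of a small approximation error.
SELECT HLL(customer_id) FROM orders;
```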

Reading the Snowflake docs made me think about the role of a database administrator with Snowflake. Is there a role? Of course, since I have been an Oracle DBA for over 20 years, I have a vested interest in keeping my job. But it’s not like Oracle is going away. There are a bazillion Oracle systems out there, and if all the new students coming into the workforce decide to shy away from Oracle, that leaves more for me to support for the rest of my career. But I’m not in love with Oracle, or even SQL databases, or database technology. Working with Oracle, and especially in performance tuning, has given me a way to use my computer science background, and Oracle has challenged me to learn new things and solve difficult problems. I could move away from Oracle into other areas where I could use computer science and work on interesting and challenging problems. I can see using my computer science, performance tuning, and technical problem solving skills with Snowflake. Companies need people like myself who understand Oracle internals, or at least who are pursuing an understanding of them. Oracle is proprietary and complicated. Someone outside of Oracle probably cannot know everything about it. It seems that people who understand Snowflake’s design may have a similar role to play. I don’t want to get off on a tangent, but I think that people tend to overestimate what Oracle can do automatically. With large amounts of data and challenging requirements, you need some human intervention by people who really understand what’s going on. I would think that the same would be true with Snowflake. You need people who understand why some queries are slow and how to improve their performance. There are not as many knobs to turn in Snowflake; hardly any, really. But there is clustering: url1,url2,url3. You also get to choose which columns fit into which tables and the order in which you load the data, like you can on any SQL database. Snowflake exposes execution plans and has execution statistics: url1,url2,url3. So it seems that Snowflake has taken away a lot of the traditional DBA tasks, but my impression is that there is still a role for someone who can dig into the internals and figure out how to make things go faster and help resolve problems.
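
Clustering is about the only physical-design knob I found. A sketch of how I understand it from the docs, using a hypothetical `orders` table:

```sql
-- Declare a clustering key so related rows land in the same
-- micro-partitions, improving pruning for queries on these columns.
ALTER TABLE orders CLUSTER BY (order_date, customer_id);

-- Report how well the table is clustered on a candidate key.
SELECT SYSTEM$CLUSTERING_INFORMATION('orders', '(order_date)');
```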

Money is the thing. There are a lot of money-related features in the Snowflake documentation. You need to know how much money you are spending and how to control your costs. I guess that it is inevitable with a web-based service that you need to have features related to billing. A couple of examples: url1, url2
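
One cost-control feature I noticed is the resource monitor. A sketch as I understand it; the monitor and warehouse names are made up, and the quota is illustrative:

```sql
-- Cap spend with a resource monitor: warn at 90% of the credit
-- quota and suspend the attached warehouse at 100%.
CREATE RESOURCE MONITOR monthly_cap
  WITH CREDIT_QUOTA = 100
  TRIGGERS ON 90  PERCENT DO NOTIFY
           ON 100 PERCENT DO SUSPEND;

ALTER WAREHOUSE report_wh SET RESOURCE_MONITOR = monthly_cap;
```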

Snowflake has SQL and JavaScript based user-defined functions. These seem more basic than Oracle’s PL/SQL. Here is a link: url
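
Here is a sketch of a JavaScript UDF as I understand the docs. One quirk worth noting: inside the JavaScript body, argument names are referenced in uppercase.

```sql
-- A simple JavaScript user-defined function.
CREATE OR REPLACE FUNCTION factorial(n FLOAT)
  RETURNS FLOAT
  LANGUAGE JAVASCRIPT
AS $$
  var result = 1;
  for (var i = 2; i <= N; i++) {  // argument n is referenced as N
    result *= i;
  }
  return result;
$$;

SELECT factorial(5);  -- 120
```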

There are some interesting things about limiting the number of parallel queries that can run on a single Snowflake warehouse (compute cluster). I’ve done a fair amount of work on Oracle data warehouses with people running a bunch of parallel queries against large data sets. Parallelism is great because you can speed up a query by breaking its execution into pieces that the database can run at the same time. But then each user that is running a parallel query can consume more resources than they could running serially. Snowflake has the same issues. They have built-in limits on how many queries can run against a warehouse to keep it from getting overloaded. These remind me of some of the Oracle init parameters related to parallel query execution. Some URLs: url1, url2, url3. In my opinion, parallelism is not a silver bullet. It works great in proofs of concept with a couple of users on your system. But then load up your system with users from all over your company and see how well it runs. Of course, one nice thing about Snowflake is that you can easily increase your CPU and memory capacity as your needs grow. But it isn’t free. At some point it becomes worth it to write more efficient queries so that you don’t consume so many resources. At least, that’s my opinion based on what I’ve seen on Oracle data warehouses.
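
From my reading, the concurrency limits are warehouse parameters. A sketch, with a hypothetical warehouse name and illustrative values:

```sql
-- Limit how many statements run concurrently on one warehouse;
-- excess statements queue, and can time out of the queue.
ALTER WAREHOUSE report_wh SET
  MAX_CONCURRENCY_LEVEL               = 4,
  STATEMENT_QUEUED_TIMEOUT_IN_SECONDS = 600;
```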

I’m not sure if I got this information from the article or the documentation or somewhere else. But I think of Snowflake as new. It seems to have some high-end engineers behind it who have worked for several years putting together a system that makes innovative use of AWS. The limited manual set, the technical terms in the docs, the journal article all make me think of a bunch of high-tech people working at a startup. A recent Twitter post said that Snowflake now has 500 customers. Not a lot in Oracle terms. So, Snowflake is new. Like any new product it has room to grow. My manager asked me to look into technical training for Snowflake. They don’t have any. So, that’s why I read the manuals. Plus, I’m just a manual reader by nature.

My impression from all of this reading is that Snowflake has a niche. Oracle tries to make their product all things to all people. It has every feature but the kitchen sink. They have made it bloated, with one expensive add-on option after another. Snowflake is leaner and newer. I have no idea how much Snowflake costs, but assuming that it is reasonable, I can see it having value if companies use it where it makes sense. But I think it would be a mistake to blindly start using Snowflake for every database need. You probably don’t want to build a high-end transactional system on top of it. Not without indexes! But it does seem pretty easy to get a data warehouse set up on Snowflake without all the time-consuming setup of an on-premises data warehouse appliance like Exadata. I think you just need to prepare yourself for missing features and for some things not to work as well as they do on a more mature database like Oracle. Also, with a cloud model you are at the mercy of the vendor. In my experience employees have more skin in the game than outside vendors. So, you sacrifice some control and some commitment for ease of use. It is a form of outsourcing. But outsourcing is fine if it meets your business needs. You just need to understand the pros and cons of using it.

To wrap up this very long blog post, I hope that I have been clear that I’m just putting out my impressions without a lot of testing to prove that I’m right. This post is trying to document my own thoughts about Snowflake based on the documentation and my experience with Oracle. There is a sense in which no one can say that I am wrong about the things that I have written as long as I present my honest thoughts. I’m sure that a number of things that I have written are wrong in the sense that testing and experience with the product would show that my first impressions from the manuals were wrong. For example, maybe I could build a transactional system and find that Snowflake works better than I thought. But, for now I’ve put out my feeling that it won’t work well and that’s just what I think. So, the post has a lot of opinion without a ton of proof. The links show things that I have observed so they form a type of evidence. But, with Oracle the documentation and reality don’t always match so it probably is the same with Snowflake. Still, I hope this dump of my brain’s thoughts about the Snowflake docs is helpful to someone. I’m happy to discuss this with others and would love any feedback about what I have written in this post.


Categories: DBA Blogs

performance tuning views

Tom Kyte - Tue, 2017-05-16 07:26
My main problem is that I want to see all the details about a running or executed query or procedure so I can take action like performance tuning and query optimization. So what are the views and tables available in an Oracle 11g R2 database so I can fou...
Categories: DBA Blogs

Timestamp + interval arithmetic fails for precision over 6

Tom Kyte - Tue, 2017-05-16 07:26
Hi team, I have a problem with adding/subtracting the Interval datatype to/from the Timestamp datatype when the precision of the interval is between 7 and 9. It seems that it doesn't work correctly; see results 'EQ' (equals = wrong) in the example below. The second q...
Categories: DBA Blogs

Active Tables

Tom Kyte - Tue, 2017-05-16 07:26
Hello, I have a need to find out "active" tables in an 11.2 (and 12.1) database. Active means a table that has been involved in any DML or SELECT statement. I could use v$sql to get the full text of the statement and figure it out. Besid...
Categories: DBA Blogs

procedure call count

Tom Kyte - Tue, 2017-05-16 07:26
I need a query to count calls to a specific procedure in each session over all schemas per hour. This procedure exists in a package which might be called concurrently by different sessions in different schemas. What I'm doing now is to collect the users...
Categories: DBA Blogs


Tom Kyte - Mon, 2017-05-15 13:06
Hello. Problem: when the bind variable for the organization code is null, the optimizer will choose the index with the organization code column; otherwise it will choose the index without the organization code column. This seems backwards. Even a...
Categories: DBA Blogs

ORA-12560: TNS:protocol adapter error

Tom Kyte - Mon, 2017-05-15 13:06
I'm running Oracle on a standalone computer at home. It used to work properly, but for a couple of days I have been getting an error stating "<b>ORA-12560: TNS:protocol adapter error</b>". What should I do? Please help.. ...
Categories: DBA Blogs

BULK COLLECT/FORALL statements with dynamic query and table name- Oracle PL/SQL

Tom Kyte - Mon, 2017-05-15 13:06
Hi all, I need help in optimizing this query to use BULK COLLECT and FORALL statements. I have created backup tables (BCK_xxxx) to copy all data from original tables (ORIG_xxx), but I am having problems converting this to bulk collect. Most example...
Categories: DBA Blogs

Identifying Dynamic Sampling

Tom Kyte - Mon, 2017-05-15 13:06
Hi there, I have a few databases that started using dynamic sampling on large tables after we upgraded to version 12.1. I'm doing my best to keep the table stats updated; still, it happens that someone loads a large number of rows to a table then run...
Categories: DBA Blogs


Tom Kyte - Mon, 2017-05-15 13:06
Hi Tom, I would appreciate your response to the below: I am using the following command to copy files on the application server. These files are NX4 files (.prt): Utl_File.Fcopy('ENG_PATH', from_file_name, 'ENG_PATH', to_file_name) --- where ENG_PATH...
Categories: DBA Blogs

SHORTCUT component not working.

Tom Kyte - Mon, 2017-05-15 13:06
I am trying to use the Oracle APEX shortcut component for reference, but it's not working; it just prints the shortcut name (plain text). Is it an APEX 5.1 bug or my mistake? I am not sure. Expecting a perfect solution :)
Categories: DBA Blogs

12c Patching Resume with Nonrolling Option While Analyze - JSON Magic

Pakistan's First Oracle Blog - Mon, 2017-05-15 01:41
I was engaged in an interesting Oracle 12c patching exercise today. Patch applicability was checked by using:

"$GRID_HOME/OPatch/opatchauto apply /u01/app/oracle/software/24436306 -analyze"

and it failed because it is a non-rolling patch:

"OPATCHAUTO-72085: Cannot execute in rolling mode, as execution mode is set to non-rolling for patch ID 24315824.
OPATCHAUTO-72085: Execute in non-rolling mode by adding option '-nonrolling' during execution. e.g. /OPatch/opatchauto apply -nonrolling
After fixing the cause of failure Run opatchauto resume with session id "F7ET "]"

So now I wanted to analyze the patch with the -nonrolling option:

$GRID_HOME/OPatch/opatchauto apply /u01/app/oracle/software/24436306 -analyze -nonrolling

OPatchauto session is initiated at Mon May 15 01:32:43 2017
Exception in thread "main" java.lang.NoClassDefFoundError: oracle/ops/mgmt/cluster/NoSuchExecutableException
        at com.oracle.glcm.patch.auto.db.util.SystemInfoGenerator.loadOptions(SystemInfoGenerator.java:322)
        at com.oracle.glcm.patch.auto.db.util.SystemInfoGenerator.validateOptions(SystemInfoGenerator.java:280)
        at com.oracle.glcm.patch.auto.db.util.SystemInfoGenerator.main(SystemInfoGenerator.java:134)
Caused by: java.lang.ClassNotFoundException: oracle.ops.mgmt.cluster.NoSuchExecutableException
        at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        ... 3 more

OPatchauto session completed at Mon May 15 01:32:44 2017
Time taken to complete the session 0 minute, 1 second

opatchauto bootstrapping failed with error code 1.


In 12c, patching sessions and their configuration are kept in JSON files.

So go to the directory $GRID_HOME/OPatch/auto/dbsessioninfo/ and find the JSON file with session id F7ET, which was given in the above error.

Edit this file and change the nonrolling flag to true. The file currently contains:

      "key" : "nonrolling",
      "value" : "false"

Change the above to:

      "key" : "nonrolling",
      "value" : "true"

Save the file and run the opatchauto analyze again, resuming the session:

$GRID_HOME/OPatch/opatchauto resume -session F7ET

and it works!!!

Happy Patching!!!

Categories: DBA Blogs

12c MultiTenant Posts -- 1 : Creating a PDB in a different directory

Hemant K Chitale - Sun, 2017-05-14 04:12
What if the current (default) location for Pluggable Databases is running out of space and yet you want to create an Oracle Managed Files Pluggable Database?

First, I start with this configuration :

SQL> select con_id, file#, substr(name,1,56)
  2  from v$datafile
  3  order by 1,2;

    CON_ID      FILE#
---------- ----------
         1          1
         1          3
         1          7
         1         15
         2          5
         2          6
         2          8
         3          9
         3         10
         3         11
         3         12
         3         13
         3         14

13 rows selected.


Currently, I have one Pluggable Database (CON_ID=3) but none of the database files are OMF. I don't have enough space in /u01 and want to create the new PDB in /u02 and also use OMF. As long as I have /u02/oradata precreated by the system administrator, I can:

SQL> show parameter db_create_file

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_create_file_dest                  string
SQL> alter session set db_create_file_dest='/u02/oradata';

Session altered.

SQL> create pluggable database NEWPDB admin user newpdb_adm identified by newpdb_adm;

Pluggable database created.

SQL> alter pluggable database NEWPDB open;

Pluggable database altered.

SQL> select con_id, open_mode
  2  from v$pdbs
  3  where name = 'NEWPDB'
  4  /

    CON_ID OPEN_MODE
---------- ----------
         4 READ WRITE
SQL> select file#, name
2 from v$datafile
3 where con_id=4
4 order by file#
5 /




SQL> show parameter db_unique_name

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_unique_name                       string      orcl12c

Note how the parent folder name "ORCL12C" is inherited from the DB_UNIQUE_NAME. I can now proceed to set up this new PDB. Later, I can migrate it as an OMF PDB to another Container Database.
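
The migration would use the 12c unplug/plug mechanism. A sketch of the steps; the manifest path here is hypothetical:

```sql
-- In the source CDB: unplug the PDB to an XML manifest.
ALTER PLUGGABLE DATABASE NEWPDB CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE NEWPDB UNPLUG INTO '/u02/oradata/newpdb.xml';

-- In the target CDB: plug it in, copying the datafiles
-- (with db_create_file_dest set, the copies are OMF).
CREATE PLUGGABLE DATABASE NEWPDB USING '/u02/oradata/newpdb.xml' COPY;
ALTER PLUGGABLE DATABASE NEWPDB OPEN;
```
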
Categories: DBA Blogs

Database Link

Tom Kyte - Sun, 2017-05-14 00:26
Hi, we have the following scenario: DB1; user Monitor in DB1; procedure1 in DB1 owned by Monitor; DB2; user Monitor in DB2; user U1 in DB2; user U2 in DB2; create database link getcode connect to monitor identified by xxxx using 'DB1'; the q...
Categories: DBA Blogs


Subscribe to Oracle FAQ aggregator - DBA Blogs