Feed aggregator

Run SQL from the Windows cmd prompt (CLI)

Learn DB Concepts with me... - Wed, 2016-05-11 09:36

Method 1:

set ORACLE_HOME=D:\app\oracle\product\11.2.0\db_1
set ORACLE_SID=ORCL

sqlplus -s "USERNAME/PASSWORD" @C:\Shell\Drop_objs.sql

sqlplus -s "USERNAME/PASSWORD" @C:\Shell\Change_pass.sql
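The contents of the two scripts are site-specific, but for this pattern to be robust the script should tell SQL*Plus how to exit. Here is a minimal hypothetical sketch (object names and paths are examples, not the actual contents of Drop_objs.sql):

```sql
-- Hypothetical skeleton for a script invoked this way.
-- WHENEVER SQLERROR makes sqlplus return a non-zero exit code,
-- so the calling cmd script can test %ERRORLEVEL% after the call.
whenever sqlerror exit failure
set echo on
spool C:\Shell\drop_objs.log

drop table scratch_table purge;   -- example object; substitute your own

spool off
exit success
```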



Method 2:

set ORACLE_HOME=D:\app\oracle\product\11.2.0\db_1
set ORACLE_SID=ORCL
(ECHO select username from dba_users where username in ('SCOTT');
ECHO exit;) | sqlplus -s "USER/PASS" > C:\Shell\test_out.txt



Categories: DBA Blogs

Oracle Midlands : Event #15

Tim Hall - Wed, 2016-05-11 08:42

Don’t forget Oracle Midlands Event #15 next week!

om15

Please show your support and come along. It’s free thanks to the sponsorship by RedStackTech.

Cheers

Tim…

Oracle Midlands : Event #15 was first posted on May 11, 2016 at 2:42 pm.

MobaXterm 9.0

Tim Hall - Wed, 2016-05-11 08:21

Another recent release that I managed to miss was MobaXterm 9.0.

dbms_xplan

Jonathan Lewis - Wed, 2016-05-11 06:22

My favourite format options for dbms_xplan.display_cursor().

This is another of those posts where I tell you about something that I’ve frequently mentioned but never documented explicitly as a good (or, at least, convenient) idea. It’s also another example of how easy it is to tell half the story most of the time when someone asks a “simple” question.

You’re probably familiar with the idea of “tuning by cardinality feedback” – comparing the predicted data volumes with the actual data volumes from an execution plan – and I wrote a short note about how to make that comparison last week; and you’re probably familiar with making a call to dbms_xplan.display_cursor() after enabling the capture of rowsource execution statistics (in one of three ways) for the execution of the query, and the format parameter usually suggested for the call is ‘allstats last’ to get the execution stats for the most recent execution of the query. I actually like to see the Cost column of the execution plan as well, so I usually add that to the format, so (with all three strategies shown for an SQL*Plus environment):

set linesize 180
set trimspool on
set pagesize 60
set serveroutput off

alter session set "_rowsource_execution_statistics"=true;
alter session set statistics_level=all;

select /*+ gather_plan_statistics */ * from user_tablespaces;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last cost'));

So what do we often forget to mention:

  • For SQL*Plus it is important to ensure that serveroutput is off
  • The /*+ gather_plan_statistics */ option uses sampling, so may be a bit inaccurate
  • The two accurate strategies may add a significant, sometimes catastrophic, amount of CPU overhead
  • This isn’t appropriate if the query runs parallel

For a parallel query the “last” execution of a query is typically carried out by the query co-ordinator, so the rowsource execution stats of many (or all) of the parallel execution slaves are likely to disappear from the output. If you’re testing with parallel queries you need to add some “tag” text to the query to make it unique and omit the ‘last’ option from the format string.
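Putting those two pieces of advice together, this is a sketch of what a tagged parallel test might look like (the tag comment is an arbitrary, hypothetical string whose only purpose is to make the sql_id unique):

```sql
-- The /* tag: run 42 */ comment makes this statement's sql_id unique,
-- so we can be confident we're reporting on the execution we just made.
select /*+ parallel(2) */ /* tag: run 42 */
       object_type, sum(object_id)
from   t1
group by object_type
order by object_type;

-- Note: no 'last' in the format string for the parallel case.
select * from table(dbms_xplan.display_cursor(null,null,'allstats parallel cost'));
```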

Now, a common suggestion is that you need to add the ‘all’ format option instead – but this doesn’t mean “all executions”, it means (though doesn’t actually deliver) all the data that’s available about the plan. So here’s an execution plan produced after running a parallel query and using ‘allstats all’ as the format option (t1 is a copy of all_objects, and this demo is running on 12.1.0.2).

SQL_ID  51u5j42rvnnfg, child number 1
-------------------------------------
select  /*+   parallel(2)  */  object_type,  sum(object_id) from t1
group by object_type order by object_type

Plan hash value: 2919148568

------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                | Name     | Starts | E-Rows |E-Bytes| Cost (%CPU)| E-Time   |    TQ  |IN-OUT| PQ Distrib | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem |  O/1/M   |
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT         |          |      1 |        |       |   113 (100)|          |        |      |            |     30 |00:00:00.04 |       5 |      0 |       |       |          |
|   1 |  PX COORDINATOR          |          |      1 |        |       |            |          |        |      |            |     30 |00:00:00.04 |       5 |      0 |       |       |          |
|   2 |   PX SEND QC (ORDER)     | :TQ10001 |      0 |     30 |   420 |   113   (9)| 00:00:01 |  Q1,01 | P->S | QC (ORDER) |      0 |00:00:00.01 |       0 |      0 |       |       |          |
|   3 |    SORT GROUP BY         |          |      2 |     30 |   420 |   113   (9)| 00:00:01 |  Q1,01 | PCWP |            |     30 |00:00:00.01 |       0 |      0 |  2048 |  2048 |     2/0/0|
|   4 |     PX RECEIVE           |          |      2 |     30 |   420 |   113   (9)| 00:00:01 |  Q1,01 | PCWP |            |     50 |00:00:00.01 |       0 |      0 |       |       |          |
|   5 |      PX SEND RANGE       | :TQ10000 |      0 |     30 |   420 |   113   (9)| 00:00:01 |  Q1,00 | P->P | RANGE      |      0 |00:00:00.01 |       0 |      0 |       |       |          |
|   6 |       HASH GROUP BY      |          |      2 |     30 |   420 |   113   (9)| 00:00:01 |  Q1,00 | PCWP |            |     50 |00:00:00.05 |    1492 |   1440 |  1048K|  1048K|     2/0/0|
|   7 |        PX BLOCK ITERATOR |          |      2 |  85330 |  1166K|   105   (2)| 00:00:01 |  Q1,00 | PCWC |            |  85330 |00:00:00.03 |    1492 |   1440 |       |       |          |
|*  8 |         TABLE ACCESS FULL| T1       |     26 |  85330 |  1166K|   105   (2)| 00:00:01 |  Q1,00 | PCWP |            |  85330 |00:00:00.01 |    1492 |   1440 |       |       |          |
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

   1 - SEL$1
   8 - SEL$1 / T1@SEL$1

Predicate Information (identified by operation id):
---------------------------------------------------

   8 - access(:Z>=:Z AND :Z<=:Z)

Column Projection Information (identified by operation id):
-----------------------------------------------------------

   1 - "OBJECT_TYPE"[VARCHAR2,23], SUM()[22]
   2 - (#keys=0) "OBJECT_TYPE"[VARCHAR2,23], SUM()[22]
   3 - (#keys=1; rowset=200) "OBJECT_TYPE"[VARCHAR2,23], SUM()[22]
   4 - (rowset=200) "OBJECT_TYPE"[VARCHAR2,23], SYS_OP_MSR()[25]
   5 - (#keys=1) "OBJECT_TYPE"[VARCHAR2,23], SYS_OP_MSR()[25]
   6 - (rowset=200) "OBJECT_TYPE"[VARCHAR2,23], SYS_OP_MSR()[25]
   7 - (rowset=200) "OBJECT_ID"[NUMBER,22], "OBJECT_TYPE"[VARCHAR2,23]
   8 - (rowset=200) "OBJECT_ID"[NUMBER,22], "OBJECT_TYPE"[VARCHAR2,23]

Note
-----
   - Degree of Parallelism is 2 because of hint


48 rows selected.

You’ll notice we’ve reported the “alias” and “projection” information – those are two of the format options that you can use with a + or – to include or exclude if you want. We’ve also got E-Bytes and E-time columns in the body of the plan. In other words (at least in my opinion) we’ve got extra information that makes the output longer and wider and therefore harder to read.

The format string I tend to use for parallel query is ‘allstats parallel cost’ – which (typically) gives something like the following:

SQL_ID  51u5j42rvnnfg, child number 1
-------------------------------------
select  /*+   parallel(2)  */  object_type,  sum(object_id) from t1
group by object_type order by object_type

Plan hash value: 2919148568

-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                | Name     | Starts | E-Rows | Cost (%CPU)|    TQ  |IN-OUT| PQ Distrib | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem |  O/1/M   |
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT         |          |      1 |        |   113 (100)|        |      |            |     30 |00:00:00.04 |       5 |      0 |       |       |          |
|   1 |  PX COORDINATOR          |          |      1 |        |            |        |      |            |     30 |00:00:00.04 |       5 |      0 |       |       |          |
|   2 |   PX SEND QC (ORDER)     | :TQ10001 |      0 |     30 |   113   (9)|  Q1,01 | P->S | QC (ORDER) |      0 |00:00:00.01 |       0 |      0 |       |       |          |
|   3 |    SORT GROUP BY         |          |      2 |     30 |   113   (9)|  Q1,01 | PCWP |            |     30 |00:00:00.01 |       0 |      0 |  2048 |  2048 |     2/0/0|
|   4 |     PX RECEIVE           |          |      2 |     30 |   113   (9)|  Q1,01 | PCWP |            |     50 |00:00:00.01 |       0 |      0 |       |       |          |
|   5 |      PX SEND RANGE       | :TQ10000 |      0 |     30 |   113   (9)|  Q1,00 | P->P | RANGE      |      0 |00:00:00.01 |       0 |      0 |       |       |          |
|   6 |       HASH GROUP BY      |          |      2 |     30 |   113   (9)|  Q1,00 | PCWP |            |     50 |00:00:00.05 |    1492 |   1440 |  1048K|  1048K|     2/0/0|
|   7 |        PX BLOCK ITERATOR |          |      2 |  85330 |   105   (2)|  Q1,00 | PCWC |            |  85330 |00:00:00.03 |    1492 |   1440 |       |       |          |
|*  8 |         TABLE ACCESS FULL| T1       |     26 |  85330 |   105   (2)|  Q1,00 | PCWP |            |  85330 |00:00:00.01 |    1492 |   1440 |       |       |          |
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   8 - access(:Z>=:Z AND :Z<=:Z)

Note
-----
   - Degree of Parallelism is 2 because of hint


30 rows selected.

Of course you may prefer ‘allstats all’ – and sometimes I do actually want to see the alias or projection information – but I think there’s so much information available on the execution plan output that anything that makes it a little shorter, cleaner and tidier is a good thing.

You might have noticed, by the way, that the Buffers, Reads, and A-Time columns have still managed to lose information on the way up from operation 6; information that should have been carried up the plan has simply disappeared. Make sure you do a sanity check for disappearing numbers when you’re looking at more complex plans.

 


DBaaS in Oracle Public Cloud

Pat Shuff - Wed, 2016-05-11 02:07
Before we dive deep into database as a service with Oracle we need to define some terms. We have thrown around concepts like Standard Edition, Enterprise Edition, High Performance Edition, and Extreme Performance Edition. We have talked about concepts like DataGuard, Real Application Clusters, Partitioning, and Compression. Today we will dive a little deeper into this so that we can focus on comparing them running in the Oracle Public Cloud as well as other cloud providers.

First, let's tackle Standard Edition (SE) vs Enterprise Edition (EE). Not only is there SE, there are also SE One (SE1) and SE2. SE2 is new with the 12c release of the database and is essentially the same as SE and SE1 but with different processor and socket restrictions. The Oracle 12c documentation details the differences between the versions; we will highlight the key differences here. Note that basic data storage is the same across editions: the data types do not change between the versions of the database, and a select statement that works in SE will work in SE2 and will work in EE.

The first big difference between SE and EE is that SE is licensed on a per-socket basis and EE is licensed on a per-core basis. The base cost of an SE system is $600 per month per processor in the Oracle Public Cloud. Standard Edition is limited to 8 cores in the cloud. If you are purchasing a perpetual license, the cost is $17,500, and the license can run across two sockets, or single sockets on two systems. SE2 comes with a Real Application Clusters (RAC) license so that you can have a single instance running on two computers. The SE2 instance will also limit the database to 16 threads, so running on more cores will have no advantage. To learn more about the differences and limitations, I recommend reading Mike Dietrich's blog on SE2.

The second big difference is that many of the optional features are not available with SE. For example, you can't use diagnostics and tuning to figure out if your SQL statement is running at top efficiency. You can't use multi-tenant, but you can provision a single pluggable database. This means that you can unplug the database and move it to another container database (even one running another version, like EE). The multi-tenant option allows you to have multiple pluggable databases and control them with a master SGA. This allows admins to back up and patch a group of databases all at once rather than having to patch each one individually. You can separate security and have different logins to the different databases, but use a global system or sys account to manage and control all of the databases. Storage optimization features like compression and partitioning are not available in SE either. Data recovery features like DataGuard and FlashBack are not supported in SE. DataGuard is a feature that copies changes from one system through the change logs and applies them to the second system. FlashBack does something similar: it allows you to query a database as of a previous time and return the state of the database at that time, using the change log to reconstruct the database as it was at the time requested. Tools like RMAN backup and streams don't work in SE, and taking a copy of a database and copying it to another system is not allowed. The single exception to this is that RMAN works in the cloud instance, but not in the perpetual on-premise version. Security features like Transparent Data Encryption, Label Security, Data Vault, and Audit Vault are not supported in SE. The single exception is Transparent Data Encryption, which is supported for SE in the public cloud to allow for encryption there. All of these features are described here.
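As an illustration of the unplug/plug path mentioned above, moving a pluggable database between container databases is a short sequence of DDL commands (a sketch only; the PDB name and XML manifest path are hypothetical):

```sql
-- On the source container database:
alter pluggable database pdb1 close immediate;
alter pluggable database pdb1 unplug into '/u01/app/oracle/pdb1.xml';

-- On the target container database (which could be a different edition):
create pluggable database pdb1 using '/u01/app/oracle/pdb1.xml' nocopy;
alter pluggable database pdb1 open;
```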

With Enterprise Edition in the Oracle Public Cloud at $3K/OCPU/month or $5.04/OCPU/hour, the only option that we get bundled with the database is Transparent Data Encryption (TDE). This allows us to encrypt all or part of a table. TDE encrypts data on the disk when it is written with a SQL insert or update command. Keys are used to encrypt this data and can only be read by presenting the keys using the Oracle Wallet interface. More information on TDE can be found here. The Security Inside Out blog is also a good place to look for updates and references relating to TDE. This version of the database allows us to scale up to 16 processors and 4.6 TB of storage. If we are looking to back up this database, the largest size that we can have for storage is 2.3 TB. If our table requirements are greater than 2.3 TB or 4.6 TB, you need to go to Exadata as a Service or purchase a perpetual license and run it on-premise. If we are looking to run this database in our data center we will need to purchase a perpetual license for $47.5K per processor license. If you are running on an IBM Power Server you need to license each core as a processor. If you are running on x86 or Sparc servers you multiply the number of cores by 0.5 and can run two cores per processor license. TDE is part of the Advanced Security Option, which lists for $15K per processor license. When calculating whether it is cheaper to run on-premise vs the public cloud, you need to factor in both license requirements. The same is true if you decide to run EE in AWS EC2 or Azure Compute. Make sure to read Cloud Licensing Requirements to understand the limits of the cost of running on EC2 or Azure Compute. Since all cloud providers use x86 processors, the multiplication factor is 0.5 times the number of cores on the service.

The High Performance Edition contains the EE features, TDE, as well as multi-tenant, partitioning, advanced compression, advanced security, real application testing, OLAP, DataGuard, and all of the database management packs. This is basically everything with the exception of Real Application Clusters (RAC), Active DataGuard, and the In-Memory option. High Performance comes in at $4K/processor/month or $6.72/OCPU/hour. If we wanted to bundle all of this together and run it in our data center, we would need to compare the database at $47.5K/processor license plus roughly $15K/processor/option (there are 12 of them). We can then calculate which is cheaper based on our accounting rules and amortization schedule. The key differential is that I can use this version on an hourly or monthly basis for less than a full year. For example, if we do patch testing once a quarter and allocate three weeks a quarter to test if the patch is good or bad, we only need 12 weeks a year to run the database. This basically costs us $12K/processor/year to test on a single processor and $24K on a dual processor. If we purchased the system it would cost us $47.5K in capital expenditure plus 22% annually for support. Paying this amount just to do patch testing does not make sense. With the three year cost of ownership, running this on-premise will cost us $78,850. If we use the metered services in the public cloud this will cost us $72K. The $6,850 does not seem like a lot, but with the public cloud service we won't need to pay for the hardware, storage, or operating system. We can provision the cloud service in an hour and replicate our on-site data to the cloud for the testing. If we did this with a computer or virtual image on site, it would take hours or days to provision a new computer, storage, database, and replicate the data.
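The arithmetic behind those two figures can be checked directly; this just restates the numbers above (amounts in dollars):

```sql
-- On-premise: $47.5K perpetual license plus 22% annual support over 3 years.
select 47500 + (47500 * 0.22 * 3) as on_premise_3yr from dual;
-- 78850, the $78,850 quoted above

-- Metered cloud: roughly 3 months of testing per year on a dual processor
-- at $4K/processor/month for High Performance, over 3 years.
select 4000 * 3 * 2 * 3 as metered_cloud_3yr from dual;
-- 72000
```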

It is important to note here that you need to be careful with virtualization. You need to use software that allows for hard partitioning. Products like VMware and Hyper-V are soft-partitioning virtualization software. This means that you can grow the number of processors dynamically, and are therefore required to license the Oracle software for the potential high-water mark, i.e. all of the cores in the cluster. If you are running on something like a Cisco UCS blade server that has a dual-socket 16-core processor, you must license all 32 cores to run the database, even though you might just create a 2-core virtual instance in this VMware installation. It gets even worse if you cluster 8 blades into one cluster, since then you must license all 256 cores. This gets expensive at $47.5K times 128 processor licenses. Products like OracleVM, Solaris Containers, and AIX LPARs solve this cost problem with hard partitions.
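Using the UCS numbers above, the licensing exposure is easy to quantify with the x86 core factor of 0.5:

```sql
-- Soft partitioning: license every core in the blade (or cluster),
-- even for a 2-core VM.
select 32 * 0.5  as licenses_one_blade,
       256 * 0.5 as licenses_8_blade_cluster
from   dual;
-- 16 and 128 licenses; 128 * $47.5K is the figure quoted above
```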

The third enterprise edition is the Extreme Performance Edition of the database. This edition is $5K/OCPU/month or $8.401/processor/hour. This option comes with RAC, Active DataGuard, and In-Memory. RAC allows you to run across multiple compute instances and restart queries that might fail if one node fails. Active DataGuard allows you to have two databases replicating to each other, with both open and active at the same time. Regular or passive DataGuard allows you to replicate the data but not keep the target open and active. In-Memory allows you to store data not only in row format but in column format. When data is entered into the table it is stored on disk in row format; a copy is also placed in memory, but stored in column format. This allows you to search faster, given that you have already sorted the data in memory and can skip what does not apply to your search. This is typically done with an index, but we can't always predict what questions the users are going to ask, and adding too many indexes slows down all operations.
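A sketch of what enabling the In-Memory column store looks like in 12c (the table name is hypothetical; it requires the In-Memory option and a non-zero inmemory_size parameter):

```sql
-- Mark the table for population into the in-memory column store.
alter table t1 inmemory priority high;

-- Population status can be checked once the table has been scanned.
select segment_name, populate_status from v$im_segments;
```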

It is important to reiterate that we can take our perpetual license and run it in IaaS or generic compute. We can also effectively lease these licenses on a monthly or hourly rate. If you are running the database, you are consuming licenses. If you stop the database, you stop consuming the database license but continue to consume the storage and processor services. If you terminate the database you stop consuming the database, processor, and storage services because they are all deleted upon termination.

In summary, there are four flavors of DBaaS: Standard Edition, Enterprise Edition, High Performance Edition, and Extreme Performance Edition. Standard Edition and Enterprise Edition are available from other cloud providers, but some require perpetual licenses and some do not. If you decide to run this service as PaaS or DBaaS in the Oracle Public Cloud you can pay hourly or monthly, and start/stop metered services to help save money. All of these services come with partial management features offloaded and done by Oracle: backups, patches, and restarts of services are done automatically for you. This allows you to focus more on how to apply the database service to provide business benefits, rather than on the feeding and maintenance needed to keep the database operational.

Up next, we will dive into use cases for database as a service and look at different configurations and pricing models to solve a real business problem.

Generate Multiple AWR Reports Quickly

VitalSoftTech - Tue, 2016-05-10 20:40
Occasionally there is a need to generate multiple AWR reports for database analysis. In my case, a storage vendor will use a tool to extract data from all time periods from the AWR reports to find IO related specific information. Here is how I generated these reports.
Categories: DBA Blogs

Join Oracle for a Hackathon on 5/18 - New York, NY

OTN TechBlog - Tue, 2016-05-10 14:48

Developing and Deploying Java SE, Node.js Managed Microservices in the Cloud

Join us Wed 5/18 for a fun, hands-on, informative Hackathon day.  Learn lightweight microservices development using the latest Java 8 features, JAX-RS via Jersey/Grizzly, Node.js, and a new front-end JavaScript toolkit called JET, and get first-hand experience with Developer Cloud Service and Application Container Cloud Service (Java SE and Node.js) from Oracle. 

A foundation reference implementation and source code will be provided: a back-end Java SE microservice that authenticates to Twitter via OAuth using only Jersey/Grizzly (no servlet container or Java EE container) and exposes a real-time tweet stream to a front-end Node.js instance running the JET JavaScript toolkit. You will build something new and exciting from that foundation.

Seats are very limited so register early!

Learn how to optimize text searches in SQL Server 2014 by using Full-Text Search – Part 1

Pythian Group - Tue, 2016-05-10 09:50

In this article, we’ll cover the functionality available in SQL Server 2014 for textual search, known as Full-Text Search, including its installation and implementation. Additionally, we will see how to develop textual searches using the predicates CONTAINS, CONTAINSTABLE, FREETEXT and FREETEXTTABLE, and use the FILESTREAM feature to improve the search and storage of binary data in tables.

Searching based on words and phrases is one of the main features of web search tools, like Google, and of digital document management systems. To perform these searches efficiently, many developers create highly complex applications that do not have the necessary intelligence to find terms and phrases in the columns that store text and digital documents in database tables.

What the vast majority of these professionals don’t know is that SQL Server has an advanced tool for textual search: Full-Text Search (FTS).

FTS has been present in SQL Server since version 7, and with it textual searches can be performed both in columns that store characters and in columns that store documents (for example, Office documents and PDFs) in their native form.

With options like searches for words and phrases, recognition of different languages, derivation of words (for example: play, played and playing), the possibility of defining a thesaurus, the creation of ranked results, and elimination of stopwords from the search, FTS becomes a powerful tool for textual searches. The main factors driving the use of textual searches are:

  • Databases are increasingly used as repositories of digital documents;
  • The cost of storing information has dropped considerably, enabling the storage of gigabytes, terabytes and even petabytes;
  • New types of digital documents are constantly being created, and the requirements for their storage, and subsequent search, are becoming larger and more complex;
  • Developers need a robust and reliable interface for performing intelligent textual searches.

FTS has great advantages over other alternatives for textual search, such as the LIKE operator. The main tasks you can perform with FTS are:

  • Textual searches based on linguistics. A linguistic search is based on words or phrases in a particular language, taking into consideration verb conjugation, derived words, accents, and other features. Unlike the LIKE predicate, FTS uses an efficient indexing structure to perform textual searches;
  • Automatic removal of stopwords from a textual search. Stopwords are words that add nothing to the result of the search, such as from, to, the, a, an;
  • Assigning weights to the terms searched, making certain words more important than others within the same textual search;
  • Generation of rankings, allowing a better view of the documents that are most relevant according to the search carried out;
  • Indexing and searching the most diverse types of digital documents. With FTS you can carry out searches in text files, spreadsheets, ZIP files, and others.
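As a taste of what later parts of this series will cover, here is a minimal T-SQL sketch of a full-text setup and a linguistic search (all object names are hypothetical):

```sql
-- A catalog to group full-text indexes.
CREATE FULLTEXT CATALOG DocsCatalog AS DEFAULT;

-- A full-text index needs a unique, single-column key index on the table.
CREATE FULLTEXT INDEX ON dbo.Articles (Body LANGUAGE 1033)
    KEY INDEX PK_Articles;

-- Matches inflectional forms (play, played, playing), which LIKE cannot do.
SELECT Title
FROM   dbo.Articles
WHERE  CONTAINS(Body, N'FORMSOF(INFLECTIONAL, play)');
```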

In this article we will describe the architecture of FTS, its installation and configuration, the main T-SQL commands used in textual searches, the use of FTS in conjunction with FILESTREAM, and also some techniques to optimize searches through FTS.

FTS architecture

The architecture of FTS has several components working in conjunction with the SQL Server query processor to perform textual searches efficiently. Figure 1 illustrates the major components of the FTS architecture. Let’s look at some of them:

  • Query Client: The client application sends textual queries to the SQL Server query processor. It is the responsibility of the client application to ensure that the textual queries are written correctly, following the FTS syntax;
  • SQL Server Process (sqlservr.exe): The SQL Server process contains the query processor and also the FTS engine, which compiles and executes textual queries. The integration of the FTS engine into the SQL Server process offers a significant performance boost because it allows the query processor to generate much more efficient execution plans for textual searches;
  • SQL Server Query Processor: The query processor has multiple subcomponents that are responsible for validating syntax, compiling, generating execution plans, and executing SQL queries;
  • Full-Text Engine: When the SQL Server query processor receives an FTS query, it forwards the request to the FTS engine. The engine is responsible for validating the FTS query syntax, checking the full-text index, and then working together with the SQL Server query processor to return the textual search results;
  • Indexer: The indexer works in conjunction with other components to populate the full-text index;
  • Full-Text Index: The full-text index contains the most relevant words and their respective positions within the columns included in the index;
  • Stoplist: A stoplist is a list of stopwords for textual searches. The indexer queries the stoplist during the indexing process and during execution of textual searches, to eliminate words that don’t add value to the search. SQL Server 2014 stores stoplists within the database itself, which simplifies their administration;
  • Thesaurus: The thesaurus is an XML file (stored externally to the database) in which you can define a list of synonyms to be used in textual searches. The thesaurus must be based on the language that will be used in the search. The full-text engine reads the thesaurus file at query execution time to check for synonyms that can increase the quality and comprehensiveness of the search;
  • Filter daemon host (fdhost.exe): Is responsible for managing the filtering, word breaker and stemmer processes;
  • SQL Full-Text Filter Daemon Launcher (fdlauncher.exe): Is the process that starts the filter daemon host (fdhost.exe) when the full-text engine needs to use one of the processes it manages.

FTS1

Figure 1. Architecture of FTS.

For a better understanding of the process of creating, using and maintaining full-text index structures, you must also know the meaning of some important concepts. They are:

  • Term: The word, phrase or character used in the textual search;
  • Full-Text Catalog: A group of full-text indexes;
  • Word breaker: The process that determines the boundary of every word in a sentence, based on the grammar rules of the language selected when the full-text index was created;
  • Token: A word, phrase or character defined by the word breaker;
  • Stemmer: The process that generates the different verb forms of words, based on the grammar rules of the language selected when the full-text index was created;
  • Filter: Component responsible for extracting textual information from documents stored with the varbinary(max) data type and sending this information to the word breaker process.

Indexing process

The indexing process is responsible for the initial population of a full-text index and for updating the index when data modifications occur on the columns indexed by FTS. This process of initializing and updating the full-text index is named a crawl.

When the crawl process is started, the FTS component known as the protocol handler accesses the data in the table being indexed and begins loading the existing content of that table into memory, a process also known as streaming. To access the data stored on disk, the protocol handler allows FTS to communicate with the Storage Engine. After streaming ends, the filter daemon host process filters the data and initiates the word breaker and stemmer processes to populate the full-text index.

During the indexing process the stoplist is queried to remove stopwords, so that the full-text index structure is filled with words that are meaningful to textual searches. The last step of the indexing process is known as a master merge, in which the indexed words are grouped into a single full-text index.

Although the indexing process requires high I/O consumption, it does not need to block the data being indexed. However, a query performed against a full-text index during the indexing process may return an incomplete result.

Full-Text query processing

Full-text query processing uses the same word and phrase delimiters that were defined by the word breaker during the indexing process. You can also use additional components, such as the stemmer and the thesaurus, depending on the full-text predicates (CONTAINS or FREETEXT) used in the textual search. The use of full-text predicates will be discussed later in this article.

The stemmer process generates inflectional forms of the searched words. For example, a search on the term “play” also matches the terms “played”, “playing”, and “plays”, beyond the term “play” itself.

Through rules created in the thesaurus file, you can use synonyms to replace or expand searched terms. For example, in a textual search using the term “Ruby”, the full-text engine can replace it with the synonym “red”, or expand the search to automatically include the terms “red”, “wine”, “scarlet”, and also “Ruby” itself.
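A minimal sketch of the two thesaurus behaviors just described, replacement and expansion. The synonym sets below are taken from the example above and are assumptions, not the contents of a real thesaurus file:

```python
# Expansion set: searching for any member also searches all the others.
EXPANSIONS = [{"red", "wine", "scarlet", "ruby"}]
# Replacement rule: the searched term is substituted with its synonym.
REPLACEMENTS = {"ruby": "red"}

def replace_term(term):
    """Thesaurus replacement: swap the term for its configured synonym."""
    return REPLACEMENTS.get(term.lower(), term.lower())

def expand_term(term):
    """Thesaurus expansion: return every synonym in the term's set."""
    term = term.lower()
    for synonyms in EXPANSIONS:
        if term in synonyms:
            return set(synonyms)
    return {term}

print(replace_term("Ruby"))          # red
print(sorted(expand_term("Ruby")))   # ['red', 'ruby', 'scarlet', 'wine']
```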

After the full-text query is processed, the full-text engine provides information to the SQL query processor that helps it build an optimized execution plan for the textual search. The tight integration between the full-text engine and the SQL query processor (both are components of the SQL Server process) allows textual searches to run in a more optimized way.

In the next post of this 4 part series, we will learn how to install the FTS and how to use it. Stay tuned!

Categories: DBA Blogs

Oracle JET 2.0.1 - Upgrade for CRUD Sample

Andrejus Baranovski - Tue, 2016-05-10 09:39
Oracle JET 2.0.1 was released in April and I decided to upgrade my CRUD sample (based on ADF BC REST services) implemented with the previous JET version. There is a migration guide describing the main points to consider while moving to the next JET version. I'm going to add a few more points. Hopefully you will find it useful.

Here is CRUD sample UI running with Oracle JET 2.0.1 and interacting with ADF BC REST:


This sample comes with advanced validation rule integration, which is being enforced in ADF BC and propagated to JET UI (read my previous posts about this):


As per the migration guide, you must update the reference for Alta CSS (2.0.1):


Next, update the module names in main.js:


The CRUD app renders a paged table out of a collection table datasource. Somehow this worked previously without specifying the ojs/ojcollectiontabledatasource module; now the module must be specified, otherwise there is an error about the constructor:


There are changes on the UI side. I have wrapped the table/form into a div with oj-flex-items-pad (includes padding). The table and form are placed into separate divs, each with the oj-flex-item class:


To enable table pagination, I moved the paging control out of the div; in the previous version it worked inside a separate div:


Form group elements are wrapped into oj-flex/oj-flex-item classes; this creates a better layout for form items:


The update method was changed to handle the ADF BC validation error in a slightly different way: I show the error first and then reset the collection data to return the original values to the table.

Download sample application - JETCRUDApp_v6.zip.

Speaker Scores

Jonathan Lewis - Tue, 2016-05-10 07:10

I published a note this morning that I drafted in January 2015, and I didn’t notice that it had gone back in time to publish itself on the date that I first drafted it – and it’s already been tweeted twice so I can’t move it. So this is a temporary link to pop it to the head of the queue while leaving it where it first appeared.


KeePass 2.33

Tim Hall - Tue, 2016-05-10 03:30

I just noticed KeePass 2.33 was released a couple of days ago. You can download it here.

You can read about how I use KeePass and KeePassX2 on my Mac, Windows and Android devices here.

Cheers

Tim…

KeePass 2.33 was first posted on May 10, 2016 at 9:30 am.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

A week in the cloud… (Just to clarify)

Tim Hall - Tue, 2016-05-10 03:04

A comment on yesterday's post by Andy C makes me think I should clarify a couple of things I mentioned there.

“Infrastructure as a Service (IaaS), not really what I consider the cloud.”

For *me* the definition of cloud must include some value-add in relation to ease of use. I’ve used IaaS on Azure, AWS and Oracle Cloud. In all cases I’m left to do the same administration stuff I had to when I was on a physical box or a VM on a local virtual machine. For *me* it only becomes cloud when I have Platform as a Service (PaaS) or Software as a Service (SaaS), where the administration is simplified greatly through tooling. IaaS is just another hosting provider. It’s not really cloud in *my* book. That’s not to say it’s not cool or useful. It’s just not cloud to *me*.

Notice the heavy use of *me* and *my*. This is not the law or even some text book definition. It’s just the way I feel about it, having used a number of hosting companies for business and personal use prior to “the cloud”. You are allowed to think differently, and certainly cloud providers do.

Exadata as a Service

Pat Shuff - Tue, 2016-05-10 02:07
For the last four days we have been focusing on Database as a Service in the cloud. We focused on Application Express, or Schema as a Service, in the last three days and looked at pricing and how to get APEX working in the Oracle Public Cloud, Amazon AWS, and Microsoft Azure. With the Oracle Public Cloud we have three options for database in the cloud at the platform as a service layer; Schema as a Service, Database as a Service, and Exadata as a Service. We could run this in compute as a service but have already discussed the benefits of offloading some of the database administration work with platform as a service (backup, patching, restarting services, etc).

The question that we have not adequately addressed is how you choose between the three services offered by Oracle. We touched on one of the key questions, database size, when we talked about Schema as a Service. You can have a free database in the cloud if your database is smaller than 25 MB. It will cost you a little money, $175/month, if you have a database smaller than 5 GB. You can grow this to 50 GB and stay with the Schema as a Service. If your database is larger than 50 GB you need to look at Database as a Service or Exadata as a Service. You also need to look at these alternatives if you are running an application in a Java container and need to attach to the database through the standard port 1521, since Schema as a Service only supports http(s) connections to the database. If you can query the database with a REST api call, Schema as a Service is an option but is not necessarily tuned for performance. Products like WebLogic or Tomcat or other Java containers can buffer select statements in cache and not have to ask the same question over and over again from the database. For example, if we store census data and are interested in the number of people who live in Texas, we get back roughly 27 million rows of data from the query. If we want to drill down and look at how many people live in San Antonio, we get back 1.5 million rows. If our Java code were smart enough and our application server had enough buffer space, we would not need to read the 27 million rows back when we want to just look at the 1.5 million rows relating to San Antonio. The database can keep the data in memory as well and does not need to read the data back from disk to make the select statement to find the state or city rows that match the query.

Let's take a step back and talk about how a database works. We create a table and put information in columns like first name, last name, street address, city, state, zip code, email address, and phone number. This allows us to contact each person either through snail mail, email, or phone. If we allocate 32 bytes for each field we have 8 fields and each row takes up 256 bytes to identify each person. If we store data for each person who lives in Texas we consume 27 million rows. Each row takes up 256 bytes. The whole table will fit into 6.9 GB of storage. This data is stored in a table extent or file that we save into the /u02/data directory. If we expand our database to store information about everyone who lives in the United States we need 319 million rows. This will expand our database to 81.7 GB. Note that we have crossed the boundary for Schema as a Service. We can't store this much information in a single table so we have to look at Database as a Service or Exadata as a Service. Yes, we can optimize our database by using less than 32 bytes per column. We can store zip codes in 16 bytes. We can store phone numbers in 16 bytes. We can store state information in two bytes. We can also use compression in the database and not store the characters "San Antonio" in a 32 byte field but store it in an alternate table once and correlate it to the hexadecimal number 9c. We then store 9c into the city field, which tells us that the city name is stored in another table. This saves us 1.5 million times 31 bytes (one byte is kept to store the 9c) or 46 MB of storage. If we can do this for everyone in Texas we shrink the storage by 840 MB. This is roughly 13% of what we had allocated for all of the information related to people who live in Texas. If we can do this for the city, state, and zip code fields we can reduce the storage required by 39% or shrink the 81.7 GB to 49.8 GB. This is basically what is done with a technology called Hybrid Columnar Compression (HCC). 
You create a secondary table that correlates the 9c value to the character string "San Antonio". You only need to store the character string once and the city information shrinks from 32 bytes to 1 byte. When you read back the city name, the database or storage that does the compression returns the string to the application server or application.
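The storage arithmetic in the two paragraphs above can be checked with a quick calculation. This is a sketch using the row counts and field sizes quoted in the text:

```python
BYTES_PER_FIELD = 32
FIELDS = 8
ROW_BYTES = BYTES_PER_FIELD * FIELDS     # 256 bytes per person

texas_rows = 27_000_000
us_rows = 319_000_000

print(texas_rows * ROW_BYTES / 1e9)      # ~6.9 GB for everyone in Texas
print(us_rows * ROW_BYTES / 1e9)         # ~81.7 GB for the United States

# Dictionary compression: keep a 1-byte code per row instead of the
# 32-byte city string, so each row saves 31 bytes on that column.
saved_per_row = BYTES_PER_FIELD - 1
print(1_500_000 * saved_per_row / 1e6)   # ~46 MB saved on San Antonio rows
print(texas_rows * saved_per_row / 1e6)  # ~840 MB saved across all of Texas
```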

When you do a select statement the database looks for the columns that you are asking for in the table that you are doing a select from and returns all of the data that matches the where clause. In our example we might use

select * from census where state = 'Texas';
select * from census where city = 'San Antonio';
We can restrict what we get back by not using the "*" value. We can get just the first_name and last_name and phone number if that is all we are interested in. The select statement for San Antonio will return 1.5 million rows times 8 columns times 32 bytes or 384 MB of data. A good application server will cache this 384 MB of data and if we issue the same select statement again in a few seconds or minutes we do not need to ask the database again. We issue a simple request to the database asking it if anything has changed since the last query. If we are running on a slow internet connection as we find in our homes we are typically running at 3 MB/second download speeds. To transfer all of this data will take us 128 seconds or about two minutes. Not reading the data a second time saves us two minutes.
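The transfer-time estimate works out the same way, using the figures from the paragraph above:

```python
rows = 1_500_000
row_bytes = 8 * 32                        # 8 columns of 32 bytes each
result_mb = rows * row_bytes / 1e6
print(result_mb)                          # 384.0 MB returned for San Antonio

download_mb_per_s = 3                     # slow home connection in the example
print(result_mb / download_mb_per_s)      # 128.0 seconds, about two minutes
```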

The way that the database finds which 384 MB to return to the application is done similarly. It looks at all of the 81.7 GB that stores the census data and compares the state name to 'Texas' or the hex value corresponding to the state name. If the compare is the same, that row is put into a response buffer and transmitted to the application server. If someone comes back a few seconds later and requests the information correlating to the city name 'San Antonio', the 81.7 GB is read from disk again and the 384 MB is pulled out to return to the application server. A smart database will cache the Texas data and recognize that San Antonio is a subset of Texas and not read the 81.7 GB a second time but pull the data from memory rather than disk. This can easily be done by partitioning the data in the database and storing the Texas data in one file or disk location and storing the data correlating to California in another file or disk location. Rather than reading back 81.7 GB to find Texas data we only need to read back 6.9 GB since it has been split out in storage. For a typical SCSI disk attached to a computer, we read data back at 2.5 GB/second. To read back all of the US data it takes us 33 seconds. If we read back all of the Texas data it takes us 2.76 seconds. We basically save 30 seconds by partitioning our data. If we read the Texas data first and the San Antonio data second with our select statements, we can cache the 6.9 GB in memory and not have to perform a second read from disk, saving us yet another 33 seconds (or 3 seconds with partitioned data). If we know that we will be asking for San Antonio data on a regular basis we set up an index or materialized view in the database so that we don't have to sort through the 6.9 GB of data but can read just the relevant 384 MB directly the first time, reducing our disk access time to 0.15 seconds. 
It is important to note that we have done two simple things that reduced our access time from 33 seconds to 0.15 seconds. We first partitioned the data and the way that we store it by splitting the data by state in the file system. We second created an index that helped us access the San Antonio data in the file associated with Texas without having to sort through all of the data. We effectively pre-sort the data and provide the database with an index. The cost of this is that every insert command to add a new person to San Antonio requires not only updating the Texas table but updating the index associated with San Antonio as well. When we do an insert of any data we must check to see if the data goes into the Texas table and update the index at the same time whether the information correlates to San Antonio or not because the index might change if data is inserted or updated in the middle of the file associated with the Texas information.
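The disk-read comparison — full scan vs. partitioned read vs. index access — can be sketched with the throughput and sizes stated above:

```python
scsi_gb_per_s = 2.5      # typical SCSI disk throughput from the example
full_scan_gb = 81.7      # all US data
partition_gb = 6.9       # just the Texas partition
index_read_gb = 0.384    # just the San Antonio rows via an index

for label, size_gb in [("full scan", full_scan_gb),
                       ("partitioned", partition_gb),
                       ("indexed", index_read_gb)]:
    print(label, round(size_gb / scsi_gb_per_s, 2), "seconds")
# full scan 32.68 seconds / partitioned 2.76 seconds / indexed 0.15 seconds
```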

Our original question was how do we choose between Schema as a Service, Database as a Service, and Exadata as a Service. The first metric that we used was table size. If our data is greater than 25 MB, we can't use the free APEX service. If our data is greater than 50 GB, we can't use the paid APEX or Schema as a Service. If we want to use features like compression or partitioning, we can't use the Schema as a Service either unless we have sys access to the database. We can create indexes for our data to speed requests but might or might not be able to set up compression or partitioning since these are typically features associated with the Enterprise Edition of the database. If we look at the storage limitations of the Database as a Service we can currently store 4.8 TB worth of data in the database. If we have more data than that we need to go to Exadata as a Service. The Exadata service comes in different flavors as well and allows you to store up to 42 TB with a quarter rack, 84 TB with a half rack, and 168 TB with a full rack. If you have a database larger than 168 TB, there are no solutions in the cloud that can store your data attached to an active database. You can back up your data to cloud storage but you cannot have an active database attached to it.

If we look a little deeper into the Exadata there are multiple advantages to going with Exadata as a Service. The first and most obvious is that you are suddenly working on dedicated hardware. In most cloud environments you share processors with other users as well as storage. You do not get a dedicated bandwidth from processor to disk but must time share this with other users. If you provision a 16 core system, it will typically consume half of a 32 core system that has two sockets. This means that you get a full socket but have to share the memory and disk bandwidth with the next person running in the same server. The data read from the disk is cached in the disk controller's cache and your reads are optimized until someone else reads data from the same controller and your cached data gets flushed to make room. Most cloud vendors go with commodity hardware for compute and storage so they are not optimized for database but for general purpose compute. With an Exadata as a Service you get hardware optimized for database and you get all of the processors in the quarter, half, or full rack. There is no competing for memory bandwidth or storage bandwidth. You are electrically isolated from someone in the other quarter or half rack through the Infiniband switch. Your data is isolated on spindles of your own. You get the full 40 GB/second to and from the disk. Reading the 81.7 GB takes 2.05 seconds compared to 32.68 seconds through a standard SCSI disk controller. The data is partitioned and stored automatically so that when we ask for the San Antonio data, we only read back the 384 MB and don't need to read back all of the data or deal with the index update delays when we write the data. The read scans all 81.7 GB and returns the results in 0.01 seconds. We effectively reduce the 33 seconds it took us previously and dropped it to 10 ms.

If you want to learn more about Exadata and how and why it makes queries run faster, there are a number of books, YouTube video channels, and web sites worth looking at.

The Exadata as a Service is a unique offering in the cloud. Amazon and Microsoft have nothing that compares to it. Neither company offers dedicated compute that is specifically designed to run a database in the cloud with dedicated disk and dedicated I/O channels. Oracle offers this service to users of the Enterprise Edition of the database that allows them to replicate their on-premise data to the cloud, ingest the data into an Exadata in the cloud, and operate on the data and processes unchanged and unmodified in the cloud. You could take your financial data that runs on an 8 or 16 core system in your data center and replicate it to an Exadata in the cloud. Once you have the data there you can crunch on the data with long running queries that would take hours on your in house system. We worked with a telecommunications company years ago that was using an on-premise transportation management system and generated an inventory load list to put parts on their service trucks, work orders for the maintenance repair staff, and a driving list to route the drivers on the optimum path to cover the largest number of customers in a day. The on-premise system took 15-16 hours to generate all of this workload and was prone to errors and outages requiring the drivers to delay their routes or parts in inventory to be shipped overnight for loading in the morning onto the trucks. Running this load on an Exadata dropped the analytics to less than an hour. This allowed trucks to be rerouted mid-day to higher profit customers to handle high priority outages as well as next day delivery of inventory between warehouses rather than rush orders. Reducing the analytics from 15 hours to less than an hour allowed an expansion of services as well as higher quality of services to their customer base.

Not all companies have daily issues like this; some look for higher level processing once a quarter or once or twice a year. Opening new retail outlets, calculating taxes due, or provisioning new services that were purchased as Christmas presents are three examples of predictable, periodic instances where it makes sense to consume a larger footprint in the cloud rather than invest in resources that sit idle most of the year in your data center. Having the ability to lease these services on a monthly or annual basis allows for better utilization of resources in your data center, reduces the overall spend of the IT department, and expands the capabilities of business units to do things that they normally could not afford.

Exadata as a Service is offered in a non-metered configuration at $40K per month for a quarter rack (16 cores and 144 TB of disk), $140K per month for a half rack (56 cores and 288 TB of disk), or $280K per month for a full rack (112 cores and 576 TB of disk). The same service is offered on a metered basis for $80K for a quarter rack, $280K for a half rack, and $560K for a full rack (in the same configuration as the non-metered service). One of the things that we recommend is that you analyze the cost of this service. Is it cheaper to effectively lease a quarter rack at $80K for a month and get the results that you want, effectively lease a quarter rack at $480K for a year, or purchase the hardware, database license, RAC licenses, storage cell licenses, and other optional components to run this in your data center. We will not dive into this analysis because it truly varies based on use cases, value to your company for the use case, and cost of running one of these services in your data center. It is important to do this analysis to figure out which consumption model works for you.

In summary, Exadata as a Service is a unique service that no other cloud vendor offers. Having dedicated hardware to run your database is unique for cloud services. Having hardware that is optimized for long, complex queries is unique as well. Exadata is one of the most popular hardware solutions offered by Oracle and having it available on a monthly or annual basis allows customers to use the services at a much lower cost than purchasing a box or larger box for their data center. Having Oracle manage and run the service frees up your company to focus on the business impact of the hardware and accelerated database rather than spending months administering the server and database. Tomorrow we will dive into Database as a Service and see how a generic database in the cloud has a variety of use cases and different cost entry points as well as features and functions.

Links for 2016-05-09 [del.icio.us]

Categories: DBA Blogs

Telstra SMS API Swagger Enabled and deployable on Bluemix Sydney Public Instance

Pas Apicella - Mon, 2016-05-09 23:56
The demo below can be used to expose the Telstra SMS Public API https://dev.telstra.com/content/sms-api-0

https://github.com/papicella/TelstraSMSAPIPublic

You can deploy this to Bluemix by simply using the "Deploy to Bluemix" button as shown below.


Once deployed, you have Swagger UI enabled REST endpoints to consume, as shown below.

Application once deployed on Bluemix


Swagger UI 



More Information

https://dev.telstra.com/content/sms-api-0
http://bluemix.net


Categories: Fusion Middleware

Taxonomy isn’t just for frogs anymore. What taxonomy means in document management.

 

Taxonomy can be a nebulous term. It has existed for years, having probably its most common roots in the sciences, but has blossomed to apply its practices to a plethora of other fields.  The wide application of taxonomy shows how useful and effective it is, yet its meaning can be unclear due to its diversity.  We identify with taxonomy in library sciences with the Dewey Decimal System and we identify with taxonomy in the scientific use when we talk about animals (Kingdom: Animalia; Phylum: Chordata; Class: Amphibia; Clade: Salientia; Order: Anura (frog)).  These are familiar uses to us.  We learned of them early on in school.  We’ve seen them around for years—even if we didn’t identify them as taxonomies.  But what is taxonomy when we talk about subjects, like documents and data, that aren’t so tangible?  As a Business Solutions Architect at Fishbowl Solutions, I encounter this question quite a bit when working on Oracle WebCenter Content document management projects with customers.

The historical Greek term taxonomy means “arrangement law.”  Taxonomy is the practice in which things, in this case documents, are arranged and classified to provide order for users.  When it comes to documents, we give this order by identifying field names, field values, and business rules and requirements for tagging documents with these fields.  These fields then describe the document so that we can order the document, know more about it, and do more with it.

Here’s an example:

  • Document Type: Policy
  • Document Status: Active
  • Document Owner: Administrator
  • Lifecycle: Approved
  • Folder: HR
  • Sub-Folder: Employee Policies
  • And so on…

Defining taxonomy for documents provides a host of business and user benefits for document management, such as:

  • A classification and context for documents. It tells users how a document is classified and where it “fits in” with other documents. It gives the document a name and a place. When a document is named and placed, it enables easier searching and browsing for users to find documents, as well as an understanding of the relationship of one document to another. Users know where it will be and how to get it.
  • A simplified experience. When we have order, we reduce clutter and chaos. No more abandoned or lost documents. Everything has a place. This simplifies and improves the user experience and can reduce frustration as well. Another bonus: document management and cleanup is a simple effort. Documents out of order are easy to identify and can be put in place. Documents that are ordered can be easily retrieved, for instance for an archiving process, and managed.
  • An arrangement that makes sense for the business. Using taxonomy in a document management system like Oracle’s WebCenter Content allows a company to define its own arrangement for storing and managing documents that resonates with users. Implementing a taxonomy that is familiar to users will make the document management system exponentially more usable and easier to adopt. No more guessing or interpreting arrangement or terminology—users know what to expect, terms are common, they are in their element!
  • A scalable framework. Utilizing a defined and maintained taxonomy will allow users to adopt the common taxonomy as they use the document management system, but will also allow for business growth as new scope (documents, processes, capabilities, etc.) is added. Adding in a new department with new documents? Got it. Your scalable taxonomy can be reused or built upon. Using a comprehensive taxonomy that is scalable allows for an enterprise approach to document management where customizations and one-offs are minimized, allowing for a common experience for users across the business.
  • A fully-enabled document management system. Lastly, defining a taxonomy will allow for full utilization of your Oracle WebCenter Content, or other, document management system. Defining a taxonomy and integrating it with your document management system will enable building out:
    • logical folder structures,
    • effective browse and search capabilities,
    • detailed profiles and filters,
    • advanced security,
    • sophisticated user interfaces and more.

Clearly, a taxonomy is the solution to providing necessary order and classification to documents. It creates a common arrangement and vocabulary to empower your users, and your document management system, to work the best for you.  Now hop to it!

This blog is the first in a series discussing taxonomy topics.  Watch for the next blog entitled “Taxonomy is a Sleeper. The reasons from A to ZZZs that taxonomy hasn’t been a part of your most important projects—but should be!”

Carrie McCollor is a Business Solutions Architect at Fishbowl Solutions. Fishbowl Solutions was founded in 1999. Their areas of expertise include Oracle WebCenter, PTC’s Product Development System (PDS), and enterprise search solutions using the Google Search Appliance. Check out our site to learn more about what we do.

The post Taxonomy isn’t just for frogs anymore. What taxonomy means in document management. appeared first on Fishbowl Solutions' C4 Blog.

Categories: Fusion Middleware, Other

Did You Know: Oracle EBS R12.2 #1 – Managing OHS

Pythian Group - Mon, 2016-05-09 15:21

For a long time now I’ve wanted to start a blog series outlining the not-so-obvious things that have changed in the new Oracle E-Business Suite R12.2. Here comes the new “Did You Know” series specifically for Oracle E-Business Suite! Let’s start this series with Apache, aka Oracle HTTP Server.

People are already aware that OHS10g/OC4J is replaced with OHS11g/WebLogic in R12.2.X. On the surface it looks like a simple change, but a lot changed under the hood. In Oracle EBS 11i/R12.1, one could change the Apache port by just updating the Context XML file and running autoconfig.

In R12.2, we have to change OHS settings like the web port by logging into the EM console URL and then running the $AD_TOP/bin/adSyncContext.pl script to sync OHS config parameters to the Context XML file. This script is only needed for OHS config for now. Any changes for WebLogic also need to be done in the WebLogic console URL, not in the Context XML file. But these changes in the WebLogic console are automatically propagated to the XML file by the adRegisterWLSListeners.pl script.

You can get more details of the procedures by reviewing the MOS notes below.

  • E-Business Suite 12.2 Detailed Steps To Change The R12.2 Default Port To 80 (Doc ID 2072420.1)
  • Managing Configuration of Oracle HTTP Server and Web Application Services in Oracle E-Business Suite Release 12.2 (Doc ID 1905593.1)
Categories: DBA Blogs

Oracle Field Service Cloud Announces Significant, End-to-End Upgrade

Linda Fishman Hoyle - Mon, 2016-05-09 14:24

Oracle issued an announcement last week trumpeting its latest Field Service Cloud release. This release delivers comprehensive enhancements from mobility to ease of use. It gives companies the ability to more easily connect field service technicians to the back-office tools and resources they need to complete their work in the field. Organizations can now power more connected customer service experiences, faster response times, and reduced service delivery costs.

If you have questions, contact Christine Friscic, Oracle product manager (pictured left), at christine.friscic@oracle.com.

Compression -- 7 : Updating after BASIC Compression

Hemant K Chitale - Mon, 2016-05-09 09:57
In the first blog post on compression, I had shown BASIC Compression and also the effect of executing an UPDATE on a table with BASIC compression.

To expand on the theme of UPDATEs of BASIC compression blocks ....

SQL> select count(*) from source_data;

COUNT(*)
----------
367156

SQL> create table target_comp row store compress basic as select * from source_data where 1=2;

Table created.

SQL> select pct_free from user_tables where table_name = 'TARGET_COMP';

PCT_FREE
----------
0

SQL> insert /*+ APPEND */ into target_comp
2 select * from source_data;

367156 rows created.

SQL> commit;

Commit complete.

SQL> insert /*+ APPEND */ into target_comp
2 select * from source_data;

367156 rows created.

SQL> commit;

Commit complete.

SQL> insert /*+ APPEND */ into target_comp
2 select * from source_data;

367156 rows created.

SQL> commit;

Commit complete.

SQL>
SQL> analyze table target_comp compute statistics;

Table analyzed.

SQL> select chain_cnt, blocks from user_tables where table_name = 'TARGET_COMP';

CHAIN_CNT BLOCKS
---------- ----------
0 4452

SQL>
SQL> exec dbms_stats.gather_table_stats('','TARGET_COMP');

PL/SQL procedure successfully completed.

SQL> select num_rows, chain_cnt, blocks from user_tables where table_name = 'TARGET_COMP';

NUM_ROWS CHAIN_CNT BLOCKS
---------- ---------- ----------
1101468 0 4452

SQL>


So we have a table with 1.1million rows and no Row Chaining.

What happens if we update about 20% of the rows ?

SQL> begin
2 for rec in (select rowid from target_comp where rownum < 220001)
3 loop
4 update target_comp set owner=owner where rowid=rec.rowid;
5 commit;
6 end loop;
7 end;
8 /

PL/SQL procedure successfully completed.

SQL>
SQL> commit;

Commit complete.

SQL> analyze table target_comp compute statistics;

Table analyzed.

SQL> select chain_cnt, blocks from user_tables where table_name = 'TARGET_COMP';

CHAIN_CNT BLOCKS
---------- ----------
202189 7882

SQL> exec dbms_stats.gather_table_stats('','TARGET_COMP');

PL/SQL procedure successfully completed.

SQL> select num_rows, chain_cnt, blocks from user_tables where table_name = 'TARGET_COMP';

NUM_ROWS CHAIN_CNT BLOCKS
---------- ---------- ----------
1101468 202189 7882

SQL>


I have updated 220,000 rows without actually increasing the notional length of each row (I set OWNER=OWNER).  Yet, the CHAIN_CNT is now 202K and the table's HighWaterMark has expanded from 4,452 blocks to 7,882 blocks.  A significant increase !
(YMMV in your tests !)
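A quick check of what that UPDATE did to the table, using the figures from the run above:

```python
num_rows = 1_101_468
chain_cnt = 202_189
blocks_before, blocks_after = 4_452, 7_882

print(round(100 * chain_cnt / num_rows, 1))   # ~18.4% of rows are now chained
growth = 100 * (blocks_after - blocks_before) / blocks_before
print(round(growth, 1))                       # ~77% more blocks under the HWM
```

So updating roughly 20% of the rows chained almost all of the updated rows and nearly doubled the space below the HighWaterMark.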

It was Jonathan Lewis who suggested getting the Chain Count (or LIST CHAINED ROWS) to understand the impact of UPDATEs on a table with BASIC compression.
.
.
.


Categories: DBA Blogs

Tip of the day: Always put this in your .bashrc

RDBMS Insight - Mon, 2016-05-09 06:52

If you like to scp:

# If not running interactively, don't do anything
[[ $- == *i* ]] || return

Otherwise scp will fail without error – it’s a known bug.

Categories: DBA Blogs
