DBA Blogs

Which Cassandra version should you use for production?

Pythian Group - Thu, 2016-04-07 12:47
What version for a Production Cassandra Cluster?

tl;dr: Latest Cassandra 2.1.x

Long version:

A while ago, Eventbrite wrote:
“You should not deploy a Cassandra version X.Y.Z to production where Z <= 5.” (Full post).

And, in general, it is still valid today! Why “in general”? That post is old, and Cassandra has moved a lot since then, so the guidance needs a little updating:

Just for the ones that don’t want to follow the links, and still pick 3.x for production use, read this:

“Under normal conditions, we will NOT release 3.x.y stability releases for x > 0.  That is, we will have a traditional 3.0.y stability series, but the odd-numbered bugfix-only releases will fill that role for the tick-tock series — recognizing that occasionally we will need to be flexible enough to release an emergency fix in the case of a critical bug or security vulnerability.

We do recognize that it will take some time for tick-tock releases to deliver production-level stability, which is why we will continue to deliver 2.2.y and 3.0.y bugfix releases.  (But if we do demonstrate that tick-tock can deliver the stability we want, there will be no need for a 4.0.y bugfix series, only 4.x tick-tock.)”

What about end of life?

Well, it is about stability: there are still a lot of clusters out there running 1.x and 2.0.x. And since it is open source software, you can always search in the community or even contribute.

If you still have doubts about which version, you can always contact us!

Categories: DBA Blogs


Partner Webcast – Oracle WebLogic Server 12.2.1 Multitenancy and Continuous Availability

As part of the latest major Oracle Fusion Middleware release, Oracle announced the largest release of Oracle WebLogic Server in a decade. Oracle WebLogic Server 12c, the world’s first cloud-native,...

Categories: DBA Blogs

Log Buffer #468: A Carnival of the Vanities for DBAs

Pythian Group - Wed, 2016-04-06 15:38

This Log Buffer Edition rounds up Oracle, SQL Server, and MySQL blog posts of the week.

Oracle:

When using strings such as "FREQ=DAILY; BYDAY=MON,TUE,WED,THU,FRI; BYHOUR=9,10" within the scheduler, sometimes it's not readily apparent how this will translate to the actual dates and times of day that the scheduled activity will run. To help you understand, a nice little utility is EVALUATE_CALENDAR_STRING.
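
For the curious, here is a minimal sketch of calling DBMS_SCHEDULER.EVALUATE_CALENDAR_STRING with the repeat interval quoted above (run it with server output enabled; the TO_CHAR format is just for display):

set serveroutput on

declare
  l_next_run timestamp with time zone;
begin
  dbms_scheduler.evaluate_calendar_string(
    calendar_string   => 'FREQ=DAILY; BYDAY=MON,TUE,WED,THU,FRI; BYHOUR=9,10',
    start_date        => systimestamp,
    return_date_after => systimestamp,
    next_run_date     => l_next_run);
  -- show the next date/time the schedule would fire
  dbms_output.put_line('Next run: ' || to_char(l_next_run,'yyyy-mm-dd hh24:mi:ss tzr'));
end;
/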

Most developers have struggled with wires in SOA composites. You may find yourself in a situation where a wire has been deleted. Some missing wires are restored by JDeveloper. Other missing wires have to be added manually, by simply re-connecting the involved adapters and components. Simple.

In-Memory Parallel Query and how it works in 12c.

Oracle recently launched a new family of offerings designed to enable organizations to easily move to the cloud and remove some of the biggest obstacles to cloud adoption. These first-of-a-kind services provide CIOs with new choices in where they deploy their enterprise software and a natural path to easily move business critical applications from on premises to the cloud.

Two Oracle Server X6-2 systems, using the Intel Xeon E5-2699 v4 processor, produced a world record x86 two-chip single application server SPECjEnterprise2010 benchmark result of 27,509.59 SPECjEnterprise2010 EjOPS. One Oracle Server X6-2 system ran the application tier and the second Oracle Server X6-2 system ran the database tier.

SQL Server:

To be able to make full use of the system catalog to find out more about a database, you need to be familiar with the metadata functions.

Powershell To Get Active Directory Users And Groups into SQL!

A code review is a serious business; an essential part of development. Whoever signs off on a code review agrees, essentially, that they would be able to support it in the future, should the original author of the code be unavailable to do it.

Change SQL Server Service Accounts with Powershell

Learn how to validate integer, string, file path, etc. input parameters in PowerShell as well as see how to test for invalid parameters.

MySQL:

The MySQL Utilities team has announced a new beta release of MySQL Utilities. This release includes a number of improvements for usability and stability, and a few enhancements.

In this webinar, we will discuss the practical aspects of migrating a database setup based on traditional asynchronous replication to multi-master Galera Cluster.

Docker has gained widespread popularity in recent years as a lightweight alternative to virtualization. It is ideal for building virtual development and testing environments. The solution is flexible and seamlessly integrates with popular CI tools.

How ProxySQL adds Failover and Query Control to your MySQL Replication Setup

Read-write split routing in MaxScale

Categories: DBA Blogs

Six things I learned about privacy from some of the world’s top IT pros

Pythian Group - Wed, 2016-04-06 08:32

 

Everyone has their own experiences with data privacy. Ask almost anyone, and they’ll be able to tell you how a retail company found some innovative way of collecting their personal data, or how Facebook displayed a suggested post or ad that was uncannily targeted at them based on personal information they did not purposely share.

 

The discussion about privacy can get pretty interesting when you get a group of CIOs and IT leaders in the same room to talk about it.  And that’s just what we did at Pythian’s Velocity of Innovation (Velocity) events in New York, San Francisco and Sydney. With their ears constantly to the ground for the latest trends and unparalleled insider IT knowledge, there’s no better group with whom to talk security and privacy than our Velocity panelists and attendees.

 

At Pythian we manage the revenue-generating systems for some of the world’s leading enterprises, and work with our clients on some of the most difficult security and privacy issues. But staying on top of the evolving threats to data privacy is a constant challenge, even for the most knowledgeable security experts. One of our biggest challenges is helping our clients use data responsibly to improve customer experiences, while protecting individual privacy and safeguarding personally identifiable information from malicious threats. So there’s always more we can learn. This is one of the reasons we keep listening to our customers and peers through various channels and events like our Velocity discussions.

 

Privacy is a high stakes game for companies that want to maximize their use of client data while adhering to legislation designed to protect personal information. In many countries, new, more rigorous laws are emerging to enable individuals to maintain more control over their personal information, and to help them understand when they are being monitored or when their personal information is recorded, shared or sold. These laws include the recently  reformed  EU Data protection rules and PIPEDA in Canada. But companies are responding by finding clever ways of making it attractive for consumers to trade their data for services, or worse, by making it impossible to get service at all without giving up certain information.

With these issues in mind, we had some interesting discussions at each of our Velocity sessions. Here are six main privacy-related themes that emerged at these recent events:

 

  1. Privacy and security are not the same

Although they are related, they refer to different ideas. Privacy is a major goal of security, and relates to a consumer’s right to safeguard their personal information. It can involve vulnerable data such as social media data, customer response data, demographic data or other personal information. In general, privacy is the individual’s right to keep his or her data to himself or herself. By contrast, security refers to the protection of enterprise or government systems. Security may incorporate customer privacy as part of its agenda, but the two are not synonymous. According to Tim Sheedy, principal analyst with Forrester Research, a panelist at our Sydney event, security and privacy are different things. “But if you have a security breach, privacy becomes an issue,” he said. “So they’re closely linked.”

 

  2. Balance is key

Our panelists agreed that you have to balance protecting customer data with moving the business forward. But if you’re in the insurance business, like San Francisco Velocity panelist and RMS CTO Cory Isaacson, that’s a tough balance to strike. He has to worry about how his company’s data privacy measures up against industry standards and the regulations they have to meet. At the same time, he has to ensure that data security doesn’t paralyze innovation in the business. He says you have to be innovative in moving the business forward, as well as keeping up with standards and regulations.

“If you listen to your ISO auditor you’re never going to get out of bed in the morning,” Isaacson said. “If you’re faced with securing highly sensitive customer data, you have to incorporate it into the way you work every day instead of letting it slow you down.”

 

  3. Don’t be creepy

Companies have to walk a fine line between using personal data to enhance the customer experience and being intrusive for no good reason. Forrester’s Sheedy warns that perception is everything. “Don’t be creepy,” he said. “When you’re creepy, that’s when privacy becomes an issue. Don’t underestimate the ability of people to give away information for a discount or for a benefit. But the key is that there has to be a benefit. When there isn’t an upside, it becomes creepy for customers, and you have to wonder if you’ve gone too far.”

 

  4. People will share data in exchange for perks.

Our Sydney panelists and attendees exchanged ideas about new ways that companies are obtaining information about customers, behaviours and preferences. They talked about social media companies learning your behaviours by using more than just the data you share on your profile, including using tactics like tracking your location through GPS and triangulation between cell phone towers.

 

“When I visit Pythian’s head office in Ottawa, the nearby coffee shop has an app that asks if I want them to prepare my usual drink when it detects I’m nearby. Is that intrusive? No, because I opted in,” said Francisco Alvarez, vice president, APAC at Pythian.

 

A Marketing Magazine article reported that in a recent survey, a majority of consumers will share data for certain benefits: 80 percent said they would share data for rewards from a company, 79 percent would share data for cash back, and 77 percent would share data for coupons. The majority also said they would share personal data for location-based discounts (69 percent).

 

  5. Millennials have a higher tolerance for sharing data.

The Australian panelists talked about this extensively, citing personal experience. They expressed the same concern many of us feel about how much data their children’s generation is sharing online and with companies. One of our attendees had a child who was denied a prestigious scholarship because his digital footprint wasn’t favourable. But, Alvarez said, this type of consequence doesn’t seem to deter millennials from freely sharing very personal information. And when it comes to sharing information with businesses in exchange for perceived benefits, their tolerance is very high.

 

In fact, according to a survey conducted by the USC Annenberg Center for Digital Future and Bovitz Inc., millennials (individuals between the ages of 18 and 35) have a different attitude than Internet users 35 years and older when it comes to sharing their personal data online with businesses.  According to the study results, millennials were more likely than older respondents (35 years and older) to trade some of their personal information in exchange for more relevant advertising.

 

  6. New approaches for protecting customer data are on the horizon

In our San Francisco discussion, Pythian Chief Data Officer Aaron Lee cited Google’s new initiative for protecting customer data.  The new approach involves placing corporate applications on the Internet itself and focusing its security efforts on registered devices and user authentication.
“It’s a really simple idea,” Lee said. “In most companies the internal network is special. If you’re on the internal network you get stuff you can’t get from outside just because you’re sitting there on the network. That’s not the case anymore. With Google’s Beyond Corp initiative it is like there are no more internal networks. If you want to talk to my service, you treat it like you came in from the internet just like everybody else. That way I can apply a consistent approach to authentication, authorization, all that good stuff. It’s actually acting to reduce the surface of complexity. It’s an approach that takes the simplest way that we can secure these things, and treats them the same regardless of where they actually live,” he said.

Categories: DBA Blogs

DEFAULT_CACHE_SIZE mentioned in alert.log of an #Oracle database

The Oracle Instructor - Wed, 2016-04-06 04:53

Today, I got this message in my alert.log file:

Full DB Caching disabled: DEFAULT_CACHE_SIZE should be at least 709 MBs bigger than current size.

When I look at the datafile sizes and compare them with the buffer cache size, it shows:

 

SYS@cloudcdb > select name,bytes/1024/1024 as mb from v$sgainfo;

NAME                                                       MB
-------------------------------------------------- ----------
Fixed SGA Size                                     2,80265045
Redo Buffers                                       13,1953125
Buffer Cache Size                                        3296
In-Memory Area Size                                      2048
Shared Pool Size                                          736
Large Pool Size                                            32
Java Pool Size                                             16
Streams Pool Size                                           0
Shared IO Pool Size                                       208
Data Transfer Cache Size                                    0
Granule Size                                               16
Maximum SGA Size                                         6144
Startup overhead in Shared Pool                    181,258133
Free SGA Memory Available                                   0

14 rows selected.

SYS@cloudcdb > select sum(bytes)/1024/1024 as mb from v$datafile;

        MB
----------
      3675
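
As a side note, those two checks can be combined into a single query; here is a sketch built on the same views:

select (select bytes from v$sgainfo where name = 'Buffer Cache Size')/1024/1024 as buffer_cache_mb,
       (select sum(bytes) from v$datafile)/1024/1024 as datafiles_mb
from dual;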

It is true: the database doesn’t fit completely into the buffer cache, and it is short by roughly the amount of space mentioned in the message. There is no such parameter as DEFAULT_CACHE_SIZE, though.
What we have instead is DB_CACHE_SIZE. In order to fix the issue, I used this initialization parameter file to create a new spfile:

[oracle@uhesse-service2 dbs]$ cat initCLOUDCDB.ora
*.audit_file_dest='/u02/app/oracle/admin/CLOUDCDB/adump'
*.audit_trail='db'
*.compatible='12.1.0.2.0'
*.control_files='/u02/app/oracle/oradata/CLOUDCDB/control01.ctl','/u03/app/oracle/fra/CLOUDCDB/control02.ctl'
*.db_block_size=8192
*.db_domain=''
*.db_name='CLOUDCDB'
*.db_recovery_file_dest='/u03/app/oracle/fra'
*.db_recovery_file_dest_size=10737418240
*.diagnostic_dest='/u02/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=CLOUDCDBXDB)'
*.enable_pluggable_database=true
*.open_cursors=300
*.processes=300
*.remote_login_passwordfile='EXCLUSIVE'
*.undo_tablespace='UNDOTBS1'
*.sga_target=6g
*.pga_aggregate_target=2g
*.inmemory_size=1g
*.db_cache_size=4g

That reduced the size of the In-Memory Column Store to make room for the buffer cache. Now the database fits nicely into the buffer cache again:

SYS@cloudcdb > select name,bytes/1024/1024 as mb from v$sgainfo;

NAME                                                       MB
-------------------------------------------------- ----------
Fixed SGA Size                                     2,80265045
Redo Buffers                                       13,1953125
Buffer Cache Size                                        4256
In-Memory Area Size                                      1024
Shared Pool Size                                          800
Large Pool Size                                            32
Java Pool Size                                             16
Streams Pool Size                                           0
Shared IO Pool Size                                         0
Data Transfer Cache Size                                    0
Granule Size                                               16
Maximum SGA Size                                         6144
Startup overhead in Shared Pool                    181,290176
Free SGA Memory Available                                   0

14 rows selected.

Accordingly the message in the alert.log now reads
Buffer Cache Full DB Caching mode changing from FULL CACHING DISABLED to FULL CACHING ENABLED

Don’t get me wrong: I’m not arguing here against the In-Memory Option or in favor of Full Database Caching. Or whether it makes sense to use any of them or both. This post is just about clarifying the strange message in the alert.log that may confuse people.

And by the way, my demo database is running in the Oracle Cloud :-)


Tagged: 12c New Features
Categories: DBA Blogs

Redshift Table Maintenance: Vacuuming

Pythian Group - Tue, 2016-04-05 13:38
Overview

Part of the appeal of AWS’ Redshift is that it’s a managed service, which means lower administration costs. While you don’t have to hire a full time DBA to make sure it runs smoothly (from Pythian’s experience it takes ~10-20 hours/month to manage Redshift), there are still some tasks that should be attended to in order to keep it happy:

  • Vacuuming
  • Analyzing
  • Skew analysis
  • Compression analysis
  • Query monitoring

Let us start with Vacuuming as the first topic of a series of deeper dives into this list.

Vacuuming is an integral part of performance maintenance of Redshift.  Since deletes and updates both flag the old data, but don’t actually remove it, if we’re doing those kinds of actions, vacuuming is needed to reclaim that space. Updates and deletes can be pretty big performance hits (a simple update can easily take 60 secs on a 50 million record table on a small cluster, so we’re looking at 20 minutes for a similar update on a 1 billion record table), so we try to avoid them as much as we can on large tables. The space reclamation portion of the vacuum typically accounts for 10% of the time we see spent on the tables.  We can use the SORT ONLY parameter to skip this phase, but we generally have no compelling reason to.

In addition, if tables have sort keys, and table loads have not been optimized to sort as they insert, then vacuums are needed to re-sort the data, which can be crucial for performance. While loads into empty tables automatically sort the data, subsequent loads do not. We have seen query times drop by 80% from the implementation of vacuuming, but of course the impact varies with table usage patterns.

The biggest problem we face with vacuuming is the time it takes. While vacuuming does not block reads or writes, it can slow them considerably as well as take significant resources from the cluster, and you can only vacuum one table at a time. Remember that resource utilization can be constrained through WLM queues. A typical pattern we see among clients is that a nightly ETL load will occur, then we will run vacuum and analyze processes, and finally open the cluster for daily reporting. The faster the vacuum process can finish, the sooner the reports can start flowing, so we generally allocate as many resources as we can.

Operations

Let us start with the process itself. It’s simple enough and you can get syntax documentation from AWS. There’s not too much that’s tricky with the syntax, and for most use cases

VACUUM myschema.mytablename;

will suffice.  Note that INTERLEAVED sort keys need the REINDEX parameter added for all re-indexing to occur.  You can discern which tables have this set up by using the query:

select schemaname, tablename
from (SELECT
        n.nspname AS schemaname,
        c.relname AS tablename,
        min(attsortkeyord) AS min_sort
      FROM pg_namespace AS n
      INNER JOIN pg_class AS c ON n.oid = c.relnamespace
      INNER JOIN pg_attribute AS a ON c.oid = a.attrelid
      WHERE c.relkind = 'r'
        AND abs(a.attsortkeyord) > 0
        AND a.attnum > 0
      GROUP BY 1, 2)
where min_sort < 0

In order to give the vacuum process more resources, we preface this command with

SET wlm_query_slot_count TO <N>;

where N is the maximum number of query slots we think we can get away with.  If you’re not sure what that number should be (we’ll discuss WLM queues in another post), usually 5 is a safe number though be warned that if the value of wlm_query_slot_count is larger than the number of available slots for the service class, the vacuum command will fail.
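
Putting the two together, a typical invocation ends up looking like the sketch below (5 slots assumed, as discussed; the final statement resets the slot count to its default of 1):

SET wlm_query_slot_count TO 5;
VACUUM myschema.mytablename;
SET wlm_query_slot_count TO 1;   -- back to the default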

Knowing when to vacuum is reasonably straight forward. Anytime after substantial inserts, updates, or deletes are made is always appropriate, but you can be more exacting by querying two tables:

select * from STL_ALERT_EVENT_LOG where Solution LIKE '%VACUUM command%'

and

select * from SVV_TABLE_INFO where unsorted > 8

The latter check works great for daily loads. We will often set the threshold at 8 (percent) immediately after the loads, then run another vacuum process in the evening with a lower threshold (4 percent) to address larger tables that take a fair amount of time to vacuum, since we want to avoid that situation in the morning.
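
For example, the daily check against SVV_TABLE_INFO amounts to something like this sketch (8 percent threshold, largest offenders first):

select "schema", "table", unsorted, tbl_rows
from svv_table_info
where unsorted > 8
order by unsorted desc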

Last fall AWS built a nice tool to automate vacuums, Analyze & Vacuum Schema Utility, that incorporated these queries. It works quite well, and we recommend it to our clients as a simple way to set up this maintenance. However, note that it does not automatically add the REINDEX parameter for those tables with INTERLEAVED sortkeys. The code is all available, so it is easy enough to adjust to make more custom filtering of tables (on fact_* and dim_* for instance) within a schema.

AWS has built a very useful view, v_get_vacuum_details, (and a number of others that you should explore if you haven’t already) in their Redshift Utilities repository that you can use to gain some insight into how long the process took and what it did. None of the system tables for vacuuming keep any historical information which would be nice for tracking growing process times, but you can see them for a week in STL_QUERY which gets purged to a history of 7 days. I recommend creating a simple process to track the vacuum data:

First create the table:

create table vacuum_history sortkey (xid) as select * from v_get_vacuum_details where processing_seconds > 0;

Then set up a cron process to populate:

0 18 * * * psql -h myRScluster -U myUser -p5439 -c "INSERT INTO vacuum_history SELECT * FROM v_get_vacuum_details WHERE xid > (SELECT MAX(xid) FROM vacuum_history) AND processing_seconds > 0;" &> /var/log/vacuum_history.log

Once you start to see tables taking an inordinate amount of time to vacuum, some additional intervention may be appropriate. Our team recently ran into a sizable table (3 billion records) that had been taking 3 hours to vacuum daily. Some issue occurred where the table needed a partial reload of 2 billion rows. Once that finished, we ran a vacuum which kept going all afternoon. Checking SVV_VACUUM_PROGRESS we could see that it would take almost 30 hours to complete. Note that restarting a stopped vacuum does not mean the process will pick up where it left off. Since this would have impacted the daily load performance, we killed the vacuum with “cancel <pid>” using the pid pulled from

select pid, text from SVV_QUERY_INFLIGHT where text like '%Vacuum%'

We then ran a deep copy (created a new version of the table and ran a SELECT INTO) which took about 5 hours. The load into an empty table triggers the correct sorting, so a subsequent vacuum took only a few minutes to complete.
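
For reference, a deep copy along those lines might look like the sketch below (table names are placeholders; in Redshift, CREATE TABLE ... LIKE carries over the distribution and sort keys):

create table myschema.mytablename_copy (like myschema.mytablename);
insert into myschema.mytablename_copy select * from myschema.mytablename;
alter table myschema.mytablename rename to mytablename_old;
alter table myschema.mytablename_copy rename to mytablename;
drop table myschema.mytablename_old;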

Just a note on killing long running vacuums: it sometimes doesn’t work, especially once it’s in the initialize merge phase. We’ve found that continually issuing the cancel command while it’s in the sort phase is effective, but the point is to be wary of vacuuming large tables for the first time. Vacuums on large, unsorted tables write temporary data to disk, so there is also the potential to run out of disk and freeze the cluster; be sure to always check that up to 3x the table size of disk space is available.

There are a few simple strategies to prevent long running vacuums:

  • Load your data in SORTKEY order: The incoming data doesn’t have to be pre-ordered, just greater than existing data.
  • Vacuum often: A table with a small unsorted region vacuums faster than one with a large unsorted region.
  • If tables become too large to vacuum within a maintenance window, consider breaking them apart: We often see multi-billion record tables where the only data being queried is from the last month or two.
  • Deep copies can be a faster solution than vacuums.
Categories: DBA Blogs

Considering Total Cost of Incident in the Age of “What if?”

Pythian Group - Tue, 2016-04-05 12:03

 

I’m currently in the process of moving houses and planning a wedding – busy, exciting, happy times. There are lots of positives – more yard space, more square footage, time with friends and family, cake…the list goes on. However, mixed in with these big, happy life changes is an underlying anxiety that something could go wrong. There’s always a chance that accidents, inclement weather or illness could ruin the party for everyone. It’s not pleasant to consider and I’d honestly rather sweep it all under the rug and go on with the merry-making (mmm cake!). But the voice at the back of my head won’t stop whispering what if?

 

It’s this voice that’s prompted me to take steps towards protecting my investments with liability insurance. Now, I’m not keen on laying out funds where I don’t have to, and I’m always the first to try and save a buck (coupons are my best friends!), but where any large investment is being made, it only makes sense for me to part with some of my hard coupon-saved dollars to protect that. I may never need it, true… but what if?

 

At Pythian, we spend a lot of our time exploring the world of what if? What if there’s an outage? What if there’s a breach? What will we lose? What will it cost? These are uncomfortable questions for a lot of organizations and especially uncomfortable when things are going well. If you haven’t had an outage or a data breach yet, it is easier to assume that this won’t happen. What if we invest in protection and that spend is wasted because nothing ever goes wrong? Naïve? Maybe. Comfortable? Oh yeah.

 

In a post-Snowden, post-Target, post-Home Depot world, it’s more important than ever for organizations to move out of the comfortable and really consider what the total impact of any one incident could be on their environment and take steps to insure themselves against it. Let’s spend some time being uncomfortable, considering more than just the Total Cost of Operations and think instead about the Total Cost of Incident.

Categories: DBA Blogs

FBDA -- 4 : Partitions and Indexes

Hemant K Chitale - Tue, 2016-04-05 10:47
Continuing our investigation of the FBDA architecture.

[oracle@ora12102 Desktop]$ sqlplus hemant/hemant

SQL*Plus: Release 12.1.0.2.0 Production on Tue Apr 5 23:25:10 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Last Successful login time: Tue Apr 05 2016 23:23:47 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> select table_name, def_tablespace_name, partitioning_type, partition_count, status
2 from user_part_tables
3 order by 1;

TABLE_NAME
--------------------------------------------------------------------------------
DEF_TABLESPACE_NAME PARTITION PARTITION_COUNT STATUS
------------------------------ --------- --------------- --------
SYS_FBA_HIST_93250
FBDA RANGE 1 VALID


SQL>
SQL> set pages600
SQL> set long 10000
SQL> select dbms_metadata.get_ddl('TABLE','SYS_FBA_HIST_93250') from dual;

DBMS_METADATA.GET_DDL('TABLE','SYS_FBA_HIST_93250')
--------------------------------------------------------------------------------

CREATE TABLE "HEMANT"."SYS_FBA_HIST_93250"
( "RID" VARCHAR2(4000),
"STARTSCN" NUMBER,
"ENDSCN" NUMBER,
"XID" RAW(8),
"OPERATION" VARCHAR2(1),
"ID_COLUMN" NUMBER,
"DATA_COLUMN" VARCHAR2(15),
"DATE_INSERTED" DATE,
"D_1729869_NEW_COL_1" VARCHAR2(5)
) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
STORAGE(
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "FBDA"
PARTITION BY RANGE ("ENDSCN")
(PARTITION "HIGH_PART" VALUES LESS THAN (MAXVALUE) SEGMENT CREATION IMMEDIATE

PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
NOCOMPRESS LOGGING
STORAGE(INITIAL 8388608 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "FBDA" )


SQL>


So, although my active table (TEST_FBDA) is not partitioned, the History table is Range Partitioned on the ENDSCN column, with a single partition !   All the rows in the History table are in the MAXVALUE partition.  I wonder if and when it will ever be split ?

What if I create a partitioned table and then define Flashback Archiving on it ?

SQL> create table test_fbda_partitioned (
2 id_column number,
3 data_column varchar2(15),
4 date_inserted date)
5 partition by range (id_column)
6 (partition p_100 values less than (101),
7 partition p_200 values less than (201),
8 partition p_300 values less than (301),
9 partition p_400 values less than (401),
10 partition p_max values less than (MAXVALUE))
11 /

Table created.

SQL> alter table test_fbda_partitioned flashback archive fbda;

Table altered.

SQL> insert into test_fbda_partitioned
2 select rownum, to_char(rownum), trunc(sysdate)
3 from dual connect by level < 701;

700 rows created.

SQL> commit;

Commit complete.

SQL> update test_fbda_partitioned
2 set data_column=data_column;

700 rows updated.

SQL> commit;

Commit complete.

SQL> col subobject_name format a15
SQL> select object_type, subobject_name, object_id
2 from user_objects
3 where object_name = 'TEST_FBDA_PARTITIONED'
4 order by 3,1;

OBJECT_TYPE SUBOBJECT_NAME OBJECT_ID
----------------------- --------------- ----------
TABLE 93342
TABLE PARTITION P_100 93343
TABLE PARTITION P_200 93344
TABLE PARTITION P_300 93345
TABLE PARTITION P_400 93346
TABLE PARTITION P_MAX 93347

6 rows selected.

SQL>
SQL> select table_name
2 from user_tables
3 where table_name like '%93342%'
4 order by 1;

TABLE_NAME
--------------------------------------------------------------------------------
SYS_FBA_DDL_COLMAP_93342
SYS_FBA_HIST_93342
SYS_FBA_TCRV_93342

SQL>
SQL> select dbms_metadata.get_ddl('TABLE','SYS_FBA_HIST_93342') from dual;

DBMS_METADATA.GET_DDL('TABLE','SYS_FBA_HIST_93342')
--------------------------------------------------------------------------------

CREATE TABLE "HEMANT"."SYS_FBA_HIST_93342"
( "RID" VARCHAR2(4000),
"STARTSCN" NUMBER,
"ENDSCN" NUMBER,
"XID" RAW(8),
"OPERATION" VARCHAR2(1),
"ID_COLUMN" NUMBER,
"DATA_COLUMN" VARCHAR2(15),
"DATE_INSERTED" DATE
) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
STORAGE(
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "FBDA"
PARTITION BY RANGE ("ENDSCN")
(PARTITION "HIGH_PART" VALUES LESS THAN (MAXVALUE) SEGMENT CREATION IMMEDIATE

PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
NOCOMPRESS LOGGING
STORAGE(INITIAL 8388608 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "FBDA" )


SQL>


So, even though my active table (TEST_FBDA_PARTITIONED) is created as a Range Partitioned Table partitioned on ID_COLUMN, the corresponding History table is Range Partitioned on ENDSCN with a single MAXVALUE partition.
Therefore, Oracle ignores my partitioning definition when creating the History table.  The History table (seems to be ?) always Range Partitioned on ENDSCN and starts using the MAXVALUE partition up-front.  (When will this Partition be split ?  I could search MoS for Docs / Bugs, but I'll do that exercise later).

Now that we know that the History table doesn't use our Partition Key, we must wonder about Partition Pruning when running AS OF queries on the active table that need to access the History Table.  If we can't Partition Prune, can we Index the History table ?

SQL> select table_name                   
2 from user_tables
3 where table_name like 'SYS_FBA_HIST%'
4 /

TABLE_NAME
--------------------------------------------------------------------------------
SYS_FBA_HIST_93250
SYS_FBA_HIST_93342

SQL> create index sys_fba_hist_93250_ndx_1 on sys_fba_hist_93250(id_column) tablespace fbda;

Index created.

SQL> create index sys_fba_hist_93342_ndx_1 on sys_fba_hist_93342(id_column) tablespace fbda;

Index created.

SQL> select dbms_metadata.get_ddl('INDEX','SYS_FBA_HIST_93250_NDX_1') from dual;

DBMS_METADATA.GET_DDL('INDEX','SYS_FBA_HIST_93250_NDX_1')
--------------------------------------------------------------------------------

CREATE INDEX "HEMANT"."SYS_FBA_HIST_93250_NDX_1" ON "HEMANT"."SYS_FBA_HIST_932
50" ("ID_COLUMN")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "FBDA"


SQL> select dbms_metadata.get_ddl('INDEX','SYS_FBA_HIST_93342_NDX_1') from dual;

DBMS_METADATA.GET_DDL('INDEX','SYS_FBA_HIST_93342_NDX_1')
--------------------------------------------------------------------------------

CREATE INDEX "HEMANT"."SYS_FBA_HIST_93342_NDX_1" ON "HEMANT"."SYS_FBA_HIST_933
42" ("ID_COLUMN")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "FBDA"


SQL>


Yes !  Oracle does allow us to build custom indexes on the History tables.  (I believe I saw this somewhere in the documentation or a note).
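
For context, the kind of AS OF query such an index is meant to help might look like this sketch (the predicate value is arbitrary; whether the optimizer actually uses a custom index on the History table depends on the plan it chooses):

select id_column, data_column
from   test_fbda as of timestamp (systimestamp - interval '1' day)
where  id_column = 42;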

Next Post (possibly) : More DML operations and whether auto-purging occurs based on the defined RETENTION 3 DAY period.  (Note : The Retention Period is defined at the Flashback Archive level, not at the individual table (active/History) level.  If you need to have tables with different Retention Periods, you need to define different Flashback Archives, although they can all be in the same Tablespace; see the sketch below.)
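
For example (a sketch with illustrative archive and table names), separate Flashback Archives with different retentions can share the same tablespace:

create flashback archive fbda_3day  tablespace fbda retention 3 day;
create flashback archive fbda_30day tablespace fbda retention 30 day;

-- a (hypothetical) table that needs the longer retention
alter table some_other_table flashback archive fbda_30day;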
.
.
.

Categories: DBA Blogs

Mind your rdbms/audit on 12c

Pythian Group - Tue, 2016-04-05 10:17

Recently we ran into an interesting question from one of our clients. They were seeing messages of the following form in syslog:


"Aug 11 11:56:02 ***** kernel: EXT3-fs warning (device ******): ext3_dx_add_entry: Directory index full!"

I haven’t encountered this before, and did a bit of research. My initial suspicion ended up being correct, and it was due to too many files being created, somewhere in that file system. I had a look around, and eventually checked out the ORACLE_HOME of the ASM / Grid Infrastructure software, which is running version 12.1.0.2 on that host.

I snooped around using du -sh to check which directories or sub-directories might be the culprit, and the disk usage utility came to a halt after the “racg” directory. Next in line would be “rdbms”. The bulb lit up somewhat brighter now. Entering the rdbms/audit directory, I issued the common command you would if you wanted to look at a directory’s contents: “ls”.

Five minutes later, there was still no output on my screen. Okay, we found the troublemaker. So we’re now being faced with a directory that has potentially millions of files in it. Certainly we all are aware that “rm” isn’t really able to cope with a situation like this. It would probably run for a couple minutes until it’s done parsing the directory index, and then yell “argument list too long” at us. Alternatively, we could use find, combined with -exec (bad idea), -delete, or even pipe into rm using xargs. Looking around a bit on the good ol’ friend google, I came across this very interesting blog post by Sarath Pillai.

I took his PERL one-liner, adjusted it a wee bit since I was curious how many files we actually got in there and ran it on a sandbox system with a directory with 88’000 files in it:


perl -e 'my $i=0;for(<*>){$i++;((stat)[9]<(unlink))} print "Files deleted: $i\n"'

It completed in 4.5 seconds. That’s pretty good. In Sarath’s tests he was able to delete half a million files in roughly a minute. Fair enough.

After getting the OK from the client, we ran it on the big beast. It took 10 minutes.


Files deleted: 9129797

9.1 million files. Now here comes the interesting bit. This system has been actively using 12.1.0.2 ASM since May 6th, 2015. That’s only 3 months. That translates to 3 million files per month. Is this really a desirable feature? Do we need to start running Hadoop just to be able to mine the data in there?

Looking at some of the files, it seems ASM is not only logging user interactions there, but also anything and everything done by any process that connects to ASM.

As I was writing this, I happened to take another peek at the directory.


[oracle@cc1v3 audit]$ ls -1 | wc -l
9134657

Remember those numbers from before? Three million a month? Double that.

I suspect this was due to the index being full, and Linux has now re-populated the index with the next batch. Until it ran full again.

A new syslog entry just created at the same time seems to confirm that theory:

Aug 12 00:09:11 ***** kernel: EXT3-fs warning (device ******): ext3_dx_add_entry: Directory index full!

After running the PERL one-liner again, we deleted another vast amount of files:


Files deleted: 9135386

It seems that the root cause is the time stamp added to the file names of the audit files that Oracle writes in 12.1. The file names are much more unique, which gives Oracle the opportunity to generate many more of them. Whereas in previous versions, with an adequately sized file system, you’d probably be okay for a year or more, on 12.1.0.2, on an active database (and our big beast is very active), you have to schedule a job to remove them, and ensure it runs frequently (think 18+ million files in 3 months to put “frequently” into perspective).

Categories: DBA Blogs

Enabling Large Pages on Oracle Database 11g running on IBM AIX

Pythian Group - Mon, 2016-04-04 18:30

For implementing Large Pages on AIX first you will need to choose large page size at OS level.
On AIX you can have multiple large page sizes of 4KB, 64KB, 16MB, and 16GB.

In this example we will be using a large page size of 16MB.

Steps for implementation:

1- Based on MOS Doc ID 372157.1 first you need to enable Large Pages at OS level

# vmo -r  -o lgpg_size=16777216 -o lgpg_regions=<Total number of pages>
# vmo -o lru_file_repage=0
# vmo -p -o v_pinshm=1
# lsuser -a capabilities oracle
# chuser capabilities=CAP_BYPASS_RAC_VMM,CAP_PROPAGATE oracle
# bosboot -a

This needs a server reboot

For complete instruction please review note ID 372157.1

2- Setting parameters at instance level

On AIX databases you only need to set LOCK_SGA to TRUE:

alter system set lock_sga=TRUE scope=spfile;

Note: On AIX databases, the USE_LARGE_PAGES parameter has NO impact.
This parameter is only meaningful for databases running on Linux; on AIX its value is ignored, even if it is set to FALSE.

By default when Large Pages is available on AIX it will be used by database instances regardless of USE_LARGE_PAGES parameter value. You only need to set LOCK_SGA.

3- Restart the instance and confirm Large Pages is in use:

After setting lock_sga instance must be restarted.
As I explained above, when Large Pages is available at the OS level it will be used by the instance, but the key point here is how to confirm whether Large Pages is in use or not.

How to check if Large Pages are used by the Oracle instance:

For Oracle 11g running on AIX, no informational message is written to the alert log like the one we see in the alert log of databases running on Linux.

So for your database instance running on AIX, do NOT expect the following lines in the alert log:

****************** Large Pages Information *****************
Total Shared Global Region in Large Pages = xx MB (100%)
Large Pages used by this instance: xxx (xxx MB)
Large Pages unused system wide = x (xxx MB) (alloc incr 4096 KB)
Large Pages configured system wide = xxx (xxx MB)
Large Page size = 16 MB
***********************************************************

The only way you can make sure Large Pages is being used by the instance is to check memory usage at the OS level:

Consider an instance where SGA_TARGET is 8G.
The total number of Large Pages (with a size of 16M) will be 8G/16M + 1, which is: 8589934592 / 16777216 + 1 = 513
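
If you prefer to let the instance do that arithmetic, here is a small sketch (assuming sga_target is set explicitly, so that v$parameter reports it in bytes):

select ceil(to_number(value) / (16*1024*1024)) + 1 as pages_16m
from   v$parameter
where  name = 'sga_target';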

Check the number of large 16M pages in use at OS level before starting your instance:

$ vmstat -P all

System configuration: mem=98304MB

pgsz            memory                           page
----- -------------------------- ------------------------------------
           siz      avm      fre    re    pi    po    fr     sr    cy
   4K  4420992  2926616   487687     0     0     5  1280   2756     0
  64K   582056   581916      754     0     0     0     0      0     0
  16M     2791       87     2704     0     0     0     0      0     0

In this example, the number of 16M pages in use before starting the instance is 87, out of a total of 2791 available pages.

We start the instance with SGA size of 8G:

SQL> startup
ORACLE instance started.

Total System Global Area 8551575552 bytes
Fixed Size                  2238616 bytes
Variable Size            2348812136 bytes
Database Buffers         6190792704 bytes
Redo Buffers                9732096 bytes
Database mounted.
Database opened.
SQL> show parameter sga_target

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
sga_target                           big integer 8G

SQL> show parameter lock_sga

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
lock_sga                             boolean     TRUE

Then we check Large pages in use again :

$ vmstat -P all

System configuration: mem=98304MB

pgsz            memory                           page
----- -------------------------- ------------------------------------
           siz      avm      fre    re    pi    po    fr     sr    cy
   4K  4428160  2877041   420004     0     0     5  1279   2754     0
  64K   581608   530522    51695     0     0     0     0      0     0
  16M     2791      600     2191     0     0     0     0      0     0

As you can see the total number of 16M pages in use is now 600 pages, which is exactly 513 pages more than what it was before instance startup.
This proves that 16M pages have been used by our instance.

You can also check memory usage of your instance by checking one of the instance processes like pmon:

$ ps -ef|grep pmon
  oracle 14024886 31392176   0 14:05:34  pts/0  0:00 grep pmon
  oracle 41681022        1   0   Mar 11      -  3:12 ora_pmon_KBS

Then check that the memory used by this process comes from 16M pages:

$ svmon -P 41681022

-------------------------------------------------------------------------------
     Pid Command          Inuse      Pin     Pgsp  Virtual 64-bit Mthrd  16MB
41681022 oracle         2180412  2109504     1778  2158599      Y     N     Y

     PageSize                Inuse        Pin       Pgsp    Virtual
     s    4 KB               31820          0       1650       9975
     m   64 KB                2959        516          8       2961
     L   16 MB                 513        513          0        513

I hope this will be useful for you, and good luck.

Categories: DBA Blogs

More Fun With Oracle Timestamp Math

Pythian Group - Mon, 2016-04-04 15:23
Timestamp Math

Several years ago I wrote an article on Oracle date math.
Amazingly, that article was still available online at the time of this writing.

Working With Oracle Dates

An update to that article is long overdue.
While date math with the DATE data type is fairly well known and straight forward, date math with Oracle TIMESTAMP data is less well known and somewhat more difficult.

Data Types and Functions

Let’s begin by enumerating the data types and functions that will be discussed

Datetime and Interval Data Types

Documentation for Datetime and Interval Data Types

  • Timestamp
  • Timestamp with Time Zone
  • Interval Day to Second
  • Interval Year to Month
Datetime Literals

Documentation for Datetime Literals

  • Date
  • Timestamp
  • Timestamp with Time Zone
  • Timestamp with Local Time Zone
Interval Literals

Documentation for Interval Literals

  • Interval Day to Second
  • Interval Year to Month
Datetime/Interval Arithmetic

Documentation for Datetime/Interval Arithmetic

There is not a link to the heading, just scroll down the page until you find this.

Timestamp Functions

Documentation for Datetime Functions

There are quite a few of these available. Most readers will already be familiar with many of these, so only some of the more interesting functions related to timestamps will be covered; a quick illustration follows the list below.

  • extract
  • to_dsinterval
  • to_yminterval
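
Here is a minimal sketch of the three functions above in action (the interval values chosen are arbitrary):

select
  extract(hour from systimestamp)          as hr             -- pull a single field from a timestamp
, systimestamp + to_dsinterval('PT1H30M')  as in_90_minutes  -- add a day-to-second interval (ISO 8601 form)
, systimestamp + to_yminterval('01-06')    as in_18_months   -- add a year-to-month interval
from dual;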
Timestamp Internals

It is always interesting to have some idea of how different bits of technology work. In Working With Oracle Dates I showed how Date values are stored in the database, as well as how a Date stored in the database is somewhat different than a date variable.

Let’s start by storing some data in a timestamp column and comparing how it differs from systimestamp.

Test table for Timestamp Math Blog:

col c1_dump format a70
col c1 format a35
col funcname format a15

set linesize 200 trimspool on
set pagesize 60

drop table timestamp_test purge;

create table timestamp_test (
c1 timestamp
)
/

insert into timestamp_test values(systimestamp );
insert into timestamp_test values(systimestamp - 366);
commit;

select
'timestamp' funcname, c1, dump(c1) c1_dump
from timestamp_test
union all
select
'systimestamp' funcname, systimestamp, dump(systimestamp) systimestamp_dump
from dual
/

FUNCNAME        C1                                  C1_DUMP
--------------- ----------------------------------- ----------------------------------------------------------------------
timestamp       26-MAR-16 03.09.27.649491 PM -07:00 Typ=180 Len=11: 120,116,3,26,16,10,28,38,182,114,56
timestamp       26-MAR-15 03.09.27.000000 PM -07:00 Typ=180 Len=7: 120,115,3,26,16,10,28
systimestamp    26-MAR-16 03.09.27.687416 PM -04:00 Typ=188 Len=20: 224,7,3,26,19,9,27,0,192,34,249,40,252,0,5,0,0,0,0,0

3 rows selected.

One of the first things you might notice is that the value for Typ is 180 for TIMESTAMP columns, but for SYSTIMESTAMP Typ=188.
The difference is due to TIMESTAMP being an internal data type as stored in the database, while SYSTIMESTAMP is dependent on the compiler used to create the executables.

Another difference is the length; the first TIMESTAMP row has a length of 11, whereas the SYSTIMESTAMP value’s length is 20. And what about that second TIMESTAMP row? Why is the length only 7?

TIMESTAMP with length of 7

An example will show why the second row inserted into TIMESTAMP_TEST has a length of only 7.

  1* select dump(systimestamp) t1, dump(systimestamp-1) t2, dump(sysdate) t3 from dual
15:34:32 ora12c102rac01.jks.com - jkstill@js122a1 SQL- /

T1                                       T2                                       T3
---------------------------------------- ---------------------------------------- ----------------------------------------
Typ=188 Len=20: 224,7,3,22,22,34,35,0,16 Typ=13 Len=8: 224,7,3,21,18,34,35,0      Typ=13 Len=8: 224,7,3,22,18,34,35,0
0,181,162,17,252,0,5,0,0,0,0,0


1 row selected.

T2 was implicitly converted to the same data type as SYSDATE because standard date math was performed on it.

The same thing happened when the second row was inserted into TIMESTAMP_TEST.

Oracle implicitly converted the data to a DATE data type, and then implicitly converted it again back to a timestamp; only the standard date information is available following the first implicit conversion.

You may have noticed that in this example the length of the data is 8, while that stored in the table was 7. This is due to the use of SYSDATE, which is an external data type, whereas any data of DATE data type that is stored in the database is using an internal data type which always has a length of 7.

SYSTIMESTAMP Byte Values

Let’s see if we can determine how each byte is used in a SYSTIMESTAMP value

The following SQL will use the current time as a baseline, start at a point 10 seconds earlier, and show the timestamp value and the internal representation.

 

col t1 format a35
col t2 format a38
col dump_t1 format a70
col dump_t2 format a70

set linesize 250 trimspool on

/*
 using to_dsinterval() allows performing timestamp math without implicit conversions

see https://en.wikipedia.org/wiki/ISO_8601
 for an explanation of the PTnS notation being used in to_dsinterval()

*/

alter session set nls_date_format = 'yyyy-mm-dd hh24:mi:ss';

-- subtract 1 second from the current date
-- do it 10 times
select
 --systimestamp t1,
 --dump(systimestamp) dump_t1,
 systimestamp - to_dsinterval('PT' || to_char(level) || 'S') t2,
 dump(systimestamp - to_dsinterval('PT' || to_char(level) || 'S')) dump_t2
from dual connect by level <= 10
order by level desc
/


T2                                     DUMP_T2
-------------------------------------- ----------------------------------------------------------------------
26-MAR-16 03.34.55.349007000 PM -04:00 Typ=188 Len=20: 224,7,3,26,19,34,55,0,152,108,205,20,252,0,5,0,0,0,0,0
26-MAR-16 03.34.56.349007000 PM -04:00 Typ=188 Len=20: 224,7,3,26,19,34,56,0,152,108,205,20,252,0,5,0,0,0,0,0
26-MAR-16 03.34.57.349007000 PM -04:00 Typ=188 Len=20: 224,7,3,26,19,34,57,0,152,108,205,20,252,0,5,0,0,0,0,0
26-MAR-16 03.34.58.349007000 PM -04:00 Typ=188 Len=20: 224,7,3,26,19,34,58,0,152,108,205,20,252,0,5,0,0,0,0,0
26-MAR-16 03.34.59.349007000 PM -04:00 Typ=188 Len=20: 224,7,3,26,19,34,59,0,152,108,205,20,252,0,5,0,0,0,0,0
26-MAR-16 03.35.00.349007000 PM -04:00 Typ=188 Len=20: 224,7,3,26,19,35,0,0,152,108,205,20,252,0,5,0,0,0,0,0
26-MAR-16 03.35.01.349007000 PM -04:00 Typ=188 Len=20: 224,7,3,26,19,35,1,0,152,108,205,20,252,0,5,0,0,0,0,0
26-MAR-16 03.35.02.349007000 PM -04:00 Typ=188 Len=20: 224,7,3,26,19,35,2,0,152,108,205,20,252,0,5,0,0,0,0,0
26-MAR-16 03.35.03.349007000 PM -04:00 Typ=188 Len=20: 224,7,3,26,19,35,3,0,152,108,205,20,252,0,5,0,0,0,0,0
26-MAR-16 03.35.04.349007000 PM -04:00 Typ=188 Len=20: 224,7,3,26,19,35,4,0,152,108,205,20,252,0,5,0,0,0,0,0

10 rows selected.

 

From the output we can see that seconds are numbered 0-59, and that the 7th byte in the internal format is where the second is stored. We can also see that the Month is represented by the 3rd byte, and the Day by the 4th byte.

One would then logically expect the 5th byte to show us the hour. Glancing at the actual time of 3:00 PM, it seems curious that the value we expect to be the hour is 19 rather than 15.

The server where these queries is being run has a Time Zone of EDT. Next I ran the same queries on a server with a TZ of PDT, and though the time in the timestamp appeared as 3 hours earlier, the value stored in the 5th byte is still 19. Oracle is storing the hour in UTC time, then using the TZ from the server to get the actual time.

Playing with Time Zones

We can modify the local session time zone to find out how Oracle is calculating the times.

The first attempt is made on the remote client where scripts for this article are developed. The TZ will be set for Ethiopia and then the time checked at the Linux command line and in Oracle.

 

# date
Sat Mar 26 13:16:12 PDT 2016

# TZ='Africa/Addis_Ababa'; export TZ

# date
Sat Mar 26 23:16:17 EAT 2016

# sqlplus jkstill/XXX@p1
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
SQL- !date
Sat Mar 26 23:16:40 EAT 2016

SQL-  l
 1 select
 2 systimestamp t1,
 3 dump(systimestamp) dump_t1
 4* from dual
23:16:50 ora12c102rac01.jks.com - jkstill@js122a1 
SQL- /
T1                                  DUMP_T1
----------------------------------- ----------------------------------------------------------------------
26-MAR-16 04.16.56.254769 PM -04:00 Typ=188 Len=20: 224,7,3,26,20,16,56,0,104,119,47,15,252,0,5,0,0,0,0,0

Setting the TZ on the client clearly has no effect on the time returned from Oracle. Now let’s try while logged on to the database server.

$ date
Sat Mar 26 16:20:23 EDT 2016

$ TZ='Africa/Addis_Ababa'; export TZ

$ date
Sat Mar 26 23:20:38 EAT 2016

$ sqlplus / as sysdba
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production

SQL- l
 1 select
 2 systimestamp t1,
 3 dump(systimestamp) dump_t1
 4* from dual
SQL- /

T1                                  DUMP_T1
----------------------------------- ----------------------------------------------------------------------
26-MAR-16 11.22.48.473298 PM +03:00 Typ=188 Len=20: 224,7,3,26,20,22,48,0,80,244,53,28,3,0,5,0,0,0,0,0

 

This experiment has demonstrated two things for us:

  1. Oracle is storing the hour as UTC time
  2. Setting the TZ on the client does not have any effect on the calculation of the time.
What About the Year?

Given the location of the month, it would be expected to find the year in the byte just before the month byte. There is not just one byte before the month, but two. You will recall that SYSTIMESTAMP has a different internal representation than the TIMESTAMP data type does. Oracle is using both of these bytes to store the year.

Working with this timestamp from an earlier example, we can use the information in Oracle Support Note 69028.1 to see how this works.

 

T2                                     DUMP_T2
-------------------------------------- ----------------------------------------------------------------------
26-MAR-16 03.34.55.349007000 PM -04:00 Typ=188 Len=20: 224,7,3,26,19,34,55,0,152,108,205,20,252,0,5,0,0,0,0,0

 

For the timestamp of March 26 2016, the first two bytes of the timestamp are used to represent the year.

The Formula for AD dates is Byte 1 + ( Byte 2 * 256 ). Using this formula the year 2016 can be arrived at:

224 + ( 7 * 256) = 2016

For the TIMESTAMP data type, the format is somewhat different for the year; actually it works the same way it does for the DATE data type, in excess 100 notation.

 

SQL- l
1* select c1, dump(c1) dump_c1 from timestamp_test where rownum < 2
SQL- /

C1 DUMP_C1
------------------------------ ---------------------------------------------------
26-MAR-16 03.09.27.649491 PM Typ=180 Len=11: 120,116,3,26,16,10,28,38,182,114,56

 

Here the 1st byte (120) is the century and the 2nd byte (116) is the year, both in excess-100 notation: (120 - 100) * 100 + (116 - 100) = 2016.

Decode All Timestamp Components

Now let’s decode all of the data in a TIMESTAMP. First we need some TIMESTAMP test data.

Creating the test data

The following SQL will provide some test data for the following experiments.

We may not use all of the columns or rows, but they are available.

 

drop table timestamp_tz_test purge;

create table timestamp_tz_test (
 id integer,
 c1 timestamp,
 c2 timestamp with time zone,
 c3 timestamp with local time zone
)
/

-- create 10 rows each on second apart

begin
for i in 1..10
loop
 insert into timestamp_tz_test values(i,systimestamp,systimestamp,systimestamp );
 dbms_lock.sleep(1);
 null;
end loop;
commit;
end;
/

 

We already know that TIMESTAMP data can store fractional seconds to a billionth of a second.

Should you want to prove that to yourself, the following bit of SQL can be used to insert TIMESTAMP data into a table, with each row being 1E-9 seconds later than the previous row. This will be left as an exercise for the reader.

 

create table t2 as
select level id,
 to_timestamp('2016-03-29 14:25:42.5' || lpad(to_char(level),8,'0'),'yyyy-mm-dd hh24.mi.ssxff') c1
from dual
connect by level <= 1000
/

col dump_t1 format a70
col c1 format a35
col id format 99999999

select id, c1, substr(dump(c1),instr(dump(c1),',',-1,4)+1) dump_t1
from t2
order by id
/

 

Oracle uses the last 4 bytes of a timestamp to store the fractional seconds.

The least significant byte contributes its value directly.

Each more significant byte is weighted by an increasing power of 256.

The following SQL will make this more clear. Don't spend too much time at first trying to understand the SQL, as it will become clearer once you see the results.

SQL to decode 1 row of TIMESTAMP data.

 

col id format 99
col t1 format a35
col dumpdata format a50
col tz_type format a10
col ts_component format a40
col label format a6
col real_value format a50

set linesize 200 trimspool on

alter session set nls_timestamp_format = 'yyyy-mm-dd hh24.mi.ssxff';
alter session set nls_timestamp_tz_format = 'yyyy-mm-dd hh24.mi.ssxff tzr';


with rawdata as (
 select c2 t1, dump(c2) dump_t1
 from timestamp_tz_test
 where id = 1
),
datedump as (
 select t1,
 substr(dump_t1,instr(dump_t1,' ',1,2)+1) dumpdata
 from rawdata
),
-- regex from http://nuijten.blogspot.com/2009/07/splitting-comma-delimited-string-regexp.html
datebits as (
 select level id, regexp_substr (dumpdata, '[^,]+', 1, rownum) ts_component
 from datedump
 connect by level <= length (regexp_replace (dumpdata, '[^,]+')) + 1
),
labeldata as (
 select 'TS,DU,CC,YY,MM,DD,HH,MI,SS,P1,P2,P3,P4' rawlabel from dual
),
labels as (
 select level-2 id, regexp_substr (rawlabel, '[^,]+', 1, rownum) label
 from labeldata
 connect by level <= length (regexp_replace (rawlabel, '[^,]+')) + 1
),
data as (
 select db.id, db.ts_component
 from datebits db
 union
 select 0, dumpdata
 from datedump dd
 union select -1, to_char(t1) from rawdata
)
select d.id, l.label, d.ts_component,
 case l.label
 when 'DU' then d.ts_component
 when 'CC' then 'Excess 100 - Real Value: ' || to_char(to_number((d.ts_component - 100)*100 ))
 when 'YY' then 'Excess 100 - Real Value: ' || to_char(to_number(d.ts_component - 100 ))
 when 'MM' then 'Real Value: ' || d.ts_component
 when 'DD' then 'Real Value: ' || d.ts_component
 when 'HH' then 'Excess 1 - Real Value: ' || to_char(to_number(d.ts_component)-1)
 when 'MI' then 'Excess 1 - Real Value: ' || to_char(to_number(d.ts_component)-1)
 when 'SS' then 'Excess 1 - Real Value: ' || to_char(to_number(d.ts_component)-1)
 when 'P1' then 'Fractional Second P1 : ' || to_char((to_number(d.ts_component) * POWER(256,3) ) / power(10,9))
 when 'P2' then 'Fractional Second P2 : ' || to_char((to_number(d.ts_component) * POWER(256,2) ) / power(10,9))
 when 'P3' then 'Fractional Second P3 : ' || to_char((to_number(d.ts_component) * 256 ) / power(10,9))
 when 'P4' then 'Fractional Second P4 : ' || to_char(to_number(d.ts_component) / power(10,9))
 end real_value
from data d
join labels l on l.id = d.id
order by 1
/

When the values for the Pn fractional second columns are added up, they equal the fractional seconds shown in the timestamp.

 

 ID LABEL  TS_COMPONENT                             REAL_VALUE
--- ------ ---------------------------------------- --------------------------------------------------
 -1 TS     2016-03-31 09.14.29.488265 -07:00
  0 DU     120,116,3,31,17,15,30,29,26,85,40,13,60  120,116,3,31,17,15,30,29,26,85,40,13,60
  1 CC     120                                      Excess 100 - Real Value: 2000
  2 YY     116                                      Excess 100 - Real Value: 16
  3 MM     3                                        Real Value: 3
  4 DD     31                                       Real Value: 31
  5 HH     17                                       Excess 1 - Real Value: 16
  6 MI     15                                       Excess 1 - Real Value: 14
  7 SS     30                                       Excess 1 - Real Value: 29
  8 P1     29                                       Fractional Second P1 : .486539264
  9 P2     26                                       Fractional Second P2 : .001703936
 10 P3     85                                       Fractional Second P3 : .00002176
 11 P4     40                                       Fractional Second P4 : .00000004

13 rows selected.
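
As a sanity check, the raw fractional-second bytes from the dump above (29, 26, 85 and 40) can be weighted and summed directly; the result matches the .488265 fractional seconds displayed in the timestamp.

select ( 29 * power(256,3) + 26 * power(256,2) + 85 * 256 + 40 ) / power(10,9) fractional_seconds
from dual;
-- returns .488265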

Time zones are recorded in an additional two bytes in the TIMESTAMP WITH TIME ZONE and TIMESTAMP WITH LOCAL TIME ZONE data types.

Decoding those two bytes is left as an exercise for the reader.

Timestamp Arithmetic

Now that we have had some fun exploring and understanding how Oracle stores TIMESTAMP data, it is time to see how calculations can be performed on timestamps.

Note: See this ISO 8601 Article to understand the notation being used in to_dsinterval().

Interval Day to Second

It is a common occurrence to add or subtract time to or from Oracle Dates.

How that is done with the Oracle DATE data type is fairly well known.

  • Add 1 Day
    • DATE + 1
  • Add 1 Hour
    • DATE + (1/24)
  • Add 1 Minute
    • DATE + ( 1 / 1440)
  • Add 1 Second
    • DATE + (1/86400)

Following is a brief refresher on that topic:

 

alter session set nls_date_format = 'yyyy-mm-dd hh24:mi:ss';

select sysdate today, sysdate -1 yesterday from dual;

select sysdate now, sysdate - (30/86400) "30_Seconds_Ago" from dual;

select sysdate now, sysdate + ( 1/24 ) + ( 15/1440 ) + ( 42/86400) "1:15:42_Later" from dual;

SQL> @date-calc

Session altered.

TODAY YESTERDAY
------------------- -------------------
2016-03-30 13:39:06 2016-03-29 13:39:06
NOW 30_Seconds_Ago
------------------- -------------------
2016-03-30 13:39:06 2016-03-30 13:38:36

NOW 1:15:42_Later
------------------- -------------------
2016-03-30 13:39:06 2016-03-30 14:54:48

 

While this same method will work with timestamps, the results may not be what you expect. As noted earlier, Oracle will perform an implicit conversion to a DATE data type, resulting in truncation of some timestamp data. The next example makes it clear that implicit conversion has turned the TIMESTAMP into a DATE type.

alter session set nls_timestamp_format = 'yyyy-mm-dd hh24.mi.ssxff';
alter session set nls_date_format = 'DD-MON-YY';

select systimestamp today, systimestamp -1 yesterday from dual;

select systimestamp now, systimestamp - (30/86400) "30_Seconds_Ago" from dual;

select systimestamp now, systimestamp + ( 1/24 ) + ( 15/1440 ) + ( 42/86400) "1:15:42_Later" from dual;

SQL> @timestamp-calc-incorrect

TODAY                                                                       YESTERDAY
--------------------------------------------------------------------------- ---------
2016-03-31 11.35.29.591223 -04:00                                           30-MAR-16

NOW                                                                         30_Second
--------------------------------------------------------------------------- ---------
2016-03-31 11.35.29.592304 -04:00                                           31-MAR-16

NOW                                                                         1:15:42_L
--------------------------------------------------------------------------- ---------
2016-03-31 11.35.29.592996 -04:00                                           31-MAR-16

 

Oracle has supplied functions to properly perform calculations on timestamps. The previous example works properly when to_dsinterval is used, as seen in the next example.

 

col c30 head '30_Seconds_Ago' format a38
col clater head '1:15:42_Later' format a38
col now format a35
col today format a35
col yesterday format a38

alter session set nls_timestamp_format = 'yyyy-mm-dd hh24.mi.ssxff';
alter session set nls_timestamp_tz_format = 'yyyy-mm-dd hh24.mi.ssxff tzr';


-- alternate methods to subtract 1 day
select systimestamp today, systimestamp - to_dsinterval('P1D') yesterday from dual;
select systimestamp today, systimestamp - to_dsinterval('1 00:00:00') yesterday from dual;

-- alternate methods to subtract 30 seconds
select systimestamp now, systimestamp - to_dsinterval('PT30S') c30 from dual;
select systimestamp now, systimestamp - to_dsinterval('0 00:00:30') c30 from dual;

-- alternate methods to add 1 hour, 15 minutes and 42 seconds
select systimestamp now, systimestamp + to_dsinterval('PT1H15M42S') clater from dual;
select systimestamp now, systimestamp + to_dsinterval('0 01:15:42') clater from dual;

TODAY                               YESTERDAY
----------------------------------- --------------------------------------
2016-03-30 18.10.41.613813 -04:00 2016-03-29 18.10.41.613813000 -04:00

TODAY                               YESTERDAY
----------------------------------- --------------------------------------
2016-03-30 18.10.41.614480 -04:00 2016-03-29 18.10.41.614480000 -04:00

NOW                                 30_Seconds_Ago
----------------------------------- --------------------------------------
2016-03-30 18.10.41.615267 -04:00 2016-03-30 18.10.11.615267000 -04:00

NOW                                 30_Seconds_Ago
----------------------------------- --------------------------------------
2016-03-30 18.10.41.615820 -04:00 2016-03-30 18.10.11.615820000 -04:00

NOW                                 1:15:42_Later
----------------------------------- --------------------------------------
2016-03-30 18.10.41.616538 -04:00 2016-03-30 19.26.23.616538000 -04:00

NOW                                 1:15:42_Later
----------------------------------- --------------------------------------
2016-03-30 18.10.41.617161 -04:00 2016-03-30 19.26.23.617161000 -04:00

 

Extract Values from Timestamps

The values for years, months, days, hours and seconds can all be extracted from a timestamp via the extract function. The following code demonstrates a few uses of this, along with examples of retrieving intervals from two dates.

The values in parentheses for the day() and year() intervals specify the numeric precision to be returned.

 

def nls_tf='yyyy-mm-dd hh24.mi.ssxff'

alter session set nls_timestamp_format = '&nls_tf';

col d1_day format 999999
col full_interval format a30
col year_month_interval format a10

with dates as (
   select
      to_timestamp_tz('2014-06-19 14:24:29.373872', '&nls_tf') d1
      , to_timestamp_tz('2016-03-31 09:42:16.8734921', '&nls_tf') d2
   from dual
)
select
   extract(day from d1) d1_day
   , ( d2 - d1) day(4) to second full_interval
   , ( d2 - d1) year(3) to month year_month_interval
   , extract( day from d2 - d1) days_diff
   , extract( hour from d2 - d1) hours_diff
   , extract( minute from d2 - d1) minutes_diff
   , extract( second from d2 - d1) seconds_diff
from dates
/


 D1_DAY FULL_INTERVAL                  YEAR_MONTH  DAYS_DIFF HOURS_DIFF MINUTES_DIFF SECONDS_DIFF
------- ------------------------------ ---------- ---------- ---------- ------------ ------------
     19 +0650 19:17:47.499620          +001-09           650         19           17   47.4996201

Building on that, the following example demonstrates how the interval value that represents the difference between dates d1 and d2 can be added back to d1 to yield a date with the same value as d2.

 

def nls_tf='yyyy-mm-dd hh24.mi.ssxff'

alter session set nls_timestamp_format = '&nls_tf';

col d1 format a30
col d2 format a30
col full_interval format a30
col calc_date format a30

with dates as (
   select
      to_timestamp('2014-06-19 14:24:29.373872', '&nls_tf') d1
      , to_timestamp('2016-03-31 09:42:16.873492', '&nls_tf') d2
   from dual
)
select
   d1,d2
   , ( d2 - d1) day(4) to second  full_interval
   , d1 + ( d2 - d1) day(4) to second calc_date
from dates
/


D1                             D2                             FULL_INTERVAL                  CALC_DATE
------------------------------ ------------------------------ ------------------------------ ------------------------------
2014-06-19 14.24.29.373872000  2016-03-31 09.42.16.873492000  +0650 19:17:47.499620          2016-03-31 09.42.16.873492000

 

PL/SQL Interval Data Types

 

The ISO 8601 Article previously mentioned will be useful for understanding how time durations may be specified with interval functions.

The following combination of SQL and PL/SQL is used to convert the difference between two timestamps into seconds. The code is incomplete in the sense that it assumes the largest component of the INTERVAL is hours. In the use case for this code that is true; however, there could also be days, months and years for larger values of the INTERVAL.

The following code is sampled from the script ash-waits-use.sql and uses PL/SQL to demonstrate the use of the INTERVAL DAY TO SECOND data type in PL/SQL.

 

var v_wall_seconds number
col wall_seconds new_value wall_seconds noprint

declare
	ash_interval interval day to second;
begin

	select max(sample_time) - min(sample_time) into ash_interval from v$active_session_history;


	select
		max(sample_time) - min(sample_time) into ash_interval
	from v$active_session_history
	where sample_time 
	between
		decode('&&snap_begin_time',
			'BEGIN',
			to_timestamp('1900-01-01 00:01','yyyy-mm-dd hh24:mi'),
			to_timestamp('&&snap_begin_time','yyyy-mm-dd hh24:mi')
		)
		AND
		decode('&&snap_end_time',
			'END',
			to_timestamp('4000-12-31 23:59','yyyy-mm-dd hh24:mi'),
			to_timestamp('&&snap_end_time','yyyy-mm-dd hh24:mi')
		);

	:v_wall_seconds := 
		(extract(hour from ash_interval) * 3600 )
		+ (extract(minute from ash_interval) * 60 )
		+ extract(second from ash_interval) ;
end;
/


select round(:v_wall_seconds,0) wall_seconds from dual;
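
Should an interval ever span more than 24 hours, the day component needs to be included in the conversion as well. The following is a minimal sketch of that idea; the interval value and variable names are only illustrative, not part of the original script.

set serveroutput on

declare
	v_interval interval day to second := to_dsinterval('2 01:15:42.25');
	v_seconds number;
begin
	-- weight each component: days, hours, minutes, then seconds
	v_seconds :=
		(extract(day from v_interval) * 86400)
		+ (extract(hour from v_interval) * 3600)
		+ (extract(minute from v_interval) * 60)
		+ extract(second from v_interval);
	dbms_output.put_line('Seconds: ' || v_seconds);
end;
/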

 

Similarly, the to_yminterval function is used to perform timestamp calculations with years and months.


col clater head 'LATER' format a38
col now format a35
col today format a35
col lastyear format a38
col nextyear format a38

alter session set nls_timestamp_format = 'yyyy-mm-dd hh24.mi.ssxff';
alter session set nls_timestamp_tz_format = 'yyyy-mm-dd hh24.mi.ssxff tzr';

-- alternate methods to add 1 year
select systimestamp today, systimestamp + to_yminterval('P1Y') nextyear from dual;
select systimestamp today, systimestamp + to_yminterval('01-00') nextyear from dual;


-- alternate methods to subtract 2 months
select systimestamp now, systimestamp - to_yminterval('P2M') lastyear from dual;
select systimestamp now, systimestamp - to_yminterval('00-02') lastyear from dual;

-- alternate methods to add 2 year, 4 months, 6 days ,1 hour, 15 minutes and 42 seconds
select systimestamp now, systimestamp + to_yminterval('P2Y4M')  + to_dsinterval('P2DT1H15M42S') clater from dual;
select systimestamp now, systimestamp + to_yminterval('02-04')  + to_dsinterval('2 01:15:42') clater from dual;

TODAY                               YESTERDAY
----------------------------------- --------------------------------------
2016-03-31 09.06.22.060051 -07:00   2016-03-30 09.06.22.060051000 -07:00

TODAY                               YESTERDAY
----------------------------------- --------------------------------------
2016-03-31 09.06.22.061786 -07:00   2016-03-30 09.06.22.061786000 -07:00


NOW                                 30_Seconds_Ago
----------------------------------- --------------------------------------
2016-03-31 09.06.22.063641 -07:00   2016-03-31 09.05.52.063641000 -07:00


NOW                                 30_Seconds_Ago
----------------------------------- --------------------------------------
2016-03-31 09.06.22.064974 -07:00   2016-03-31 09.05.52.064974000 -07:00

NOW                                 1:15:42_Later
----------------------------------- --------------------------------------
2016-03-31 09.06.22.066259 -07:00   2016-03-31 10.22.04.066259000 -07:00


NOW                                 1:15:42_Later
----------------------------------- --------------------------------------
2016-03-31 09.06.22.067600 -07:00   2016-03-31 10.22.04.067600000 -07:00

While date math with the DATE data type is somewhat arcane, it is not too complex once you understand how it works.

When Oracle introduced the TIMESTAMP data type, that all changed. Timestamps are much more robust than dates, and also more complex.

Timestamps bring a whole new dimension to working with dates and times; this brief introduction to working with timestamp data will help demystify the process of doing math with timestamps.

Categories: DBA Blogs

Veridata and boot.properties file

DBASolved - Mon, 2016-04-04 12:30

I’m spending some time building out an Oracle WebLogic/Veridata system at home. This is mostly for testing and general understanding purposes. One of the things I’ve noticed is that the WebLogic server requires a username and password before it will start. After providing the name and password, the terminal window has to stay open, because closing it will kill the WebLogic server that is running. This leads to two interesting questions with simple resolutions.

1. How can I automatically pass the username and password to WebLogic?
2. How do I keep WebLogic running after closing my terminal window?

The answers to the questions are quite simple. Let’s take a look at the first question.

How can I automatically pass the username and password to WebLogic?

This is done by using a “boot.properties” file. This file needs to be placed in the security directory under the directory of the server that you want to log in automatically. In my case, I had to create a security directory and the boot.properties file under the AdminServer directory. The steps below are what I used:

$ cd /opt/app/oracle/product/12.2.0.1/middleware/user_projects/domains/base_domain/
$ cd ./servers
$ cd ./AdminServer
$ mkdir security
$ touch boot.properties
$ vi boot.properties

 

Once the boot.properties file is open for editing, simply put in the username and password you want the WebLogic server to use. In this case, I’m using a standard password setup.
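
For reference, the file needs only two entries; the values below are placeholders for your own WebLogic credentials:

username=weblogic
password=MyPassword1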

After creating the boot.properties file, you can go back and start the WebLogic server; you should not be prompted for a userid and password. You can see this in the output during the startup of the WebLogic server. Additionally, if you go and look at the boot.properties file afterwards, the username and password will have been encrypted.

This same approach can be done with the Veridata components of the configuration. You just have to remember to create a security directory under the Veridata server directory as you did for the Weblogic server.

Now for the second question:

How do I keep WebLogic running after closing my terminal window?

With the boot.properties file in place, you can start the WebLogic/Veridata server(s) by using the “nohup” option. “Nohup” will allow you to keep the server processes running in the background.
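
For example, something along these lines starts the server in the background and survives the terminal being closed (the log file name is just a suggestion):

$ cd /opt/app/oracle/product/12.2.0.1/middleware/user_projects/domains/base_domain
$ nohup ./startWebLogic.sh > adminserver_nohup.out 2>&1 &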

Using these two options, you can very easily automate the starting/restarting of the WebLogic/Veridata server(s).

Enjoy!

about.me:http://about.me/dbasolved


Filed under: Golden Gate, WebLogic
Categories: DBA Blogs

Migrate a SQL Server environment with complex replication without reinitializing or rebuilding replication

Pythian Group - Mon, 2016-04-04 12:14

When you have a SQL Server environment where a very complex replication setup is in place, and you need to migrate/move (without upgrading) some or all of the servers involved in the replication topology to new servers/Virtual Machines or to a new Data Center/Cloud, this blog post is for you!

Let’s assume you also have Transactional and/or Merge publications and subscriptions in place, and you need to move the publisher(s) and/or distributor(s) to a new environment. You also have one or more of the following restrictions:

  • You are not sure if the schema at the subscribers is identical to the publisher (i.e.: different indexes, different columns, etc).
  • You cannot afford downtime to reinitialize the subscriber(s)
  • There are too many subscribers to reinitialize and you cannot afford the downtime if anything goes wrong.

Here are the general steps for this migration:
Prior the migration date:

  • New instance has to have same SQL Server version and edition plus patch level as old instance. Windows version and edition can be different but you need to ensure the version of Windows supports the version of SQL Server.
  • The directory structure for the SQL Server files should be identical on the new server to that on the old server, with the same permissions:
    • Same path for SQL Server binaries
    • Same path and database files names in both servers for system databases
    • Same directories where user database files and T-logs reside
    • Same path for the replication directories (when applies)
  • Copy over any instance-level objects (Logins, Linked Servers and jobs) to the new instance; leave jobs disabled if applicable, or stop SQL Server Agent on the new server

On migration date:

  • Disable any jobs, backups and maintenance that should run during the migration window on old server
  • Stop all database activity on old instance or disable logins
  • Restart old instance and verify there is no activity
  • Synchronize all replication agents that are related to the server being migrated
  • Stop and disable replication agents related to the server being migrated
  • Stop both instances
  • Copy over all system database files from old to new server
  • Copy over all user database files from old server to new one
    • Alternatively, backup all user databases on old server before stopping service and copy the files to new server
  • Shutdown old server
  • Rename new server to the name of old server and change the IP of new server to old server’s IP
  • Start the new server
  • Verify that the name of the new instance is like the old server and it’s local
  • If you backed up the user databases previously, you need to restore them to the same location and file names as on the old server, with RECOVERY and KEEP_REPLICATION (see the restore sketch after this list)
  • Verify that all user databases are online and publications + subscribers are there
  • Start all replication agents related to the migrated server and verify replication is working properly
  • Verify that applications are able to connect to the new instance (no need to modify instance name as it is the same as before and same IP)
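
As a reference for the restore step above, here is a sketch of the T-SQL involved; the database name, logical file names, paths and backup file are all placeholders for your own environment:

-- Restore a user database on the new server, keeping the replication metadata intact
RESTORE DATABASE SalesDB
FROM DISK = N'D:\Backups\SalesDB.bak'
WITH
    MOVE N'SalesDB_Data' TO N'E:\SQLData\SalesDB.mdf',
    MOVE N'SalesDB_Log'  TO N'F:\SQLLogs\SalesDB_log.ldf',
    KEEP_REPLICATION,
    RECOVERY;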

In any case, it is strongly recommended to test the migration prior to the real cutover, even if the test environment is not identical to Production, just to get a feel for it. Ensure that you are including most replication scenarios you have in Production during your test phase.

The more scripts you have handy for the cutover date, the less downtime you may have.

It is extremely important to also have a good and tested rollback plan.

In future Blog posts I will discuss more complex replication scenarios to be migrated and rollback plans.

If you would like to make suggestions for future blogs, please feel free to add a comment and I will try to include your request in future posts.

Categories: DBA Blogs

Why You Should Consider Moving Your Enterprise Application to the Oracle Cloud

Pythian Group - Mon, 2016-04-04 10:32

 

If you’ve decided to migrate your Oracle enterprise applications to the public cloud, it’s a good idea to consider Oracle Cloud alongside alternatives such as Amazon Web Services (AWS) and Microsoft Azure.

Oracle has made big strides in the cloud lately with platform-as-a-service (PaaS) offerings for its middleware and database software, culminating in the release of its first infrastructure-as-a-service (IaaS) offering in late 2015.

Oracle has a clear advantage over the competition when it comes to running its own applications in the cloud: it has full control over product licensing and can optimize its cloud platform for lift-and-shift migrations. This gives you a low-risk strategy for modernizing your IT portfolio.

 

What to expect from Oracle Cloud IaaS

Because Oracle’s IaaS offering is quite new, it has yet to match the flexibility and feature set of Azure and AWS. For example, enterprise VPN connectivity between cloud and on-premises infrastructure is still very much a work in progress. Unlike AWS, however, Oracle provides a free software appliance for accessing cloud storage on-premises. In addition to offering an hourly metered service, Oracle also provides unmetered compute capacity with a monthly subscription. Some customers prefer this option because it allows them to more easily control their spending through a predictable monthly fee rather than a pure pay-as-you-go model.

At the same time, Oracle Cloud IaaS has a limited selection of instance shapes, there is no SSD storage yet or guaranteed input/output performance levels, and transferring data is more challenging for large-volume migrations.

 

What to expect from Oracle Cloud PaaS

Oracle’s PaaS offerings are quickly becoming among the most comprehensive cloud-based services for Oracle Database. They include:

 

Oracle Database Schema Service

This is the entry-level unmetered offering, available starting at $175 a month for a 5GB database schema limit. Tenants share databases but are isolated in their own schemas. This means you have no control over database parameters, only the schema objects created. This service is currently available only with Oracle Database 11g Release 2 (i.e., it is not yet included in the latest release of Oracle Database 12c).

 

Oracle Exadata Cloud Service

This is a hosted service with monthly subscriptions starting at $70,000 for a quarter rack with 28 OCPUs enabled and 42TB of usable storage provisioned. You have full root OS access and SYSDBA database access, so you have total flexibility in managing your environment. However, this means Oracle manages only the bare minimum—the external networking and physical hardware—so you may end up expending the same effort as you would managing Exadata on-premises.

 

Oracle Database Virtual Image Service

This is a Linux VM with pre-installed Oracle Database software. The license is included in the rate. It’s available metered (priced per OCPU per hour of runtime) and unmetered (priced per OCPU allocated per month). As you’ll need to manage everything up from the VM level, including OS management and full DBA responsibilities, the metered service is a particularly good option for running production environments that require full control over the database deployment.

 

Oracle Database-as-a-Service (DBaaS)

This is an extension of Virtual Image Service and includes additional automation for database provisioning during service creation, backup, recovery, and patching. While you are still responsible for the complete management of the environment, the embedded automation and tooling can simplify some DBA tasks.

I should point out that, with the exception of Oracle Database Schema Service, these are not “true” PaaS offerings; they function more like IaaS-style services but with database software licenses included. But this is on the way, as Oracle recently announced plans for a fully managed DBaaS offering  similar to the one available through AWS.

 

While Oracle’s cloud options are still quite new and require additional features for broad enterprise adoption, if this option sparks your interest, now is the time to take the first steps. If you want to learn more about the migration path to Oracle Cloud, check out our white paper, Migrating Oracle Databases to Cloud.

migratingtocloud

Categories: DBA Blogs

FBDA -- 3 : Support for TRUNCATEs

Hemant K Chitale - Mon, 2016-04-04 10:13
One of the points in the previous post was how ADD and DROP Column commands are supported in FBDA.

Let's look at support for TRUNCATEs.

(Just to prove that I am not using the Undo tablespace for the AS OF query, I drop the Undo tablespace) :

SQL> startup
ORACLE instance started.

Total System Global Area 402653184 bytes
Fixed Size 2924928 bytes
Variable Size 260050560 bytes
Database Buffers 134217728 bytes
Redo Buffers 5459968 bytes
Database mounted.
Database opened.
SQL> show parameter undo

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
temp_undo_enabled boolean FALSE
undo_management string AUTO
undo_retention integer 900
undo_tablespace string UNDOTBS1
SQL> show parameter spfile

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
spfile string +DATA/NONCDB/PARAMETERFILE/spf
ile.267.896483727
SQL> show parameter undo

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
temp_undo_enabled boolean FALSE
undo_management string AUTO
undo_retention integer 900
undo_tablespace string UNDOTBS1
SQL> create undo tablespace undotbs2;

Tablespace created.

SQL> alter system set undo_tablespace='UNDOTBS2';

System altered.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 402653184 bytes
Fixed Size 2924928 bytes
Variable Size 260050560 bytes
Database Buffers 134217728 bytes
Redo Buffers 5459968 bytes
Database mounted.
Database opened.
SQL> drop tablespace undotbs1 including contents and datafiles;

Tablespace dropped.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 402653184 bytes
Fixed Size 2924928 bytes
Variable Size 260050560 bytes
Database Buffers 134217728 bytes
Redo Buffers 5459968 bytes
Database mounted.
Database opened.
SQL> show parameter undo

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
temp_undo_enabled boolean FALSE
undo_management string AUTO
undo_retention integer 900
undo_tablespace string UNDOTBS2
SQL>


I now TRUNCATE my test table and then run an AS OF query against it.

SQL> connect hemant/hemant
Connected.
SQL> select count(*) from test_fbda;

COUNT(*)
----------
1000

SQL> truncate table test_fbda;

Table truncated.

SQL> select count(*) from test_fbda;

COUNT(*)
----------
0

SQL> select count(*) from test_fbda as of timestamp (sysdate-1);

COUNT(*)
----------
1000

SQL>


The AS OF query was satisfied by the FBDA History table (and not by Undo).

However, because of yesterday's test of ADD and DROP columns (see yesterday's post), the current table structure doesn't match the structure as of yesterday. So, I work around the difference by naming the columns.

SQL> insert into test_fbda select * from test_fbda as of timestamp (sysdate-1);
insert into test_fbda select * from test_fbda as of timestamp (sysdate-1)
*
ERROR at line 1:
ORA-00913: too many values


SQL>
SQL> desc test_fbda
Name Null? Type
----------------------------------------- -------- ----------------------------
ID_COLUMN NUMBER
DATA_COLUMN VARCHAR2(15)
DATE_INSERTED DATE

SQL> insert into test_fbda
2 select id_column, data_column, date_inserted
3 from test_fbda as of timestamp (sysdate-1);

1000 rows created.

SQL> commit;

Commit complete.

SQL>


Thus, when my column list matches the table structure, I can insert the data from the previous version of the table.

Note how I did NOT use the SYS_FBA_HIST_93250 table as I'd shown in my previous posts. The History table is NOT to be accessed directly, but I demonstrated queries against it to show the underlying architecture of FBDA.
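
If you need to know which internal history table backs a given archived table (without guessing at the SYS_FBA_HIST_nnnnn name), the data dictionary reports it; a quick sketch:

select owner_name, table_name, archive_table_name, flashback_archive_name
from dba_flashback_archive_tables
where table_name = 'TEST_FBDA';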
.
.
.



Categories: DBA Blogs

Oracle Cloud Machine - Move the Cloud to your Data Center

While public cloud computing would make a significant difference in your business, handing the governance and control to someone else isn't always simply an option. The cloud is generally perceived...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Links for 2016-04-03 [del.icio.us]

Categories: DBA Blogs
