
DBA Blogs

Benchmark: TokuDB vs. MariaDB / MySQL InnoDB Compression

Pythian Group - Mon, 2014-09-15 09:55

As the amount of data companies are interested in collecting grows, life becomes all the more difficult for IT staff at all levels within an organization. SAS enterprise storage devices that were once considered giants are now being phased out in favor of SSD arrays with features such as de-duplication, tape storage has pretty much been abandoned, and database engines are under the same pressure to evolve.

For many customers, just storing data is not enough because of the CAPEX and OPEX involved; smarter ways of storing the same data are required, since databases generally account for the greatest portion of storage requirements across an application stack. Lately they are used not only for storing data but, in many cases, for storing logs as well. IT managers, developers and system administrators very often turn to the DBA and pose the age-old question “is there a way we can cut down on the space the database is taking up?”, and this question seems to be asked all the more frequently as time goes by.

This is a dilemma that cannot easily be solved for a MySQL DBA. What would the best way to resolve this issue be? Should I cut down on binary logging? Hmm… I need the binary logs in case I need to track down the transactions that have been executed and perform point in time recovery. Perhaps I should have a look at archiving data to disk and then compress this using tar and gzip? Heck if I do that I’ll have to manage and track multiple files and perform countless imports to re-generate the dataset when a report is needed from historical data. Maybe, just maybe, I should look into compressing the data files? This seems like a good idea… that way I can keep all my data, and I can just take advantage of a few extra CPU cycles to keep my data to a reasonable size – or does it?

Inspired by this age-old dilemma, I decided to take the latest version of TokuDB for a test run and compare it to InnoDB compression, which has been around for a while. Both technologies promise a great reduction in disk usage and even performance benefits – naturally, if data resides on a smaller portion of the disk, access time and seek time will decrease, though this matters less on the SSD disks that are generally used in the industry today. So I put together a test system using an HP ProLiant Gen8 server with 4x Intel® Xeon® E3 processors, 4GB ECC RAM and a Samsung EVO SATA III SSD rated at 6Gb/s, and installed the latest version of Ubuntu 14.04 to run some benchmarks. I used the standard innodb-heavy configuration from the support-files directory, adding one change – innodb_file_per_table = ON. The reason for this is that TokuDB will not compress the shared tablespace, which would otherwise skew the results of the benchmarks. To be objective I ran the benchmarks both on MySQL and MariaDB using 5.5.38, which is the latest version bundled with TokuDB.
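
For reference, this is roughly what the two approaches look like at the table level. The table and column names below are illustrative only (the actual schema was created by the tpcc-mysql scripts), but the ENGINE, ROW_FORMAT and KEY_BLOCK_SIZE clauses are the knobs being compared:

-- InnoDB table compression (needs innodb_file_per_table=ON and the Barracuda file format)
SET GLOBAL innodb_file_format = 'Barracuda';
CREATE TABLE orders_innodb_comp (
  id      INT NOT NULL PRIMARY KEY,
  details VARCHAR(100)
) ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;

-- TokuDB table (compression is on by default; ROW_FORMAT selects the compression algorithm)
CREATE TABLE orders_tokudb (
  id      INT NOT NULL PRIMARY KEY,
  details VARCHAR(100)
) ENGINE=TokuDB ROW_FORMAT=TOKUDB_ZLIB;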

The databases were benchmarked for speed and also for the space consumed by the tpcc-mysql dataset generated with 20 warehouses. So let's first have a look at how much space was needed by TokuDB vs. InnoDB (using both compressed and uncompressed tables):

 

Configuration              GB
TokuDB                     2.7
InnoDB Compressed Tables   4.2
InnoDB Regular Tables      4.8

 

TokuDB was a clear winner here; of course, the space savings depend on the type of data stored in the database, but with the same dataset TokuDB is in the lead. Seeing such a gain in storage requirements will of course make you wonder how much overhead is incurred in reading and writing this data, so let's have a look at the “tpm-C” to understand how many orders can be processed per minute on each. Here I have also included results for MariaDB vs. MySQL. The first graph shows the number of orders that were processed per 10-second interval and the second graph shows the total “tpm-C” after the tests were run for 120 seconds:

 

Toku_Maria_MySQL

Figure 1 – Orders processed @ 10 sec interval

 

Interval   MariaDB 5.5.38   MariaDB 5.5.38      TokuDB on         MySQL 5.5.38   MySQL 5.5.38        TokuDB on
                            InnoDB Compressed   MariaDB 5.5.38                   InnoDB Compressed   MySQL 5.5.38
10         5300             529                 5140              5667           83                  5477
20         5743             590                 5112              5513           767                 5935
30         5322             596                 4784              5267           792                 5931
40         4536             616                 4215              5627           774                 6107
50         5206             724                 5472              5770           489                 6020
60         5827             584                 5527              5956           402                 6211
70         5588             464                 5450              6061           761                 5999
80         5679             424                 5474              5775           789                 6029
90         5759             649                 5490              6258           788                 5998
100        5288             611                 5584              6044           765                 6026
110        4637             575                 4948              5753           720                 5314
120        3696             512                 4459              930            472                 292

Toku_Maria_MySQL_2

Figure 2 - “tpm-C” for the 120-second test run

MySQL Edition                              “tpm-C”
TokuDB on MySQL 5.5.38                     32669.5
MySQL 5.5.38                               32310.5
MariaDB 5.5.38                             31290.5
TokuDB on MariaDB 5.5.38                   30827.5
MySQL 5.5.38 InnoDB Compressed Tables      4151
MariaDB 5.5.38 InnoDB Compressed Tables    3437

 

Surprisingly enough, however, the InnoDB table compression results were very low – perhaps this would have shown better results on regular SAS / SATA storage with traditional rotating disks. The impact on performance was incredibly high and the savings on disk space were marginal compared to those of TokuDB, so once again it seems we have a clear winner! TokuDB on MySQL outperformed both MySQL and MariaDB with uncompressed tables. The findings are interesting because in previous benchmarks of older versions of MariaDB and MySQL, MariaDB would generally outperform MySQL; however, there are many factors that should be considered.

These tests were performed on Ubuntu 14.04 while the previous tests I mentioned were performed on CentOS 6.5, and the hardware was slightly different (Corsair SSD 128GB vs. Samsung EVO 256GB). Please keep in mind these benchmarks reflect the performance on a specific configuration, and there are many factors that should be considered when choosing the MySQL / MariaDB edition to use in production.

As per this benchmark, the results for TokuDB were nothing less than impressive and it will be very interesting to see the results on the newer versions of MySQL (5.6) and MariaDB (10)!

Categories: DBA Blogs

Change unknown SYSMAN password on #EM12c

DBASolved - Fri, 2014-09-12 17:52

When I start work on a new EM 12c environment, I would normally request to have a userid created; however, I don't have a userid in this environment and I need to access EM 12c as SYSMAN.  Without knowing the password for SYSMAN, how can I access the EM 12c interface?  The short answer is that I can change the SYSMAN password from the OS where EM 12c is running.

Note:
Before changing the SYSMAN password for EM 12c, make sure to understand the following:

  1. SYSMAN is used by the OMS to log in to the OMR to store and query all activity
  2. The SYSMAN password has to be changed at both the OMS and the OMR for EM 12c to work correctly
  3. Do not modify the SYSMAN or any other repository user directly at the OMR level (not recommended)

The steps to change an unknown SYSMAN password are as follows:

Tip: Make sure you know what the SYS password is for the OMR.  It will be needed to reset SYSMAN.

1. Stop all OMS processes

cd <oms home>/bin
emctl stop oms 

Image 1:
sysman_pwd_stop_oms.png

2. Change the SYSMAN password

cd <oms home>/bin
emctl config oms -change_repos_pwd -use_sys_pwd -sys_pwd <sys password> -new_pwd <new sysman password>

In Image 2, notice that I didn’t pass the password for SYS or SYSMAN on the command line.  EMCTL will ask you to provide the password if you don’t put it on the command line.

Image 2:
sysman_pwd_change_pwd.png

3. Stop the Admin Server on the primary OMS and restart OMS

cd <oms home>/bin
emctl stop oms -all
emctl start oms

Image 3:
sysman_pwd_start_oms.png

4. Verify that all of OMS is up and running

cd <oms home>/bin
emctl status oms -details

Image 4:

sysman_pwd_oms_status.png

After verifying that the OMS is back up, I can now try to log in to the OMS interface.

Image 5:
sysman_pwd_oem_access.png

As we can see, I’m able to access OEM as SYSMAN now with the new SYSMAN password.

Enjoy!!

about.me: http://about.me/dbasolved


Filed under: OEM
Categories: DBA Blogs

Watch: 5 Best Practices for Launching Your Online Video Game

Pythian Group - Fri, 2014-09-12 07:24

Warner Chaves, Principal Consultant at Pythian, has had the privilege of working with several companies on their video game launches, and is best known for his work with the highly anticipated release of an action-adventure video game back in 2013. Through his experience, he’s developed a set of best practices for launching an online video game.

“You don’t want to have angry gamers on the launch of the game because they lost progress in the game,” he says. “Usually at launch, you will have really high peaks of volume, and there might be some pieces of the infrastructure that are not as prepared for that kind of load. There also might be some parts of the game that are actually more popular than what  you expected.”

Watch his latest video below, 5 Best Practices for Launching Your Online Video Game.

Categories: DBA Blogs

Log Buffer #388, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-09-12 07:22

Blogs have become indispensable for expanding knowledge about database features of any kind. Whether it's Oracle, MySQL, or SQL Server, blog writers are contributing like never before, and this Log Buffer edition skims some of it.

Oracle:

The Oracle Utilities family of products use Oracle standard technology such as the Oracle Database and Oracle Fusion Middleware (a.k.a. Oracle WebLogic).

OBIEE SampleApp in The Cloud: Importing VirtualBox Machines to AWS EC2.

The default value for the INMEMORY_MAX_POPULATE_SERVERS parameter is derived from the PGA_AGGREGATE_LIMIT parameter.

Most customers of Oracle Enterprise Manager using JVM Diagnostics use the tool to monitor their Java Applications servers like Weblogic, Websphere, Tomcat, etc.

Taking Enterprise File Exchange to the Next Level with Oracle Managed File Transfer 12c.

SQL Server:

The concept of a synonym was introduced in SQL Server 2005. Synonyms are very simple database objects, but have the potential to save a lot of time and work if implemented with a little bit of thought.

This article summarizes the factors to consider and provides an overview of various options for HA and DR in cloud based SQL Server deployments.

Chris Date is famous for his writings on relational theory. Chris took on the role of communicating and teaching Codd’s relational theory, and reluctantly admits to a role in establishing SQL as the dominant relational language.

Introduction of how to design a star schema dimensional model for new BI developers.

Have you ever wondered why the transaction log file grows bigger and bigger? What caused it to happen? How do you control it? How does the recovery model of a database control the growing size of the transaction log? Read on to learn the answers.

MySQL:

A common migration path from standalone MySQL/Percona Server to a Percona XtraDB Cluster (PXC) environment involves some measure of time where one node in the new cluster has been configured as a slave of the production master that the cluster is slated to replace.

How to shrink the ibdata file by transporting tables with Trite.

OpenStack users shed light on Percona XtraDB Cluster deadlock issues.

There are a lot of tools that generate test data.  Many of them have complex XML scripts or GUI interfaces that let you identify characteristics about the data. For testing query performance and many other applications, however, a simple quick and dirty data generator which can be constructed at the MySQL command line is useful.

How to calculate the correct size of Percona XtraDB Cluster’s gcache.

Categories: DBA Blogs

Virtual Circuit Wait

Bobby Durrett's DBA Blog - Thu, 2014-09-11 15:28

On Monday we had some performance problems on a system that includes a database which uses shared servers.  The top wait was “virtual circuit wait”.  Here are the top 5 events for a 52 minute time frame:

Top 5 Timed Foreground Events

Event                           Waits       Time(s)  Avg wait (ms)  % DB time  Wait Class
virtual circuit wait            1,388,199   17,917   13             50.98      Network
db file sequential read         1,186,933   9,252    8              26.33      User I/O
log file sync                   1,185,620   6,429    5              18.29      Commit
DB CPU                                      5,964                   16.97
enq: TX – row lock contention   391         586      1499           1.67       Application

From other monitoring tools there was no sign of poor performance from the database end but virtual circuit wait is not normally the top wait during peak times.  Overall for the time period of this AWR report the shared servers didn’t seem busy:

Shared Servers Utilization

Total Server Time (s)  %Busy  %Idle  Incoming Net %  Outgoing Net %
111,963                38.49  61.51  15.99           0.01

We have seen virtual circuit waits ever since we upgraded to 11g on this system so I wanted to learn more about what causes it.  These two Oracle support documents were the most helpful:

Troubleshooting: Virtual Circuit Waits (Doc ID 1415999.1)

Bug 5689608: INACTIVE SESSION IS NOT RELEASING SHARED SERVER PROCESS (closed as not bug)

Evidently, when you return a cursor from a package and the cursor includes a sort step, a shared server will be hung up in a virtual circuit wait state from the time the cursor is first fetched until the application closes the cursor.  Our application uses cursors in this way, so it stands to reason that the virtual circuit wait times we saw in our AWR report represent the time it took for our web servers to fetch from the cursors and close them, at least for the cursors that included sort steps.  So, if our web servers slow down due to some other issue, they could potentially take longer to fetch from and close the affected cursors, and this could result in higher virtual circuit wait times.

Here is a zip of a test script I ran and its output: zip

I took the test case documented in bug 5689608 and added queries to v$session_wait to show the current session’s virtual circuit waits.

Here are the first steps of the test case:

CREATE TABLE TEST AS SELECT * FROM DBA_OBJECTS; 
     
create or replace package cursor_package as
cursor mycursor is select * from test order by object_name;
end;
/
       
begin
 open cursor_package.mycursor;
end;
/
 
create or replace procedure test_case is
l_row TEST%rowtype;
begin
if cursor_package.mycursor%isopen then
fetch cursor_package.mycursor into l_row;
end if;
end;
/

These steps do the following:

  1. Create a test table
  2. Create a package with a cursor that includes an order by to force a sort
  3. Open the cursor
  4. Create a procedure to fetch the first row from the cursor

At this point I queried v$session_wait and found no waits:

SQL> select * from v$session_event
  2  where sid=
  3  (SELECT sid from v$session where audsid=USERENV('SESSIONID')) and
  4  event='virtual circuit wait';

no rows selected

The next step of the test case fetched the first row and then I queried and found the first wait:

SQL> exec test_case;

SQL> select * from v$session_event
  2  where sid=
  3  (SELECT sid from v$session where audsid=USERENV('SESSIONID')) and
  4  event='virtual circuit wait';

       SID EVENT                          TIME_WAITED
---------- --------------------------------------------------------
       783 virtual circuit wait           0

Note that time_waited is 0 which means the time was less than one hundredth of a second.  Next I made my sqlplus client sleep for five seconds using a host command and looked at the wait again:

SQL> host sleep 5

SQL> select * from v$session_event
  2  where sid=
  3  (SELECT sid from v$session where audsid=USERENV('SESSIONID')) and
  4  event='virtual circuit wait';

       SID EVENT                             TIME_WAITED
---------- --------------------------------------------------------
       783 virtual circuit wait              507

Total time is now 507 centiseconds = 5 seconds, same as the sleep time.  So, the time for the virtual circuit wait includes the time after the client does the first fetch, even if the client is idle.  Next I closed the cursor and slept another 5 seconds:

SQL> begin
  2   close cursor_package.mycursor;
  3  end;
  4  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01
SQL> 
SQL> host sleep 5

SQL> 
SQL> select * from v$session_event
  2  where sid=
  3  (SELECT sid from v$session where audsid=USERENV('SESSIONID')) and
  4  event='virtual circuit wait';

       SID EVENT                                 TIME_WAITED
---------- --------------------------------------------------------
       783 virtual circuit wait                  509

The time waited is still just about 5 seconds so the clock stops on the virtual circuit wait after the sqlplus script closes the cursor.  If the session was still waiting on virtual circuit wait after the close of the cursor the time would have been 10 seconds.

This was all new to me.  Even though we have plenty of shared servers to handle the active sessions we still see virtual circuit waits.  These waits correspond to time on the clients fetching from and closing cursors from called packages.  As a result, these wait times represent time outside of the database and not time spent within the database.  These waits tie up shared servers but as long as they are short enough and you have shared servers free they don’t represent a problem.
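
If you want to see whether your own shared servers are sitting in this state right now, a quick query along these lines (a sketch, not part of the original test case) against v$session will show them:

select sid, username, event, state, seconds_in_wait
from   v$session
where  event = 'virtual circuit wait'
order  by seconds_in_wait desc;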

– Bobby

p.s. This is on hp-ux 11.31 ia64 Oracle 11.2.0.3

Categories: DBA Blogs

2002 Honda passport timing belt replacement

Ameed Taylor - Wed, 2014-09-10 19:14
The Honda Passport was a sport-utility vehicle sold by the Japanese maker from 1994 through 2002. It was replaced in 2003 by the Honda Pilot, a crossover utility vehicle that shared some of the underpinnings of the Honda Odyssey minivan. Unlike the Pilot, which followed the lead of the Toyota Highlander in placing a mid-size crossover body on what was essentially a car platform, the Passport was built on a rear-wheel-drive truck chassis with all-wheel drive as an option. The ride quality and handling reflected its truck origins, so the Pilot was a striking step forward when it replaced the Passport.

The Passport was actually a re-badged Isuzu Rodeo, a truck-based SUV built in Indiana at a plant that Subaru and Isuzu shared at the time. The first-generation Passport, sold from 1994 through 1997, offered a choice of a 120-horsepower 2.6-liter four-cylinder engine paired with a 5-speed manual gearbox, or a 175-hp 3.2-liter V-6 with an available four-speed automatic transmission. Rear-wheel drive was standard, and all-wheel drive could be ordered as an option. Trim levels were base and EX.
2002 honda passport check engine light flashing
In 1998, a second-generation Passport was introduced. It was still based on a truck chassis, but it came with more comfort and safety features than the earlier version and was considerably more refined. The four-door sport-utility vehicle came standard with a 205-hp 3.2-liter V-6, matched with a 5-speed manual gearbox on base versions, though a four-speed automatic transmission was also available.

The second Passport was offered in two trim levels: the LX could be ordered with the 5-speed manual, with four-wheel drive as an option, and the more upscale EX came with the 4-speed automatic, again with either drive option. While the spare tire on the base LX was mounted on a swinging bracket on the tailgate, the EX relocated it to a carrier beneath the cargo floor. For the 2000 model year, the Honda Passport received a handful of updates, including optional 16-inch wheels on the LX and available two-tone paint treatments.
2002 honda passport transmission dipstick location
When considering the Passport as a used car, buyers should know that the 1998-2002 models were recalled in October 2010 as a result of corrosion in the area where the rear suspension was mounted. Any vehicles without visible corrosion were treated with a rust-resistant compound, but reinforcement brackets were to be installed in those with more severe rust. In some cases, the damage was so extreme that Honda simply repurchased the vehicles from their owners. Used-car shoppers looking at Passports should be sure to find out whether the car had been through the recall, and what, if anything, was done.
2002 honda passport keyless remote
2002 honda passport o2 sensor location
2002 honda passport picture gallery
2002 honda passport transmission problems
2002 honda passport starter replacement
Categories: DBA Blogs

Index Growing Larger Than The Table

Hemant K Chitale - Wed, 2014-09-10 08:52
Here is a very simple demonstration of a case where an Index can grow larger than the table.  This happens because the pattern of data deleted and inserted doesn't allow deleted entries to be reused.  For every 10 rows that are inserted, 7 rows are subsequently deleted after their status is changed to "Processed".  But the space for the deleted entries from the index cannot be reused.

SQL>
SQL>REM Demo Index growth larger than table !
SQL>
SQL>drop table hkc_process_list purge;

Table dropped.

SQL>
SQL>create table hkc_process_list
2 (transaction_id number,
3 status_flag varchar2(1),
4 last_update_date date,
5 transaction_type number,
6 details varchar2(25))
7 /

Table created.

SQL>
SQL>create index hkc_process_list_ndx
2 on hkc_process_list
3 (transaction_id, status_flag)
4 /

Index created.

SQL>
SQL>
SQL>REM Cycle 1 -------------------------------------
> -- create first 1000 transactions
SQL>insert into hkc_process_list
2 select rownum, 'N', sysdate, mod(rownum,4)+1, dbms_random.string('X',10)
3 from dual
4 connect by level < 1001
5 /

1000 rows created.

SQL>commit;

Commit complete.

SQL>
SQL>-- get sizes of table and index
SQL>exec dbms_stats.gather_table_stats('','HKC_PROCESS_LIST',estimate_percent=>100,cascade=>TRUE);

PL/SQL procedure successfully completed.

SQL>select 'Table' Obj_Type, table_name, blocks Blocks
2 from user_tables
3 where table_name like 'HKC_PROCE%'
4 union
5 select 'Index', index_name, leaf_blocks
6 from user_indexes
7 where index_name like 'HKC_PROCE%'
8 order by 1
9 /

OBJ_T TABLE_NAME BLOCKS
----- ------------------------------ ----------
Index HKC_PROCESS_LIST_NDX 3
Table HKC_PROCESS_LIST 5

2 rows selected.

SQL>
SQL>-- change status flag for 70% of the transactions to 'P'
SQL>update hkc_process_list
2 set status_flag='P'
3 where mod(transaction_id,10) < 7
4 /

700 rows updated.

SQL>commit;

Commit complete.

SQL>
SQL>-- delete processed rows
SQL>delete hkc_process_list
2 where status_flag='P'
3 /

700 rows deleted.

SQL>commit;

Commit complete.

SQL>
SQL>REM Cycle 2 -------------------------------------
> -- insert another 1000 rows
SQL>insert into hkc_process_list
2 select rownum+1000, 'N', sysdate, mod(rownum,4)+1, dbms_random.string('X',10)
3 from dual
4 connect by level < 1001
5 /

1000 rows created.

SQL>commit;

Commit complete.

SQL>
SQL>-- get sizes of table and index
SQL>exec dbms_stats.gather_table_stats('','HKC_PROCESS_LIST',estimate_percent=>100,cascade=>TRUE);

PL/SQL procedure successfully completed.

SQL>select 'Table' Obj_Type, table_name, blocks Blocks
2 from user_tables
3 where table_name like 'HKC_PROCE%'
4 union
5 select 'Index', index_name, leaf_blocks
6 from user_indexes
7 where index_name like 'HKC_PROCE%'
8 order by 1
9 /

OBJ_T TABLE_NAME BLOCKS
----- ------------------------------ ----------
Index HKC_PROCESS_LIST_NDX 7
Table HKC_PROCESS_LIST 13

2 rows selected.

SQL>
SQL>-- change status flag for 70% of the transactions to 'P'
SQL>update hkc_process_list
2 set status_flag='P'
3 where mod(transaction_id,10) < 7
4 /

700 rows updated.

SQL>commit;

Commit complete.

SQL>
SQL>-- delete processed rows
SQL>delete hkc_process_list
2 where status_flag='P'
3 /

700 rows deleted.

SQL>commit;

Commit complete.

SQL>
SQL>
SQL>REM Cycle 3 -------------------------------------
> -- insert another 1000 rows
SQL>insert into hkc_process_list
2 select rownum+2000, 'N', sysdate, mod(rownum,4)+1, dbms_random.string('X',10)
3 from dual
4 connect by level < 1001
5 /

1000 rows created.

SQL>commit;

Commit complete.

SQL>
SQL>-- get sizes of table and index
SQL>exec dbms_stats.gather_table_stats('','HKC_PROCESS_LIST',estimate_percent=>100,cascade=>TRUE);

PL/SQL procedure successfully completed.

SQL>select 'Table' Obj_Type, table_name, blocks Blocks
2 from user_tables
3 where table_name like 'HKC_PROCE%'
4 union
5 select 'Index', index_name, leaf_blocks
6 from user_indexes
7 where index_name like 'HKC_PROCE%'
8 order by 1
9 /

OBJ_T TABLE_NAME BLOCKS
----- ------------------------------ ----------
Index HKC_PROCESS_LIST_NDX 11
Table HKC_PROCESS_LIST 13

2 rows selected.

SQL>
SQL>-- change status flag for 70% of the transactions to 'P'
SQL>update hkc_process_list
2 set status_flag='P'
3 where mod(transaction_id,10) < 7
4 /

700 rows updated.

SQL>commit;

Commit complete.

SQL>
SQL>-- delete processed rows
SQL>delete hkc_process_list
2 where status_flag='P'
3 /

700 rows deleted.

SQL>commit;

Commit complete.

SQL>
SQL>
SQL>REM Cycle 4 -------------------------------------
> -- insert another 1000 rows
SQL>insert into hkc_process_list
2 select rownum+3000, 'N', sysdate, mod(rownum,4)+1, dbms_random.string('X',10)
3 from dual
4 connect by level < 1001
5 /

1000 rows created.

SQL>commit;

Commit complete.

SQL>
SQL>-- get sizes of table and index
SQL>exec dbms_stats.gather_table_stats('','HKC_PROCESS_LIST',estimate_percent=>100,cascade=>TRUE);

PL/SQL procedure successfully completed.

SQL>select 'Table' Obj_Type, table_name, blocks Blocks
2 from user_tables
3 where table_name like 'HKC_PROCE%'
4 union
5 select 'Index', index_name, leaf_blocks
6 from user_indexes
7 where index_name like 'HKC_PROCE%'
8 order by 1
9 /

OBJ_T TABLE_NAME BLOCKS
----- ------------------------------ ----------
Index HKC_PROCESS_LIST_NDX 15
Table HKC_PROCESS_LIST 13

2 rows selected.

SQL>
SQL>-- change status flag for 70% of the transactions to 'P'
SQL>update hkc_process_list
2 set status_flag='P'
3 where mod(transaction_id,10) < 7
4 /

700 rows updated.

SQL>commit;

Commit complete.

SQL>
SQL>-- delete processed rows
SQL>delete hkc_process_list
2 where status_flag='P'
3 /

700 rows deleted.

SQL>commit;

Commit complete.

SQL>
SQL>
SQL>REM Latest State size -------------------------
> -- get sizes of table and index
SQL>exec dbms_stats.gather_table_stats('','HKC_PROCESS_LIST',estimate_percent=>100,cascade=>TRUE);

PL/SQL procedure successfully completed.

SQL>select 'Table' Obj_Type, table_name, blocks Blocks
2 from user_tables
3 where table_name like 'HKC_PROCE%'
4 union
5 select 'Index', index_name, leaf_blocks
6 from user_indexes
7 where index_name like 'HKC_PROCE%'
8 order by 1
9 /

OBJ_T TABLE_NAME BLOCKS
----- ------------------------------ ----------
Index HKC_PROCESS_LIST_NDX 17
Table HKC_PROCESS_LIST 13

2 rows selected.

SQL>
SQL>

Note how the Index grew from 3 blocks to 17 blocks, larger than the table that grew to 13 and seemed to have reached a "steady-state" at 13 blocks.

The Index is built on only 2 of the 5 columns of the table and these two columns are also "narrow" in that they are a number and a single character.  Yet it grows faster through the INSERT - DELETE - INSERT cycles.

Note the difference between the Index definition (built on TRANSACTION_ID as the leading column) and the pattern of DELETEs (which is on STATUS_FLAG).

Deleted rows leave "holes" in the index but these are entries that cannot be reused by subsequent inserts.  The Index is ordered on TRANSACTION_ID.  So if an Index entry for TRANSACTION_ID = n is deleted, the entry can be reused only for the same (or very close) TRANSACTION_ID.

Assume that an Index Leaf Block contains entries for TRANSACTION_IDs 1, 2, 3, 4 and so on upto 10.  If rows for TRANSACTION_IDs 2,3,5,6,8 and 9 are deleted but 1,4,7 and 10  are not deleted then the Leaf Block has "free" space for new rows only with TRANSACTION_IDs 2,3,5,6,8 and 9.  New rows with TRANSACTION_IDs 11 and above will take a new Index Leaf Block and not re-use the "free" space in the first Index Leaf Block.  The first Leaf Block remains with deleted entries that are not reused.
On the other hand, when the rows are deleted from the Table Block, new rows can be reinserted into the same Table Block.  The Table is Heap Organised, not Ordered like the Index.  Therefore, new rows are permitted to be inserted into any Block(s) that contain space for those new rows -- e.g. blocks from which rows are deleted.  Therefore, after deleting TRANSACTION_IDs 2,3,5,6 from a Table Block, new TRANSACTION_IDs 11,12,13,14 can be re-inserted into the *same* Block.
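
A rough way to see how much of such an index is tied up in deleted entries, and to reclaim that space without a full rebuild, is something like the following (a sketch only; note that ANALYZE ... VALIDATE STRUCTURE takes a lock on the table, so run it on a test system or during a quiet period):

analyze index hkc_process_list_ndx validate structure;

select lf_rows, del_lf_rows,
       round(del_lf_rows*100/lf_rows,1) pct_deleted
from   index_stats;

alter index hkc_process_list_ndx coalesce;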

.
.
.
Categories: DBA Blogs

Getting Started with Windows VDI by Andrew Fryer

Surachart Opun - Wed, 2014-09-10 05:55
Virtual desktop infrastructure (VDI) is the practice of hosting a desktop operating system within a virtual machine (VM) running on a centralized server. VDI is a variation on the client/server computing model, sometimes referred to as server-based computing.
VDI is a newer technology that brings a lot of benefits:
• Efficient use of CPU and memory resources
• Reduced desktop downtime and increased availability
• Patches and upgrades performed in data center
• New users can be up and running quickly
• Data and applications reside in secure data centers
• Centralized management reduces operational expenses
Reference
Additionally, VDI can be deployed with Microsoft Windows, and I suggest learning What’s New in VDI for Windows Server 2012 R2 and 8.1.
Anyway, that is enough background before mentioning a book written by Andrew Fryer, Getting Started with Windows VDI. This book guides readers through building VDI with Windows Server 2012 R2 and 8.1 quickly, and each chapter is easy to follow.

What Readers Will Learn:
  • Explore the various server roles and features that provide Microsoft's VDI solution
  • Virtualize desktops and the other infrastructure servers required for VDI using server virtualization in Windows Server Hyper-V
  • Build high availability clusters for VDI with techniques such as failover clustering and load balancing
  • Provide secure VDI to remote users over the Internet
  • Use Microsoft's Deployment Toolkit and Windows Server Update Services to automate the creation and maintenance of virtual desktops
  • Carry out performance tuning and monitoring
  • Understand the complexities of VDI licensing irrespective of the VDI solution you have opted for
  • Deploy PowerShell to automate all of the above techniques

Written By: Surachart Opun http://surachartopun.com
Categories: DBA Blogs

Partner Webcast – Managing Exadata with Oracle Enterprise Manager 12c

Oracle Enterprise Manager 12c is system management software that delivers centralized monitoring, administration, and life cycle management functionality for the complete Oracle IT infrastructure,...

We share our skills to maximize your revenue!
Categories: DBA Blogs

2009 honda s2000 ultimate edition for sale

Ameed Taylor - Tue, 2014-09-09 18:40
Drive the S2000 gently and you probably won't be satisfied with the buzzy powertrain and busy ride. Tuned to perform on tight circuits, the S2000 can feel stiff and jittery on open roads. Wind out the motor and push its limits in corners, though, and you're in for a totally different, smile-inducing experience; that is what the Honda S2000 is about.

Mazda's Miata feels almost large in comparison to the S2000. The cockpit is cramped regardless of how small the occupants. The high shoulders of the S2000 hem in the driver and passenger, and the steering wheel sits low even at its highest adjustment point. Unusually for Honda, the controls aren't laid out neatly (there's not a lot of dash space to do so), and the big red Start button seems more like a gimmick. There's a lot of dark plastic, too, for the sake of saving weight.

The 2009 Honda S2000 is one of the least practical mass-production cars on the planet. There's practically no interior or trunk storage, the cockpit is more cramped than the coach seats on a Boeing 757, and it's priced above $30,000. It is a classic roadster sports car with rear-wheel drive, a ragtop to open on sunny days, a six-speed manual transmission, and a rev-happy four-cylinder motor.
2009 red honda convertible s2000
A year ago Honda introduced the S2000 CR, the club-racer version of the standard S2000. The CR gets a full-body aerodynamic kit, high-performance Bridgestone tires, firmer suspension settings, a thicker anti-roll bar, and new wheels. A lightweight aluminum hardtop that cuts weight by around 90 pounds replaces the soft-top mechanism. Inside, the CR gets different fabric seats with yellow stitching, a new aluminum shifter knob, and carbon-fiber-look trim panels.

Standard equipment on the 2009 Honda S2000 includes electronic stability control and anti-lock brakes; however, side airbags, a feature now found on almost all new vehicles, aren't available.

Although the 2009 Honda S2000 has a dated design, the base edition stands out for its spectacular mix of style and performance, regardless of the heavy-handed additions on the CR.

automobiles.com studies other exterior highlights embody trendy “excessive-intensity-discharge headlamps and 17-inch alloy wheels” that come usual on the 2009 Honda S2000. Edmunds resorts essentially the most distinguished criticism of the exterior of the 2009 Honda S2000, noting that whereas the brand new aerodynamic items on the CR “reduce excessive-velocity aerodynamic lift by way of about 70 p.c,” additionally they “cut back the car’s overall visual appeal with the aid of, oh, 79 %.” evaluations read through ebizsig.blogspot.com convey that the exterior styling of the 2009 Honda S2000 is a large success, and Kelley Blue e-book says the Honda S2000 “strikes an awfully un-Honda like, slightly depraved poise” that may “resemble an angry cobra about to strike.”
honda s2000 fiche technique 2009
Kelley Blue e book notes that “CR models include an aerodynamic physique kit,” together with “raise-reducing front and rear spoilers and a removable aluminum onerous high instead of the traditional cloth” model on the standard Honda S2000.
according to the reviewers at Edmunds, the “2009 Honda S2000 is a compact two-seat roadster that’s provided in two trims: same old and CR.” each trims share the same normal profile, which automobiles.com calls a “wedge-formed profile that stands except for different roadsters.”

ConsumerGuide approves of the internal structure on the 2009 Honda S2000, claiming that the “S2000 has a comfortable cockpit, so everything is shut at hand,” and whereas the “digital bar-graph tachometer and digital speedometer usually are not the sports activities-automotive norm,” they're “simple to learn.” Edmunds chimes in, noting that “just about all the controls you’ll ever want are set up inside a finger’s extension of the guidance wheel.” one of the most cooler interior features to find its method right into a manufacturing car is the “new top-power Indicator” on the 2009 Honda S2000 CR, a feature that cars.com says will flash “a inexperienced light when top power is reached.” Kelley Blue ebook gushes the 2009 Honda S2000’s “inside is stuffed with excellent surprises,” including a “giant pink start button on the sprint” and “the long heart console [that] sits up excessive, affording you the perfect perch on which to rest your arm.”
2009 honda s2000 performance specs
The 2009 Honda S2000 enjoys better handling because of the quicker steering ratio and new tires, and the CR variant is a track-ready contender that can hold its own against more expensive European and American competition.

The EPA estimates that the 2009 Honda S2000, whether in standard or CR form, will get 18 mpg in the city and 25 on the highway. Most cars as powerful as the 2009 Honda S2000 pay a big penalty at the gasoline pump, but the small engine combined with lightweight construction makes the Honda S2000 a moderately frugal performance machine.

evaluations read by way of ebizsig.blogspot.com convey that the engine is happiest when operating flat-out. cars.com notes that “once it reaches 5,000 rpm or so, the S2000 lunges ahead like a rocket,” and Edmunds adds that “piloting the 2009 Honda S2000 takes some getting used to, on the grounds that height energy is delivered at nearly eight,000 rpm.” ConsumerGuide reviewers love the engine and find the Honda S2000 “offers a stunning provide of usable power across a extensive rpm vary, mixed with ultrahigh-revving excitement.” although two diverse versions of the 2009 Honda S2000 are on hand, Edmunds studies that the only engine offered is a “2.2-liter four-cylinder that churns out 237 hp at a lofty 7,800 rpm and 162 pound-feet of torque at 6,800 rpm.” Honda has tuned the engine on the Honda S2000 almost to the breaking point, with automobile and Driver commenting that “the S2000’s 2.2-liter four is mainly maxed out.”
modified honda s2000 turbo 2009 picture
evaluations learn by using ebizsig.blogspot.com additionally compliment the S2000’s transmission for its easy shifts and brief throws. Kelley Blue e book claims that the engine and transmission combination makes for “startlingly-quick efficiency,” whereas the chassis provides “outstanding nimbleness” to the 2009 Honda S2000 package deal. vehicles.com states that the four-cylinder engine on the S2000 Honda “mates with a six-speed handbook transmission” that ConsumerGuide says will supply “manageable take hold of motion” and a “slick, quick-throw gearbox.”

As excellent as the engine/transmission mixture is, coping with continues to be a trademark of the 2009 S2000. automobiles.com holds nothing back in praising the “razor-sharp steerage, disciplined coping with and athletic cornering ability” of the 2009 Honda S2000. Kelley Blue e book reviewers rave about the “just about flat cornering conduct and intensely crisp response that allows” the 2009 Honda S2000 “to barter the corners with positive tenacity.” The membership Racer is even more impressive, with automotive and Driver reporting it “is simply harder and sharper, with much less physique roll and tire scrubbing and extra nook composure and stability underneath braking.” sadly, the associated fee for all that efficiency is bad journey quality, and ConsumerGuide points out that “nearly every small bump and tar strip registers during the seats.” On the positive aspect, ConsumerGuide also comments that “braking is swift and simply modulated” whether or not you might be driving on the street or the monitor.
2009 honda s2000 horsepower
2009 honda s2000 owner's manual
2009 honda s2000 pictures
2009 honda s2000 price new
Categories: DBA Blogs

Adding additional agents to OEM12c

DBASolved - Mon, 2014-09-08 07:52

One question I get asked a lot is “how can I add additional agent software to OEM 12c?”  The answer is pretty easy: just download it and apply it to the software library.  Now what does that mean?  In this post, I'll explain how to download additional agents for later deployment to other platforms.

After logging into OEM 12c, go to the Setup -> Extensibility -> Self Update (Image 1).

Image 1:

SelfUpdate_Menu.png

Once on the Self Update page (Image 2), there are a few things to notice.  The first thing is that under Status, the Connection Mode is Online.  This is an indicator that OEM has been configured and connected to My Oracle Support (MOS).  Additional items under the Status area include the last refresh time, the last download time, and the last download type.  Right under the Status section there is a menu bar with actions that can be performed on this page.  Clicking the Check Updates button will check for any new updates in all the Types listed.  Since we want to focus on Agents, click on the folder for Agent Software.

Image 2:

SelfUpdate_Page.png

Clicking on the Agent Software folder takes us to the Agent Software Updates page for Self Update (Image 3).  Here it can be seen clearly that there is a lot of agent software available.  This page also shows the Past Activities section, where we can see what actions have been performed against a particular version of the agent.

Image 3:
AgentSoftwareUpdatePage.png

On the menu bar (Image 4), we can search the agent software either by description or by example.  These search options take text search terms.  If we know there is a new release, it can be searched for by simply entering text like ’12.1.0.4’.

Image 4:
SelfUpdate_AgentUpdate_bar.png

As we can see in Image 5, searching for agents at version ’12.1.0.4’ returns a list of available agents at that version.  Notice the Status column of the table.  Two of the possible statuses are listed here; a third, Downloading, indicates that an agent is currently being downloaded.  The two statuses shown in Image 5 are Applied and Available.

Image 5:
AgentUpdateSearch.png

Let’s define the Agent Software Update Statuses a bit more.  They are as follows:

  1. Available = This version of the agent is available for the OS Platform and can be downloaded
  2. Download in progress = This version of the agent is being downloaded to the OMS
  3. Downloaded = This version of the agent has been downloaded to the OMS
  4. Applied = This version of the agent has been applied to the Software Library and ready to use for agent deployments

Now that we know what the Status column means, how can an agent be downloaded?

While on the Agent Software Updates page, select and highlight an OS Platform that an agent is needed for.  In this example, let's use “Microsoft Windows x64 (64-bit)” (Image 6). Notice the Status column and Past Activities section.  This agent is available for download.  Download the agent by clicking the Download button in the menu bar.

Image 6:
AgentUpdate_Win64.png

After clicking the Download button, OEM will ask you when to run the job (Image 7).  Normally running it immediately is fine.

Image 7:
AgentDownloadJob.png

Once the Status is set to Downloaded, the agent software needs to be applied to the Software Library before it can be used (Image 8). Highlight the agent that was just downloaded and click the Apply button.  This will apply the binaries to the software library.  Also notice the Past Activities section; here we can clearly see what has been done with these agent binaries.

Image 8:
AgentSoftwareDownloaded.png

Once the Apply button has been clicked, OEM presents a message letting you know that the Apply operation will store the agent software in the software library (Image 9).  Click OK when we are ready.

Image 9:
AgentUpdateApplyMsg.png

The agent software is finally applied to the Software Library and ready to use (Image 10).

Image 10:
AgentAppliedtoSWLib.png

With the agent now applied to the Software Library, it can be used to deploy, via push or pull, to Microsoft Windows hosts.

Note: In my experience, most deployments to Microsoft Windows hosts have to be done either with Cygwin installed or with a silent install.  If you would like more information on the silent install approach, I wrote a post on it here.
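
As a side note, once an agent version has been applied to the Software Library, the image for a silent install can also be pulled from the command line with EM CLI.  The commands below are a sketch, assuming EM CLI is configured and the platform string matches what Self Update displays:

$ emcli login -username=sysman
$ emcli sync
$ emcli get_supportedplatforms
$ emcli get_agentimage -destination=/tmp/agentimage -platform="Microsoft Windows x64 (64-bit)" -version=12.1.0.4.0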

Enjoy!!

 

about.me: http://about.me/dbasolved

 


Filed under: OEM
Categories: DBA Blogs

API Integration with Zapier (Gmail to Salesforce)

Kubilay Çilkara - Sun, 2014-09-07 11:42
Recently I attended a training session with +General Assembly in London titled, What and Why of APIs. It was a training session focusing on the usage of APIs and it was not technical at all. I find these types of training sessions very useful as they describe the concepts and controlling ideas behind technologies rather than the hands-on, involved implementation details.

What grabbed my attention from the many different and very useful public and private API tools, 'thingies', introduced in this training session was Zapier. - www.zapier.com

Zapier looked to me like a platform for integrating APIs with clicks rather than code, that is, with declarative programming. It is a way of automating the internet. What you get when you sign up with them is the ability to use 'Zaps', or create your own Zaps. Zaps are integrations of endpoints, like connecting Foursquare to Facebook or Gmail to Salesforce and syncing them. One of the available Zaps does exactly that: it connects your Gmail emails to Salesforce using the Gmail and Salesforce APIs and lets you sync between them. Not only that, but Zapier Zaps also put triggers on the endpoints which allow you to sync only when certain conditions are true. For example, the Gmail to Salesforce Zap can push your email into a Salesforce Lead only when an email with a certain subject arrives in your Gmail inbox. This is what the Zapier platform looks like:


An individual Zap looks like this and is nothing more than a mapping of the Endpoints with some trigger actions and filters.


The environment is self-documenting and very easy to use. All you do is drag and drop Gmail fields and match them with the Salesforce Lead (or other custom object) fields. Then you configure the sync to happen only under certain conditions/filters. Really easy to set up. The free version runs the sync every 5 hours, which is good enough for me. The paid version runs the sync every 5 minutes.
There is even capability to track historical runs and trigger a manual run via the Zap menu. See below the 'Run' command to run a Zap whenever you like. 

In my case I used the tool to create a Zap to do exactly what I just described. My Zap creates a Salesforce Lead automatically in my Salesforce org whenever a 'special' email is sent to me. Great automation!
This is a taste of the 'platform cloud' tools out there that do API-to-API and app-to-app integrations with clicks and not code. With tools like Zapier, all you really need is imagination!
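
For comparison, the Salesforce half of that Zap boils down to a single REST call if you were to do it by hand. The example below is only a sketch, with a placeholder instance URL and OAuth token, and is not what Zapier runs internally:

curl https://yourInstance.salesforce.com/services/data/v29.0/sobjects/Lead/ \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{ "LastName" : "Smith", "Company" : "From Gmail", "Email" : "smith@example.com" }'
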
Categories: DBA Blogs

Watch Oracle DB Elapsed Time and Wall Time With Parallel Query

Watch Oracle Elapsed Time and Wall Time With Parallel Query
In my recent postings I wrote that when using the Oracle Database parallel query a SQL statement's wall time should be equal to its elapsed time divided by the number of parallel query slaves plus some overhead.

That may seem correct, but is it really true? To check I ran an experiment and posted the results here. The results are both obvious and illuminating.

If you don't want to read but would rather just sit on the couch, have a beer and watch TV, you're in luck! I took a clip from my Tuning Oracle Using An AWR Report online video seminar and put it on YouTube.  You can watch the video clip on YouTube HERE or simply click on the movie below.


The Math, For Review Purposes
In my previous recent postings I detailed the key time parameters; DB Time, DB CPU, non-idle wait time, elapsed time, parallelism and effective parallelism. To save you some clicking, the key parameters and their relationships are shown below.

DB Time = DB CPU + NIWT

Elapsed Time = Sum of DB Time

Wall Time = ( Elapsed Time / Parallelism ) + Parallelism Overhead

Wall Time = Elapsed Time / Effective Parallelism


Test Results: When Oracle Parallel Query was NOT involved.
If you want to see my notes, snippets, etc. they can be found in this text file HERE.

Here is the non-parallel SQL statement.

select /*+ FULL(big2) NOPARALLEL (big2) */ count(*)
into   i_var
from   big2 
where  rownum < 9000000

When the SQL statement was running, I was monitoring the session using my Realtime Session Sampler OSM tool, rss.sql. Since I knew the server process session ID and wanted to sample every second and wanted to see everything just for this session, this is the rss.sql syntax:
SQL>@rss.sql 16 16 827 827 % % 1
For details on any OSM tool syntax, run the OSM menu script, osmi.sql. You can download my OSM Toolkit HERE.

The rss.sql tool output is written to a text file, which I was doing a "tail -f" on. Here is a very small snippet of the output. The columns are sample number, sample time, session SID, session serial#, Oracle username, CPU or WAIT, SQL_ID, OraPub wait category, wait event, [p1,p2,p3].


We can see the session is consuming CPU and waiting. When waiting, the wait event is "direct path read", which is asynchronous (we hope) block read requests to the IO subsystem that will NOT be buffered in the Oracle buffer cache.

Now for the timing results, which are shown in the table below. I took five samples.  It's VERY important to know that the wait time (WAIT_TIME_S), DB CPU (DB_CPU_S), and DB Time (DB_TIME_S) values are related to ONLY server process SID 16. In blazing contrast, the wall time (WALL_S), elapsed time (EL_VSQL_S), and SQL statement CPU consumption (CPU_VSQL_S) are related to the entire SQL_ID statement execution.

Here are the "no parallel" experimental results.
SQL> select * from op_results;

SAMPLE_NO WALL_S EL_VSQL_S CPU_VSQL_S WAIT_TIME_S DB_CPU_S DB_TIME_S
---------- ---------- ---------- ---------- ----------- ---------- ----------
1 35.480252 35.470015 9.764407 24.97 9.428506 34.152294
2 35.670021 35.659748 9.778554 25.15 9.774984 35.541861
3 35.749926 35.739473 9.774375 25.12 9.31266 34.126285
4 35.868076 35.857752 9.772321 25.32 9.345398 34.273479
5 36.193062 36.18378 9.712962 25.46 9.548465 35.499693
Let's check the math. For simplicity and clarity, please allow me to round and use only sample 5.
DB_TIME_S = DB_CPU_S + WAIT_TIME_S
35.5 = 9.5 + 25.5 = 35.0
The DB Time is pretty close (35.5 vs 35.0). Close enough to demonstrate the time statistic relationships.
Elapsed Time (EL_VSQL_S) = DB_TIME_S
35.5 = 34.2
The Elapsed Time is off by around 4% (35.5 vs 34.2), but still close enough to demonstrate the time statistic relationships.
Wall Time (WALL_S) = Elapsed Time (EL_VSQL_S) / Effective Parallelism
35.5 = 35.5 / 1
Nice! The Wall Time results matched perfectly. (35.5 vs 35.5)

To summarize: in a non-parallel query (i.e., single server process) situation, the time math results are what we expected (and hoped for)!

Test Results: When Oracle Parallel Query WAS involved.
The only difference in the "non parallel" SQL statement above and the SQL statement below is the parallel hint. Below is the "parallel" SQL statement.
select /*+  FULL(big2) PARALLEL(big2,3)  */ count(*) into i_var from big2 where rownum < 9000000
When the "parallel" SQL statement was running, because Oracle parallel query was involved, resulting in multiple related Oracle sessions, I needed to open up the session ID (and serial#) ranges in my rss.sql tool to include all sessions. I still sampled every second. Here is the rss.sql syntax:
SQL>@rss.sql 0 9999 0 9999 % % 1
The tool output is written to a text file, which I was doing a "tail -f" on. Here is a very small snippet of the output. I manually inserted the blank lines to make it easier to see the different sample periods.


There is only one SQL statement being run on this idle test system, and because there is no DML involved we don't see much background process activity. If you look closely above, session 168 (see third column) must be a log writer process because the wait event is "log file parallel write". I checked, and session 6 is a background process as well.

It's no surprise to typically see only four sessions involved: one session is the parallel query coordinator and the other three are the parallel query slaves! Interestingly, the main server process session that I executed the query from is session number 16. It never showed up in any of my samples! I suspect it was "waiting" on an idle wait event and I'm only showing processes consuming CPU or waiting on a non-idle wait event. Very cool.

Now for the timing results. I took five samples.  Again, it's VERY important to know that the wait time (WAIT_TIME_S), DB CPU (DB_CPU_S), and DB Time (DB_TIME_S) values are related to ONLY the calling server process, which in this case is session 16. In blazing contrast, the wall time (WALL_S), elapsed time (EL_VSQL_S), and SQL statement CPU consumption (CPU_VSQL_S) are related to the entire SQL statement execution.

Here are the "parallel" experimental results.
 SQL>  select * from op_results;

SAMPLE_NO WALL_S EL_VSQL_S CPU_VSQL_S WAIT_TIME_S DB_CPU_S DB_TIME_S
---------- ---------- ---------- ---------- ----------- ---------- ----------
1 46.305951 132.174453 19.53818 .01 4.069579 4.664083
2 46.982111 132.797536 19.371063 .02 3.809439 4.959602
3 47.79761 134.338069 19.739735 .02 4.170921 4.555491
4 45.97324 131.809249 19.397557 .01 3.790226 4.159572
5 46.053922 131.765983 19.754143 .01 4.062703 4.461175
Let's check the math. For simplicity and clarity, please allow me to round and use sample 5.
DB_TIME_S = DB_CPU_S + WAIT_TIME_S
4.5 = 4.1 + 0
The DB Time shown above is kind of close... 10% off. (4.5 vs 4.1) But there is certainly some timing error in my collection script. I take the position that this is close enough to demonstrate the time statistic relationships. Now look below.
Elapsed Time (EL_VSQL_S)  = DB_TIME_S
131.7 != 4.5
Woah! What happened here? (131.7 vs 4.5) Actually, everything is OK (so far anyway) because the DB Time is related to the session (Session ID 16), whereas the elapsed time is ALL the DB Time for ALL the processes involved in the SQL statement. Since parallel query is involved, resulting in four additional sessions (1 coordinator, 3 slaves), we would expect the elapsed time to be greater than the DB Time. Now let's look at the wall time.
Wall Time (WALL_S) = ( Elapsed Time (EL_VSQL_S) / Parallelism ) + overhead
46.1 = ( 131.8 / 3 ) + 2.2
Nice! Clearly the effective parallelism is a little less than 3 because there is some overhead (2.2). But the numbers make sense because:

1. The wall time is less than the elapsed time because parallel query is involved.

2. The wall time is close to the elapsed time divided by the parallelism. And we can even see the parallelism overhead.

So it looks like our time math is correct!


Reality And The AWR Or Statspack Report
This is really important. In the SQL Statement section of any AWR or Statspack Report, you will see the total elapsed time over the snapshot interval and perhaps the average SQL ID elapsed time per execution. So what is the wall time? What are users experiencing? The short answer is, we do NOT have enough information.

To know the wall time, we need to know the parallelism situation. If you are NOT using parallel query, then based on the time math demonstrated above, the elapsed time per execution will be close to what the user is experiencing (unless there is an issue outside of Oracle). However, if parallelism is involved, you can expect the wall time (i.e., the user's experience) to be much less than the elapsed time per execution shown in the AWR or Statspack report.

Another way of looking at this is: if a user is reporting a query is taking 10 seconds, but the average elapsed time is showing as 60 seconds, parallel query is probably involved. Also, as I mentioned above, never forget the average value is not always the typical value. (More? Check out my video seminar entitled, Using Skewed Data To Your Advantage HERE.)
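
One quick way to get a feel for the parallelism situation for a specific statement (a sketch, not from the seminar) is to compare executions with parallel query slave activity in v$sql; if px_servers_executions is greater than zero, parallel query was used and the elapsed time per execution will overstate the wall time:

select sql_id,
       executions,
       px_servers_executions,
       round(elapsed_time/1000000,1)                        elapsed_sec,
       round(elapsed_time/1000000/greatest(executions,1),1) avg_elapsed_sec
from   v$sql
where  sql_id = '&sql_id';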

Thanks for reading!

Craig.
Categories: DBA Blogs

RAC Database Backups

Hemant K Chitale - Sun, 2014-09-07 08:20
In 11gR2 Grid Infrastructure and RAC


UPDATE : 13-Sep-14 : How to run the RMAN Backup using server sessions concurrently on each node.  Please scroll down to the update.


In a RAC environment, the database backups can be executed from any one node or distributed across multiple nodes of the cluster.

In my two-node environment, I have backups configured to go to an FRA.  This is defined by the instance parameter "db_recovery_file_dest" (and "db_recovery_file_dest_size").  This can be a shared location -- e.g. an ASM DiskGroup or a ClusterFileSystem.  Therefore, the parameter should ideally be the same across all nodes so that backups may be executed from any or multiple nodes without changing the backup location.
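
If you do want to spread the backup workload across instances rather than run everything from one node, RMAN channels can be tied to specific instances with the CONNECT clause. The following is only a sketch with placeholder connect strings; the update mentioned above covers running server sessions concurrently on each node:

run {
  allocate channel ch1 device type disk connect 'sys/password@RACDB1';
  allocate channel ch2 device type disk connect 'sys/password@RACDB2';
  backup as compressed backupset database plus archivelog delete input;
}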

Running the RMAN commands from node1 :
[root@node1 ~]# su - oracle
-sh-3.2$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Sun Sep 7 21:56:46 2014

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> show parameter db_recovery_file

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest string +FRA
db_recovery_file_dest_size big integer 4000M
SQL>
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
-sh-3.2$ rman target /

Recovery Manager: Release 11.2.0.2.0 - Production on Sun Sep 7 21:57:49 2014

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: RACDB (DBID=762767011)

RMAN> list backup summary;

using target database control file instead of recovery catalog

List of Backups
===============
Key TY LV S Device Type Completion Time #Pieces #Copies Compressed Tag
------- -- -- - ----------- --------------- ------- ------- ---------- ---
12 B F A DISK 26-NOV-11 1 1 YES TAG20111126T224849
13 B A A DISK 26-NOV-11 1 1 YES TAG20111126T230108
16 B A A DISK 16-JUN-14 1 1 YES TAG20140616T222340
18 B A A DISK 16-JUN-14 1 1 YES TAG20140616T222738
19 B F A DISK 16-JUN-14 1 1 NO TAG20140616T222742
20 B F A DISK 05-JUL-14 1 1 NO TAG20140705T173046
21 B F A DISK 16-AUG-14 1 1 NO TAG20140816T231412
22 B F A DISK 17-AUG-14 1 1 NO TAG20140817T002340

RMAN>
RMAN> backup as compressed backupset database plus archivelog delete input;


Starting backup at 07-SEP-14
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting compressed archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=111 RECID=77 STAMP=857685630
input archived log thread=2 sequence=37 RECID=76 STAMP=857685626
input archived log thread=2 sequence=38 RECID=79 STAMP=857685684
input archived log thread=1 sequence=112 RECID=78 STAMP=857685681
channel ORA_DISK_1: starting piece 1 at 07-SEP-14
channel ORA_DISK_1: finished piece 1 at 07-SEP-14
piece handle=+FRA/racdb/backupset/2014_09_07/annnf0_tag20140907t220131_0.288.857685699 tag=TAG20140907T220131 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:09
channel ORA_DISK_1: deleting archived log(s)
archived log file name=+FRA/racdb/archivelog/2014_09_07/thread_1_seq_111.307.857685623 RECID=77 STAMP=857685630
archived log file name=+FRA/racdb/archivelog/2014_09_07/thread_2_seq_37.309.857685623 RECID=76 STAMP=857685626
archived log file name=+FRA/racdb/archivelog/2014_09_07/thread_2_seq_38.277.857685685 RECID=79 STAMP=857685684
archived log file name=+FRA/racdb/archivelog/2014_09_07/thread_1_seq_112.270.857685681 RECID=78 STAMP=857685681
Finished backup at 07-SEP-14

Starting backup at 07-SEP-14
using channel ORA_DISK_1
channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=+DATA1/racdb/datafile/system.257.765499365
input datafile file number=00002 name=+DATA2/racdb/datafile/sysaux.256.765502307
input datafile file number=00003 name=+DATA1/racdb/datafile/undotbs1.259.765500033
input datafile file number=00004 name=+DATA2/racdb/datafile/undotbs2.257.765503281
input datafile file number=00006 name=+DATA1/racdb/datafile/partition_test.265.809628399
input datafile file number=00007 name=+DATA1/racdb/datafile/hemant_tbs.266.852139375
input datafile file number=00008 name=+DATA3/racdb/datafile/new_tbs.256.855792859
input datafile file number=00005 name=+DATA1/racdb/datafile/users.261.765500215
channel ORA_DISK_1: starting piece 1 at 07-SEP-14
channel ORA_DISK_1: finished piece 1 at 07-SEP-14
piece handle=+FRA/racdb/backupset/2014_09_07/nnndf0_tag20140907t220145_0.270.857685709 tag=TAG20140907T220145 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:06:15
Finished backup at 07-SEP-14

Starting backup at 07-SEP-14
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting compressed archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=113 RECID=81 STAMP=857686085
input archived log thread=2 sequence=39 RECID=80 STAMP=857686083
channel ORA_DISK_1: starting piece 1 at 07-SEP-14
channel ORA_DISK_1: finished piece 1 at 07-SEP-14
piece handle=+FRA/racdb/backupset/2014_09_07/annnf0_tag20140907t220807_0.307.857686087 tag=TAG20140907T220807 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
channel ORA_DISK_1: deleting archived log(s)
archived log file name=+FRA/racdb/archivelog/2014_09_07/thread_1_seq_113.309.857686085 RECID=81 STAMP=857686085
archived log file name=+FRA/racdb/archivelog/2014_09_07/thread_2_seq_39.277.857686083 RECID=80 STAMP=857686083
Finished backup at 07-SEP-14

Starting Control File and SPFILE Autobackup at 07-SEP-14
piece handle=+FRA/racdb/autobackup/2014_09_07/s_857686089.277.857686097 comment=NONE
Finished Control File and SPFILE Autobackup at 07-SEP-14

RMAN>

Note how the "PLUS ARCHIVELOG" specification also included archivelogs from both threads (instances) of the database.

Let's verify these details from the instance on node2 :

[root@node2 ~]# su - oracle
-sh-3.2$ rman target /

Recovery Manager: Release 11.2.0.2.0 - Production on Sun Sep 7 22:11:00 2014

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: RACDB (DBID=762767011)

RMAN>

RMAN> list backup of database completed after 'trunc(sysdate)-1';

using target database control file instead of recovery catalog

List of Backup Sets
===================


BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
24 Full 258.21M DISK 00:06:12 07-SEP-14
BP Key: 24 Status: AVAILABLE Compressed: YES Tag: TAG20140907T220145
Piece Name: +FRA/racdb/backupset/2014_09_07/nnndf0_tag20140907t220145_0.270.857685709
List of Datafiles in backup set 24
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
1 Full 1160228 07-SEP-14 +DATA1/racdb/datafile/system.257.765499365
2 Full 1160228 07-SEP-14 +DATA2/racdb/datafile/sysaux.256.765502307
3 Full 1160228 07-SEP-14 +DATA1/racdb/datafile/undotbs1.259.765500033
4 Full 1160228 07-SEP-14 +DATA2/racdb/datafile/undotbs2.257.765503281
5 Full 1160228 07-SEP-14 +DATA1/racdb/datafile/users.261.765500215
6 Full 1160228 07-SEP-14 +DATA1/racdb/datafile/partition_test.265.809628399
7 Full 1160228 07-SEP-14 +DATA1/racdb/datafile/hemant_tbs.266.852139375
8 Full 1160228 07-SEP-14 +DATA3/racdb/datafile/new_tbs.256.855792859

RMAN>

Yes, today's backup is visible from node2 as it retrieves the information from the controlfile that is common across all the instances of the database.

How are the archivelogs configured ?

RMAN> exit


Recovery Manager complete.
-sh-3.2$
-sh-3.2$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Sun Sep 7 22:15:51 2014

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 39
Next log sequence to archive 40
Current log sequence 40
SQL>
SQL> show parameter db_recovery_file_dest

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest string +FRA
db_recovery_file_dest_size big integer 4000M
SQL>

Both instances have the same destination configured for archivelogs and backups.
.
.
.
=======================================================
UPDATE : 13-Sep-14 :  Running the backup concurrently from both nodes 

There are two ways to have the RMAN Backup run from both nodes.
A.   Issue a separate RMAN BACKUP DATAFILE or BACKUP TABLESPACE command from each node, such that the two nodes have an independent list of Datafiles / Tablespaces

B.  Issue a BACKUP DATABASE command from one node but with two channels open, one against each node.

Here, method A is easy to do but difficult to control as you add Tablespaces and Datafiles (a rough sketch of it appears below). So, I will demonstrate method B.
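
For completeness, method A would look something like this, with each node given its own subset of Tablespaces (the split shown is purely illustrative):

# From node1 :
RMAN> backup as compressed backupset tablespace SYSTEM, UNDOTBS1, USERS;

# From node2 :
RMAN> backup as compressed backupset tablespace SYSAUX, UNDOTBS2;

Every new Tablespace would then have to be added to one of the lists by hand, which is exactly the maintenance problem mentioned above.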

I begin with ensuring that
a.  I have REMOTE_LOGIN_PASSWORDFILE configured so that I can make a SQL*Net connection from node1 to node2  (RMAN requires connecting AS SYSDBA in 11g)
b.  I have a TNSNAMES.ORA entry configured to the instance on node2 (note that the service name is common across all [both] instances in the Cluster)

-sh-3.2$ hostname
node1.mydomain.com
-sh-3.2$ id
uid=800(oracle) gid=1001(oinstall) groups=1001(oinstall),1011(asmdba),1021(dba)
-sh-3.2$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Sat Sep 13 23:22:09 2014

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> show parameter remote_login_passwordfile;

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
remote_login_passwordfile string EXCLUSIVE
SQL> quit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
-sh-3.2$ cat $ORACLE_HOME/network/admin/tnsnames.ora
# tnsnames.ora.node1 Network Configuration File: /u01/app/oracle/rdbms/11.2.0/network/admin/tnsnames.ora.node1
# Generated by Oracle configuration tools.

RACDB_1 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = node1)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = RACDB)
)
)

RACDB_2 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = node2)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = RACDB)
)
)

-sh-3.2$

Next, I start RMAN and allocate two Channels, one for each Instance (on each Node in the Cluster) and issue a BACKUP DATABASE that is automatically executed across both Channels.

-sh-3.2$ rman target /

Recovery Manager: Release 11.2.0.2.0 - Production on Sat Sep 13 23:23:24 2014

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: RACDB (DBID=762767011)

RMAN> run
2> {allocate channel ch1 device type disk connect 'sys/manager@RACDB_1';
3> allocate channel ch2 device type disk connect 'sys/manager@RACDB_2';
4> backup as compressed backupset database plus archivelog delete input;
5> }

using target database control file instead of recovery catalog
allocated channel: ch1
channel ch1: SID=61 instance=RACDB_1 device type=DISK

allocated channel: ch2
channel ch2: SID=61 instance=RACDB_2 device type=DISK


Starting backup at 13-SEP-14
current log archived
channel ch1: starting compressed archived log backup set
channel ch1: specifying archived log(s) in backup set
input archived log thread=2 sequence=40 RECID=82 STAMP=857687640
input archived log thread=1 sequence=114 RECID=84 STAMP=858204801
input archived log thread=2 sequence=41 RECID=83 STAMP=857687641
input archived log thread=1 sequence=115 RECID=86 STAMP=858208025
channel ch1: starting piece 1 at 13-SEP-14
channel ch2: starting compressed archived log backup set
channel ch2: specifying archived log(s) in backup set
input archived log thread=2 sequence=42 RECID=85 STAMP=858208000
input archived log thread=1 sequence=116 RECID=87 STAMP=858209078
input archived log thread=2 sequence=43 RECID=88 STAMP=858209079
channel ch2: starting piece 1 at 13-SEP-14
channel ch2: finished piece 1 at 13-SEP-14
piece handle=+FRA/racdb/backupset/2014_09_13/annnf0_tag20140913t232445_0.279.858209109 tag=TAG20140913T232445 comment=NONE
channel ch2: backup set complete, elapsed time: 00:00:26
channel ch2: deleting archived log(s)
archived log file name=+FRA/racdb/archivelog/2014_09_13/thread_2_seq_42.296.858207997 RECID=85 STAMP=858208000
archived log file name=+FRA/racdb/archivelog/2014_09_13/thread_1_seq_116.263.858209079 RECID=87 STAMP=858209078
archived log file name=+FRA/racdb/archivelog/2014_09_13/thread_2_seq_43.265.858209079 RECID=88 STAMP=858209079
channel ch1: finished piece 1 at 13-SEP-14
piece handle=+FRA/racdb/backupset/2014_09_13/annnf0_tag20140913t232445_0.275.858209099 tag=TAG20140913T232445 comment=NONE
channel ch1: backup set complete, elapsed time: 00:00:56
channel ch1: deleting archived log(s)
archived log file name=+FRA/racdb/archivelog/2014_09_07/thread_2_seq_40.309.857687641 RECID=82 STAMP=857687640
archived log file name=+FRA/racdb/archivelog/2014_09_13/thread_1_seq_114.295.858204777 RECID=84 STAMP=858204801
archived log file name=+FRA/racdb/archivelog/2014_09_07/thread_2_seq_41.293.857687641 RECID=83 STAMP=857687641
archived log file name=+FRA/racdb/archivelog/2014_09_13/thread_1_seq_115.305.858208001 RECID=86 STAMP=858208025
Finished backup at 13-SEP-14

Starting backup at 13-SEP-14
channel ch1: starting compressed full datafile backup set
channel ch1: specifying datafile(s) in backup set
input datafile file number=00001 name=+DATA1/racdb/datafile/system.257.765499365
input datafile file number=00004 name=+DATA2/racdb/datafile/undotbs2.257.765503281
input datafile file number=00007 name=+DATA1/racdb/datafile/hemant_tbs.266.852139375
input datafile file number=00008 name=+DATA3/racdb/datafile/new_tbs.256.855792859
channel ch1: starting piece 1 at 13-SEP-14
channel ch2: starting compressed full datafile backup set
channel ch2: specifying datafile(s) in backup set
input datafile file number=00002 name=+DATA2/racdb/datafile/sysaux.256.765502307
input datafile file number=00003 name=+DATA1/racdb/datafile/undotbs1.259.765500033
input datafile file number=00006 name=+DATA1/racdb/datafile/partition_test.265.809628399
input datafile file number=00005 name=+DATA1/racdb/datafile/users.261.765500215
channel ch2: starting piece 1 at 13-SEP-14
channel ch2: finished piece 1 at 13-SEP-14
piece handle=+FRA/racdb/backupset/2014_09_13/nnndf0_tag20140913t232557_0.293.858209175 tag=TAG20140913T232557 comment=NONE
channel ch2: backup set complete, elapsed time: 00:12:02
channel ch1: finished piece 1 at 13-SEP-14
piece handle=+FRA/racdb/backupset/2014_09_13/nnndf0_tag20140913t232557_0.305.858209163 tag=TAG20140913T232557 comment=NONE
channel ch1: backup set complete, elapsed time: 00:13:06
Finished backup at 13-SEP-14

Starting backup at 13-SEP-14
current log archived
channel ch1: starting compressed archived log backup set
channel ch1: specifying archived log(s) in backup set
input archived log thread=1 sequence=117 RECID=90 STAMP=858209954
channel ch1: starting piece 1 at 13-SEP-14
channel ch2: starting compressed archived log backup set
channel ch2: specifying archived log(s) in backup set
input archived log thread=2 sequence=44 RECID=89 STAMP=858209952
channel ch2: starting piece 1 at 13-SEP-14
channel ch1: finished piece 1 at 13-SEP-14
piece handle=+FRA/racdb/backupset/2014_09_13/annnf0_tag20140913t233915_0.265.858209957 tag=TAG20140913T233915 comment=NONE
channel ch1: backup set complete, elapsed time: 00:00:03
channel ch1: deleting archived log(s)
archived log file name=+FRA/racdb/archivelog/2014_09_13/thread_1_seq_117.309.858209953 RECID=90 STAMP=858209954
channel ch2: finished piece 1 at 13-SEP-14
piece handle=+FRA/racdb/backupset/2014_09_13/annnf0_tag20140913t233915_0.263.858209957 tag=TAG20140913T233915 comment=NONE
channel ch2: backup set complete, elapsed time: 00:00:03
channel ch2: deleting archived log(s)
archived log file name=+FRA/racdb/archivelog/2014_09_13/thread_2_seq_44.295.858209951 RECID=89 STAMP=858209952
Finished backup at 13-SEP-14

Starting Control File and SPFILE Autobackup at 13-SEP-14
piece handle=+FRA/racdb/autobackup/2014_09_13/s_858209961.295.858209967 comment=NONE
Finished Control File and SPFILE Autobackup at 13-SEP-14
released channel: ch1
released channel: ch2

RMAN>

We can see that Channel ch1 was connected to Instance RACDB_1 and ch2 was connected to RACDB_2. Also, the messages indicate that both channels were running concurrently.
I also verified that the Channels did connect to each instance :

[root@node1 ~]# ps -ef |grep RACDB_1 |grep LOCAL=NO
oracle 11205 1 1 23:24 ? 00:00:00 oracleRACDB_1 (LOCAL=NO)
You have new mail in /var/spool/mail/root
[root@node1 ~]# ps -ef |grep RACDB_1 |grep LOCAL=NO
oracle 11205 1 3 23:24 ? 00:00:04 oracleRACDB_1 (LOCAL=NO)
[root@node1 ~]# ps -ef |grep RACDB_1 |grep LOCAL=NO
oracle 11205 1 4 23:24 ? 00:00:49 oracleRACDB_1 (LOCAL=NO)
[root@node1 ~]#
[root@node2 ~]# ps -ef |grep RACDB_2 | grep LOCAL=NO
oracle 6233 1 0 23:24 ? 00:00:00 oracleRACDB_2 (LOCAL=NO)
You have new mail in /var/spool/mail/root
[root@node2 ~]# ps -ef |grep RACDB_2 |grep LOCAL=NO
oracle 6233 1 0 23:24 ? 00:00:00 oracleRACDB_2 (LOCAL=NO)
[root@node2 ~]# ps -ef |grep RACDB_2 |grep LOCAL=NO
oracle 6233 1 2 23:24 ? 00:00:24 oracleRACDB_2 (LOCAL=NO)
[root@node2 ~]#

As soon as I closed the RMAN (client) session, the two server processes also terminated.

This method (Method B) allows me to run an RMAN client session from any node in the Cluster and have RMAN server sessions running concurrently across all or some nodes of the Cluster, even if I have not designated a single, specific node as my RMAN backups node.

Edit : I have demonstrated using ALLOCATE CHANNEL to run an ad hoc, interactive backup.  If you want to create a persistent script, you might want to use CONFIGURE CHANNEL and have the SYS password persisted in the configuration (saved in the controlfile) so that it is not in "plain text" in a script.
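
A minimal sketch of that persistent configuration, reusing the same illustrative connection strings as above :

RMAN> configure device type disk parallelism 2;
RMAN> configure channel 1 device type disk connect 'sys/manager@RACDB_1';
RMAN> configure channel 2 device type disk connect 'sys/manager@RACDB_2';

Once that is done, a plain BACKUP DATABASE command will open both channels automatically, and the connect strings never need to appear in the backup script itself.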

.
.
.

Categories: DBA Blogs

First AZORA usergroup meeting October 23

Bobby Durrett's DBA Blog - Fri, 2014-09-05 17:52

Just got the invitation to the first AZORA (Arizona Oracle user group) meeting on October 23.  Here is the link: url

It’s 2 pm at Oracle’s office, 2355 E Camelback Rd Ste 950, Phoenix, AZ.

I’m looking forward to it!

– Bobby

Categories: DBA Blogs

Pythian at Oracle OpenWorld 2014

Pythian Group - Fri, 2014-09-05 14:41

Calling all Pythian fans, clients, and partners! It’s that time of year again with Oracle OpenWorld 2014 fast approaching! Pythian is excited to be participating once again with our rockstar team of experts in all things Oracle including Database 12c, Oracle Applications (EBS, GoldenGate) and engineered systems, MySQL, and more. We are thrilled to have multiple Pythian folks presenting sessions as listed below with more attending in tow,  including our newest friends & colleagues formerly of BlackbirdIT. Keep a look out for our signature black “Love Your Data” t-shirts.

We’re also excited to again be co-hosting the Annual Bloggers Meetup with our good friends at the Oracle Technology Network. Keep your eyes peeled for a blog post from Alex Gorbachev, Pythian’s CTO, providing details including contest fun & reviews of past years of mayhem and madness.

It’s not Oracle OpenWorld without a conference within a conference. Cue Oaktable World and an action-packed agenda for all the hardcore techies out there. Catch up with Alex and Jeremiah on Tuesday.

Vasu Balla will also  be attending the Oracle DIS Partner Council Meeting and Oracle EBS ATG Customer Advisory Board, and helping share Pythian’s thought leadership.

 

Attention Pythian Partners & clients, if you’re attending please reach out to us for details on social happenings you won’t want to miss!

Pythian’s dynamic duo of Emilia (Partner Program Mgr/kutrovska@pythian.com/1 613 355 5038) & Vanessa (Dir. of BD/simmons@pythian.com/1 613 897 9444) are orchestrating this year’s efforts. We’ll be live tweeting up-to-the-minute show action from @pythianpartners, which is also the best way to get hold of any of the Pythian team.

See you there! #oow14 #pythianlife

 

 

Pythian Sessions at Oracle OpenWorld 2014

Thou Shalt Not Steal: Securing Your Infrastructure in the Age of Snowden
Presented by Paul Vallee 
(@paulvallee)
Sunday, Sep 28, 9:00 AM – 9:45 AM – Moscone South – 310

Session ID UGF9199: “In June 2013, Edward Snowden triggered the most costly insider security leak in history, forcing organizations to completely rethink how they secure their infrastructure. In this session, the founder of Pythian discusses how he supervises more than 200 database and system administrators as they perform work on some of the world’s most valuable and mission-critical data infrastructures.”

 

24/7 Availability with Oracle Database Application Continuity
Presented by Jeremiah Wilton (@oradebug) and Marc Fielding (@mfild)
Sunday, Sep 28, 9:00 AM – 9:45 AM – Moscone South – 309

Session ID UGF2563: “Oracle Real Application Clusters (Oracle RAC) enables databases to survive hardware failures that would otherwise cause downtime. Transparent application failover and fast application notification can handle many failure scenarios, but in-flight transactions still require complex application-level state tracking. With application continuity, Java applications can now handle failure scenarios transparently to applications, without data loss. In this session, see actual code and a live demonstration of application continuity during a simulated failure.”

 

Time to Upgrade to Oracle Database 12c
Presented by Michael Abbey (@MichaelAbbeyCAN)
Sunday, Sep 28, 9:00 AM – 9:45 AM – Moscone South – 307

Session ID UGF2870: “Oracle Database 12c has been out for more than a year now. There is a handful of off-the-shelf features of Oracle Database 12c that can serve the growing requirements of all database installations, regardless of the applications they support and the options for which an installation is licensed. This session zeros in on the baseline enhancements to the 12c release, concentrating on the likes of the Oracle Recovery Manager (Oracle RMAN) feature of Oracle Database; pluggable databases; and a handful of new opportunities to perform many resource-intensive operations by splitting work among multiple separate processes.”

 

Oracle RMAN in Oracle Database 12c: The Next Generation
Presented by René Antunez (@grantunez)
Sunday, Sep 28, 10:00 AM – 10:45 AM – Moscone South – 309

Session ID UGF1911: “The Oracle Recovery Manager (Oracle RMAN) feature of Oracle Database has evolved since being released, in Oracle8i Database. With the newest version of Oracle Database, 12c , Oracle RMAN has great new features that will enable you to reduce your downtime in case of a disaster. In this session, you will learn about the new features introduced in Oracle Database 12c and how you can take advantage of them from the first day you upgrade to this version.”

 

Experiences Using SQL Plan Baselines in Production
Presented by Nelson Calero (@ncalerouy)
Sunday, Sep 28, 12:00 PM – 12:45 PM – Moscone South – 250

Session ID UGF7945: “This session shows how to use the Oracle Database SQL Plan Baselines functionality, with examples from real-life usage in production (mostly Oracle Database 11g Release 2) and how to troubleshoot it. SQL Plan Baselines is a feature introduced in Oracle Database 11g to manage SQL execution plans to prevent performance regressions. The presentation explains concepts and presents examples, and you will encounter some edge cases.”

 

Getting Started with Database as a Service with Oracle Enterprise Manager 12c
Presented by René Antunez
(@grantunez)
Sunday, Sep 28, 3:30 PM – 4:15 PM – Moscone South – 307

Session ID UGF1941: “With the newest version of Oracle Database 12c, with Oracle Multitenant, we are moving toward an era of provisioning databases to our clients faster than ever, even leaving out the DBA and enabling the developers and project leads to provision their own databases. This presentation gives you insight into how to get started with database as a service (DBaaS) and the latest version of Oracle Enterprise Manager, 12c, and get the benefit of this upcoming database era.”

 

Using the Oracle Multitenant Option to Efficiently Manage Development and Test Databases
Presented by Marc Fielding (@mfild) and Alex Gorbachev (@alexgorbachev)
Wednesday, Oct 1, 12:45 PM – 1:30 PM – Moscone South – 102

Session ID CON2560: “The capabilities of Oracle Multitenant for large-scale database as a service (DBaaS) environments are well known, but it provides important benefits for nonproduction environments as well. Developer productivity can be enhanced by providing individual developers with their own separate pluggable development databases, done cost-effectively by sharing the resources of a larger database instance. Data refreshes and data transfers are simple and fast. In this session, learn how to implement development and testing environments with Oracle Multitenant; integrate with snapshot-based storage; and automate the process of provisioning and refreshing environments while still maintaining high availability, performance, and cost-effectiveness.”

 

Oracle Database In-Memory: How Do I Choose Which Tables to Use It For?
Presented by Christo Kutrovsky (@kutrovsky)
Wednesday, Oct 1, 4:45 PM – 5:30 PM – Moscone South – 305

Session ID CON6558: “Oracle Database In-Memory is the most significant new feature in Oracle Database 12c. It has the ability to make problems disappear with a single switch. It’s as close as possible to the fast=true parameter everyone is looking for. Question is, How do you find which tables need this feature the most? How do you find the tables that would get the best benefit? How do you make sure you don’t make things worse by turning this feature on for the wrong table? This highly practical presentation covers techniques for finding good candidate tables for in-memory, verifying that there won’t be a negative impact, and monitoring the improvements afterward. It also reviews the critical inner workings of Oracle Database In-Memory that can help you better understand where it fits best.”

 

Customer Panel: Private Cloud Consolidation, Standardization, & Automization
Presented by Jeremiah Wilton (@oradebug)
Thursday, Oct 2, 12:00 PM – 12:45 PM – Moscone South – 301

Session ID CON10038: “Attend this session to hear a panel of distinguished customers discuss how they transformed their IT into agile private clouds by using consolidation, standardization, and automation. Each customer presents an overview of its project and key lessons learned. The panel is moderated by members of Oracle’s private cloud product management team.”

 

Achieving Zero Downtime During Oracle Application and System Migrations – Co-presented with Oracle
Presented by Gleb Otochkin (@sky_vst) and Luke Davies (@daviesluke)
Thursday, Oct 2, 10:45 AM – 11:30 AM – Moscone West – 3018

Session ID CON7655: “Business applications—whether mobile, on-premises, or in the cloud—are the lifeline of any organization. Don’t let even planned outage events such as application upgrades or database/OS migrations hinder customer sales and acquisitions or adversely affect your employees’ productivity. In this session, hear how organizations today are using Oracle GoldenGate for Oracle Applications such as Oracle E-Business Suite and the PeopleSoft, JD Edwards, Siebel, and Oracle ATG product families in achieving zero-downtime application upgrades and database, hardware, and OS migrations. You will also learn how to use Oracle Data Integration products for real-time, operational reporting without degrading application performance. That’s Oracle AppAdvantage, and you can have it too.”

 

Categories: DBA Blogs

Loose Coupling and Discovery of Services With Consul — Part 2

Pythian Group - Fri, 2014-09-05 08:33
Creating a Consul Client Docker Image

In my previous post, I demonstrated how to create a cluster of Consul servers using a pre-built Docker image. I was able to do this because our use case was simple: run Consul.

In this post, however, we will be creating one or more Consul clients that will register services they provide, which can then be queried using Consul’s DNS and / or HTTP interfaces. As we are now interested in running Consul and an application providing a service, things are going to get a bit more complicated.

Before proceeding, I’m going to need to explain a little bit about how Docker works. Docker images, such as progrium/consul we used in the previous post, are built using instructions from a special file called Dockerfile. There are two related instructions that can be specified in this file which control the container’s running environment: that is, the process or shell that is run in the container. They are ENTRYPOINT and CMD.

There can be only one ENTRYPOINT instruction in a Dockerfile, and it has two forms: either an array of strings, which will be treated like an exec, or a simple string which will execute in ‘/bin/sh -c’. When you specify an ENTRYPOINT, the whole container runs as if it were just that executable.

The CMD instruction is a bit different. It too can only be specified once, but it has three forms: the first two are the same as ENTRYPOINT, but the third form is an array of strings which will be passed as parameters to the ENTRYPOINT instruction. It’s important to note that parameters specified in an ENTRYPOINT instruction cannot be overridden, but ones in CMD can. Therefore, the main purpose of CMD is to provide defaults for an executing container.
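
A quick, hypothetical illustration (the image name and instructions below are made up for this example):

# Suppose an image was built with:
#   ENTRYPOINT ["/usr/bin/consul"]
#   CMD ["agent", "-dev"]
docker run my/consul                  # runs: /usr/bin/consul agent -dev
docker run my/consul agent -server    # arguments replace CMD: /usr/bin/consul agent -server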

It’s probably becoming clear to you by now that Docker images are designed to run one process or shell. We want to run two processes, however: the Consul agent and an application. Thankfully, there is an image available called phusion/baseimage that provides runit for service supervision and management, which will make it easy for me to launch Consul and another service (such as nginx) within my containers.

SIDEBAR: There is quite a bit of debate about the intended / recommended use of Docker, and whether the process run in the container should be your application or an init process that will spawn, manage and reap children. If you’re interested in reading more about the pros and cons of each of these approaches, please refer to Jérôme Petazzoni’s post, phusion’s baseimage-docker page, and / or Google the topics of ‘separation of concerns’ and ‘microservices’.

Now that I’ve provided some background, let’s get into the specifics of the Docker image for my Consul clients. I’ll begin with the full contents of the Dockerfile and then describe each section in detail.

FROM phusion/baseimage:latest
MAINTAINER Bill Fraser 

# Disable SSH
RUN rm -rf /etc/service/sshd /etc/my_init.d/00_regen_ssh_host_keys.sh

# Install nginx
RUN \
  add-apt-repository -y ppa:nginx/stable && \
  apt-get update && \
  apt-get install -y nginx zip && \
  chown -R www-data:www-data /var/lib/nginx

# Clean up apt
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

# Define mountable directories.
VOLUME ["/data", "/etc/nginx/sites-enabled", "/var/log/nginx"]

# Add runit configuration for nginx
RUN echo "\ndaemon off;" >> /etc/nginx/nginx.conf
ADD files/nginx/nginx-runit /etc/service/nginx/run
RUN chmod 755 /etc/service/nginx/run

# Install consul
RUN curl -s -L -O https://dl.bintray.com/mitchellh/consul/0.3.0_linux_amd64.zip
RUN unzip -d /usr/bin 0.3.0_linux_amd64.zip
RUN chmod 555 /usr/bin/consul

# Add service configuration
ADD files/consul/consul.json /etc/consul.d/consul.json
RUN chmod 644 /etc/consul.d/consul.json

# Add runit configuration for consul
ADD files/consul/consul-runit /etc/service/consul/run
RUN chmod 755 /etc/service/consul/run

# Expose nginx ports
EXPOSE 80 443

# Expose consul ports
EXPOSE 53/udp 8300 8301 8301/udp 8302 8302/udp 8400 8500

ENV HOME /root

ENTRYPOINT [ "/sbin/my_init" ]

The first section specifies that my image will be based on that of phusion/baseimage, and that I am the maintainer of my image. So far so good.

Next, I am removing the SSHD service from the container. This is part of phusion’s image and is not something I am interested in using for the purposes of my demonstration.

The next step is to install nginx and should look fairly straightforward. I have taken the liberty of installing zip at the same time, as I will be using it later on to install Consul.

The VOLUME instruction lets me define mount points that can be used for mounting volumes in the container, passed as arguments of the docker run command. I am not actually using this in my demonstration; it is just there to make you aware of the capability.

Next I am telling nginx not to daemonize itself, and am adding an nginx configuration for runit. The ADD instruction adds a local file to the image in the specified path. The runit configuration I am adding is pretty simple and looks like this:

#!/bin/sh
exec /usr/sbin/nginx -c /etc/nginx/nginx.conf 2>&1

Now that I am done with nginx, I want to install and configure Consul. I simply retrieve the binary package and extract it into /usr/bin in the image. I then use another ADD instruction to supply a configuration file for Consul. This file is in JSON format and tells Consul to register a service named ‘nginx’.

{
	"service": {
		"name": "nginx",
		"port": 80
	}
}

I then use an ADD instruction to supply a runit configuration for Consul in the same manner I did for nginx. Its content is as follows:

#!/bin/sh
if [ -f "/etc/container_environment.sh" ]; then
  source /etc/container_environment.sh
fi

# Make sure to use all our CPUs, because Consul can block a scheduler thread
export GOMAXPROCS=`nproc`

# Get the public IP
BIND=`ifconfig eth0 | grep "inet addr" | awk '{ print substr($2,6) }'`

exec /usr/bin/consul agent \
  -config-dir="/etc/consul.d" \
  -data-dir="/tmp/consul" \
  -bind=$BIND \
  ${CONSUL_FLAGS} \
  >>/var/log/consul.log 2>&1

With all of the hard stuff out of the way, I now define the nginx and Consul ports to EXPOSE to other containers running on the host, and to the host itself.

And last but not least, I set the HOME environment variable to /root and set the init process of /sbin/my_init as the ENTRYPOINT of my container.

This creates a good foundation. If we were to run the image as is, we would end up with nginx running and listening on port 80, and Consul would be running as well. However, we haven’t provided Consul with any details of the cluster to join. As you have probably guessed, that’s what CONSUL_FLAGS is for, and we’ll see it in action in the next section.
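
If you would like to try that out before bringing Vagrant into the picture, the image can be built and run by hand. This is just a sketch, and it mirrors what Vagrant will do for us below:

docker build -t bfraser/consul-nginx --rm=true .
docker run -d -t -i --name nginx-test bfraser/consul-nginx   # nginx and Consul start, but Consul joins nothing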

Creating Consul Clients With Vagrant

So far we’ve gone to the trouble of creating a Docker image that will run Consul and nginx, and we’ve supplied configuration to Consul that will have it register nginx as a service. Now we’ll want to create some clients with Vagrant and see querying of services in action.

Let’s start by modifying our Vagrantfile. Just as was done with the Consul servers, we’ll want to create an array for the nginx members and tell Vagrant to use the Docker provider. This time, however, some additional configuration will be necessary. The full Vagrantfile is now going to look like this:

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  JOIN_IP = ENV['JOIN_IP']
  
  # A hash of containers to define.
  # These will be the Consul cluster members.
  consul_members = [ "consul1", "consul2", "consul3" ]
  consul_members.each do |member|
    config.vm.define member do |consul_config|

      # Use Docker provider
      consul_config.vm.provider "docker" do |docker|
        docker.name = member
        docker.image  = 'progrium/consul'
        docker.cmd = [ "-server", "-node=#{member}", "-join=#{JOIN_IP}" ]
      end
    end
  end

  # Create an nginx container running the consul agent
  nginx_members = [ "nginx1", "nginx2", "nginx3" ]
  nginx_members.each do | member|
    config.vm.define member do |nginx_config|
      nginx_config.vm.provider "docker" do |docker|
        docker.name = member
        docker.build_args = [ "-t", "bfraser/consul-nginx", "--rm=true" ]
        docker.build_dir = "."
        docker.cmd = [ "/bin/bash" ]
        docker.create_args = [ "--dns=#{JOIN_IP}", "-t", "-i" ]
        docker.env = { "CONSUL_FLAGS" => "-node=#{member} -join=#{JOIN_IP}" }
      end
    end
  end
end

Note that this time we are not using docker.image to supply the name of an existing Docker image to use for our containers. Instead, we are going to use docker.build_args and docker.build_dir to build our own.

docker.build_args = [ "-t", "bfraser/consul-nginx", "--rm=true" ]

This is a list of extra arguments to pass to the docker build command. Specifically, I am naming the image bfraser/consul-nginx and telling Docker to remove intermediate containers after a successful build.

docker.build_dir = "."

This should be fairly self-explanatory: I am simply telling Docker to use the current working directory as the build directory. However, I have some files (including the Vagrantfile) that I do not want to be part of the resulting image, so it is necessary to tell Docker to ignore them. This is accomplished with a file called .dockerignore and mine looks like this:

.git
.vagrant
Vagrantfile

Next, I am using docker.cmd to pass /bin/bash as an extra parameter to the image’s ENTRYPOINT, which allows me to have a shell in the container. A little later, I will show you how this can be useful.

The next line:

docker.create_args = [ "--dns=#{JOIN_IP}", "-t", "-i" ]

is a list of extra arguments to pass to the ‘docker run‘ command. Specifically, I am providing a custom DNS server and instructing Docker to allocate a TTY and keep STDIN open even if not attached to the container.

Lastly, I am supplying a hash to docker.env which will expose an environment variable named CONSUL_FLAGS to the container. The environment variable contains additional parameters to be used when starting Consul.
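
Putting those options together, the container Vagrant creates for, say, nginx1 corresponds roughly to the following docker run command (a sketch, not the exact invocation Vagrant issues):

docker run -d -t -i \
  --name nginx1 \
  --dns=172.17.42.1 \
  -e CONSUL_FLAGS="-node=nginx1 -join=172.17.42.1" \
  bfraser/consul-nginx /bin/bash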

With this configuration in place, we can now use Vagrant to create three additional containers, this time running Consul and nginx.

$ JOIN_IP=172.17.42.1 vagrant up --provider=docker

This time if we check the output of ‘consul members‘ we should see our host and six containers: three Consul servers and three nginx servers functioning as Consul clients.

$ consul members -rpc-addr=172.17.42.1:8400
Node     Address           Status  Type    Build  Protocol
nginx1   172.17.0.18:8301  alive   client  0.3.0  2
nginx2   172.17.0.19:8301  alive   client  0.3.0  2
laptop   172.17.42.1:8301  alive   server  0.3.0  2
consul2  172.17.0.9:8301   alive   server  0.3.0  2
consul3  172.17.0.10:8301  alive   server  0.3.0  2
consul1  172.17.0.8:8301   alive   server  0.3.0  2
nginx3   172.17.0.20:8301  alive   client  0.3.0  2

Querying Services

As I mentioned in ‘Where Does Consul Fit In?’ in my original post, Consul is a tool for enabling discovery of services, and it provides two interfaces for doing so: DNS and HTTP. In this section, I’ll show you how we can use each of these interfaces to query for details of services being provided.

First, let’s use the HTTP interface to query which services are being provided by members of the Consul cluster.

$ curl http://172.17.42.1:8500/v1/catalog/services | python -m json.tool
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    25  100    25    0     0  12722      0 --:--:-- --:--:-- --:--:-- 25000
{
    "consul": [],
    "nginx": []
}

This returns JSON-encoded data which shows that ‘consul’ and ‘nginx’ services are being provided. Great, now let’s query for details of the ‘nginx’ service.

$ curl http://172.17.42.1:8500/v1/catalog/service/nginx | python -m json.tool
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   362  100   362    0     0   210k      0 --:--:-- --:--:-- --:--:--  353k
[
    {
        "Address": "172.17.0.18",
        "Node": "nginx1",
        "ServiceID": "nginx",
        "ServiceName": "nginx",
        "ServicePort": 80,
        "ServiceTags": null
    },
    {
        "Address": "172.17.0.19",
        "Node": "nginx2",
        "ServiceID": "nginx",
        "ServiceName": "nginx",
        "ServicePort": 80,
        "ServiceTags": null
    },
    {
        "Address": "172.17.0.20",
        "Node": "nginx3",
        "ServiceID": "nginx",
        "ServiceName": "nginx",
        "ServicePort": 80,
        "ServiceTags": null
    }
]

We can see here that there are three nodes providing the nginx service, and we have details of the IP address and port they are listening on. Therefore, if we were to open http://172.17.0.18 in a web browser, we would see the ‘Welcome to nginx!’ page.

Notice how the REST endpoint changed between the first and second curl requests, from /v1/catalog/services to /v1/catalog/service/nginx. Consul provides extensive documentation of the various REST endpoints available via the HTTP API.
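
For example, the catalog can also be asked for the list of nodes themselves (same agent address as before):

$ curl http://172.17.42.1:8500/v1/catalog/nodes | python -m json.tool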

While the HTTP API is the most powerful method of interacting with Consul, if we are only interested in querying for information about nodes and services, it is also possible to use its DNS server for simple name lookups. Querying for details of the nginx service via the DNS interface is as simple as running the following:

$ dig @172.17.42.1 -p 8600 nginx.service.consul

; <<>> DiG 9.9.5-3-Ubuntu <<>> @172.17.42.1 -p 8600 nginx.service.consul
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 3084
;; flags: qr aa rd; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 0
;; WARNING: recursion requested but not available

;; QUESTION SECTION:
;nginx.service.consul.		IN	A

;; ANSWER SECTION:
nginx.service.consul.	0	IN	A	172.17.0.19
nginx.service.consul.	0	IN	A	172.17.0.18
nginx.service.consul.	0	IN	A	172.17.0.20

;; Query time: 1 msec
;; SERVER: 172.17.42.1#8600(172.17.42.1)
;; WHEN: Sat Aug 16 22:35:51 EDT 2014
;; MSG SIZE  rcvd: 146

As you can see, while it is certainly possible to develop a client to tightly integrate with Consul through its API, it is also easy to take advantage of its DNS interface and not have to write a client at all.
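
One more DNS trick worth knowing: if you also need the service port rather than just the address, an SRV lookup against the same interface returns it (output omitted here):

$ dig @172.17.42.1 -p 8600 nginx.service.consul SRV +short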

Attaching To A Docker Container

I have one last tip, which is especially useful if you are new to Docker: how to attach to your containers.

I mentioned earlier in this post that I was including the following line in my Vagrantfile:

docker.cmd = [ "/bin/bash" ]

What this does is pass /bin/bash as an extra parameter to the image’s ENTRYPOINT instruction, resulting in the /sbin/my_init process spawning a bash shell.

I also instructed Vagrant, via the docker.create_args line, to have Docker allocate a TTY and keep STDIN open even if not attached to the container. This means I can attach to my containers and interact with them through a bash shell as follows (note: press the ‘Enter’ key following the command to get the prompt):

$ docker attach nginx1

root@4b5a98093740:/# 

Once you are done working with the container, you can detach from it by pressing ^P^Q (that’s CTRL-P followed by CTRL-Q).

Summary

With that, we have reached the end of my demonstration. Thanks for sticking with me!

First I described the importance of loose coupling and service discovery in modern service-oriented architectures, and how Consul is one tool that can be used for achieving these design goals.

Then I detailed, by way of a demonstration, how Vagrant and Docker can be used to form a Consul cluster, and how to create a custom Docker image that will run both your application and a Consul agent.

And, last but not least, I showed how you can make use of Consul’s HTTP API and DNS interface to query for information about services provided.

Hopefully you have found these posts useful and now have some ideas about how you can leverage these technologies for managing your infrastructure. I encourage you to provide feedback, and would be very interested in any tips, tricks or recommendations you may have!

Categories: DBA Blogs

Log Buffer #387, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-09-05 08:04

The benefits of blogs transcend any single technology: they not only let bloggers record their valued experiences but also give readers first-hand, practical information. This Log Buffer Edition shares those benefits from Oracle, SQL Server and MySQL.

Oracle:

Cloud Application Foundation is the innovator’s complete and integrated modern cloud application infrastructure, built using best of breed components, such as Oracle WebLogic Server 12c, the industry’s best application server for building and deploying enterprise Java EE applications.

Migrating Existing PeopleSoft Attachments into the Managed Attachments Solution.

How to identify SQL performing poorly on an APEX application?

How can you keep your Oracle Applications and systems running at peak performance? What will it take to get more out of your Oracle Premier Support coverage?

Projects Create Accounting Performance Issues With 11.2.0.3 Database Version.

SQL Server:

If your log restores aren’t happening when they’re meant to, you want to know about it. You’ll be relying on restoring from logs should anything happen to your databases, and if you can’t restore to a certain point in time, you risk losing valuable data.

Agile data warehousing can be challenging. Pairing the right methodologies and tools can help.

Introduction to Azure PowerShell Modules for the SQL Server DBA.

The Clustered columnstore index generates “unable to find index entry” error and a memory dump after few DMLs on the table.

With any application organizations face consistent key challenges such as high efficiency and business value, complex configuration, and low total cost of ownership. Extending applications to the cloud in hybrid scenarios addresses many of these challenges.

MySQL:

Analyzing Twitter Data using Datasift, MongoDB and Pig

Cloud storage for MySQL Enterprise Backup (MEB) users

Tracing down a problem, finding sloppy code.

Cloud storage for MySQL Enterprise Backup (MEB) users.

MySQL Enterprise Backup (MEB) is a highly efficient tool for taking backups of your MySQL databases.

Categories: DBA Blogs

Implementation Training Initiative: ORACLE Sales Cloud and ORACLE HCM Cloud

Oracle Human Capital Management Cloud: Modern HR differentiates the business with a talent centric and consumer based strategy that leverages technology to provide a...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Partner Webcast – Oracle WebLogic 12c Enabling Development of Modern Applications: WebSocket and Maven Support

Cloud Application Foundation is the innovator’s complete and integrated modern cloud application infrastructure, built using best of breed components, such as Oracle WebLogic Server 12c, the...

We share our skills to maximize your revenue!
Categories: DBA Blogs