Feed aggregator

Optimize ADF HTTP Response Size with ChangeEventPolicy

Andrejus Baranovski - Tue, 2016-04-26 03:18
You should read this post if you are looking for ways to reduce ADF HTTP response size. This can be important for ADF application performance tuning, to improve PPR request response time. By default in ADF 12.2.1, an iterator is assigned ChangeEventPolicy = ppr. This works great for UI component bindings refresh, with no need to set individual partial triggers. On the other hand, it generates extra content in the ADF HTTP response and eventually increases the response size. I would recommend using ChangeEventPolicy = ppr only when it is really needed - for dynamic forms with unknown refresh dependencies. Otherwise, set ChangeEventPolicy = none to generate a smaller response.

I will demonstrate below the difference in the ADF HTTP response with ChangeEventPolicy = ppr/none. First, let's take a look at the page load response size:


The page contains a list component and a form. They are based on two different iterators, both set with ChangeEventPolicy = ppr. This generates AdfPage.PAGE.updateAutoPPRComponents calls for each UI item, referencing attributes from the iterator. In complex screens, this adds a significant amount of extra text to the response and could increase its size even by half:


The partial response also contains the same calls. Select a list item:


Each item from the form will be referenced by a JavaScript call, to register it for refresh:


Let's change it to ChangeEventPolicy = none. Set it for both iterators:


We should now set the refresh dependencies manually. The form block must be set with a partial trigger referencing the list component, so it refreshes when the list selection changes:
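As a rough sketch, the two changes look like this in the pageDef and the page fragment (iterator, data control, and component ids are hypothetical; your names will differ):

```xml
<!-- pageDef: iterator binding with ChangeEventPolicy switched to none -->
<iterator Binds="EmployeesView1" RangeSize="25"
          DataControl="AppModuleDataControl"
          id="EmployeesView1Iterator"
          ChangeEventPolicy="none"/>

<!-- page fragment: form layout refreshes when the list component (id="lv1") changes -->
<af:panelFormLayout id="pfl1" partialTriggers="::lv1">
  <!-- form items based on the iterator bindings -->
</af:panelFormLayout>
```

With ChangeEventPolicy = none, the partialTriggers attribute carries the refresh dependency that the framework previously handled automatically.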


A dependency must also be set for the Next/Previous buttons, so they change the form row:


With manual refresh dependencies in place, no extra calls are added to the ADF HTTP response, reducing the overall response size:


The same applies to the PPR response:


Download sample application - ChangeEventPolicyPPRApp.zip.

Please be patient!

Tim Hall - Tue, 2016-04-26 02:46

It’s extremely nice to have a big audience. It’s very flattering that people care enough about what I say to be bothered to read it. The problem with having a large audience is people can get very demanding at times.

making backup better

Pat Shuff - Tue, 2016-04-26 02:07
Yesterday we looked at backing up our production databases to cloud storage. One of the main motivations behind doing this was cost: we were able to reduce the cost of storage from $3K/TB capex plus $300/TB/year opex to $400/TB/year opex. This is a great solution, but some customers complain that it is not generic enough and that latency to cloud storage is not that great. Today we are going to address both of these issues with the cloud storage appliance. First, let's look at the two typical customer complaints.

The database backup cloud service is just that: it backs up a database. It does it really well and it does it efficiently. You replace one of the backup library modules so that writes of backup data go to the cloud REST api rather than a tape driver. The software works well with commercial products like Symantec or Legato and integrates well into those solutions. Unfortunately, the critics are right. The database backup cloud service does that and only that: it backs up Oracle databases. It does not back up MySQL, SQL Server, DB2, or other databases. It is a single use tool - a very useful single use tool, but a single use tool. We need to figure out how to make it more generic and back up more than just databases. It would be nice if it could back up home directories, email servers, virtual machines, and other stuff that is used in the data center.

The second complaint is latency. If we are writing to an SSD or spinning disk attached to a server via high speed SCSI, iSCSI, or SAS, we should expect 10ms access time or less. If we are writing to a server halfway across the country we might experience 80ms latency. This means that a typical read or write takes eight times longer when we read and write cloud storage. For some applications this is not an issue. For others this latency makes the system unusable. We need to figure out how to read and write at 10ms latency but leverage the expandability and lower cost of cloud storage.

Enter stage left the Oracle Cloud Storage Appliance. The appliance is a software component that listens on the data center network using the NFS protocol and talks to the cloud services using the storage REST api. Local disks are used as a cache front end to store data that is written to and read from the network shares exposed by the appliance. These directories map directly to containers in the Oracle Storage Cloud Service and can be clear text or encrypted when stored. Data written from network servers is accepted and released quickly as it is written to local disk, then slowly trickled to cloud storage. As the cache fills up, data is aged and migrated from the cache storage into cloud storage. The metadata representing the directory structure and storage location is updated to show that the data is no longer stored locally but in the cloud. If a read occurs from the file system, the metadata helps the appliance locate where the data is stored; it is presented to the network client from the cache, or pulled from cloud storage and temporarily stored in the local cache as long as there is space. A block diagram of this architecture is shown below.

The concept of how to use this device is simple. We create a container in our cloud storage and we attach to it with the cloud storage appliance. This attachment is exposed via an nfs mount to clients on our corporate network and anyone on the client can read or write files in the cloud storage. Operations happen at local disk speed using the network security of the local network and group/owner rights in the directory structure. It looks, smells, and feels just like nfs storage that we would spend thousands of dollars per TB to own and operate.
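As a sketch of how a client would consume the share (the appliance hostname and export name here are hypothetical):

```shell
# On a client in the data center: mount the NFS share exposed by the appliance
sudo mkdir -p /mnt/cloud-backups
sudo mount -t nfs oscsa-host:/backups /mnt/cloud-backups

# Writes land in the appliance's local disk cache first,
# then trickle out to the Oracle Storage Cloud container
cp /backup/db/full_backup.bkp /mnt/cloud-backups/
```

From the client's point of view this is ordinary NFS; the cloud tiering behind it is invisible.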

For the rest of this blog we are going to go through the installation steps and how to configure the appliance. The minimum requirements for the appliance are:

  • Linux 7 (3.10 kernel or later)
  • Docker 1.6.1 or later
  • two dual core x86 CPUs
  • 4 GB of RAM
We will be installing our configuration on a Windows desktop running VirtualBox. We will not go through the installation of Oracle Enterprise Linux 7 because we covered this a long time ago. We do need to configure the OS to have 4 GB of RAM and at least two virtual cores, as shown in the images below.

We also need to configure a network. We configure two networks. One is for the local desktop console and the other is for the public internet. We could configure a third interface to represent our storage network but for simplicity we only configure two.

We can boot our Linux 7 system and will need to select the 3.10 kernel. By default it will want to boot the 3.8 kernel, which will cause problems with how the appliance works.

What we would like to do is remove the 3.8 kernel from our installation. This is done by removing the packages with the rpm -e command. We then update the grub.cfg file to list only the 3.10 kernels.

Once we have removed the kernels, we update the grub loader and enable additional options for the yum update.

The next step that we need to take is to install docker. This is done with the yum install command.

Once we have the docker package installed, we need to make sure that we have the nfs-client and nfs-server installed and started.
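The preparation steps above might be scripted roughly like this on Oracle Linux 7 (the kernel package name below is hypothetical; list your own installed kernels first and remove the 3.8 ones you find):

```shell
# List installed kernels, then remove the unwanted 3.8 kernel packages
rpm -qa | grep '^kernel'
sudo rpm -e kernel-uek-3.8.13-118.el7uek.x86_64   # hypothetical package name

# Rebuild the grub configuration so only the 3.10 kernels are listed
sudo grub2-mkconfig -o /boot/grub2/grub.cfg

# Install docker and the NFS client/server packages
sudo yum install -y docker nfs-utils

# Start the services now and enable them at boot
sudo systemctl enable --now docker nfs-server
```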

It is important to note that the tar bundle is not generally available. It does require product manager approval to get a copy of the software for installation. The file that I got was labeled oscsa-1.0.5.tar.gz. I had to unzip and untar this file after loading it on my Linux VirtualBox instance. I did not do a screen capture of the download, but did go through the installation process.

We start the service with the oscsa command. When we start it it brings up a management web page so that we can make the connection to the cloud storage service. To see this page we need to start firefox and connect to the page.
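A rough sketch of unpacking and starting the appliance follows; the installer name and oscsa subcommands here are assumptions and may differ by version:

```shell
# Unpack the bundle (filename from the download above)
tar xzvf oscsa-1.0.5.tar.gz
cd oscsa-1.0.5

# Install and start the appliance; "install.sh" and "up" are hypothetical
sudo ./install.sh
oscsa up    # starts the appliance and prints the management console URL
```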

One of the things that we need to know is the end point of our storage. We can find this by looking at the management console for our cloud services. If we click on the storage cloud service details link we can find it.

Once we have the end point we will need to enter this into the management console of the appliance as well as the cloud credentials.

We can add encryption and a container name for our network share and start reading and writing.

We can verify that everything is working from our desktop by mounting the nfs share or by using cloudberry to examine our cloud storage containers. In this example we use cloudberry just like we did when we looked at the generic Oracle Storage Cloud Services.

We can examine the properties of the container and network share from the management console. We can look at activity and resources available for the caching.

In summary, we looked at a solution to two problems posed by our database backup solution. The first was being single purpose and the second was latency. By providing a network share to the data center we can back up not only our Oracle database but all of our databases, by having the backup software write to the network share. We can also back up other files like virtual machines, email boxes, and home directories. Disk latency operates at the speed of the local disk rather than the speed of the cloud storage. This software does not cost anything additional and can be installed on any virtual platform that supports Linux 7 with kernel 3.10 or greater. When we compare this to the Amazon Storage Gateway, which requires 2x the processing power and $125/month to operate, it looks significantly better. We did not compare it to the Azure solution because that is an iSCSI hardware solution and not easy to get a copy of for testing.

Improve Parsing and Query Performance – Fix Oracle’s Fixed Object Statistics

Pythian Group - Mon, 2016-04-25 20:50

What do I mean by ‘fix’ the fixed object statistics?  Simply gather statistics to help the optimizer.

What are ‘fixed objects’?  Fixed objects are the x$ tables and associated indexes that data dictionary views are based on.  In this case we are interested in the objects that make up the v$sqlstats and v$sql_bind_capture views.

If you’ve never before collected statistics on Oracle fixed objects, you may be wondering why you should bother, as everything appears to be fine in your databases.

After seeing an example you may want to schedule a time to collect these statistics.

Searching for SQL

Quite recently I was looking for recently executed SQL, based on the most recently captured bind variables.

select  sql_id, sql_fulltext
from v$sqlstats
where sql_id in (
   select  distinct sql_id
   from (
      select sql_id, last_captured
      from (
         select sql_id, last_captured
         from V$SQL_BIND_CAPTURE
         order by last_captured desc nulls last
      )
      where rownum <= 20
   )
)

I ran the query and was distracted for a few moments.  When I next looked at the terminal session where this SQL was executing, no rows had yet been returned.

Thinking that maybe ‘SET PAUSE ON’ had been run, I pressed ENTER.  Nothing.

From another session I checked for waits in v$session_wait.  Nothing there either.  If the session is not returning rows, and not registering an event in v$session_wait, then it must be on CPU.

This didn’t seem an ideal situation, and so I stopped the query with CTRL-C.

The next step was to run the query on a smaller and not very busy 11.2.0.2 database.  This time I saw that rows were being returned, but very slowly.

So now it was time to trace the execution and find out what was going on.

alter session set tracefile_identifier='JKSTILL';

set linesize 200 trimspool on

alter session set events '10046 trace name context forever, level 12';

select  sql_id, sql_fulltext
from v$sqlstats
where sql_id in (
   select  distinct sql_id
   from (
      select sql_id, last_captured
      from (
         select sql_id, last_captured
         from V$SQL_BIND_CAPTURE
         order by last_captured desc nulls last
      )
      where rownum <= 20
   )
)
/

alter session set events '10046 trace name context off';

exit

Coming back to this several minutes later, the resulting trace file was processed with the Method R Profiler to find out just where the time was going.

 

[Method R profile output]

 

The ‘SQL*Net message from client’ event can be ignored; most of that time was accumulated waiting for me to come back and exit sqlplus.  While the script example shows that the 10046 trace was turned off and the session exited, I had forgotten to include those two lines for this first run.

No matter, as the interesting bit is the next line, ‘CPU: FETCH dbcalls’.  More than 6 minutes was spent fetching a few rows, so clearly something was not quite right. The SQL plan in the profile showed what the problem was: the execution plan was far from optimal. The following is the execution plan from AWR data:

 

  1  select *
  2  from TABLE(
  3     dbms_xplan.display_awr(sql_id => :sqlidvar, plan_hash_value => 898242479, format => 'ALL ALLSTATS LAST')
  4*    )
sys@oravm1 SQL- /

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------------------------
SQL_ID 4h7qfxa9t1ukz
--------------------
select  sql_id, sql_fulltext from v$sqlstats where sql_id in (  select
distinct sql_id         from (          select sql_id, last_captured            from (
   select sql_id, last_captured from V$SQL_BIND_CAPTURE order by
last_captured desc nulls last           )               where rownum <= 20      ) )

Plan hash value: 898242479

-------------------------------------------------------------------------------------------
| Id  | Operation                 | Name         | E-Rows |E-Bytes| Cost (%CPU)| E-Time   |
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |              |        |       |     1 (100)|          |
|   1 |  FILTER                   |              |        |       |            |          |
|   2 |   FIXED TABLE FULL        | X$KKSSQLSTAT |      1 |  2023 |     0   (0)|          |
|   3 |   VIEW                    |              |      1 |     8 |     1 (100)| 00:00:01 |
|   4 |    COUNT STOPKEY          |              |        |       |            |          |
|   5 |     VIEW                  |              |      1 |     8 |     1 (100)| 00:00:01 |
|   6 |      SORT ORDER BY STOPKEY|              |      1 |    43 |     1 (100)| 00:00:01 |
|   7 |       FIXED TABLE FULL    | X$KQLFBC     |      1 |    43 |     0   (0)|          |
-------------------------------------------------------------------------------------------

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

   1 - SEL$88122447
   2 - SEL$88122447 / X$KKSSQLSTAT@SEL$4
   3 - SEL$6        / from$_subquery$_002@SEL$5
   4 - SEL$6
   5 - SEL$FEF91251 / from$_subquery$_003@SEL$6
   6 - SEL$FEF91251
   7 - SEL$FEF91251 / X$KQLFBC@SEL$10

Note
-----
   - Warning: basic plan statistics not available. These are only collected when:
       * hint 'gather_plan_statistics' is used for the statement or
       * parameter 'statistics_level' is set to 'ALL', at session or system level


39 rows selected.

 

While useful, this plan does not give much information about why this took so long.  If pressed, I would whip up a Bash and Awk one-liner to parse the trace files and find out where the time was going.  In this case, though, I could just consult the Method R profile again.

 

[Method R profile output]

 

Yikes!  There were 106.3E6 rows returned from X$KQLFBC.

Collecting the Fixed Object Statistics

Rather than spend time analyzing this further, it seemed that here was a clear case for collecting statistics on fixed objects in the database.  The following SQL was run:

 

exec dbms_stats.gather_fixed_objects_stats

 

The next step was to rerun the query.  This time it ran so quickly I wondered if it had even worked.  As before, tracing had been enabled, and a profile generated from the trace. There was now quite an improvement seen in the execution plan:

 

[Method R profile: improved execution plan]

 

The 0.0047 seconds required to return 442 rows from X$KQLFBC was quite a reduction from the previously seen time of nearly 396 seconds.

Why This Is Important

This issue came to light due to a custom query I was running. The optimizer will probably never run that same query, but it was clear that the fixed object statistics needed to be updated.

Now imagine your customers using your application; they may be waiting on the database for what seems like an eternity after pressing ENTER on a web form.  And what are they waiting on?  They may be waiting on the optimizer to evaluate a SQL statement and determine the best plan to use.  The reason for the waiting in this case would simply be that the DBA has not taken steps to ensure the optimizer has the correct information to effectively query the database’s own metadata.   Until the optimizer has the correct statistics, performance of query optimization will be sub-optimal.  In a busy system this may result in mutex waits suddenly showing as a top event in AWR reports.  Troubleshooting these waits can be difficult as there are many possible causes of them.

Do your customers, your database and yourself a favor – include updates of fixed tables statistics in your regular database maintenance schedule, and avoid a possible source of performance problems.
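As a concrete starting point, you can check whether fixed object statistics have ever been gathered before adding the job to your maintenance schedule (requires DBA privileges; a NULL LAST_ANALYZED means never gathered):

```sql
-- When were fixed object statistics last gathered? (NULL = never)
select table_name, last_analyzed
from dba_tab_statistics
where object_type = 'FIXED TABLE'
order by last_analyzed nulls first;

-- Gather them during a period of representative activity,
-- so the x$ tables are populated as they would be under normal load
exec dbms_stats.gather_fixed_objects_stats;
```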

Categories: DBA Blogs

Searching in PeopleSoft

PeopleSoft Technology Blog - Mon, 2016-04-25 18:50

Many customers have asked us about PeopleSoft’s search strategy.  They may have seen that Oracle’s Secure Enterprise Search (SES) isn’t on the price list any longer, so they wonder about PeopleSoft’s continued use of that product. 

First and foremost, Search is an important part of PeopleSoft’s overall user experience, and we are continuing to invest in a consumer grade search experience for the enterprise.  When we re-architected our approach to Search several releases ago, we built a PeopleSoft search framework that provides a great Search UI that enables users not only to search enterprise information, but to refine their search results with facets and filters.  We also made it possible for users to act directly on search results through our Related Actions framework.  In many cases, users do not even have to navigate to a transaction page to complete a task.  This provides a rich Search UI that users have come to expect.  Further improvements using PeopleSoft’s Fluid UI include Pivot Grids in search pages that allow users better visualization of search results.  Oracle’s Secure Enterprise Search is the engine for that search in PeopleSoft. 

In an effort to provide choice, PeopleSoft is working on providing an alternative search engine that customers can use in their PeopleSoft ecosystem.  We are planning to offer Elastic Search as an option with PeopleTools 8.56, and our applications delivered on 8.56 are planned to contain Elastic indexes.  Note that Oracle will continue to support SES with PeopleSoft for some time yet, and customers can continue to use SES if they have deployed it.  We think that Elastic will be an attractive option for many customers, however.  It is important to note that the PeopleSoft search framework works with either search engine, and the search UI will be essentially the same regardless of your choice.  Here are a few key points about the Elastic Search option for PeopleSoft:

  • Planning on Elastic availability in 8.56, back porting to 8.55
  • Leverage PeopleTools Search Framework
    - SES or Elastic is a deployment Choice
    - Deploy Elastic on the separate search host instead of SES
  • Take PeopleSoft Images for application fixes and Elastic indexes
  • Will require a one-time full index build with Elastic using the new process
  • We plan to provide a migration guide to help deploy Custom indexes on Elastic
  • PeopleTools DPK for Elastic infrastructure
    - Supported on Linux and Windows
  • We plan to provide a deployment guide to help with performance tuning, load balance, and failover

Our initial testing indicates that Elastic will require fewer resources and will perform better—and will be easier to install than SES.

There is also some discussion of our plans for Search in this PeopleSoft talk video.  The Search discussion is at the 5:50 point.

Oracle OpenWorld 2016 - Call for Papers Deadline is May 9!

WebCenter Team - Mon, 2016-04-25 15:07
If you’re an Oracle technology expert, conference attendees want to hear it straight from you. So don’t wait—proposals must be submitted by May 9. You have two weeks left to submit!

Oracle OpenWorld 2016

Wanted: Outstanding Oracle Experts

The Oracle OpenWorld 2016 Call for Proposals is now open. Attendees at the conference are eager to hear from experts on Oracle business and technology. They’re looking for insights and improvements they can put to use in their own jobs: exciting innovations, strategies to modernize their business, different or easier ways to implement, unique use cases, lessons learned, the best of best practices.

If you’ve got something special to share with other Oracle users and technologists, they want to hear from you, and so do we. Submit your proposal now for this opportunity to present at Oracle OpenWorld, the most important Oracle technology and business conference of the year.

We recommend you take the time to review the General Information, Submission Information, Content Program Policies, and Tips and Guidelines pages before you begin. We look forward to your submissions.

Submit Your Proposal

By submitting a session for consideration, you authorize Oracle to promote, publish, display, and disseminate the content submitted to Oracle, including your name and likeness, for use associated with the Oracle OpenWorld and JavaOne San Francisco 2016 conferences. Press, analysts, bloggers and social media users may be in attendance at OpenWorld or JavaOne sessions.

Submit Now.

Driving Content Experience at Modern Marketing Experience Conference 2016

WebCenter Team - Mon, 2016-04-25 13:49


Heading to the Modern CX Conference this week? Looking to participate in the Modern Marketing Experience 2016 conference? Or just happen to be in Las Vegas this week? Then don't miss connecting with us!

Where the best marketing minds and thought leaders gather, Modern Marketing Experience 2016 is expecting thousands of attendees this year as they immerse themselves in demos, solution showcases, industry keynotes, product strategy sessions, customer presentations, and more. The conference is taking place at MGM Grand, Las Vegas from April 26-28, 2016.

You, as marketers, know the importance of content. And yet the reality is that while about a quarter of our budgets are spent building content, 70% of it goes unused. How can we maximize content usage, consumption, distribution and optimal presentation? How do we get the most mileage from our content?

Join Oracle executive Mariam Tariq and industry thought leader Tina Miletich, CEO and founder of HEEDGroup, a creative strategy consulting practice that designs and implements successful customer engagement initiatives, as they discuss the importance of Marketing Asset Management and the role technology plays in driving brand consistency and content optimization. Mariam has extensive experience in product management for digital experience solutions and sites experience, working on Oracle WebCenter solutions. Tina has had an extensive digitally focused career; her teams have won many awards, including an Effie, several IAC awards, and a Silver IDSA, and have been short-listed for a Cannes Cyber Lion. Tina also teaches Digital Strategy at NYU SPS in the Masters of Marketing Program. So, don't miss their session:

Create, Collaborate, and Distribute Marketing Assets to Ensure Brand Consistency
Thursday, April 28, 2016
8:00 a.m. - 8:45 a.m.
MGM Grand Conference Center, Room 318

And while at the conference, do check out our solutions live in action by visiting us in the Exhibit Hall at the Digital Experience kiosk. We are there throughout the exhibit hours and during the dedicated exhibition schedule.

So, see you there?


Part 2: Oracle E-Business Suite on Cloud FAQ

Steven Chan - Mon, 2016-04-25 13:03

Contributing Author: Terri Noyes

Running Oracle E-Business Suite on Oracle Cloud is simple, but it doesn't take much effort to come up with several great questions about this new set of options.  We have published a document that answers those questions:

This document is being updated regularly, so it's worth checking it periodically.  The latest version covers the following common questions:

What is Oracle E-Business Suite on Oracle Cloud?

What capabilities are currently provided?

What are the key benefits to using Oracle E-Business Suite on Oracle Cloud?

Is Oracle releasing Oracle E-Business Suite as a SaaS application?

Does Oracle provide managed services for Oracle E-Business Suite on Oracle Cloud?

Are there additional fees for using Oracle E-Business Suite with Oracle Cloud?

Do I need to purchase Oracle E-Business Suite product licenses in order to use E-Business Suite on Oracle Cloud?

How do I deploy Oracle E-Business Suite on Oracle Cloud?

How do I restrict access to my Oracle E-Business Suite instances on Oracle Cloud?

Can I customize and extend Oracle E-Business Suite on Oracle Cloud?

How do I migrate customizations of Oracle E-Business Suite on the cloud to my on-premises production system?

Can we use our own custom data with Oracle E-Business Suite on Oracle Cloud?

Can we use the same certified third-party products we currently use on-premises with Oracle E-Business Suite on Oracle Cloud?

Can I use the Database Cloud Service with Oracle E-Business Suite?

Can I use the Exadata Cloud Service with Oracle E-Business Suite?

What tools are used for managing Oracle E-Business Suite environments on Oracle Cloud?

Does Oracle support deploying production instances of Oracle E-Business Suite on the Oracle Compute Cloud Service?

In addition to the FAQ, I'll cover some of these questions in more detail in future articles on this blog.  Stay tuned!

Related Articles

Categories: APPS Blogs

When the default value is not the same as the default

Pythian Group - Mon, 2016-04-25 11:39

I was working on a minor problem recently where controlfile autobackups were written to the wrong location during rman backups. Taking controlfile autobackups is generally a good idea, even if you configure controlfile backups yourself. Autobackups also include an spfile backup, which, though not critical for restore, is still convenient to have. And autobackups are taken not only after backups but, more importantly, every time you change the physical structure of your database, like adding or removing datafiles and tablespaces, which would make a restore with an older controlfile a lot harder.

What happened in this case was that the CONTROLFILE AUTOBACKUP FORMAT parameter was changed from the default ‘%F’ to the value ‘%F’. Yes, the values are the same. But setting a value rather than leaving it at the default changed the behaviour of those autobackups. Where by default ‘%F’ means writing to the flash recovery area, explicitly setting the format parameter to ‘%F’ will save the autobackup to the folder $ORACLE_HOME/dbs/.

See for yourself. This shows an autobackup while the parameter is set to the default; as expected, the autobackup is written to the flash recovery area. So that is the correct location, but the filename is a bit off - it should be c-DBID-YYYYMMDD-SERIAL.

RMAN> SHOW CONTROLFILE AUTOBACKUP FORMAT;

RMAN configuration parameters for database with db_unique_name CDB1 are:
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default

RMAN> backup spfile;

Starting backup at 18-APR-16
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current SPFILE in backup set
channel ORA_DISK_1: starting piece 1 at 18-APR-16
channel ORA_DISK_1: finished piece 1 at 18-APR-16
piece handle=/u01/app/oracle/fast_recovery_area/CDB1/backupset/2016_04_18/o1_mf_nnsnf_TAG20160418T172428_ckb62f38_.bkp tag=TAG20160418T172428 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 18-APR-16

Starting Control File and SPFILE Autobackup at 18-APR-16
piece handle=/u01/app/oracle/fast_recovery_area/CDB1/autobackup/2016_04_18/o1_mf_s_909509070_ckb62gko_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 18-APR-16

Now we set the format string to ‘%F’ and observe that the autobackup is not written to the FRA but to $ORACLE_HOME/dbs. At least it has the filename we were expecting.

RMAN> CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F';

new RMAN configuration parameters:
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F';
new RMAN configuration parameters are successfully stored

RMAN> backup spfile;

Starting backup at 18-APR-16
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current SPFILE in backup set
channel ORA_DISK_1: starting piece 1 at 18-APR-16
channel ORA_DISK_1: finished piece 1 at 18-APR-16
piece handle=/u01/app/oracle/fast_recovery_area/CDB1/backupset/2016_04_18/o1_mf_nnsnf_TAG20160418T172447_ckb62z7f_.bkp tag=TAG20160418T172447 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 18-APR-16

Starting Control File and SPFILE Autobackup at 18-APR-16
piece handle=/u01/app/oracle/product/12.1.0.2/db_1/dbs/c-863887021-20160418-04 comment=NONE
Finished Control File and SPFILE Autobackup at 18-APR-16

RMAN> SHOW CONTROLFILE AUTOBACKUP FORMAT;

RMAN configuration parameters for database with db_unique_name CDB1 are:
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F';

This is like Schrödinger’s parameter: you can either get the correct location or the correct name, but not both. To be fair, not assigning the right name to the autobackup in the FRA does not matter much, because the files will be found during a restore anyway.

At this point it is good to remember how to use CLEAR to reset a parameter to its default instead of just setting the default value.

RMAN> CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK CLEAR;

old RMAN configuration parameters:
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F';
RMAN configuration parameters are successfully reset to default value

RMAN> SHOW CONTROLFILE AUTOBACKUP FORMAT;

RMAN configuration parameters for database with db_unique_name CDB1 are:
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE SBT_TAPE TO '%F'; # default
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default

I have tested this in versions 10g, 11g and 12.1.0.2 with the same result. The behaviour is also not unknown. In fact, bug 4248670 was logged against this in 2005 but has not been resolved so far. My Oracle Support does mention the above workaround of clearing the parameter in note 1305517.1 though.

Categories: DBA Blogs

MySQL Query Best Practices

Pythian Group - Mon, 2016-04-25 11:30

You can get many results from a Google search for “MySQL Query Best Practices” or “MySQL Query Optimization.” The drawback is that too many rules can provide confusing or even conflicting advice. After doing some research and tests, I outlined the essential and important ones below:

1) Use proper data types

1.1) Use the smallest data types if possible

MySQL tries to load as much data as possible into memory (innodb-buffer-pool, key-buffer), so smaller data types mean more rows of data fit in memory, which improves performance. Smaller data sizes also reduce disk I/O.

1.2) Use Fixed-length Data Types if Possible

MySQL can quickly calculate the position of a fixed-length column in a specific row of a table.

With variable-length data types, the row size is not fixed, so every time MySQL needs to do a seek it might have to consult the primary key index. However, variable-length data types can save data size and the disk space required.

In practice, if the column data size varies a lot, use a variable-length data type (e.g., VARCHAR); if the data length is short or barely changes, use a fixed-length data type.
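As a small sketch of this rule (the table and column names are made up for illustration): a two-letter country code never varies in length, so CHAR fits, while an email address varies a lot, so VARCHAR fits.

```sql
-- Illustrative only: fixed-length vs variable-length choices.
CREATE TABLE customer (
  id           INT UNSIGNED NOT NULL PRIMARY KEY,
  country_code CHAR(2)      NOT NULL,  -- always 2 characters: fixed length
  email        VARCHAR(255) NOT NULL   -- length varies widely: variable length
);
```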

1.3) Use NOT NULL unless there is a reason not to

It is harder for MySQL to optimize queries that refer to nullable columns, because they make indexes, index statistics, and value comparisons more complicated. A nullable column uses more storage space and requires special processing inside MySQL.

When a nullable column is indexed, it requires an extra byte per entry and can even cause a fixed-size index (e.g., an index on a single integer column) to be converted to a variable-sized one in MyISAM.

2) Use indexes smartly

2.1) Make primary keys short and on meaningful fields

A shorter primary key benefits your queries: the smaller the primary key, the smaller the index, and the fewer pages in the cache. In addition, a numeric type is preferred, because numeric types are stored in a much more compact format than character types, which also keeps the primary key short.

Another reason to keep the primary key short is that we usually use the primary key to join with other tables.

It is a good idea to put the primary key on a meaningful field, because MySQL clusters the table on the primary key index. We often need only the information in the primary key, especially in joins with other tables, so the lookup can be satisfied from the index without reading the data file on disk, which benefits performance. When you use a meaningful field as the primary key, make sure the values stay unique and do not change; otherwise, changing the primary key will affect all the tables that reference it as a foreign key.

2.2) Index on the search fields only when needed

Usually we add indexes on the fields that frequently show up in a where clause — that is the purpose of indexing. But while an index will benefit reads, it can make writes slower (inserting/updating), so index only when you need it and index smartly.

2.3) Index and use the same data types for join fields

MySQL can do joins on different data types, but the performance is poor as it has to convert from one type to the other for each row. Use the same data type for join fields when possible.

2.4) Use a composite index if your query has more than one field in the where clause

When the query needs to search on multiple columns of a table, it might be a good idea to create a composite index on those columns. This is because, with a composite index on multiple columns, the search can narrow down the result set by the first column, then the second, and so on.

Please note that the order of the columns in the composite index affects performance, so put the columns in order of how efficiently they narrow down the search.
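A hedged sketch of the leftmost-prefix rule (the orders table and its columns are hypothetical):

```sql
-- Queries filter on customer_id first, then status, so customer_id leads.
CREATE INDEX idx_cust_status ON orders (customer_id, status);

-- Can use the composite index (leftmost column is present):
SELECT * FROM orders WHERE customer_id = 42 AND status = 'SHIPPED';

-- Cannot use it efficiently (leftmost column is missing):
SELECT * FROM orders WHERE status = 'SHIPPED';
```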

2.5) Covering index for most commonly used fields in results

In some cases, we can put all the required fields into an index (i.e., a covering index), with only some of the fields used for searching and the others carried along for data. This way, MySQL only needs to access the index and does not have to read the table at all.
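For example (hypothetical table and columns), if a query only ever reads customer_id, order_date and total, an index on those three columns can answer it by itself; EXPLAIN should then show “Using index”:

```sql
-- All referenced columns live in the index, so table rows are never read.
CREATE INDEX idx_orders_covering ON orders (customer_id, order_date, total);

SELECT order_date, total FROM orders WHERE customer_id = 42;
```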

2.6) For long strings or TEXT/BLOB data types, use a partial index on a prefix

There is a key-length limit for indexes (by default, 1,000 bytes for MyISAM and 767 bytes for InnoDB). If a prefix of the string already covers most of the unique values, it is good to index just that prefix.
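A sketch of the approach (hypothetical pages table): check the selectivity of the prefix first, then index only that prefix.

```sql
-- How well do the first 10 characters distinguish rows? Closer to 1 is better.
SELECT COUNT(DISTINCT LEFT(url, 10)) / COUNT(*) AS prefix_selectivity
FROM pages;

-- Index only the 10-character prefix of the long string column.
CREATE INDEX idx_pages_url ON pages (url(10));
```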

2.7) Avoid over-indexing

Don’t index low-cardinality columns: MySQL will choose a full table scan instead of using an index if it estimates it would have to scan more than about 30% of the index.

If a field is already the leftmost field of a composite index, you may not need an extra index on that single field. If it appears in a composite index but not as the leftmost field, a separate single-column index is needed only when queries filter on that field alone.

Bear in mind that indexing will benefit in reading data but there can be a cost for writing (inserting/updating), so index only when you need it and index smartly.

3) Others
3.1) Avoid SELECT *
There are many reasons to avoid SELECT * FROM … queries. First, reading all the fields wastes time if you don’t need all the columns. Second, even if you do need all the columns, it is better to list the field names explicitly to make the query more readable. Finally, if you alter the table by adding or removing fields and your application uses SELECT * queries, you can get unexpected results.

3.2) Prepared Statements
Prepared statements keep the values you bind separate from the SQL text, which is great for protecting your application against SQL injection attacks.

When the same query is being used multiple times in your application, you can assign different values to the same prepared statement, yet MySQL will only have to parse it once.
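Using MySQL’s server-side prepared-statement syntax as an illustration (connector APIs such as JDBC or PDO expose the same idea; table and values are hypothetical):

```sql
PREPARE get_orders FROM 'SELECT * FROM orders WHERE customer_id = ?';
SET @cust = 42;
EXECUTE get_orders USING @cust;   -- parsed once, executed with the bound value
SET @cust = 43;
EXECUTE get_orders USING @cust;   -- re-executed without re-parsing
DEALLOCATE PREPARE get_orders;
```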

3.3) If you want to check the existence of data, use EXISTS instead of SELECT COUNT

To check whether data exists in a table, SELECT EXISTS (SELECT * …) performs better than SELECT COUNT(*), since the first method can return a result as soon as it finds one matching row, while the second has to count every matching row in the table or index.
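A quick sketch of the two forms (hypothetical orders table):

```sql
-- Returns 1 or 0 as soon as the first matching row is found:
SELECT EXISTS (SELECT 1 FROM orders WHERE customer_id = 42) AS has_orders;

-- Has to visit every matching row just to answer a yes/no question:
SELECT COUNT(*) FROM orders WHERE customer_id = 42;
```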

3.4) Use SELECT … LIMIT [number]

SELECT … LIMIT [number] returns only the required number of rows. Including the LIMIT keyword in your SQL queries can bring real performance improvements.

3.5) Be careful with persistent connections

Persistent connections can reduce the overhead of re-creating connections to MySQL. When a persistent connection is created, it stays open even after the script finishes running. The drawback is that the server can run out of available connections if too many remain open in sleep status.

3.6) Review your data and queries regularly

MySQL will choose the query plan based on the statistics of the data in the tables. When the data size changes, the query plan might change, and so it is important to check your queries regularly and to make optimizations accordingly. Check regularly by:

3.6.1) EXPLAIN your queries

3.6.2) Get suggestions with PROCEDURE ANALYSE()

3.6.3) Review slow queries
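For instance, a quick EXPLAIN check on a hypothetical query: watch the type column (ALL means a full table scan), the key column (which index was chosen) and the rows estimate.

```sql
EXPLAIN SELECT order_date, total
FROM orders
WHERE customer_id = 42;
```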

Categories: DBA Blogs

Partition Storage -- 5 : Partitioned Table versus Non-Partitioned Table ? (in 12.1)

Hemant K Chitale - Mon, 2016-04-25 10:13
Reviewing my second blog post in this series, I found it strange that Partition P_100 (populated by serial INSERTs of 1 row, 100,000 rows, 500,000 rows and 500,000 rows) had such a high High Water Mark.

For 1.1million rows of an Average Row Length of 11, the High Water Mark was 3,022 blocks.

In the fourth blog post, a simple ALTER TABLE MOVE PARTITION had brought the High Water Mark to 2,482 blocks !

This needs further investigation.

Let's compare a single Partition of a Partitioned Table with a Non-Partitioned Table for exactly the same data and same pattern of INSERT statements.

Starting with a new Partitioned Table.

SQL> l
1 create table new_part_tbl (id_column number(6), data_column varchar2(100))
2 partition by range (id_column)
3 (partition p_100 values less than (101),
4 partition p_200 values less than (201),
5 partition p_300 values less than (301),
6 partition p_400 values less than (401),
7* partition p_max values less than (maxvalue))
SQL> /

Table created.

SQL>
SQL> insert into new_part_tbl values (51,'Fifty One');

1 row created.

SQL>
SQL> insert into new_part_tbl
2 select 25, 'New Row'
3 from dual
4 connect by level < 100001
5 /

100000 rows created.

SQL> insert into new_part_tbl
2 select 45, 'New Row'
3 from dual
4 connect by level < 500001
5 /

500000 rows created.

SQL> /

500000 rows created.

SQL> commit;

Commit complete.

SQL>
SQL> exec dbms_stats.gather_table_stats('','NEW_PART_TBL',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL> select avg_row_len, num_rows, blocks
2 from user_tab_partitions
3 where table_name = 'NEW_PART_TBL'
4 and partition_name = 'P_100'
5 /

AVG_ROW_LEN NUM_ROWS BLOCKS
----------- ---------- ----------
11 1100001 3022

SQL>
SQL> REM Let's MOVE the Partition
SQL> alter table new_part_tbl move partition P_100;

Table altered.

SQL> exec dbms_stats.gather_table_stats('','NEW_PART_TBL',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL>
SQL> select avg_row_len, num_rows, blocks
2 from user_tab_partitions
3 where table_name = 'NEW_PART_TBL'
4 and partition_name = 'P_100'
5 /

AVG_ROW_LEN NUM_ROWS BLOCKS
----------- ---------- ----------
11 1100001 2484

SQL>
SQL> l
1 select extent_id, blocks
2 from dba_extents
3 where segment_name = 'NEW_PART_TBL'
4 and segment_type = 'TABLE PARTITION'
5 and partition_name = 'P_100'
6 and owner = 'HEMANT'
7* order by 1
SQL> /

EXTENT_ID BLOCKS
---------- ----------
0 1024
1 1024
2 1024

SQL>


As expected (see the first blog post), the Extents are still 8MB each.  But the High Water Mark has "magically" shrunk from 3,022 blocks to 2,484 blocks.

Let's create a Non-Partitioned Table with the same columns and rows.

SQL> create table non_part_tbl (id_column number(6), data_column varchar2(100));

Table created.

SQL> insert into non_part_tbl values (51,'Fifty One');

1 row created.

SQL> insert into non_part_tbl
2 select 25, 'New Row'
3 from dual
4 connect by level < 100001
5 /

100000 rows created.

SQL> insert into non_part_tbl
2 select 45, 'New Row'
3 from dual
4 connect by level < 500001
5 /

500000 rows created.

SQL> /

500000 rows created.

SQL> commit;

Commit complete.

SQL>
SQL> exec dbms_stats.gather_table_stats('','NON_PART_TBL');

PL/SQL procedure successfully completed.

SQL> select avg_row_len, num_rows, blocks
2 from user_tables
3 where table_name = 'NON_PART_TBL'
4 /

AVG_ROW_LEN NUM_ROWS BLOCKS
----------- ---------- ----------
11 1100001 2512

SQL>
SQL> REM Let's MOVE the Table
SQL> alter table non_part_tbl move;

Table altered.

SQL> select avg_row_len, num_rows, blocks
2 from user_tables
3 where table_name = 'NON_PART_TBL'
4 /

AVG_ROW_LEN NUM_ROWS BLOCKS
----------- ---------- ----------
11 1100001 2512

SQL>
SQL> l
1 select extent_id, blocks
2 from dba_extents
3 where segment_name = 'NON_PART_TBL'
4 and segment_type = 'TABLE'
5 and owner = 'HEMANT'
6* order by 1
SQL> /

EXTENT_ID BLOCKS
---------- ----------
0 8
1 8
2 8
3 8
4 8
5 8
6 8
7 8
8 8
9 8
10 8
11 8
12 8
13 8
14 8
15 8
16 128
17 128
18 128
19 128
20 128
21 128
22 128
23 128
24 128
25 128
26 128
27 128
28 128
29 128
30 128
31 128
32 128
33 128
34 128

35 rows selected.

SQL>


The Non-Partitioned Table had a High Water Mark of 2,512 blocks.  This did not change with a MOVE.  The allocation of Extents is also as expected with AutoAllocate.

Why, then, does the Partition behave differently ?  It started with a High Water Mark of 3,022 blocks which shrank to 2,484 blocks after a MOVE.

Is the Average Row Length or the actual data a factor ?  (Note : I am *not* using Table Compression).

To be explored further with a larger row size ...........

Possibly, to be explored with a different pattern of INSERT statements  ......

Possibly to be compared in 11.2 as well. ......
.
.
.

Categories: DBA Blogs

Machine Intelligence for Oracle Developers

Gerger Consulting - Mon, 2016-04-25 08:33
We've got a webinar tomorrow! Attend and learn how you can apply Numenta's open source machine intelligence technology to real world Enterprise IT problems.

185+ people have already signed up. Register here.




Categories: Development

DDL logging

Jonathan Lewis - Mon, 2016-04-25 07:05

I was presenting at the UKOUG event in Manchester on Thursday last week (21st April 2016), and one of the sessions I attended was Carl Dudley’s presentation of some New Features in 12c. The one that caught my eye in particular was “DDL Logging” because it’s a feature that has come up fairly frequently in the past on OTN and other Oracle forums.

So today I decided to write a brief note about DDL Logging, and did a quick search of my blog to see if I had mentioned it before. I found this note that I wrote in January last year but never got around to publishing – DDL Logging is convenient, but doesn’t do the one thing that I really want it to do:

DDL Logging – 12c

One of the little new features that should be most welcome in 12c is the ability to capture all DDL executed against the database. All it takes is a simple command (if you haven’t set the relevant parameter in the parameter file):

alter system set enable_ddl_logging = true;

All subsequent DDL will be logged to two different places (in two formats)

  • $ORACLE_BASE/diag/rdbms/{database}/{instance}/log/ddl/log.xml
  • $ORACLE_BASE/diag/rdbms/{database}/{instance}/log/ddl_{instance}.log

Unfortunately the one thing I really wanted to see doesn’t appear – probably because it doesn’t really count as DDL –  it’s the implicit DDL due to inserting into not-yet-existing partitions of an interval partitioned table.

Note: If you’re using a container database with pluggable databases then the DDL for all the pluggable databases goes into the same log file.

Update – Licensing

The following text in the Oracle 12c Database Licensing document has just been brought to my attention:

Licensed Parameters
The init.ora parameter ENABLE_DDL_LOGGING is licensed as part of the Database Lifecycle Management Pack when set to TRUE. When set to TRUE, the database reports schema changes in real time into the database alert log under the message group schema_ddl. The default setting is FALSE.

The licensing document is also linked to from the 12c online html page for the parameter.

The 11g parameter definition makes no mention of licensing, and the 11g “New Features” manual doesn’t mention the feature at all, but the parameter does get a special mention in the 11g licensing document, where it is described as being part of the Change Management Pack.

init.ora Parameters
The use of the following init.ora parameter is licensed under Oracle Change Management Pack:
■ ENABLE_DDL_LOGGING: when set to TRUE (default: FALSE)

 


backing up a database to the cloud

Pat Shuff - Mon, 2016-04-25 02:07
Up to this point we have talked about the building blocks of the cloud. Today we are going to look into the real economics of using some of the cloud services that we have been examining. We have looked at moving compute and storage to the cloud. Let's look at some of the reasons why someone would look at storage in the cloud.

Storage is one of those funny things that everyone asks for. Think of uses for storage. You save emails that come in every day. If you host your email system in your corporation, you have to consider how many emails someone can keep. You have to consider how long you keep files associated with email. At Oracle we have just over 100,000 employees and limit everyone to 2GB for email. This means that we need 200 TB to store email. If we increase this to 20 GB this grows to 2 PB. At $3K/TB we are looking at $600K capex to handle email messages. If we grow this to 2 PB we are looking at $6M for storage. This is getting into real money. Associated with this storage is a 10% support cost ($60K opex annually) as well as part of a full time employee to replace defective disks, tune and feed the storage system, allocate disks and partitions not only to our storage but other projects at a cost of $80K payroll annually. If we use a 4 year depreciation, our email boxes will cost us ($150K capex + $60K opex + $80K opex) $290K per year or $29/user just to store the email. If we expand the email limits to 20 GB we grow almost everything by a factor of 10 as well so the email boxes cost us $220/user annually (we don't need 10x the storage admins). Pile on top of this home directories that people want to save attachments into and this number explodes. We typically do want to give everyone 20 GB for a home directory since this stores documents associated with operation of the company. We typically want people storing these documents on a network share and not on a disk on their laptop. Storing data on their laptop opens up security and data protection discussions as well as access to data if the laptop fails. Putting it on a shared home directory allows the company to backup the files as well as define protection mechanisms for sensitive data. We have basically justified $250/user for email and home directories by allocating 22 GB to each employee.

The biggest problem is not user data, it is corporate data. Databases typically consume upwards of 400 GB to 40 TB. There is a database for human resources, payroll, customer service, purchase orders, general ledger, inventory, transportation management, manufacturing.... the list goes on. Backing up this data as it changes becomes an issue. Fortunately, programs like E-Business Suite, PeopleSoft, and JD Edwards aggregate all of these business functions into a small number of database instances, so we don't have tens or hundreds of databases to back up. Some companies do roll out multiple databases for each project to collect and store data, but these are typically done with low cost, low function databases like MySQL, PostgreSQL, MongoDB, and SQL Server. Corporate data that large numbers of people use typically lives in an Oracle database, DB2, or SQL Server. Backing up this data is critical to a corporation. Database backups are typically done nightly to make sure that you can recover from disk or server failures. Fortunately, you don't need to back up all 400 GB every night but can do incremental backups and copy only the data blocks that have changed from the previous night. Companies typically reserve late at night on the weekends to do a full backup because users are typically not working and few if any people are hitting the database at 2am on Sunday morning. The database can be taken offline for a couple of hours to back up 400 GB, or a live backup can be taken with little risk since few if any people are on the system at this time. If you have a typical computer with SCSI or SAS disks you can reasonably get 2G/second throughput, so reading 400 GB will take 200 seconds. Unfortunately, writing is typically about half that speed, so the backup should reasonably take 400 seconds, which is about 7 minutes. If your database is 4 TB then you increase this by a factor of 10, so it takes just over an hour to back up everything.
Typically you also want to copy this data to another data center or to tape, so the write time doubles again. The 7 minutes becomes 15 minutes. The hour becomes two hours.

When we talk about backing up database data, there are two schools of thought. The database data is contained in a table extent file. You can back up your data by replicating your file system or by using database tools. Years ago files were kept on raw disk partitions. Few people do this anymore, and table extent files are kept on file systems. Replicating data from a raw partition is difficult, so most people used tools like RMAN to back up database files on raw partitions. Database vendors have figured out how to optimize reads and writes to disk despite the file system structures that operating system vendors created. File system vendors have figured out how to optimize backup and recovery to avoid disk failures. Terms like mirroring, triple mirroring, RAID, and logical volume management come up when you talk about protecting data in a file system. Other terms like snap mirror and off-site cloning sneak into the conversation as well. Earlier when we talked about $3K/TB we were really talking about $1K/TB, but we triple mirror the disks and thus triple the cost of usable storage. This makes sense when we go down to Best Buy or Fry's and look at a 1 TB USB disk for $100. We could purchase this, but the 2G/second transfer rate suddenly drops to 200M/second; we need to pay more for a higher speed communication bridge to the disk drive. We could drop the cost of storage to $100/TB, but the 7 minute backup and recovery time suddenly grows to 70 minutes. This becomes a cost vs recovery time discussion, which is important to have. At home, recovering your family photos from a dead desktop computer can take hours. For a medical practice, waiting hours to recover patient records impacts how the doctors engage with patients. Waiting hours on a ticket sales or stock trading web site becomes millions of dollars lost as people go to your competitors to transact business.

Vendors like EMC and NetApp talk about cloning or snap mirrors of disks to another data center. This technology works for things like email and home directories but does not work well for databases. Database files are written to multiple files at times. If you partition your data, the database might be moving data from one file to another as data ages. We might have high speed SSD disks for current data and low speed, higher latency disks for data greater than 30 days old. If we start a clone of our SSD disks during a data move, the recent data will get copied to our mirror at another site. The database might finish re-partitioning the data, and only then does the disk management software start backing up the lower speed disks. We suddenly get into a data consistency problem. The disk management software and the database software don't talk to each other, so neither knows that data is being moved between file systems. Data that was in the high speed SSD disks is now out of sequence with the low speed disks at our backup site. If we have a disk failure on our primary site, restoring data from our secondary site will cause database failure. The only way to solve this problem is to schedule disk clones while the database is shut down. Unfortunately, many IT departments select a disk cloning solution since it is the best solution for mirroring home directories, email servers, and virtualization servers. Database servers have a slightly different backup requirement and require a different way of doing things.

The recommended way of backing up a database is to use archive tools like RMAN or commercially available products like Commvault or Legato. The commercial products provide a common backup process that knows how virtualization servers and databases like to be backed up. It allows you to back up SQL Server and an Oracle database with the same user interface and process. Behind the scenes these tools talk to RMAN and the SQL Server backup utilities, but they present a uniform user interface to schedule and manage backups and restores.

Enough rambling about disks and backups. Let's start talking about how to use cloud storage for our disk replication. Today we are going to talk about database backup to the cloud. The idea behind our use of the cloud is pure economics. We would like to reduce our cost of storage from $3K/TB to $400/TB/year and get rid of the capex cost. The true problem with purchasing storage for our data center is that we don't want to purchase 10 TB a month just because that is what we are consuming for backups. What we are forced to do is look 36 months ahead and purchase 400 TB of disk to handle the monthly data consumption, or start deleting data after a retention period. For things like Census data and medical records the retention period is decades, not months. For some applications, we can delete data after 12 months. If we are copying incremental database backups, we can delete the incrementals once we do a full backup. In our 10 TB a month example we will have to purchase $1.2M in storage today, knowing that we will only consume 10 TB this month and 10 TB next month. Using cloud storage we can pay $300 this month, $600 next month, and grow this amount at $300/month until we get to the 400 TB that we will consume in 36 months. If we guesstimated low, we will need to purchase more storage again in two years. If we guesstimated high, we will have overpurchased storage and spent more than $3K/TB for what we are using. Using cloud storage allows us to consume storage at $400/TB/year. If we guess wrong and have metered storage there is no penalty. If we are using non-metered storage, we might purchase a little too much, but we only have to look forward 12 months rather than 36. It typically is easier to guess a year ahead rather than three years.

Just to clarify, we are not talking about moving all of our backup data to the cloud at once. What we are talking about is doing daily incremental backups to high speed disk attached to your database. After a few days we do a full backup to lower cost network storage. After a few weeks we copy these backups to the cloud. In the diagram below we do high speed backups for five days, back up to low speed disks for 21 days, and back up to the cloud for periods beyond that. We show moving data to tape in the cloud beyond 180 days. The cost benefit is moving data that we probably won't read to lower cost storage. Using $400/TB/year gives us a $2,600/TB capex savings. Using $12/TB/year tape gives us an additional $388/TB/year opex savings.

The way that we get this storage tiering is to modify the RMAN backup libraries and move the tape interface from an on-site tape unit to disk or tape in the cloud. The library module can be downloaded from the Oracle Technology Network. More information on this service can be found in the Oracle documentation or the backup whitepaper. You can also watch videos that describe this service.

The economics behind this can be seen in a TCO analysis that we did. In this example we look at moving 30 TB of backup from on-premise disk to cloud backup. The resulting 4 year savings is $120K. This does not take into account tangential savings but only looks at physical cost savings of not purchasing 30 TB of disk.

Let's walk through what is needed to make this work. First we have to download the library module that takes RMAN read and write commands and translates them into REST API calls. This library exists for the Oracle Storage Cloud Service as well as Amazon S3. The key benefit of the Oracle Storage Cloud is that you get encryption and parallelism for free as part of the service. With Amazon S3 you need to pay for additional parallel channels at $1500/channel as well as encryption in the database at $10K/processor license. The Oracle Storage Cloud provides this as part of the $33/TB/month database backup bundle.

Once we download the module, we need to install it with a java command. Note that this is where we tie the Oracle home and SID to the cloud credentials. The credentials are stored in a database wallet along with the encryption keys used to encrypt the backups.

Now that we have replaced the tape interface with cloud storage, we need to define a tape interface for RMAN and link the library into the process. When we read and write to tape we are actually reading and writing to cloud storage.
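As a hedged sketch of what that RMAN configuration typically looks like (the library and parameter-file paths below are illustrative; the installer creates the libopc.so library and an opc*.ora configuration file in the directories you choose):

```sql
-- Point the SBT ("tape") device at the cloud backup module.
CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE'
  PARMS 'SBT_LIBRARY=/home/oracle/lib/libopc.so,
         SBT_PARMS=(OPC_PFILE=/home/oracle/config/opcORCL.ora)';

-- From here on, backups sent "to tape" land in cloud storage:
BACKUP DEVICE TYPE SBT DATABASE;
```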

Once we have everything configured, we use RMAN, Commvault, or Legato as we have for years. Accessing the tape unit is really accessing the cloud storage.

In summary, this is our first use case for the cloud. We are offsetting the cost of on-premise storage and reducing the cost of our database backups. A good rule of thumb is that we can drop the cost of backups from $3K/TB plus $300/TB/year to $400/TB/year. Once we have everything downloaded and installed, nothing looks or feels different from what we have been doing for years. When you start looking at purchasing more disks because you are running out of space, look at moving your backups from local disk to the cloud.

Data Integration Tips: Oracle Data Integrator 12c Password

Rittman Mead Consulting - Mon, 2016-04-25 00:12

Hey, everyone. It’s Sunday night, and we have just enough time for another Data Integration Tip from Rittman Mead. This one has originated from many years of Oracle Data Integrator experience—and a lost 12c password. Let me start first by stating there is never any blame placed when a password is lost, forgotten, or just never stored in a safe place. It happens more often than you might think! Unfortunately, there is no “Forgot password?” link in ODI 12c, which is why I wanted to share my approach to password recovery for these situations.

[Image: forgot-password]

The Challenge: Lost 12c Password

There are typically two passwords used in Oracle Data Integrator 12c that are forgotten and difficult to recover:

  1. The Work Repository password, created during the setup of the ODI repositories.
  2. The SUPERVISOR user password.

Often there will be more than one ODI user with supervisor privileges, allowing the SUPERVISOR user account password to be reset and making everyone’s life a bit easier. With that, I’ll focus on the Work Repository password and a specific use case I ran into just recently. This approach will work for both lost 12c password instances and I have used it for each in the past.

[Image: work-repo-change-password]

Now, yes, there is a feature that allows us to change the Work Repository password from within ODI Studio. But (assuming you do have the ability to edit the Work Repository object) as you can see in the image, you also need to know the “current password.” Therein lies the problem.

The Scenario

OK, here we go. The situation I ran into was related to an ODI 11g to 12c upgrade. During the upgrade, we cloned the master and work repositories and set them up on a new database instance in order to lessen the impact on the current 11g repositories. To make this work, a few modifications are required after cloning and before the ODI upgrade assistant can be run. Find more details on these steps in Brian Sauer’s post, Upgrade to ODI 12c: Repository and Standalone Agent.

  • Modify the Work repository connection from within the Master repository. The cloned Master repository is still pointed to the original ODI 11g Work Repository and the connection must be updated.
  • Update the SYSTEM.SCHEMA_VERSION_REGISTRY$ table to add an entry for the cloned ODI repository in the new database instance.
  • Detach the Work Repository from the original Master Repository.

Easy enough. The upgrade assistant completed successfully and everything was working great during testing, until we attempted to open the Work Repository object in ODI:

“Work repository is already attached to another master repository”

Uh-oh. It seems the last bullet point above was skipped. No worries. We have a simple solution to this problem. We can detach the Work Repository from the Master, then attach it once again. Interestingly enough, the action of detaching the repository cleans up the metadata and allows the Work Repository to be added to the cloned master with no problem.

Detaching is easy. Just confirm that you want to remove the Work Repository and poof, it’s gone. It’s the reattaching where we run into an issue…our lost 12c password issue (you knew I was going to bring that up, didn’t you?). Adding a Work repository requires a JDBC connection to a new or existing repository. In this case, we choose the existing repository in our cloned database. The same one we just detached from the Master. Just make sure that you choose to keep the repository contents or you’ll have a much bigger challenge ahead of you.

But then, out of nowhere, we’re prompted for the Work Repository password.

[Image: work-repo-password]

Hmm…well, we set the ODI 11g repository up in 2011. Jim, who installed it for us, doesn’t work here any longer. “Hmm” is right!

Here’s the Tip

Before we go any further, full disclosure—this is most likely not considered a supported action in the eyes of Oracle, so beware. Also, I haven’t attempted to use the ODI SDK and a Groovy script to update a password, so that might be the way to go if you’re concerned about this being a hack. But desperate times require desperate measures, as they say.

In order to “recover” a password for the Work Repository, we must actually change it behind the scenes in the repository tables. There’s a great deal of metadata we can access via the repository schema, and modifying this data directly via the schema is neither typical nor recommended, but sometimes it is necessary.

Oracle Support has a Knowledge Base document, Oracle Data Integrator 11g and 12c Repository Description (Doc ID 1903225.1), which provides a nice data dictionary for the repositories. Looking at the ODI 12.2.1 version of the repository definition, we find that the table SNP_LOC_REPW in the Work Repository stores the value for the repository password in the column REP_PASSW. Now the password must be encoded to match the repository and environment, so it cannot simply be added to the table in plain text.

Encoding a password is something that Oracle Data Integrator developers and admins have been doing for years, most often when setting up a Standalone agent. As a part of the agent installation, there is a script called encode.sh (or encode.bat for Windows) that will accept a plain text password as a parameter and output the encoded string. Brilliant! Let’s try it out.

Browse to the ODI agent domain home and drill into the bin directory. From there, we can execute the encode command. A quick look at the script shows us the expected input parameters.

[Image: encode.sh syntax]

The instance name is actually the Agent name. Ensure the agent is running and fire off the script:

[oracle@ODIGettingStarted bin]$ ./encode.sh -INSTANCE=OGG_ODI_AGENT
2016-04-24 22:00:50.791 TRACE JRFPlatformUtil:Unable to obtain JRF server platform. Probably because you are in JSE mode where oracle.jrf.ServerPlatformSupportFactory is not available which is expected.
2016-04-24 22:00:56.855 NOTIFICATION New data source: [OGG_ODI_REPO/*******@jdbc:oracle:thin:@//localhost:1521/ORCL]
2016-04-24 22:01:01.931 NOTIFICATION Created OdiInstance instance id=1
Enter password to encode:

Now you can enter a password to encode, hit return and boom! Here’s your encoded string.

Enter password to encode:
ejjYhIeqYp4xBWNUooF31Q==

Let’s take the entire string and write a quick update statement for the Work Repository SNP_LOC_REPW table. Even though I know there is only one Work Repository, I still use a where clause to ensure I’m updating the correct row.

update SNP_LOC_REPW
set REP_PASSW = 'ejjYhIeqYp4xBWNUooF31Q=='
where REP_NAME = 'OGG_ODI_WREP';

Commit the transaction and Bob’s your uncle! Now we can continue on with adding the Work Repository through ODI Studio. Just enter the password used in the encode.sh command and you’re in!

As I mentioned earlier, this same approach can be used to update the SUPERVISOR user password, or really any ODI user password (if they are stored in the repository). In this case, the use of encode.sh is the same, but this time we update the SNP_USER table in the Master repository. The column PASS stores the encoded password for each user. Just remember to change the password everywhere that the user is set to access ODI (agents, etc).
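
As a sketch of what that looks like (the encoded string below is a placeholder for whatever encode.sh produced for your new password, and the user-name column is WUSER_NAME per the repository data dictionary — verify the column names against Doc ID 1903225.1 for your version):

update SNP_USER
set PASS = 'encoded-string-here'
where WUSER_NAME = 'SUPERVISOR';

commit;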

So there you have it. A quick, simple way to “recover” a lost ODI 12c password. Just be sure that this information doesn’t fall into the wrong hands. Lock down your ODI agent file directory to only those administrators who require access. Same goes for the repository schemas. And finally, use this approach in only the most dire situation of a completely lost 12c password. Thanks for reading and look here if you want more DI Tips. Enjoy your week!

The post Data Integration Tips: Oracle Data Integrator 12c Password appeared first on Rittman Mead Consulting.

Categories: BI & Warehousing

VR Research at OBUG

Oracle AppsLab - Sun, 2016-04-24 14:41

[Image: OBUG]

As part of our push to do more international research, I hopped over to Europe to show some customers VR and gather their impressions and thoughts on use cases. This time it was at OBUG, the Oracle Benelux User Group, which was held in Arnhem, a refreshing city along the Rhine.

Given that VR is one of the big technologies of 2016, and is poised to play a major role in the future of user experience, we want to know how our users would like to use VR to help them in their jobs. But first we just need to know what they think about VR after actually using it.

The week prior, Tawny and I showed some VR demos to customers and fellow Oracle employees at Collaborate in Las Vegas, taking them to the arctic to see whales and other denizens of the deep (link) and for the few with some extra time, defusing some bombs in the collaborative game “Keep Talking and Nobody Explodes” (game; Raymond’s blog post from GDC).

The reactions to the underwater scenes are now predictable: pretty much everyone loves it, just some more than others. There’s a sense of wonder, of amazement that the technology has progressed to this point, and that it’s all done with a smartphone. Several people have reached out to try to touch the sea creatures swimming by their view, only to realize they’ve been tricked.

Our European customers are no different than the ones we met at Collaborate, with similar ideas of how it could be used in their businesses.

It’s certainly a new technology, and we’ll continue to seek out use cases, while thinking up our own. In the meantime, VR is lots of fun.

Partition Storage -- 4 : Resizing Partitions

Hemant K Chitale - Sun, 2016-04-24 10:38
Building on Posts 2 (Adding Rows) and 3 (Adding Partitions), where we saw Table Partitions using 8MB Extents ..... is there a way to "resize" Partitions to smaller Extents (and, maybe, less space consumed) without using Compression ?

Let's explore.

Beginning with Partitions P_100 and P_200 ....

SQL> select segment_name, partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 and partition_name in ('P_100','P_200')
5 order by 1,2
6 /

SEGMENT_NAME PARTITION_NA BYTES/1024 EXTENTS
------------------------------ ------------ ---------- ----------
MY_PART_TBL P_100 24576 3
MY_PART_TBL P_200 32768 4
MY_PART_TBL_NDX P_100 28672 43
MY_PART_TBL_NDX P_200 33792 48

SQL>
SQL> alter table my_part_tbl move partition p_100 storage (initial 64K next 64K);

Table altered.

SQL> alter index my_part_tbl_ndx rebuild partition p_100 storage (initial 64K next 64K)
2 /

Index altered.

SQL> alter table my_part_tbl move partition p_200 storage (initial 64K next 64K);

Table altered.

SQL> alter index my_part_tbl_ndx rebuild partition p_200 storage (initial 64K next 64K)
2 /

Index altered.

SQL>
SQL> select segment_name, partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 and partition_name in ('P_100','P_200')
5 order by 1,2
6 /

SEGMENT_NAME PARTITION_NA BYTES/1024 EXTENTS
------------------------------ ------------ ---------- ----------
MY_PART_TBL P_100 20480 35
MY_PART_TBL P_200 21504 36
MY_PART_TBL_NDX P_100 18432 33
MY_PART_TBL_NDX P_200 19456 34

SQL>
SQL> select partition_name, blocks, num_rows
2 from user_tab_partitions
3 where table_name = 'MY_PART_TBL'
4 and partition_name in ('P_100','P_200')
5 order by 1
6 /

PARTITION_NA BLOCKS NUM_ROWS
------------ ---------- ----------
P_100 3022 1100001
P_200 3668 1100001

SQL> exec dbms_stats.gather_table_stats('','MY_PART_TBL',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL> select partition_name, blocks, num_rows
2 from user_tab_partitions
3 where table_name = 'MY_PART_TBL'
4 and partition_name in ('P_100','P_200')
5 order by 1
6 /

PARTITION_NA BLOCKS NUM_ROWS
------------ ---------- ----------
P_100 2482 1100001
P_200 2639 1100001

SQL>
SQL>
SQL> l
1 select partition_name, blocks, count(*)
2 from dba_extents
3 where owner = 'HEMANT'
4 and segment_name = 'MY_PART_TBL'
5 and segment_type = 'TABLE PARTITION'
6 and partition_name in ('P_100','P_200')
7 group by partition_name, blocks
8* order by 1,2
SQL> /

PARTITION_NA BLOCKS COUNT(*)
------------ ---------- ----------
P_100 8 16
P_100 128 19
P_200 8 16
P_200 128 20

SQL>


Partition P_100 has shrunk from 3 extents of 8MB adding up to 24,576KB to 35 extents adding up to 20,480KB. The High Water Mark has shrunk from 3,022 blocks to 2,482 blocks. (Remember : P_100 was populated with a Serial Insert.)  Partition P_200, which had been populated with a Parallel (DoP=4) insert, has also shrunk from 32,768KB to 21,504KB and its High Water Mark from 3,668 blocks to 2,639 blocks.  The Extents are a combination of 64KB (the first 16, adding up to 1MB) and 1MB sizes.
Even the Index Partitions seem to have shrunk.

So, a MOVE/REBUILD (the REBUILD of the Index Partitions was required because I did a Partition MOVE without UPDATE INDEXES) can be used to shrink the Partitions, with newer, smaller Extents allocated.
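
As a sketch (assuming the same table and index names as above), the separate index REBUILD step can be avoided by letting the MOVE maintain the index in the same operation:

-- Hypothetical sketch : MOVE with the UPDATE INDEXES clause keeps the
-- index partition usable, so no separate REBUILD is needed.
alter table my_part_tbl
  move partition p_100
  storage (initial 64K next 64K)
  update indexes;

Note that UPDATE INDEXES trades the extra REBUILD command for a longer-running MOVE, since the index maintenance happens as part of the same operation.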

But what about the case of SPLIT PARTITION, where splitting an 8MB Partition resulted in two 8MB Partitions, even when a resulting Partition was empty ?

Here's a workaround.  Before SPLITting the P_MAX Partition, I resize it.

SQL> alter table my_part_tbl move partition p_max storage (initial 64K next 64K);

Table altered.

SQL> alter index my_part_tbl_ndx rebuild partition p_max storage (initial 64K next 64K);

Index altered.

SQL> alter table my_part_tbl
2 split partition p_max
3 at (1001)
4 into (partition p_1000, partition p_max)
5 /

Table altered.

SQL> alter table my_part_tbl
2 split partition p_1000
3 at (901)
4 into (partition p_900, partition p_1000)
5 /

Table altered.

SQL> alter table my_part_tbl
2 split partition p_900
3 at (801)
4 into (partition p_800, partition p_900)
5 /

Table altered.

SQL>
SQL> l
1 select segment_name, partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name like 'MY_PART_%'
4* order by 1,2
SQL>
SQL> /

SEGMENT_NAME PARTITION_NA BYTES/1024 EXTENTS
------------------------------ ------------ ---------- ----------
MY_PART_TBL P_100 20480 35
MY_PART_TBL P_200 21504 36
MY_PART_TBL P_300 8192 1
MY_PART_TBL P_400 8192 1
MY_PART_TBL P_600 8192 1
MY_PART_TBL P_680 8192 1
MY_PART_TBL P_700 8192 1
MY_PART_TBL P_800 64 1
MY_PART_TBL P_900 64 1
MY_PART_TBL P_1000 64 1
MY_PART_TBL P_MAX 64 1
MY_PART_TBL_NDX P_100 18432 33
MY_PART_TBL_NDX P_200 19456 34
MY_PART_TBL_NDX P_300 64 1
MY_PART_TBL_NDX P_400 64 1
MY_PART_TBL_NDX P_600 64 1
MY_PART_TBL_NDX P_680 64 1
MY_PART_TBL_NDX P_700 64 1
MY_PART_TBL_NDX P_800 64 1
MY_PART_TBL_NDX P_900 64 1
MY_PART_TBL_NDX P_1000 64 1
MY_PART_TBL_NDX P_MAX 64 1

22 rows selected.

SQL>


(Note : I have manually relocated Partition P_1000 in the listing).
Partitions P_600, P_680 and P_700 had been created by SPLIT PARTITION commands in the previous post, beginning with the segment-created P_MAX Partition.  However, after rebuilding P_MAX with 64KB Extents, the subsequently SPLITted Partitions (P_800 to P_1000) are also 64KB.

Note : I am not advising that all Partitions have to be 64K.  (Observe how AutoAllocate allocated 1MB Extents to P_100 and P_200 after the first 1MB of space usage, using 16 64KB Extents.)
.
.
.


Categories: DBA Blogs

Video : Flashback Version Query

Tim Hall - Sat, 2016-04-23 10:15

Today’s video gives a quick run through of flashback version query.

If you prefer to read articles, rather than watch videos, you might be interested in these.

The star of today’s video is Tanel Poder. I was filming some other people, he saw something was going on, came across and struck a pose. I figured he knew what I was doing, but it’s pretty obvious from the outtake at the end of the video he was blissfully unaware, but wanted in on the action whatever it was! A true star!

Partition Storage -- 3 : Adding new Range Partitions with SPLIT

Hemant K Chitale - Sat, 2016-04-23 10:04
Building on the Partitioned Table in the previous two blog posts...

We know that the Table is a Range Partitioned Table.  With a MAXVALUE Partition, the only way to add new Partitions is to use the SPLIT PARTITION command.

First, let's review the Table, Partitions and Segments.

SQL> select table_name, num_rows
2 from user_tables
3 where table_name = 'MY_PART_TBL'
4 /

TABLE_NAME NUM_ROWS
---------------- ----------
MY_PART_TBL 2200004

SQL> select partition_name, num_rows, blocks
2 from user_tab_partitions
3 where table_name = 'MY_PART_TBL'
4 order by 1
5 /

PARTITION_NA NUM_ROWS BLOCKS
------------ ---------- ----------
P_100 1100001 3022
P_200 1100001 3668
P_300 1 1006
P_400 1 1006
P_MAX 0 0

SQL>
SQL> select segment_name, segment_type, partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 order by 1,2,3
5 /

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL TABLE PARTITION P_100 24576 3
MY_PART_TBL TABLE PARTITION P_200 32768 4
MY_PART_TBL TABLE PARTITION P_300 8192 1
MY_PART_TBL TABLE PARTITION P_400 8192 1
MY_PART_TBL_NDX INDEX PARTITION P_100 28672 43
MY_PART_TBL_NDX INDEX PARTITION P_200 33792 48
MY_PART_TBL_NDX INDEX PARTITION P_300 64 1
MY_PART_TBL_NDX INDEX PARTITION P_400 64 1

8 rows selected.

SQL>


So, the table has 5 partitions P_100 to P_MAX but only 4 have segments created after one or more rows have been populated.  P_MAX has no segment created for either the Table Partition or the Index Partition.

What happens if we SPLIT P_MAX (an empty, segmentless Partition) to create a new Partition ?

SQL> alter table my_part_tbl
2 split partition p_max
3 at (501)
4 into (partition p_500, partition p_max)
5 /

Table altered.

SQL>
SQL> exec dbms_stats.gather_table_stats('','MY_PART_TBL',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL> select partition_name, high_value, num_rows, blocks
2 from user_tab_partitions
3 where table_name = 'MY_PART_TBL'
4 order by partition_position
5 /

PARTITION_NA HIGH_VALUE NUM_ROWS BLOCKS
------------ ---------------- ---------- ----------
P_100 101 1100001 3022
P_200 201 1100001 3668
P_300 301 1 1006
P_400 401 1 1006
P_500 501 0 0
P_MAX MAXVALUE 0 0

6 rows selected.

SQL>
SQL> select segment_name, segment_type, partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 order by 1,2,3
5 /

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL TABLE PARTITION P_100 24576 3
MY_PART_TBL TABLE PARTITION P_200 32768 4
MY_PART_TBL TABLE PARTITION P_300 8192 1
MY_PART_TBL TABLE PARTITION P_400 8192 1
MY_PART_TBL_NDX INDEX PARTITION P_100 28672 43
MY_PART_TBL_NDX INDEX PARTITION P_200 33792 48
MY_PART_TBL_NDX INDEX PARTITION P_300 64 1
MY_PART_TBL_NDX INDEX PARTITION P_400 64 1

8 rows selected.

SQL>


So, the process of creating Partition P_500 did not create a segment for it, because P_MAX, which it was SPLIT from, was segmentless.  What happens if I split a Partition with 1 or more rows ?

SQL> insert into my_part_tbl
2 select 550, 'Five Hundred Fifty'
3 from dual
4 /

1 row created.

SQL> commit;
SQL> select segment_name, segment_type, partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 order by 1,2,3
5 /

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL TABLE PARTITION P_100 24576 3
MY_PART_TBL TABLE PARTITION P_200 32768 4
MY_PART_TBL TABLE PARTITION P_300 8192 1
MY_PART_TBL TABLE PARTITION P_400 8192 1
MY_PART_TBL TABLE PARTITION P_MAX 8192 1
MY_PART_TBL_NDX INDEX PARTITION P_100 28672 43
MY_PART_TBL_NDX INDEX PARTITION P_200 33792 48
MY_PART_TBL_NDX INDEX PARTITION P_300 64 1
MY_PART_TBL_NDX INDEX PARTITION P_400 64 1
MY_PART_TBL_NDX INDEX PARTITION P_MAX 64 1

10 rows selected.

SQL>
SQL> alter table my_part_tbl
2 split partition p_max
3 at (601)
4 into (partition p_600, partition p_max)
5 /

Table altered.

SQL> select segment_name, segment_type, partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 order by 1,2,3
5 /

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL TABLE PARTITION P_100 24576 3
MY_PART_TBL TABLE PARTITION P_200 32768 4
MY_PART_TBL TABLE PARTITION P_300 8192 1
MY_PART_TBL TABLE PARTITION P_400 8192 1
MY_PART_TBL TABLE PARTITION P_600 8192 1
MY_PART_TBL TABLE PARTITION P_MAX 8192 1
MY_PART_TBL_NDX INDEX PARTITION P_100 28672 43
MY_PART_TBL_NDX INDEX PARTITION P_200 33792 48
MY_PART_TBL_NDX INDEX PARTITION P_300 64 1
MY_PART_TBL_NDX INDEX PARTITION P_400 64 1
MY_PART_TBL_NDX INDEX PARTITION P_600 64 1

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL_NDX INDEX PARTITION P_MAX 64 1

12 rows selected.

SQL>


So, the row for ID_COLUMN=550 created the segment for Partition P_MAX. Subsequently, SPLITting this Partition into P_600 and P_MAX resulted in two Partitions of 8MB each.
The row for ID_COLUMN=550 would be in the P_600 Partition and the P_MAX Partition would now be the empty Partition.  Yet, even P_MAX now takes an 8MB extent, unlike earlier.

Let's try a SPLIT such that, say, P_700 is created empty but P_MAX inherits the row.

SQL> insert into my_part_tbl
2 select 900, 'Nine Hundred'
3 from dual
4 /

1 row created.

SQL> commit;

Commit complete.

SQL> alter table my_part_tbl
2 split partition p_max
3 at (701)
4 into (partition p_700, partition p_max)
5 /

Table altered.

SQL>
SQL> select segment_name, segment_type, partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 order by 1,2,3
5 /

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL TABLE PARTITION P_100 24576 3
MY_PART_TBL TABLE PARTITION P_200 32768 4
MY_PART_TBL TABLE PARTITION P_300 8192 1
MY_PART_TBL TABLE PARTITION P_400 8192 1
MY_PART_TBL TABLE PARTITION P_600 8192 1
MY_PART_TBL TABLE PARTITION P_700 8192 1
MY_PART_TBL TABLE PARTITION P_MAX 8192 1
MY_PART_TBL_NDX INDEX PARTITION P_100 28672 43
MY_PART_TBL_NDX INDEX PARTITION P_200 33792 48
MY_PART_TBL_NDX INDEX PARTITION P_300 64 1
MY_PART_TBL_NDX INDEX PARTITION P_400 64 1

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL_NDX INDEX PARTITION P_600 64 1
MY_PART_TBL_NDX INDEX PARTITION P_700 64 1
MY_PART_TBL_NDX INDEX PARTITION P_MAX 64 1

14 rows selected.

SQL> select count(*) from my_part_tbl partition (P_700);

COUNT(*)
----------
0

SQL>


Again, both Partitions (P_700 and P_MAX) have a segment of 8MB.
This means that, once a Segment for a Partition is created, any SPLIT of that Partition results in two Segments inheriting the same 8MB Extent Size, irrespective of the fact that one of the two may be empty.

SQL> alter table my_part_tbl
2 split partition p_700
3 at (681)
4 into (partition p_680, partition p_700)
5 /

Table altered.

SQL>
SQL> select segment_name, segment_type, partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 order by 1,2,3
5 /

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL TABLE PARTITION P_100 24576 3
MY_PART_TBL TABLE PARTITION P_200 32768 4
MY_PART_TBL TABLE PARTITION P_300 8192 1
MY_PART_TBL TABLE PARTITION P_400 8192 1
MY_PART_TBL TABLE PARTITION P_600 8192 1
MY_PART_TBL TABLE PARTITION P_680 8192 1
MY_PART_TBL TABLE PARTITION P_700 8192 1
MY_PART_TBL TABLE PARTITION P_MAX 8192 1
MY_PART_TBL_NDX INDEX PARTITION P_100 28672 43
MY_PART_TBL_NDX INDEX PARTITION P_200 33792 48
MY_PART_TBL_NDX INDEX PARTITION P_300 64 1

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL_NDX INDEX PARTITION P_400 64 1
MY_PART_TBL_NDX INDEX PARTITION P_600 64 1
MY_PART_TBL_NDX INDEX PARTITION P_680 64 1
MY_PART_TBL_NDX INDEX PARTITION P_700 64 1
MY_PART_TBL_NDX INDEX PARTITION P_MAX 64 1

16 rows selected.

SQL>


That is confirmation that SPLITting a Partition that has a segment results in two segmented partitions, even if both are empty.

Going back to Partition P_500 (which is present but segmentless), what happens if we split it ?

SQL> alter table my_part_tbl
2 split partition p_500
3 at (451)
4 into (partition p_450, partition p_500)
5 /

Table altered.

SQL>
SQL> select partition_name, high_value
2 from user_tab_partitions
3 where table_name = 'MY_PART_TBL'
4 order by partition_position
5 /

PARTITION_NA HIGH_VALUE
------------ ----------------
P_100 101
P_200 201
P_300 301
P_400 401
P_450 451
P_500 501
P_600 601
P_680 681
P_700 701
P_MAX MAXVALUE

10 rows selected.

SQL>
SQL> select segment_name, segment_type, partition_name, bytes/1024, extents
2 from user_segments
3 where segment_name like 'MY_PART_%'
4 order by 1,2,3
5 /

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL TABLE PARTITION P_100 24576 3
MY_PART_TBL TABLE PARTITION P_200 32768 4
MY_PART_TBL TABLE PARTITION P_300 8192 1
MY_PART_TBL TABLE PARTITION P_400 8192 1
MY_PART_TBL TABLE PARTITION P_600 8192 1
MY_PART_TBL TABLE PARTITION P_680 8192 1
MY_PART_TBL TABLE PARTITION P_700 8192 1
MY_PART_TBL TABLE PARTITION P_MAX 8192 1
MY_PART_TBL_NDX INDEX PARTITION P_100 28672 43
MY_PART_TBL_NDX INDEX PARTITION P_200 33792 48
MY_PART_TBL_NDX INDEX PARTITION P_300 64 1

SEGMENT_NAME SEGMENT_TYPE PARTITION_NA BYTES/1024 EXTENTS
-------------------- ------------------ ------------ ---------- ----------
MY_PART_TBL_NDX INDEX PARTITION P_400 64 1
MY_PART_TBL_NDX INDEX PARTITION P_600 64 1
MY_PART_TBL_NDX INDEX PARTITION P_680 64 1
MY_PART_TBL_NDX INDEX PARTITION P_700 64 1
MY_PART_TBL_NDX INDEX PARTITION P_MAX 64 1

16 rows selected.

SQL>


Splitting the segmentless Partition P_500 into P_450 and P_500 did *not* result in new Segments.

This has implications for your SPLIT Partition strategy.  If you need to do a recursive split to create, say, 90 1-day Partitions and you start with a Partition that has a segment (even if empty), you get 90 new segments as well.  Thus, the table would suddenly "grow" by 720MB without a single row having been inserted on the day you create these 90 Partitions.  You may get some questions from IT Operations / Support about the sudden "growth" in 1 day.
On the other hand, starting with a segmentless Partition, you get 90 new segmentless Partitions.  Their segments will be created only when they are populated.
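
As a sketch of such a recursive split (table name, partition-naming convention and date bounds are all hypothetical; this assumes a date-partitioned table whose P_MAX is still segmentless, so each new Partition is also created segmentless and no space is allocated until rows arrive) :

begin
  for i in 1..90 loop
    execute immediate
      'alter table my_date_part_tbl split partition p_max at (date '''
      || to_char(date '2016-05-01' + i, 'YYYY-MM-DD')
      || ''') into (partition p_'
      || to_char(date '2016-05-01' + i - 1, 'YYYYMMDD')
      || ', partition p_max)';
  end loop;
end;
/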
.
.

.
Categories: DBA Blogs

Fishbowl Hackathon 2016 Summary – Oracle WebCenter Innovations with Slack, Google Vision, and Email

This post comes from Fishbowl’s president, Tim Gruidl. One of Tim’s biggest passions is technology innovation, and not only does he encourage others to innovate, he participates and helps drive this where he can. Tim likes to say “we innovate to help customers dominate”. Tim summarizes Fishbowl’s Hackathon event, held last Friday and Saturday at Fishbowl Solutions, in the post below.

What an event! I want to start by thanking Andy Weaver and John Sim (Oracle ACE)! Without their passion, drive, leadership and innovation, this event would not be possible.

What a great event to learn, build the team, interact with others and compete. We also created some innovative solutions that I’m sure at some point will be options to help our customers innovate and extend their WebCenter investments. This year, we had 3 teams that designed and coded the following solutions:

  • InSight Image Processing – Greg Bollom and Kim Negaard

They leveraged the Google Vision API to enable the submission of images to Oracle WebCenter and then leveraged Google Vision to pull metadata back and populate fields within the system. They also added the ability to pull in GPS coordinates from photos (taken from cameras, etc.) and have that metadata and EXIF data populate WebCenter Content.

Fishbowl Product Manager, Kim Negaard, discusses the Google Vision API integration with WebCenter

  • Slack Integration with WebCenter Portal and Content – Andy Weaver, Dan Haugen, Jason Lamon and Jayme Smith

Team collaboration is a key driver for many of our portals, and Slack is one of the most popular collaboration tools. In fact, it is currently valued at $3.6 billion, and there seems to be a rapidly growing market for what they do. The team did some crazy innovation and integration to link Slack to both WebCenter Portal and WebCenter Content. I think the technical learning and sophistication of what they did was probably the most involved and required the most pre-work and effort at the event, and it was so cool to see it actually working.

Team Slack integration presentation.

  • Oracle WebCenter Email Notes – John Sim (Oracle ACE), Lauren Beatty and me

Valuable corporate content is stored in email, and more value can be obtained from those emails if the content can be tagged and context added in a content management system – Oracle WebCenter. John and Lauren did an awesome job of taking a forwarded email, checking it into WebCenter Content to a workspace, and using related content to build relationships. You can then view the relationships in a graphical way for context. They also created a mobile app to allow you to tag the content on the go and release it for the value of the org.

That's me explaining the email integration with Oracle WebCenter Content.

Participants voted on the competing solutions, and it ended up being a tie between the Google Insight team and the Email Notes team, but all the solutions truly showed some innovation, sophistication, and completeness of vision. A key aspect of the event for me was how it supported all of Fishbowl’s company values:

Customer First – the solutions we build were based on real-life scenarios our customers have discussed, so this will help us be a better partner for them.

Teamwork – the groups not only worked within their teams, but there was cross team collaboration – Andy Weaver helped John Sim solve an issue he was having, for example.

Intellectual Agility – this goes without saying.

Ambition – people worked late and on the weekend – to learn more, work with the team and have fun.

Continuous Learning – we learned a lot about Slack, cloud, email, etc.

Overall, the annual Hackathon is a unique event that differentiates Fishbowl on so many fronts. From the team building, to the innovation keeping us ahead of the technology curve, to all the learnings – Hackathons truly are a great example of what Fishbowl is all about.

Thanks to all that participated, and remember, let’s continue to innovate so our customers can dominate.

Tim

The post Fishbowl Hackathon 2016 Summary – Oracle WebCenter Innovations with Slack, Google Vision, and Email appeared first on Fishbowl Solutions' C4 Blog.

Categories: Fusion Middleware, Other
