Feed aggregator

New Content on Our Oracle.com Page

Oracle AppsLab - Mon, 2016-03-21 16:18

Back in September, our little team got a big boost when we launched official content under the official Oracle.com banner.

I’ve been doing this job for various organizations at Oracle for nine years now, and we’ve always existed on the fringe. So, having our own home for content within the Oracle.com world is a major deal, further underlining Oracle’s increased investment in and emphasis on innovation.

Today, I’m excited to launch new content in that space, which, for the record, is here:

www.oracle.com/webfolder/ux/applications/successStories/emergingTech.html

We have a friendly, short URL too:

tinyurl.com/appslab

The new content focuses on the methodologies we use for research, design and development. So you can read about why we investigate emerging technologies and the strategy we employ, and then find out how we go about executing that strategy, which can be difficult for emerging technologies.

Sometimes, there are no users yet, making standard research tactics a challenge. Equally challenging is designing an experience from scratch for those non-existent users. And finally, building something quickly requires agility, lots of iterations and practice.

All-in-all, I’m very happy with the content, and I hope you find it interesting.

Not randomly, here are pictures of Noel (@noelportugal) showing the Smart Office in Australia last month.


The IoT Smart Office just happens to be the first project we undertook as an expanded team in late 2014, and we’re all very pleased with the results of our blended research, design and development team.

I hope you agree.

Big thanks to the writers, Ben, John, Julia, Mark (@mvilroxk) and Thao (@thaobnguyen) and to Kathy (@klbmiedema) and Sarahi (@sarahimireles) for editing and posting the content.

In the coming months, we’ll be adding more content to that space, so stay tuned.

Apache Cassandra 2.1 Incremental Repair

Pythian Group - Mon, 2016-03-21 15:05

The “incremental repair” feature has been around since Cassandra 2.1. Conceptually the idea behind incremental repair is straightforward, but in practice it can get complicated. The official DataStax documentation describes the procedure for migrating to incremental repair, but in my opinion it doesn’t give the full picture. This post aims to fill that gap by summarizing and consolidating the information on Cassandra incremental repair.

Note: this post assumes the reader has a basic understanding of Apache Cassandra, especially the “repair” concept within Cassandra.

 

1. Introduction

The idea of incremental repair is to mark SSTables that are already repaired with a flag (a timestamp called repairedAt indicating when they were repaired), so that when the next repair operation begins, only previously unrepaired SSTables are scanned for repair. The goal of “incremental repair” is two-fold:

1) It reduces the big expense involved in a repair operation, which otherwise has to calculate the “merkle tree” on all SSTables of a node;

2) It also makes repair network-efficient because only rows that are marked as “inconsistent” will be sent across the network.

2. Impact on Compaction

“Incremental repair” relies on an operation called anticompaction to fulfill its purpose. Basically, anticompaction means splitting an SSTable into two: one contains repaired data and the other contains unrepaired data. With the separation of the two sets of SSTables, the compaction strategy used by Cassandra also needs to be adjusted accordingly, because we cannot merge/compact a repaired SSTable with an unrepaired SSTable; otherwise, we would lose the repaired state.

Please note that when an SSTable is fully covered by a repaired range, no anticompaction will occur; Cassandra will just rewrite the repairedAt field in the SSTable metadata.

The SizeTiered compaction strategy takes a simple approach: size-tiered compaction is executed independently on the two sets of SSTables (repaired and unrepaired) that result from the incremental repair anticompaction operation.

For the Leveled compaction strategy, leveled compaction is executed as usual on the repaired set of SSTables, but SizeTiered compaction is executed on the unrepaired set.

For the DateTiered compaction strategy, “incremental repair” should NOT be used.

3. Migrating to Incremental Repair

By default, “nodetool repair” in Cassandra 2.1 does a full, sequential repair. We can use “nodetool repair” with the “-inc” option to enable incremental repair.
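For example, a minimal invocation might look like the following (a sketch only; the keyspace name my_keyspace is a placeholder, and “-par” is added because, as noted in section 4, sequential and incremental repair do not work together in 2.1):

nodetool repair -par -inc my_keyspace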

For the Leveled compaction strategy, incremental repair effectively changes the compaction strategy to SizeTiered for unrepaired SSTables. If a nodetool repair is executed for the first time on Leveled compaction, Cassandra will do SizeTiered compaction on all SSTables, because until the first incremental repair is done it doesn’t know their repaired state. This is a very expensive operation, so it is recommended to migrate to incremental repair one node at a time, following this procedure (a command-level sketch follows the list):

  1. Disable compaction on the node using nodetool disableautocompaction
  2. Run the default full, sequential repair.
  3. Stop the node.
  4. Use the tool sstablerepairedset to mark, as repaired, all the SSTables that were created before you disabled compaction.
  5. Restart Cassandra.
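On each node, the migration might look roughly like this (a sketch only; the data path and file list are placeholders, and the sstablerepairedset invocation follows the syntax shown in section 3.1 below):

nodetool disableautocompaction
nodetool repair                      # default full, sequential repair
# stop Cassandra on this node, then mark the pre-existing SSTables as repaired
ls /var/lib/cassandra/data/my_keyspace/my_table/*Data.db > sstables.txt
sstablerepairedset --is-repaired -f sstables.txt
# restart Cassandra on this node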
3.1 Tools for managing SSTable repaired/unrepaired state

Cassandra offers two utilities for SSTable repaired/unrepaired state management:

  • sstablemetadata is used to check repaired/unrepaired state of an SSTable. The syntax is as below:

             sstablemetadata <sstable filenames>

  • sstablerepairedset is used to manually mark if an SSTable is repaired or unrepaired. The syntax is as below. Note that this tool has to be used when Cassandra is stopped.

             sstablerepairedset [--is-repaired | --is-unrepaired] [-f <sstable-list> | <sstables>]

Please note that with the sstablerepairedset utility, you can also stop incremental repair on Leveled compaction and restore the data to be leveled again with the “--is-unrepaired” option. Again, the node needs to be stopped first.
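As a quick illustration of both tools (a sketch; the SSTable path and list file are hypothetical, and Cassandra must be stopped before running sstablerepairedset):

sstablemetadata /var/lib/cassandra/data/my_keyspace/my_table/my_keyspace-my_table-ka-1-Data.db
# revert a set of SSTables to unrepaired, e.g. to abandon incremental repair on Leveled compaction
sstablerepairedset --is-unrepaired -f sstables.txt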

4. Other Considerations with Incremental Repair

There are some other things to consider when using incremental repair.

  • For Leveled compaction, once incremental repair is used, it should be run continuously; otherwise, only SizeTiered compaction will be executed. It is recommended to run incremental repair daily and to run full repairs weekly to monthly.
  • Recovering from missing data or corrupted SSTables requires a non-incremental full repair.
  • The “nodetool repair” --local option should only be used with full repair, not with incremental repair.
  • In C* 2.1, sequential repair and incremental repair do NOT work together.
  • With an SSTable’s repaired state being tracked via its metadata, some Cassandra tools can impact that state:
    1. Bulk loading will make loaded SSTables unrepaired, even if they were repaired in a different cluster.
    2. If scrubbing causes dropped rows, new SSTables will be marked as unrepaired. Otherwise, SSTables will keep their original repaired state.
Categories: DBA Blogs

accessing cloud storage

Pat Shuff - Mon, 2016-03-21 15:02
Oracle cloud storage is not the first product that performs basic block storage in the cloud, and the name is a little confusing as well. When you think of cloud storage, the first thing you think of is Dropbox, Box.com, Google Docs, or some other file storage service. Oracle Cloud Storage is a different kind of storage: it is more like Amazon S3 and less like file storage, in that it provides the storage foundation for other services like compute, backup, or database. If you are looking for file storage you need to look at the Document Cloud Service, which is more tied to processes and less tied to raw cloud storage. In this blog we will look at different ways of attaching to block storage in the cloud and at the different ways of creating and consuming the service.

To start off with, there are two ways to consume storage in the Oracle Cloud: metered and un-metered. Metered is charged on a per-hourly/monthly basis and you pay for what you consume. If you plan on starting with 1 TB and growing to 120 TB over a 12 month period, you will pay on average for 60 TB over the year. If you consume this same service as an un-metered service you will pay for 120 TB of storage for 12 months, since you eventually cross the 1 TB boundary some time during the year.

With the metered services you also pay for the data that you pull back across the internet to your computer or data center, but not for the initial load of data to the Oracle Cloud. This differs from Amazon and other cloud services that charge both for upload and download of data. If you consume the resources in the Oracle Cloud from other cloud services like compute or database in the same data center, there is no charge for reading the data from cloud storage. For example, if I use a backup software package to copy operating system or database backups to Oracle Cloud Storage and restore these into compute servers in the Oracle Cloud, there is no charge for restoring the data to the compute or database servers.

To calculate the cost of cloud storage from Oracle, look at the pricing information on the cloud web page for metered and for un-metered pricing.

If we do a quick calculation of the pricing for our example above, where we start with 1 TB and grow to 120 TB over a year, we can see the price difference between the two solutions, but also note how much reading the data back will eventually cost. This is something that Amazon hides when you purchase their services, because you get charged for both the upload and the download. Looking at this example we see that 120 TB of storage will cost us $43K per year with un-metered services, but $36K per year for metered services, assuming a 20% read-back of the data once it is uploaded. If the read-back number doubles, so does that cost, and the price jumps to $50K. If we compare this to a $3K-$4K/TB cost of on-site storage, we are looking at $360K-$480K plus $40K-$50K in annual maintenance. It turns out it is significantly cheaper to grow storage into the cloud rather than purchasing a rack of disks and running them in your own data center.

The second way to consume storage cloud services is by using tape in the cloud rather than spinning disk in the cloud. Spinning disk on average costs $30/TB/month whereas tape averages $1/TB/month. Tape is not offered as an un-metered service, so you do need to look at how much you read back, because there is a charge of $5/TB to read the data back. This compares to $7/TB/month with Amazon, plus the $5/TB upload and download charges.
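As a rough back-of-the-envelope check using the per-TB rates quoted above (ignoring the month-by-month growth, and re-using the 20% read-back assumption from the earlier example):

120 TB x $30/TB/month x 12 months = $43,200/year   (spinning disk, un-metered)
120 TB x  $1/TB/month x 12 months =  $1,440/year   (tape storage)
 24 TB read back x $5/TB          =    $120/year   (tape read-back charge)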

Pythian at Collaborate 16

Pythian Group - Mon, 2016-03-21 14:27

Collaborate is a conference for Oracle power users and IT leaders to discuss and find solutions and strategies based on Oracle technologies. This many Oracle experts in one place only happens once per year, and Pythian is excited to be attending. If you are attending this year, make sure to register for some of the sessions featuring Pythian’s speakers, listed below.

Collaborate 16 is on April 10-14, 2016 at the Mandalay Bay Resort and Casino in Las Vegas, Nevada, US.

 

Pythian Collaborate 16 Speaker List:

 

Michael Abbey | Consulting Manager | Oracle ACE

Communications – the Good, the Bad, and the Best

Tues April 12 | 9:15 a.m. – 10:15 a.m. | North Convention, Room South Pacific D

Traditional DB to PDB: The Options

Tues April 12 | 2:15 p.m. – 3:15 p.m. | Room Jasmine A

Documentation – A Love/Hate Relationship (For Now)

Wed April 13 | 8:00 a.m. – 9:00 a.m. | Room Palm A

 

Nelson Caleroa | Database Consultant | Oracle ACE

Exadata Maintenance Tasks 101

Tues April 12 | 10:45 a.m. – 11:45 am | Room Palm C

Evolution of Performance Management: Oracle 12c Adaptive Optimization

Tues April 12 | 3:30 p.m. – 4:30 p.m | Room Jasmine A

 

Subhajit Das Chaudhuri | Team Manager

Deep Dive Into SSL Implementation Scenarios for Oracle Application E-Business Suite

Wed April 13 | 8:00 a.m. – 9:00 a.m. | Room Breakers E

 

Alex Gorbachev | CTO | Oracle ACE Director

Oaktable World: TED Talks

Wed April 13 | 12:00 p.m. – 12:30 p.m. | Room Mandalay Bay Ballroom

Oaktable World: Back of a Napkin Guide to Oracle Database in the Cloud

Wed April 13 | 4:15 p.m. – 5:15 p.m. | Room Mandalay Bay Ballroom

 

Gleb Otochkin | Principal Consultant

Two towers or story about data migration. Story about moving data and upgrading databases.

Mon April 11 | 4:30 p.m. – 5:30 p.m. | Room Jasmine A

 

Simon Pane | ATCG Principal Consultant | Oracle Certified Expert

Oracle Database Security: Top 10 Things You Could & Should Be Doing Differently

Mon April 11 | 2 p.m. – 3 p.m. | Room Palm A

Time to get Scheduling: Modernizing your DBA scripts with the Oracle Scheduler (goodbye CRON)

Tues April 12 | 10:45 a.m. – 11:45 a.m. | Room Palm A

 

Roopesh Ramklass | Principal Consultant

Oracle Certification Master Exam Prep Workshop

Sun April 10 | 9:00 a.m. – 3:00 p.m. | Room Jasmine C

Fast Track Your Oracle Database 12c Certification

Wed April 13 | 8:00 a.m. – 9:00 a.m. | Room Jasmine A

 

Categories: DBA Blogs

Next Five New Features in Oracle Database 12c for DBAs : Part II

Online Apps DBA - Mon, 2016-03-21 14:20

This post is part of a series on Oracle Database 12c new features; check out our previous post, Five New Features in Oracle Database 12c for DBAs : Part I, here. Oracle 12c means different things to different people. It all depends on which areas you are looking at, as there are improvements in many areas. Summarized […]

The post Next Five New Features in Oracle Database 12c for DBAs : Part II appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Oracle BPM 11g: Mapping Empty Elements

Jan Kettenis - Mon, 2016-03-21 13:39
In this blog article I explain what happens with mappings for which the source is empty and you map it to an optional or mandatory element. The scenarios described in this article are based on SOA / BPEL 11g. In a future article I will describe what happens when you do the same in SOA 12c (where the behavior is not the same).

Let's assume we have a data structure like this:


And let's assume we have a BPEL that takes a message of the above type as input, and - using a couple of different scenarios - maps it to another element of the same type as output.

The table below shows what happens when you map empty data to a mandatory or optional element (i.e. minOccurs="0"), taking payload validation into consideration, as well as making use of the "ignoreMissingFromData" and "insertMissingToData" features of XPath mappings (only available in BPEL and not in BPM). In the table below, "null" means that the element is not there at all, while "empty" means that the element is there but has no value. As you can see from the XSD, an empty value is nowhere allowed (otherwise the element should have an attribute xsi:nil with value "true").



As you can see, disabling payload validation will lead to corrupt data. But even with payload validation on you may get a result that might not be valid in the context of usage, like an empty mandatory or optional element. Unless empty is a valid value, you should make sure that optional elements are not there when they have no value.

To set "ignoreMissingFromData" and "insertMissingToData", right-mouse-click the mapping and toggle the values:


When using the "ignoreMissingFromData" feature with a null optional element mapped to itself, the result is as on the left below. When also the "insertMissingToData" feature is used, the result is as on the right:


Mind that the "insertMissingToData" feature also leads to namespace prefixes for each element.

NEW: Sales Collateral on Social Customer Service & the Impact on CX

Linda Fishman Hoyle - Mon, 2016-03-21 12:58

A Guest Post by Amy Sorrells (pictured left), Oracle Product Management

Traditional marketing is trying to get people to notice and engage with your brand. Customer service is engaging with someone who is already invested in your brand. According to various sources, it is anywhere from five to 20 times more expensive to attract a new customer than to keep an existing one satisfied.

Today, social customer service is an absolute expectation from consumers and is playing a major role in the customer experience. The impact social service has on business is more than just resolving the issue—it’s driving brand awareness, loyalty, and business value.

Oracle Social Cloud certainly understands the importance of social service, which we underscore in our newly released paper, “Social’s Shift to Service: Why Customer Service Engagement is the New Marketing.” The paper states the business benefits of social customer service with insights from customers like Cummins Inc., General Motors, Southwest Airlines, Vodafone, Mothercare and more, as well as insights from leading analysts.

The most important takeaway: Social service is more than just resolving issues—it is driving brand reputation, loyalty and real business value:

“Customer service is not just about resolving issues; it’s about inspiring customer loyalty and engagement, and uncovering new insights. The hidden opportunity here is to identify problems or defects ahead of time—find insights that allow us to take ‘customer service’ to an entirely new level—learn, engage, empower, inspire and delight.” – Flavio Mello, Cummins Inc., Digital Communications Director

“The benefit for us was not only selling the vehicle, but also getting two million views of BatDad’s test drive and his impressions of the sales experience.” – Rebecca Harris, General Motors, Head of Social Center of Expertise, on a successful social engagement with a consumer named @BatDad

Better social service leads to overall better customer service and increased profits. A recent global American Express report shows 74% of consumers have spent more due to good customer service. A recent McKinsey study stated that companies that improve their customer service can see a 30-50% improvement in key measurements including likelihood to recommend and make repeat purchases.

“For word of mouth and referrals, social is a really important way that customers use to filter down to what’s important, so social is absolutely critical to us… We can say a product is great, but if a real customer who has bought our stuff says it’s great, there’s a lot more sincerity to that.” – Claire Dormer, Mothercare, Head of Content & Community

There is enormous untapped potential around customer service engagements—powered by social listening—from uncovering new customer and industry insights to creating brand advocates.

“Social media is the world’s biggest focus group… social is opening up a new area for communication so we’re getting lots of comments and feedback from customers, which means we can very quickly feed those insights into key areas of the business… and better understand our customers and their needs.” – Richard Bassinder, YBS, Social Media Manager

Social media listening, engagements and insights are critical to most every aspect of business today. But the role customer service plays with social will increase drastically as mobile-social usage continues to soar and consumers’ expectations rise. Social’s shift to service is a sign of our times and a customer experience priority for all businesses.

Tools vs Products

Floyd Teter - Mon, 2016-03-21 12:17
I have a garage full of neat tools.  Drill press, miter saw, band saw, table saw, power sander, Dremel, several Milwaukee power drills and portable hand saws, gauges, clamps, vise grips...yeah, the works. But I've learned something over the years; other than other people with a shared interest in nifty tools, nobody cares about the tools I have.  What they care about is the speed, quality and cost involved in making things with those tools.  I can own the niftiest hammer on the face of the planet, but few people will care; they care about the house I build, regardless of the coolness of the hammer.

This concept is not limited to traditional shop and construction tools.  Pull out your smartphone.  Take a look at the apps.  Nobody cares about what tools were used to build the app if it misses the mark on quality, speed, ease of use, or cost.

The same holds true for SaaS applications.  Customers don't care about the underlying platform...nor should they, when the idea is to make all that complexity transparent to them.  Customers care about speed, ease of implementation and use, quality (including reliability, depth of features and security), how well the application will perform their business process, and the information the application will provide about those executed transactions.

So, to put it bluntly, SaaS is not about the platform nor the development tools.  It's about ease of use, quality, and cost.  Let's stop talking about the technology and start talking about the things that matter.

REST API Now Supports Metadata in DOCS

WebCenter Team - Mon, 2016-03-21 11:21

Author: Victor Owuor, Senior Director, Oracle

It is our goal that Oracle Documents Cloud Service (DOCS) should be a platform for easily building cloud applications.   To make that possible, we provide a framework for embedding the DOCS user experience within your application.   We also offer a REST API for making calls to DOCS, allowing you to surface its capability within your user interface.  We are proud to announce that the REST API now supports metadata.

We will describe metadata with reference to an application for managing assets for a real estate listing site.  The application will manage the relevant assets within DOCS and surface those assets in a UI that it renders separately.  As you would imagine, the application will need to store various images of the properties listed in the application.  For example, there may be a front picture and pictures of various rooms.  Additionally, there is a need to track additional descriptive information about the assets.  It is that additional descriptive information that we refer to as metadata.

The application might need to track an address for each property.  The address comprises a collection of:
  • A street address
  • A city
  • A state
  • A country
Each of those is referred to as a metadata field in DOCS.  The related fields are grouped in a metadata collection.  The application will define multiple collections, and each folder or document can be associated with several collections.  For example, in addition to the address collection, properties on sale might also be associated with a forsale collection, including the following fields:
  • A sale price
  • Property taxes
  • Previous sale prices
In contrast, a rental property would instead be associated with a for-rent collection, including the following fields:
  • Rental price
  • Lease term
DOCS allows an administrator to easily define metadata collections and the fields in them.  In the example above, an administrator would define the address collection as follows:
POST …/metadata/Address?fields=Street,City,State

The administrator can trivially alter the address collection to include a country as follows:

PUT …/metadata/Address?addFields=Country

Once a collection and its fields are defined, any user can assign it to a folder or a document.  The calls for doing so are as follows:

POST …/folders/{folder id}/metadata/Address
POST …/files/{file id}/metadata/Address

 Of course, only users with contributor access to the folder or document may assign a collection.

Having assigned the collection, users may set values for the various fields in the collection as follows: 

POST …/folders/{folder id}/metadata?collection=Address&Zip=55347&City=Minneapolis
POST …/files/{file id}/metadata?collection=Address&Zip=55347&City=Minneapolis
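Pulled together, the calls introduced so far can be exercised with any HTTP client. Here is a minimal curl sketch; the base URL, folder id and credentials below are hypothetical placeholders, not actual service values:

# BASE and AUTH are placeholders for your DOCS REST endpoint and credentials
BASE="https://docs.example.com/api"
AUTH="someuser:somepassword"

# define the Address collection and its fields (administrator)
curl -u "$AUTH" -X POST "$BASE/metadata/Address?fields=Street,City,State"

# assign the collection to a folder, then set field values on it
curl -u "$AUTH" -X POST "$BASE/folders/FOLDER_ID/metadata/Address"
curl -u "$AUTH" -X POST "$BASE/folders/FOLDER_ID/metadata?collection=Address&City=Minneapolis&State=MN"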

Collections and values assigned to a folder are inherited by both its sub-folders and any documents within it.  The inherited value can be overridden by assigning a specific value for the metadata field to an item.

 All of the metadata properties would be of little value if you could not retrieve metadata values previously assigned to a document.   We allow you to do that in a simple call that is formatted as follows:

GET …/folders/{folder id}/metadata
GET …/files/{file id}/metadata

That call returns the metadata values in a JSON object that is contained within the HTTP response.  A sample response is shown below:

Additional information about the metadata feature is available here.


Oracle Support Accreditation

Joshua Solomin - Mon, 2016-03-21 11:05
Be More Productive with My Oracle Support

Looking for best practices to take your support experience to the next level?

If you frequently use My Oracle Support (MOS), My Oracle Support Communities, or Cloud Support portal for knowledge search and managing service requests, take advantage of the Oracle Support Accreditation learning program.

Over 19,000 users have already completed an accreditation learning path and exam to build a personalized support toolkit for their role. All accreditation learning resources and exam material are already included in your support contract.

Oracle Support Accreditation includes:

  • 14 learning paths to increase your expertise and efficiency in completing support activities.
  • Level 1 accreditation highlights core features of the portal applications and provides recommendations to increase productivity.
  • Level 2 focuses on individual software products to demonstrate best practices for your specific applications.

Learn key concepts about diagnostics, patching, and finding product information more easily and effectively, with tips for implementing them into your daily support activities based on feedback from Oracle product experts. When finished, your personalized resource toolkit will help increase your productivity and streamline your support activities.

Is today your day to become an Oracle Support Accredited User?

Learn more: Oracle Support Accreditation Series Index, Document 1583898.1
Join the conversation in the My Oracle Support Community: Oracle Support Accreditation.

Enterprise Manager 13c And Database Backup Cloud Service

Fuad Arshad - Mon, 2016-03-21 10:35

The Oracle Database Backup Cloud Service allows for backup of an Oracle Database to the Oracle Cloud using RMAN. Enterprise Manager 13c provides a very easy way to configure the Oracle Database Backup Cloud Service. This post will walk you through the setup of the Oracle Database Backup Cloud Service as well as running backups from EM.


There is a new menu item to configure the Database Backup Cloud Service (DBCS) in the Backup & Recovery drop-down.


This will show you how to set up the Database Backup Cloud Service. If nothing was configured before, you will see the screen below.

Once you click on Configure Database Backup Cloud Service, you will be asked for the service (storage) and the identity domain that you want the backups to go to. This identity domain comes as part of DBCS or as part of DBaaS that can be purchased from Oracle Cloud.


Once the settings are saved, a popup will confirm that they have been saved.


After saving the settings, submit the configuration job. This will download the Oracle Backup Module to the hosts as well as configure the media management settings. The job will provide details and confirm all configuration is complete, and it will configure this on all nodes of a RAC, which can save a lot of time.

We have now completed the setup and can validate it by looking at the Configure Cloud Backup setup, which also has an option to test a cloud backup.

Let’s ensure the settings are there by checking Backup Settings: the media management settings will show the location of the library, environment and wallet. The Database Backup Cloud Service requires that all backups sent to it be encrypted.


You can also validate this by connecting to RMAN on the command line and running “SHOW ALL”.
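For example, from the database host (a sketch; the output listing is not reproduced here):

$ rman target /
RMAN> SHOW ALL;

In the resulting configuration listing you would typically look for the SBT_TAPE channel configuration that points at the downloaded cloud backup module library and the wallet location referenced in the media management settings above.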

As you can see, we have confirmed that the media management setup is complete, as well as run a job to download the Cloud Backup Module and configure it.
Now, as a final step, we will configure a backup and run an RMAN backup to the Cloud. In the Backup and Recovery menu, schedule a backup. Fill out the pertinent settings and make sure you encrypt via a password, a wallet, or both. The backup that I scheduled was encrypted using a password.

On the second page, select the destination, which is the Cloud in our case, and schedule it.


Validate that the settings are right and execute the job. You can monitor the job by clicking View Job. The new job interface in EM13c is really nice and allows you to see a graphical representation of execution time as well as a log of what is happening, side by side, like below.

Once the backup is complete, you can see it not only through EM but also using RMAN on the command line.

There are a couple of things that I didn’t show during the process: parallelism during backups is important, as is compression.
Enterprise Manager 13c makes the already simple process of setting up backups to the Database Backup Cloud Service even easier.

Monitor Oracle with Zabbix

Gerger Consulting - Mon, 2016-03-21 06:09
We've got a webinar tomorrow. Attend our webinar and learn how you can monitor your Oracle Database instances with the open source monitoring tool Zabbix. Sign up at this link. More than 125 people have already signed up!




Categories: Development

Oracle AppsUnlimited - Building your Own Machine on IaaS with EBS

Senthil Rajendran - Mon, 2016-03-21 04:11
Oracle Public Cloud gives multiple options to create a base image. There are options available from the marketplace, which I will cover later. Here I will cover building your own machine image on IaaS with your own EBS installation. If you are a customer running 12.1.3 this should interest you. This procedure can be used to build multiple development single-node EBS images of your choice.

I will make the document generic to 12.2 and 12.1.3, so let’s get started. Here are the high-level steps:

  • Build a local Oracle Linux image supported on Cloud , follow doc here and stop after rebooting the linux image.
  • Install EBS 12.2 or 12.1.3 with all the latest PSU,CPU, AD and TXK 
  • Run Pre-Clone on the Database Tier followed by the Application Tier (a command sketch follows this list)
  • Shutdown the Services
  • Follow doc here and complete the rest of the preparatory task for OPC
  • Upload the image
  • Spin-off a machine with the uploaded Image
  • Configure the Target system following Cloning documentation.
  • Finish post installation task if any
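For the pre-clone step, the preparation is typically run as follows (a sketch only; the exact directories depend on your release and context, so treat the variables below as placeholders and refer to Doc ID 1383621.1 for the authoritative steps):

# Database tier, as the database OS user
cd $ORACLE_HOME/appsutil/scripts/$CONTEXT_NAME
perl adpreclone.pl dbTier

# Application tier, as the applications OS user
cd $ADMIN_SCRIPTS_HOME      # 12.2; for 12.1.3 this is under $INST_TOP/admin/scripts
perl adpreclone.pl appsTier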
With the uploaded image, you can spin off as many EBS instances as needed.

Reference Note for Cloning : Cloning Oracle E-Business Suite Release 12.2 with Rapid Clone (Doc ID 1383621.1)

Though this procedure requires a lot of work on the customer end, I would recommend trying out the Marketplace images.


Index Speculation

Jonathan Lewis - Sun, 2016-03-20 17:32

There’s a current question on the OTN database forum as follows (with a little cosmetic adjustment):

I have a request for 3 indices as shown below. Does the 1st index suffice for 2 and 3?  Do I need all 3?

  • create index idx_atm_em_eff_ver_current_del on atm_xxx_salary (employee_key, effective_dt, salary_version_number, is_current, is_deleted);
  • create index idx_atm_em_ver_current on atm_xxx_salary (employee_key, salary_version_number, is_current);
  • create index idx_atm_sal_current_del on atm_xxx_salary (employee_key, is_deleted, is_current);

In the absence of any information about the data and the application the correct answer is: “How could we possibly tell?”

On the other hand there’s plenty of scope for intelligent speculation, and that’s an important skill to practise because when we’re faced with a large number of options and very little information we need to be able to make best-guess choices about which ones are most likely to be worth the effort of pursuing in detail. So if we have to make some guesses about this table and the set of indexes shown, are there any reasonable guesses we might make.

I’ve highlighted the table name and leading column for the first index. The table seems to be about salary and the leading column seems to identify an employee. In fact we see that all three indexes start with the employee_key, and that may be what prompted the original question. Previous (“real-world”) experience tells me that employees are, generally, paid a salary and that salaries are likely to change (usually upwards) over time, and I note that another column in one of these indexes is effective_dt (date ?), and a third column (appearing in two of the indexes) is is_current.

This looks like a table of employee salaries recording their current and historic salaries, engineered with a little redundant information to make it easy to find the current salary. (Perhaps there’s a view of current_salary defined as is_current = ‘Y’ and is_deleted = ‘N’.)

It’s harder to speculate with any confidence on the columns is_deleted and salary_version_number; why would a salary row be marked as deleted – is this something that happens when an employee leaves or an employee is deleted (or, following the pattern, has their is_deleted flag set to ‘Y’); why does a salary have a version number – does the table contain its own audit trail of errors and corrections? Perhaps a correction is effected by marking the incorrect entry as deleted and incrementing its version number to generate the version number for the correct entry. Possibly the notional primary key of the table is (employee_key, effective_dt, is_deleted, salary_version_number).

The level of complexity surrounding these two columns could send further speculation in completely the wrong direction, but let’s follow the line that these two columns see very little action – let’s assume that most of the data is not “deleted” and virtually none of the data needs “versioning”. How does this assumption help us with the original question?

The largest employer in the world is the American Department of Defence with 3.2 million employees (followed by the People’s Liberation Army of China with only 2.3 million employees), so an “employees” table is not really likely to be very big. How often does an employee have a salary review and change ? Would once per year be a reasonable figure to pluck from the air ? How many employees stay at the same company for 40 years – how many rows per employee would you end up with, and how scattered would they be through the salary table ?

Under any reasonable estimate it seems likely that if you created the first index (5 columns) then all the salary rows for a given employee are likely to be contained in a single leaf block, so if all the searches were driven by employee then that single index would allow exactly the correct set of table rows to be identified from one index leaf block access plus a little extra CPU.

Of course it’s possible that, with different circumstances, the size and clustering factor of the first index would be so much greater than the size and clustering factors of the other two that a query that would use one of the smaller indexes won’t use the larger index – but in this case the most significant contributor to the optimizer’s cost is likely to be the clustering_factor, and given our assumption of the slow appearance over time of the new salaries for an employee the clustering factor of all three indexes is likely to be the same (probably very similar to the number of rows in the salary table).

Having got this far, it’s worth considering whether or not the salary table should actually be an index-organized table – it looks like an obvious candidate; how many other columns are there likely to be in a salary table ? Of course it’s worth thinking about other queries that might access a salary table without reference to the employees table at that point, perhaps a secondary index on (is_current, employee_key) might be appropriate, but in the absence of any other information we’ve reached the point where speculation needs to be backed up by some hard facts.

Bottom Line:

I wouldn’t guarantee that the first index makes the other two indexes redundant but it seems highly likely that it should and it’s probably worth spending some time looking at the requirements and numbers a little more closely – especially if you’re the US DoD or the Chinese People’s Liberation Army.

 

 

 

 


Step by Step installation oracle 12c database on Linux 6 (centos)

Learn DB Concepts with me... - Sun, 2016-03-20 16:24
Assumptions :

  • You have a some flavor of Linux operating system installed (I have used centos 6 in this example).
  • If you can't afford a separate machine, you can use VirtualBox or VMware software to virtualize on your desktop or laptop.
  • This assumes that you have downloaded the Oracle 12c software onto the Linux machine. If not, you can download it from this link: Software-Download
  • You have full/required privileges on your Linux host.

Oracle Installation Prerequisites


In order to perform the installation of the Oracle 12c software on a Linux box you need to complete some prerequisites, which can be done automatically or through manual updates. Please follow the instructions below.

Automatic Setup

If you plan to use the "oracle-rdbms-server-12cR1-preinstall" package to perform all your prerequisite setup, issue the following command.

# yum install oracle-rdbms-server-12cR1-preinstall -y


It is also a good idea to do an update.


# yum update






************* ***********

MANUAL SETUP

************* ***********


If you have not used the "oracle-rdbms-server-12cR1-preinstall" package to perform all prerequisites, you will need to manually perform the following setup tasks.


Add or amend the following lines in the "/etc/sysctl.conf" file.

fs.file-max = 6815744
kernel.sem = 250 32000 100 128
kernel.shmmni = 4096
kernel.shmall = 1073741824
kernel.shmmax = 4398046511104
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500
Run the following command to change the current kernel parameters.

/sbin/sysctl -p

Add the following lines to the "/etc/security/limits.conf" file.

oracle   soft   nofile    1024
oracle   hard   nofile    65536
oracle   soft   nproc    16384
oracle   hard   nproc    16384
oracle   soft   stack    10240
oracle   hard   stack    32768

MANUALLY INSTALL PACKAGES FROM THE INTERNET OR FROM A CD DRIVE (below is to install from the INTERNET):

# From Public Yum or ULN
yum install binutils compat-libcap1 compat-libstdc++-33 compat-libstdc++-33.i686 gcc gcc-c++ \
    glibc glibc.i686 glibc-devel glibc-devel.i686 ksh libgcc libgcc.i686 libstdc++ libstdc++.i686 \
    libstdc++-devel libstdc++-devel.i686 libaio libaio.i686 libaio-devel libaio-devel.i686 \
    libXext libXext.i686 libXtst libXtst.i686 libX11 libX11.i686 libXau libXau.i686 libxcb libxcb.i686 \
    libXi libXi.i686 make sysstat unixODBC unixODBC-devel
Create the new groups and users as per your requirement. For my case, just to keep it simple, let's use 3 groups & the oracle user.

groupadd -g 54321 oinstall
groupadd -g 54322 dba
groupadd -g 54323 oper

useradd -u 54321 -g oinstall -G dba,oper oracle

Set SELINUX to permissive, or disable it if this is a test environment.
Set secure Linux to permissive by editing the "/etc/selinux/config" file, making sure the SELINUX flag is set as follows.

SELINUX=permissive

Create the directories in which the Oracle software will be installed.

mkdir -p /u01/app/oracle/product/12.1/db_1
chown -R oracle:oinstall /u01
chmod -R 775 /u01

LOG IN AS THE ORACLE USER AND
add the following lines at the end of the "/home/oracle/.bash_profile" file.

# Oracle Settings
export TMP=/tmp
export TMPDIR=$TMP

export ORACLE_HOSTNAME=ol6-121.localdomain
export ORACLE_UNQNAME=orcl
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/12.1/db_1
export ORACLE_SID=orcl

export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH

export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib

Change your directory to the location where you have downloaded the Oracle software.




Start the Oracle Universal Installer (OUI) by issuing the following command in the database directory.

./runInstaller



I don't want any updates, so I uncheck the option to receive updates.





Let's create a server class installation.





To keep this simple I am going to select a typical installation.



Make sure you have selected the correct home path.




After this step you will be notified if you have any prerequisite failures. Make sure you have cleared them all. A missing ksh package can be ignored, as this is a known bug: Oracle expects a specific version of ksh and I have a later package. Assuming that you have them all cleared up, continue.




Now select install.


Now you will be prompted to execute shell scripts before the installation of the software is complete. I missed that prompt screen, but it will ask you to execute the two shell scripts below as the root user (see the sketch below).
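The two scripts are typically the following, run as root; the exact paths depend on your inventory location and Oracle home, and the ones below simply match the directories used earlier in this post:

# run as root
/u01/app/oraInventory/orainstRoot.sh
/u01/app/oracle/product/12.1/db_1/root.sh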



After executing them, hit OK and it will continue to install the Oracle DB software.




You will see this screen after installation is complete.





That's it, you have completed your Oracle 12c database software installation. You can verify it by querying the database as shown below.
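For example, a quick check from the oracle user might look like this (a sketch, assuming the environment variables set in .bash_profile above and that the starter database was created during the typical installation):

$ sqlplus / as sysdba
SQL> select banner from v$version;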




Please drop your comments below if you found this blog helpful.

Categories: DBA Blogs

Oracle AppsUnlimited - Building your Own Machine on IaaS

Senthil Rajendran - Sun, 2016-03-20 12:55
To enable Oracle EBS on the cloud and install a fresh EBS instance of your own choice, you will have to create an image. This image can be built using Oracle Linux (x86, 64-bit) releases 6.4 and 6.6 with kernel 2.6.36 or later.

High Level Steps

  • Source the ISO from E-Delivery
  • Using Oracle Virtual Box create the VM
  • Install Oracle Linux in the VM
  • Add/Enable Sudo to OPC user and specify the keys
  • Change Network Settings
  • Convert VM Image into Cloud Machine Image
  • Upload/Associate the Machine Image to Oracle Cloud
  • Create a VM on Oracle Cloud

Detailed Steps

Source the ISO from E-Delivery and create the VM using Oracle VirtualBox













Reboot the Linux Image


  • Add the OPC user, create authorized_keys from http://192.0.0.192/latest/meta-data/public-keys/{index}/openssh-key
  • Enable SUDO for OPC User
  • Disable SELinux
  • Stop the iptables service (a command sketch for this and the previous step follows this list)
  • Ensure that there are no hard-coded MAC addresses
  • Update /etc/sysconfig/network-scripts/ifcfg-eth0 with the below lines
    • DEVICE=eth0
    • BOOTPROTO=dhcp
    • ONBOOT=yes
  • Update /etc/sysconfig/network with the below lines
    • NETWORKING=yes
    • HOSTNAME=localhost.localdomain
    • IPV6_AUTOCONF=no
    • NOZEROCONF=yes
  • Shutdown the VM
  • Create the Cloud Image using the below VBox command
    • VBoxManage internalcommands converttoraw OEL6.vdi OEL6.img
    • cp --sparse=always OEL6.img OEL6sp.img
    • tar -czSf OEL6_Cloud_Image.tar.gz OEL6sp.img
  • Access Compute Account and upload OEL6_Cloud_Image.tar.gz
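For the SELinux and iptables steps above, the commands on Oracle Linux 6 are typically the following (a sketch, run as root inside the VM):

# disable SELinux permanently (takes effect after the next reboot)
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
# stop iptables now and keep it from starting at boot
service iptables stop
chkconfig iptables off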



Create an Instance






Access the Instance using the Public IP and the Key. 

With the above procedure you can create your own image and upload it to IaaS.
In addition to the above linux setup , you can install any packages or application on to it and upload it. I will cover that as a separate post.

Happy Clouding !!!

Learning curve (Oracle 12c Multitenant, Oracle Cloud & Golden Gate)

Syed Jaffar - Sun, 2016-03-20 12:50
First things first. After almost 8 years of a successful tenure at my previous company, I have moved on to new challenges as of 1-Mar-2016. I joined eProseed KSA as Technical Director, where my prime responsibility is to be involved in pre-sales, technical planning, motivating teams, and staying hands-on technically in critical projects. I must say, this is what I had been looking for for a very long time, and I am sure I am going to enjoy my new role very much.

Over the past couple of weeks, I have been busy exploring the following concepts, though they are not very new to most people:

  • Oracle 12c Multitenant
  • Oracle Cloud
  • Golden Gate
  • Enterprise Manager Cloud Control 13c

Also involved in additional task, which I can't reveal due to NDA, but will reveal later on.

I started a new WhatsApp group, Trend Oracle Cloud, with more than 60 members as of now.

Hope everyone of you doing great. Stay tuned for more updates.


Links for 2016-03-19 [del.icio.us]

Categories: DBA Blogs

Don’t Know cron

Michael Dinh - Sat, 2016-03-19 20:54

Learn something new every day !!!

Did you know that the date and day-of-week fields in cron are combined as an OR condition, not AND?

I wanted to schedule a cron job to run every 3rd Friday.

This ended up running on the 19th, which is a Saturday.

$ crontab -l|head -1
### Schedule below will run Every Friday OR Date 15-21
41 18 15-21 * 5 /bin/date > /tmp/date.out

$ date
Sat Mar 19 18:40:05 PDT 2016

$ date
Sat Mar 19 18:41:17 PDT 2016

$ ll /tmp/date.out
-rw-r--r--. 1 oracle oinstall 29 Mar 19 18:41 /tmp/date.out

$ cat /tmp/date.out
Sat Mar 19 18:41:01 PDT 2016

OOPS!

++++++++++

The shell script will need to check the day and exit if it is not the correct day.

$ date
Sat Mar 19 18:43:13 PDT 2016

$ crontab -l|head -1
44 18 15-21 * * /home/oracle/t.sh > /tmp/date.out

++++++++++
$ cat t.sh
#!/bin/sh -ex
# Exit if not Friday
if [[ $(date +%u) -ne 5 ]] ; then
    exit
fi
date
++++++++++

$ date
Sat Mar 19 18:44:26 PDT 2016

$ ll /tmp/date.out
-rw-r--r--. 1 oracle oinstall 0 Mar 19 18:44 /tmp/date.out

$ ./t.sh
++ date +%u
+ [[ 6 -ne 5 ]]
+ exit

Option 2: check day from cron.

$ date
Sun Mar 20 04:44:39 PDT 2016

$ crontab -l|head -1
45 04 15-21 * * [ $(date +\%u) -eq 7 ] && /home/oracle/t2.sh > /tmp/date.out

++++++++++
$ cat t2.sh
date
++++++++++

$ date
Sun Mar 20 04:45:01 PDT 2016

$ ll /tmp/date.out
-rw-r--r--. 1 oracle oinstall 29 Mar 20 04:45 /tmp/date.out

$ cat /tmp/date.out
Sun Mar 20 04:45:01 PDT 2016

Tested on:
oracle@arrow:tiger:/home/oracle
$ uname -an
Linux arrow.localdomain 3.8.13-68.3.2.el6uek.x86_64 #2 SMP Tue Jun 9 17:07:32 PDT 2015 x86_64 x86_64 x86_64 GNU/Linux

oracle@arrow:tiger:/home/oracle
$ cat /etc/oracle-release
Oracle Linux Server release 6.6
oracle@arrow:tiger:/home/oracle
$

++++++++++

Updated: Mar 26, 2016

$ crontab -l|head -1
27 20 15-25 * * /usr/bin/test `date +\%a` = Fri && /home/oracle/t2.sh > /tmp/t2.sh.log 2>&1

Both && and || logic produce identical results for the correct day.

$ date
Fri Mar 25 21:21:26 PDT 2016

$ test `date +\%a` = Fri;echo $?
0

pwd if test = 0

$ test `date +\%a` = Fri && pwd; echo $?
/home/oracle
0

$ test `date +\%a` != Fri;echo $?
1

pwd if test != 0

$ test `date +\%a` != Fri || pwd; echo $?
/home/oracle
0

Notice the difference in return code, as mentioned in the reference:

http://docstore.mik.ua/orelly/unix3/upt/ch25_02.htm

Using && returns 1 while using || returns 0 for incorrect day.

It’s not Monday; hence, pwd did not return values.

$ test `date +\%a` = Mon && pwd; echo $?
1

$ test `date +\%a` != Mon || pwd; echo $?
0

 

The right side of && (pwd) will only be evaluated if the left side exit status = 0.

$ test `date +\%a` = Mon;echo $?
1

pwd if test = 0

$ test `date +\%a` = Mon && pwd; echo $?
1

The right side of || (pwd) will only be evaluated if the left side exit status is != 0.

$ test `date +\%a` != Mon;echo $?
0

pwd if test != 0

$ test `date +\%a` != Mon || pwd; echo $?
0

 


Partner Webcast - Oracle Developers Tools Update

Oracle's developer tools strategy is to offer the best possible developer tools choices to support diverse needs. Oracle offers a complete and integrated set of application development and...

We share our skills to maximize your revenue!
Categories: DBA Blogs
