
Feed aggregator

Oracle JET 2.0.1 - Upgrade for CRUD Sample

Andrejus Baranovski - Tue, 2016-05-10 08:39
Oracle JET 2.0.1 was released in April and I decided to upgrade my CRUD sample (based on ADF BC REST services) implemented with the previous JET version. There is a migration guide describing the main points to consider when moving to the next JET version. I'm going to add a few more points. Hopefully you will find it useful.

Here is CRUD sample UI running with Oracle JET 2.0.1 and interacting with ADF BC REST:


This sample comes with advanced validation rule integration, which is enforced in ADF BC and propagated to the JET UI (read my previous posts about this):


As per the migration guide, you must update the reference to the Alta CSS (2.0.1):


Next, update the module names in main.js:


The CRUD app renders a paged table from a collection table datasource. Somehow this worked previously without specifying the ojs/ojcollectiontabledatasource module; now it must be specified, otherwise there is an error about a missing constructor:


There are changes on the UI side. I have wrapped the table/form into a div with the oj-flex-items-pad class (which adds padding). The table and form are placed into separate divs, each with the oj-flex-item class:


To enable table pagination, I moved the paging control out of the div; in the previous version it worked inside a separate div:


Form group elements are wrapped into oj-flex/oj-flex-item classes, which creates a better layout for the form items:


The update method was changed to handle the ADF BC validation error in a slightly different way. I show the error first and then reset the collection data, to return the original value to the table.

Download sample application - JETCRUDApp_v6.zip.

Speaker Scores

Jonathan Lewis - Tue, 2016-05-10 06:10

I published a note this morning that I drafted in January 2015, and I didn’t notice that it had gone back in time to publish itself on the date that I first drafted it – and it’s already been tweeted twice so I can’t move it. So this is a temporary link to pop it to the head of the queue while leaving it where it first appeared.


KeePass 2.33

Tim Hall - Tue, 2016-05-10 02:30

I just noticed KeePass 2.33 was released a couple of days ago. You can download it here.

You can read about how I use KeePass and KeePassX2 on my Mac, Windows and Android devices here.

Cheers

Tim…


A week in the cloud… (Just to clarify)

Tim Hall - Tue, 2016-05-10 02:04

A comment on yesterday’s post by Andy C makes me think I should clarify a couple of things I mentioned in that post.

“Infrastructure as a Service (IaaS), not really what I consider the cloud.”

For *me* the definition of cloud must include some value-add in relation to ease of use. I’ve used IaaS on Azure, AWS and Oracle Cloud. In all cases I’m left to do the same administration stuff I had to do when I was on a physical box or a local VM. For *me* it only becomes cloud when I have Platform as a Service (PaaS) or Software as a Service (SaaS), where the administration is simplified greatly through tooling. IaaS is just another hosting provider. It’s not really cloud in *my* book. That’s not to say it’s not cool or useful. It’s just not cloud to *me*.

Notice the heavy use of *me* and *my*. This is not the law or even some text book definition. It’s just the way I feel about it, having used a number of hosting companies for business and personal use prior to “the cloud”. You are allowed to think differently, and certainly cloud providers do.

Exadata as a Service

Pat Shuff - Tue, 2016-05-10 01:07
For the last four days we have been focusing on Database as a Service in the cloud. We focused on Application Express, or Schema as a Service, in the last three days and looked at pricing and how to get APEX working in the Oracle Public Cloud, Amazon AWS, and Microsoft Azure. With the Oracle Public Cloud we have three options for database in the cloud at the platform as a service layer: Schema as a Service, Database as a Service, and Exadata as a Service. We could run this in compute as a service but we have already discussed the benefits of offloading some of the database administration work with platform as a service (backup, patching, restarting services, etc).

The question that we have not adequately addressed is how you choose between the three services offered by Oracle. We touched on one of the key questions, database size, when we talked about Schema as a Service. You can have a free database in the cloud if your database is smaller than 25 MB. It will cost you a little money, $175/month, if you have a database smaller than 5 GB. You can grow this to 50 GB and stay with the Schema as a Service. If your database is larger than 50 GB you need to look at Database as a Service or Exadata as a Service. You also need to look at these alternatives if you are running an application in a Java container and need to attach to the database through the standard port 1521, since Schema as a Service only supports http(s) connections to the database. If you can query the database with a REST API call, Schema as a Service is an option, but it is not necessarily tuned for performance. Products like WebLogic or Tomcat or other Java containers can cache the results of select statements and not have to ask the database the same question over and over again. For example, if we have census data and are interested in the number of people who live in Texas, we get back roughly 27 million rows of data from the query. If we want to drill down and look at how many people live in San Antonio, we get back 1.5 million rows. If our Java code were smart enough and our application server had enough buffer space, we would not need to read the 27 million rows back when we want to just look at the 1.5 million rows relating to San Antonio. The database can keep the data in memory as well and does not need to read the data back from disk to run the select statement that finds the state or city rows matching the query.

Let's take a step back and talk about how a database works. We create a table and put information in columns like first name, last name, street address, city, state, zip code, email address, and phone number. This allows us to contact each person either through snail mail, email, or phone. If we allocate 32 bytes for each field we have 8 fields and each row takes up 256 bytes to identify each person. If we store data for each person who lives in Texas we consume 27 million rows. Each row takes up 256 bytes. The whole table will fit into 6.9 GB of storage. This data is stored in a table extent or file that we save into the /u02/data directory. If we expand our database to store information about everyone who lives in the United States we need 319 million rows. This will expand our database to 81.7 GB. Note that we have crossed the boundary for Schema as a Service. We can't store this much information in a single table so we have to look at Database as a Service or Exadata as a Service. Yes, we can optimize our database by using less than 32 bytes per column. We can store zip codes in 16 bytes. We can store phone numbers in 16 bytes. We can store state information in two bytes. We can also use compression in the database and not store the characters "San Antonio" in a 32 byte field but store the string once in an alternate table and correlate it to the hexadecimal number 9c. We then store 9c in the city field, which tells us that the city name is stored in another table. This saves us 1.5 million times 31 bytes (one byte is still needed to store the 9c), or 46 MB of storage. If we can do this for everyone in Texas we shrink the storage by 840 MB. This is roughly 13% of what we had allocated for all of the information related to people who live in Texas. If we can do this for the city, state, and zip code fields we can reduce the storage required by 39%, or shrink the 81.7 GB to 49.8 GB. This is basically what is done with a technology called Hybrid Columnar Compression (HCC). You create a secondary table that correlates the 9c value to the character string "San Antonio". You only need to store the character string once and the city information shrinks from 32 bytes to 1 byte. When you read back the city name, the database or storage that does the compression returns the string to the application server or application.
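As a rough illustration, here is a minimal sketch of the census table described above with HCC enabled. This is an assumption for illustration only: the table and column names are invented, the column sizes follow the 32-byte and 16-byte estimates in the text, and COMPRESS FOR QUERY HIGH requires storage that supports HCC (such as Exadata).

-- Illustrative sketch only: HCC-compressed census table (names assumed).
-- COMPRESS FOR QUERY HIGH asks the database to store repeated values such
-- as 'San Antonio' once per compression unit instead of once per row.
CREATE TABLE census (
  first_name     VARCHAR2(32),
  last_name      VARCHAR2(32),
  street_address VARCHAR2(32),
  city           VARCHAR2(32),
  state          VARCHAR2(32),
  zip_code       VARCHAR2(16),
  email_address  VARCHAR2(32),
  phone_number   VARCHAR2(16)
)
COMPRESS FOR QUERY HIGH;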

When you do a select statement the database looks for the columns that you are asking for in the table that you are doing a select from and returns all of the data that matches the where clause. In our example we might use

select * from census where state = 'Texas';
select * from census where city = 'San Antonio';
We can restrict what we get back by not using the "*" value. We can get just the first_name, last_name, and phone number if that is all we are interested in. The select statement for San Antonio will return 1.5 million rows times 8 columns times 32 bytes, or 384 MB of data. A good application server will cache this 384 MB of data, and if we issue the same select statement again in a few seconds or minutes we do not need to ask the database again. We issue a simple request to the database asking it if anything has changed since the last query. If we are running on a slow internet connection, as we find in our homes, we are typically running at 3 MB/second download speeds. To transfer all of this data will take us 128 seconds or about two minutes. Not reading the data a second time saves us two minutes.
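For example, a column-restricted version of the San Antonio query might look like the following (a sketch only; the table and column names follow the hypothetical census example above):

-- Returns 3 of the 8 columns, so roughly 3/8 of the 384 MB comes back.
select first_name, last_name, phone_number
from   census
where  city = 'San Antonio';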

The way that the database finds which 384 MB to return to the application is done similarly. It looks at all of the 81.7 GB that stores the census data and compares the state name to 'Texas', or the hex value corresponding to the state name. If the compare matches, that row is put into a response buffer and transmitted to the application server. If someone comes back a few seconds later and requests the information correlating to the city name 'San Antonio', the 81.7 GB is read from disk again and the 384 MB is pulled out to return to the application server. A smart database will cache the Texas data and recognize that San Antonio is a subset of Texas and not read the 81.7 GB a second time but pull the data from memory rather than disk. This can easily be done by partitioning the data in the database and storing the Texas data in one file or disk location and storing the data correlating to California in another file or disk location. Rather than reading back 81.7 GB to find the Texas data we only need to read back 6.9 GB since it has been split out in storage. For a typical SCSI disk attached to a computer, we read data back at 2.5 GB/second. To read back all of the US data takes us 33 seconds. If we read back only the Texas data it takes us 2.76 seconds. We basically save 30 seconds by partitioning our data. If we read the Texas data first and the San Antonio data second with our select statements, we can cache the 6.9 GB in memory and not have to perform a second read from disk, saving us yet another 33 seconds (or 3 seconds with partitioned data). If we know that we will be asking for San Antonio data on a regular basis we set up an index or materialized view in the database so that we don't have to sort through the 6.9 GB of data but read just the relevant 384 MB the first time, reducing our disk access time to 0.15 seconds. It is important to note that we have done two simple things that reduced our access time from 33 seconds to 0.15 seconds. We first partitioned the data, splitting it by state in the file system. We second created an index that helps us access the San Antonio data in the file associated with Texas without having to sort through all of the data. We effectively pre-sort the data and provide the database with an index. The cost of this is that every insert command that adds a new person to San Antonio requires not only updating the Texas table but updating the index associated with San Antonio as well. When we insert any data we must check whether the data goes into the Texas table and update the index at the same time, whether or not the information correlates to San Antonio, because the index might change if data is inserted or updated in the middle of the file associated with the Texas information.
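A minimal sketch of that partition-plus-index layout, assuming the same hypothetical census table (the partition and index names are invented for illustration):

-- Illustrative only: list-partition the data by state so a Texas query
-- reads just the Texas partition, and add a local index on city so the
-- San Antonio rows can be found without scanning the whole partition.
CREATE TABLE census_by_state (
  first_name   VARCHAR2(32),
  last_name    VARCHAR2(32),
  city         VARCHAR2(32),
  state        VARCHAR2(32)
)
PARTITION BY LIST (state) (
  PARTITION p_texas      VALUES ('Texas'),
  PARTITION p_california VALUES ('California'),
  PARTITION p_other      VALUES (DEFAULT)
);

CREATE INDEX census_by_state_city_ix ON census_by_state (city) LOCAL;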

Our original question was how do we choose between Schema as a Service, Database as a Service, and Exadata as a Service. The first metric that we used was table size. If our data is greater than 25 MB, we can't use the free APEX service. If our data is greater than 50 GB, we can't use the paid APEX or Schema as a Service. If we want to use features like compression or partitioning, we can't use Schema as a Service either unless we have sys access to the database. We can create indexes for our data to speed up requests, but we might or might not be able to set up compression or partitioning since these are typically features associated with the Enterprise Edition of the database. If we look at the storage limitations of Database as a Service we can currently store 4.8 TB worth of data in the database. If we have more data than that we need to go to Exadata as a Service. The Exadata service comes in different flavors as well and allows you to store up to 42 TB with a quarter rack, 84 TB with a half rack, and 168 TB with a full rack. If you have a database larger than 168 TB, there are no solutions in the cloud that can store your data attached to an active database. You can back up your data to cloud storage but you cannot have an active database attached to it.
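If you are not sure which side of those size limits you fall on, a quick and admittedly rough check is to total the segment space your current schema uses, for example:

-- Rough check of how much space the current schema uses, in GB, to compare
-- against the 25 MB / 50 GB / 4.8 TB / 42+ TB thresholds discussed above.
SELECT NVL(SUM(bytes) / 1024 / 1024 / 1024, 0) AS used_gb
FROM   user_segments;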

If we look a little deeper into the Exadata there are multiple advantages to going with Exadata as a Service. The first and most obvious is that you are suddenly working on dedicated hardware. In most cloud environments you share processors with other users as well as storage. You do not get a dedicated bandwidth from processor to disk but must time share this with other users. If you provision a 16 core system, it will typically consume half of a 32 core system that has two sockets. This means that you get a full socket but have to share the memory and disk bandwidth with the next person running in the same server. The data read from the disk is cached in the disk controller's cache and your reads are optimized until someone else reads data from the same controller and your cached data gets flushed to make room. Most cloud vendors go with commodity hardware for compute and storage so they are not optimized for database but for general purpose compute. With an Exadata as a Service you get hardware optimized for database and you get all of the processors in the quarter, half, or full rack. There is no competing for memory bandwidth or storage bandwidth. You are electrically isolated from someone in the other quarter or half rack through the Infiniband switch. Your data is isolated on spindles of your own. You get the full 40 GB/second to and from the disk. Reading the 81.7 GB takes 2.05 seconds compared to 32.68 seconds through a standard SCSI disk controller. The data is partitioned and stored automatically so that when we ask for the San Antonio data, we only read back the 384 MB and don't need to read back all of the data or deal with the index update delays when we write the data. The read scans all 81.7 GB and returns the results in 0.01 seconds. We effectively reduce the 33 seconds it took us previously and dropped it to 10 ms.
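For illustration, this is the sort of query and follow-up check you might run to see the storage offload at work. It is a sketch that assumes an Exadata environment and the hypothetical census table from above; the statistic names are standard Exadata session statistics.

-- Run a full scan that Exadata can offload to the storage cells ...
SELECT /*+ FULL(census) */ COUNT(*)
FROM   census
WHERE  city = 'San Antonio';

-- ... then see how much I/O the cells handled or avoided for this session.
SELECT name, value
FROM   v$mystat JOIN v$statname USING (statistic#)
WHERE  name IN ('cell physical IO interconnect bytes returned by smart scan',
                'cell physical IO bytes saved by storage index');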

If you want to learn more about Exadata and how and why it makes queries run faster, I would recommend the books, YouTube video channels, and web sites that cover it in depth.

The Exadata as a Service is a unique offering in the cloud. Amazon and Microsoft have nothing that compares to it. Neither company offers dedicated compute that is specifically designed to run a database in the cloud with dedicated disk and dedicated I/O channels. Oracle offers this service to users of the Enterprise Edition of the database, allowing them to replicate their on-premise data to the cloud, ingest the data into an Exadata in the cloud, and operate on the data and processes unchanged and unmodified in the cloud. You could take your financial data that runs on an 8 or 16 core system in your data center and replicate it to an Exadata in the cloud. Once you have the data there you can crunch on the data with long running queries that would take hours on your in-house system. We worked with a telecommunications company years ago that was using an on-premise transportation management system to generate an inventory load list to put parts on their service trucks, work orders for the maintenance repair staff, and a driving list to route the drivers on the optimum path to cover the largest number of customers in a day. The on-premise system took 15-16 hours to generate all of this workload and was prone to errors and outages, requiring the drivers to delay their routes or parts in inventory to be shipped overnight for loading in the morning onto the trucks. Running this load on an Exadata dropped the analytics to less than an hour. This allowed trucks to be rerouted mid-day to higher profit customers to handle high priority outages, as well as next day delivery of inventory between warehouses rather than rush orders. Reducing the analytics from 15 hours to less than an hour allowed an expansion of services as well as a higher quality of service to their customer base.

Not all companies have daily issues like this; some look for higher level processing once a quarter or once or twice a year. Opening new retail outlets, calculating taxes due, or provisioning new services that were purchased as Christmas presents are three examples of predictable, periodic instances where consuming a larger footprint in the cloud makes more sense than investing in resources that sit idle most of the year in your data center. Having the ability to lease these services on a monthly or annual basis allows for better utilization of resources in your data center, reduces the overall spend of the IT department, and expands the capabilities of business units to do things that they normally could not afford.

Exadata as a Service is offered in a non-metered configuration at $40K per month for a quarter rack (16 cores and 144 TB of disk), $140K per month for a half rack (56 cores and 288 TB of disk), or $280K per month for a full rack (112 cores and 576 TB of disk). The same service is offered on a metered basis for $80K for a quarter rack, $280K for a half rack, and $560K for a full rack (in the same configuration as the non-metered service). One of the things that we recommend is that you analyze the cost of this service. Is it cheaper to effectively lease a quarter rack at $80K for a month and get the results that you want, effectively lease a quarter rack at $480K for a year, or purchase the hardware, database license, RAC licenses, storage cell licenses, and other optional components to run this in your data center. We will not dive into this analysis because it truly varies based on use cases, value to your company for the use case, and cost of running one of these services in your data center. It is important to do this analysis to figure out which consumption model works for you.

In summary, Exadata as a Service is a unique service that no other cloud vendor offers. Having dedicated hardware to run your database is unique for cloud services. Having hardware that is optimized for long, complex queries is unique as well. Exadata is one of the most popular hardware solutions offered by Oracle, and having it available on a monthly or annual basis allows customers to use the service at a much lower cost than purchasing a box, or a larger box, for their data center. Having Oracle manage and run the service frees up your company to focus on the business impact of the hardware and accelerated database rather than spending months administering the server and database. Tomorrow we will dive into Database as a Service and see how a generic database in the cloud has a variety of use cases and different cost entry points as well as features and functions.

Links for 2016-05-09 [del.icio.us]

Categories: DBA Blogs

Telstra SMS API Swagger Enabled and deployable on Bluemix Sydney Public Instance

Pas Apicella - Mon, 2016-05-09 22:56
The demo below can be used to expose the Telstra SMS public API https://dev.telstra.com/content/sms-api-0

https://github.com/papicella/TelstraSMSAPIPublic

You can deploy this to Bluemix by simply using the "Deploy to Bluemix" button as shown below.


Once deployed, you have Swagger UI enabled REST endpoints to consume, as shown below.

Application once deployed on Bluemix


Swagger UI 



More Information

https://dev.telstra.com/content/sms-api-0
http://bluemix.net


http://feeds.feedburner.com/TheBlasFromPas
Categories: Fusion Middleware

Taxonomy isn’t just for frogs anymore. What taxonomy means in document management.

 

Taxonomy can be a nebulous term. It has existed for years, having probably its most common roots in the sciences, but has blossomed to apply its practices to a plethora of other fields.  The wide application of taxonomy shows how useful and effective it is, yet its meaning can be unclear due to its diversity.  We identify with taxonomy in library sciences with the Dewey Decimal System and we identify with taxonomy in the scientific use when we talk about animals (Kingdom: Animalia; Phylum: Chordata; Class: Amphibia; Clade: Salientia; Order: Anura (frog)).  These are familiar uses to us.  We learned of them early on in school.  We’ve seen them around for years, even if we didn’t identify them as taxonomies.  But what is taxonomy when we talk about subjects, like documents and data, that aren’t so tangible?  As a Business Solutions Architect at Fishbowl Solutions, I encounter this question quite a bit when working on Oracle WebCenter Content document management projects with customers.

The historical Greek term taxonomy means “arrangement law.”  Taxonomy is the practice in which things, in this case documents, are arranged and classified to provide order for users.  When it comes to documents, we give this order by identifying field names, field values, and business rules and requirements for tagging documents with these fields.  These fields then describe the document so that we can order the document, know more about it, and do more with it.

Here’s an example:

  • Document Type: Policy
  • Document Status: Active
  • Document Owner: Administrator
  • Lifecycle: Approved
  • Folder: HR
  • Sub-Folder: Employee Policies
  • And so on…

Defining taxonomy for documents provides a host of business and user benefits for document management, such as:

  • A classification and context for documents. It tells users how a document is classified and where it “fits in” with other documents. It gives the document a name and a place. When a document is named and placed, it enables easier searching and browsing for users to find documents, as well as an understanding of the relationship of one document to another. Users know where it will be and how to get it.
  • A simplified experience. When we have order, we reduce clutter and chaos. No more abandoned or lost documents. Everything has a place. This simplifies and improves the user experience and can reduce frustration as well. Another bonus: document management and cleanup is a simple effort. Documents out of order are easy to identify and can be put in place. Documents that are ordered can be easily retrieved, for instance for an archiving process, and managed.
  • An arrangement that makes sense for the business. Using taxonomy in a document management system like Oracle’s WebCenter Content allows a company to define its own arrangement for storing and managing documents that resonates with users. Implementing a taxonomy that is familiar to users will make the document management system exponentially more usable and easier to adopt. No more guessing or interpreting arrangement or terminology—users know what to expect, terms are common, they are in their element!
  • A scalable framework. Utilizing a defined and maintained taxonomy will allow users to adopt the common taxonomy as they use the document management system, but will also allow for business growth as new scope (documents, processes, capabilities, etc.) is added. Adding in a new department with new documents? Got it. Your scalable taxonomy can be reused or built upon. Using a comprehensive taxonomy that is scalable allows for an enterprise approach to document management where customizations and one-offs are minimized, allowing for a common experience for users across the business.
  • A fully-enabled document management system. Lastly, defining a taxonomy will allow for full utilization of your Oracle WebCenter Content, or other, document management system. Integrating that taxonomy with your document management system will enable building out:
    • logical folder structures,
    • effective browse and search capabilities,
    • detailed profiles and filters,
    • advanced security,
    • sophisticated user interfaces and more.

Clearly, a taxonomy is the solution to providing necessary order and classification to documents. It creates a common arrangement and vocabulary to empower your users, and your document management system, to work the best for you.  Now hop to it!

This blog is the first in a series discussing taxonomy topics.  Watch for the next blog entitled “Taxonomy is a Sleeper. The reasons from A to ZZZs that taxonomy hasn’t been a part of your most important projects—but should be!”

Carrie McCollor is a Business Solutions Architect at Fishbowl Solutions. Fishbowl Solutions was founded in 1999. Their areas of expertise include Oracle WebCenter, PTC’s Product Development System (PDS), and enterprise search solutions using the Google Search Appliance. Check out our site to learn more about what we do.

The post Taxonomy isn’t just for frogs anymore. What taxonomy means in document management. appeared first on Fishbowl Solutions' C4 Blog.

Categories: Fusion Middleware, Other

Did You Know: Oracle EBS R12.2 #1 – Managing OHS

Pythian Group - Mon, 2016-05-09 14:21

For a long time now I’ve wanted to start a blog series outlining the not-so-obvious things that have changed in the new Oracle E-Business Suite R12.2. Here comes the new “Did You Know” series specifically for Oracle E-Business Suite! Let’s start this series with Apache, aka Oracle HTTP Server.

People are already aware that OHS 10g/OC4J has been replaced with OHS 11g/WebLogic in R12.2.x. On the surface it looks like a simple change, but a lot changed under the hood. In Oracle EBS 11i/R12.1, one could change the Apache port by just updating the Context XML file and running autoconfig.

In R12.2, we have to change OHS settings like the web port by logging into the EM console URL and then running the $AD_TOP/bin/adSyncContext.pl script to sync the OHS config parameters to the Context XML file. This script is only needed for OHS config for now. Any changes for WebLogic also need to be made in the WebLogic console URL, not in the Context XML file. But these changes in the WebLogic console are automatically propagated to the XML file by the adRegisterWLSListeners.pl script.

You can get more details of the procedures by reviewing the MOS notes below.

  • E-Business Suite 12.2 Detailed Steps To Change The R12.2 Default Port To 80 (Doc ID 2072420.1)
  • Managing Configuration of Oracle HTTP Server and Web Application Services in Oracle E-Business Suite Release 12.2 (Doc ID 1905593.1)
Categories: DBA Blogs

Oracle Field Service Cloud Announces Significant, End-to-End Upgrade

Linda Fishman Hoyle - Mon, 2016-05-09 13:24

Oracle issued an announcement last week trumpeting its latest Field Service Cloud release. This release delivers comprehensive enhancements from mobility to ease of use. It gives companies the ability to more easily connect field service technicians to the back-office tools and resources they need to complete their work in the field. Organizations can now power more connected customer service experiences, faster response times, and reduced service delivery costs.

If you have questions, contact Christine Friscic, Oracle product manager (pictured left), at christine.friscic@oracle.com.

A Retrospective on Implementing Common Course Management Systems

Michael Feldstein - Mon, 2016-05-09 11:19

By Phil Hill

At e-Literate we mostly avoid blogging about our consulting work through MindWires Consulting, but we have an opportunity with our work for California’s Online Education Initiative (OEI) to share information with the higher education community on a topic of growing importance. The OEI is the California Community College System’s approach to helping individual colleges collaborate on their online courses and programs, including an OEI Course Exchange to be launched this Fall in pilot mode that will “allow students to register for online courses across participating colleges without requiring students to complete separate application and matriculation processes”.

Last year we at MindWires helped OEI select a Common Course Management System (yes, they use the CMS language instead of LMS) by providing market analysis and facilitating the group decision-making process. This year they asked us to review similar efforts at other consortia in the US and Canada. The point of a CCMS is not the technology platform itself but rather what the common e-learning infrastructure could allow a consortium to do – address issues such as course redesign, professional development, student support, etc. Even though the OEI was based on selecting a common system from the beginning and has experienced significant adoption already, there is still a great deal of value in learning from others that have gone before. We are releasing the result of this work with the report – “A Retrospective on Implementing Course Management Systems: Motivations, Benefits, Drawbacks and Recommendations“.

While I contributed to the report, O’Neal Spicer from MindWires and Michelle Pilati from OEI are the primary authors. They interviewed staff and leadership from 10 different organizations that have implemented a common CMS, have deliberately chosen not to do so, or support such groups. The interviewed groups include:

  • The State University of New York
  • Utah Education Network
  • Mississippi Virtual Community College
  • University of Wisconsin System
  • Virginia’s Community Colleges
  • Colorado Community Colleges Online
  • WICHE Cooperative for Education Technologies (WCET)
  • British Columbia Campus (BC Campus)
  • Great Plains Interactive Distance Education Alliance (IDEA)
  • Connecticut Distance Learning Consortium

The report explores what the initial motivations were for these groups to select a common platform, the benefits that have been realized, the drawbacks consortia face, the challenges in implementation, and recommendations for those groups considering similar choices.

You can view and download the full report here: 

Download (PDF, 1.89MB)


Update: Fixed page number error

The post A Retrospective on Implementing Common Course Management Systems appeared first on e-Literate.

Compression -- 7 : Updating after BASIC Compression

Hemant K Chitale - Mon, 2016-05-09 08:57
In the first blog post on compression, I had shown BASIC Compression and also the effect of executing an UPDATE on a table with BASIC compression.

To expand on the theme of UPDATEs of BASIC compression blocks ....

SQL> select count(*) from source_data;

COUNT(*)
----------
367156

SQL> create table target_comp row store compress basic as select * from source_data where 1=2;

Table created.

SQL> select pct_free from user_tables where table_name = 'TARGET_COMP';

PCT_FREE
----------
0

SQL> insert /*+ APPEND */ into target_comp
2 select * from source_data;

367156 rows created.

SQL> commit;

Commit complete.

SQL> insert /*+ APPEND */ into target_comp
2 select * from source_data;

367156 rows created.

SQL> commit;

Commit complete.

SQL> insert /*+ APPEND */ into target_comp
2 select * from source_data;

367156 rows created.

SQL> commit;

Commit complete.

SQL>
SQL> analyze table target_comp compute statistics;

Table analyzed.

SQL> select chain_cnt, blocks from user_tables where table_name = 'TARGET_COMP';

CHAIN_CNT BLOCKS
---------- ----------
0 4452

SQL>
SQL> exec dbms_stats.gather_table_stats('','TARGET_COMP');

PL/SQL procedure successfully completed.

SQL> select num_rows, chain_cnt, blocks from user_tables where table_name = 'TARGET_COMP';

NUM_ROWS CHAIN_CNT BLOCKS
---------- ---------- ----------
1101468 0 4452

SQL>


So we have a table with 1.1million rows and no Row Chaining.

What happens if we update about 20% of the rows ?

SQL> begin
2 for rec in (select rowid from target_comp where rownum < 220001)
3 loop
4 update target_comp set owner=owner where rowid=rec.rowid;
5 commit;
6 end loop;
7 end;
8 /

PL/SQL procedure successfully completed.

SQL>
SQL> commit;

Commit complete.

SQL> analyze table target_comp compute statistics;

Table analyzed.

SQL> select chain_cnt, blocks from user_tables where table_name = 'TARGET_COMP';

CHAIN_CNT BLOCKS
---------- ----------
202189 7882

SQL> exec dbms_stats.gather_table_stats('','TARGET_COMP');

PL/SQL procedure successfully completed.

SQL> select num_rows, chain_cnt, blocks from user_tables where table_name = 'TARGET_COMP';

NUM_ROWS CHAIN_CNT BLOCKS
---------- ---------- ----------
1101468 202189 7882

SQL>


I have updated 220,000 rows without actually increasing the notional length of each row (I set OWNER=OWNER).  Yet, the CHAIN_CNT is now 202K and the table's HighWaterMark has expanded from 4,452 blocks to 7,882 blocks.  A significant increase !
(Your mileage may vary in your tests !)

It was Jonathan Lewis who suggested getting the Chain Count (or LIST CHAINED ROWS) to understand the impact of UPDATEs on a table with BASIC compression.
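For reference, a minimal sketch of the LIST CHAINED ROWS approach, assuming the default CHAINED_ROWS table created by the utlchain.sql script that ships with the database:

-- Create the default CHAINED_ROWS table, then list the chained/migrated
-- rows of the compressed table into it and count them.
@?/rdbms/admin/utlchain.sql

ANALYZE TABLE target_comp LIST CHAINED ROWS INTO chained_rows;

SELECT COUNT(*) FROM chained_rows WHERE table_name = 'TARGET_COMP';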
.
.
.


Categories: DBA Blogs

Tip of the day: Always put this in your .bashrc

RDBMS Insight - Mon, 2016-05-09 05:52

If you like to scp, add this to your .bashrc:

# If not running interactively, don't do anything
[[ $- == *i* ]] || return

Otherwise scp will fail without error – it’s a known bug.

Categories: DBA Blogs

RI Locks

Jonathan Lewis - Mon, 2016-05-09 05:24

RI = Referential Integrity: also known informally as parent/child integrity, and primary (or unique) key/foreign key checking.

I’m on a bit of a roll with things that I must have explained dozens or even hundreds of times in different environments without ever formally explaining them on my blog. Here’s a blog item I could have done with in response to a question that came up on the OTN database forum over the weekend.

What happens in the following scenario:


-- session 1

create table parent (
        id        number(8,0),
        constraint par_pk primary key(id)
);

create table child  (
        id_p      number(8,0) not null references parent,
        id_c      number(8,0) not null,
        constraint child_pk primary key(id_p, id_c)
)
;

insert into parent values(1);

-- session 2
insert into child values(1,1);

Since the parent row corresponding to the child row doesn’t (yet) seem to exist as far as session 2 is concerned you might expect session 2 to respond immediately with an error message like:

ERROR at line 1:
ORA-02291: integrity constraint (TEST_USER.SYS_C0017926) violated - parent key not found

In fact, although the end-user is not allowed to see the uncommitted parent row, the user’s process can see the uncommitted row and will wait until session 1 commits or rolls back – so if you examine v$lock for the current locks for the two sessions you’d see something like this:

  1  select  sid, type, id1, id2, lmode, request, ctime, block
  2  from    V$lock
  3  where   sid in (select sid from V$session where username = 'TEST_USER')
  4  and     type != 'AE'
  5  order by
  6*         sid, type desc
  7  /

       SID TY        ID1        ID2      LMODE    REQUEST      CTIME      BLOCK
---------- -- ---------- ---------- ---------- ---------- ---------- ----------
         3 TX     327709      12584          6          0        283          1
           TM     143734          0          2          0        283          0
           TM     143732          0          3          0        283          0

       250 TX     589829      12877          6          0        240          0
           TX     327709      12584          0          4        240          0
           TM     143734          0          3          0        240          0
           TM     143732          0          3          0        240          0


7 rows selected.

In the above, SID 250 is session 2: it’s holding a transaction lock (TX) in mode 6 because it has acquired an undo segment and has generated some undo, it’s also waiting for a transaction lock in mode 4 (share) and – checking id1 and id2 – we can see that the transaction table entry it’s waiting for is held by session 3 in mode 6 (and we also note that the lock held by session 3 is marked as a blocker).

If session 3 commits (thus releasing the transaction lock) session 250 will continue processing the insert; if session 3 rolls back session 250 will raise error ORA-02291 and roll back its insert statement. (Note: if this were a multi-statement transaction it would only be the insert into child that would be rolled back; that’s another one of those details that is important but often isn’t stated explicitly, leaving people believing that the entire transaction would be rolled back.)

Updates and deletes can produce the same effects. Imagine that we have just created the two tables, and then run the following:


-- session 1
insert into parent values(1);
commit;
delete from parent where id = 1;

-- session 2
insert into child values(1,1);

Again session 2 will wait for session 1 to commit or roll back. In this case if session 1 commits session 2 will raise Oracle error ORA-02291, if session 1 rolls back session 2 will continue with the insert.

Deadlocks

Whenever you can demonstrate a way of producing a wait chain you can also manage to produce a deadlock. Consider the following (starting, again, from empty tables);


-- (1) session 1
insert into parent values(1);

-- (2) session 2
insert into parent values(2);

-- (3) session 1
insert into child values(2,2);

-- (4)session 2
insert into child values(1,1);

Session 1 will start waiting for session 2 to commit (or rollback) at step 3, then session 2 will start to wait for session 1 at step 4 – with the result that session 1 will recognise the deadlock after about three seconds and rollback its last statement, raising exception ORA-00060 and dumping a trace file. (Note: session 1 will not, as many people think, roll back the entire transaction, it will only roll back the statement that allowed the deadlock to develop). Session 2 will still be waiting for session 1 to commit or rollback its insert into parent. Contrary to the popular claim, Oracle will not “resolve” the deadlock, it will simply break the deadlock leaving one session waiting for the other session to respond appropriately to the deadlock error.

For reference, here’s the deadlock graph (from a 12c trace file) produced by session 1 (SID = 3) for this demo:


Deadlock graph:
                                          ---------Blocker(s)--------  ---------Waiter(s)---------
Resource Name                             process session holds waits  process session holds waits
TX-00010017-000026C7-00000000-00000000          6       3     X             33     250           S
TX-000A000D-000026F8-00000000-00000000         33     250     X              6       3           S

session 3: DID 0001-0006-00000004       session 250: DID 0001-0021-00000041
session 250: DID 0001-0021-00000041     session 3: DID 0001-0006-00000004

Rows waited on:
  Session 3: no row
  Session 250: no row

When you see a deadlock graph with TX waits of type S (share, mode 4) it’s a very good bet that the wait has something to do with indexes – which may mean referential integrity as discussed here, but may mean collisions on primary keys, and may mean something to do with simple collisions on index-organized tables. You’ll notice that the “Rows waited on:” section shows no row – unfortunately in earlier versions of Oracle you may find a spurious row entry here because the wait information from some other (block) wait has been left in the relevant columns in v$session.


Bitcoin for Oracle Developers

Gerger Consulting - Mon, 2016-05-09 04:53
What is Bitcoin? What is The Blockchain? Why should an Oracle developer care? How is it going to impact Enterprise IT? How can you benefit from this upcoming new technology? Where do you even start to learn about Bitcoin?

Attend our free webinar presented by Balaji S. Srinivasan, one of the most prominent figures in the Bitcoin community, to get started with Bitcoin and learn how you can turn any database into a bitcoin-generating revenue stream.



About the Presenter
Balaji S. Srinivasan is the CEO and cofounder of 21.co, which has raised more than $116M from Andreessen Horowitz, Khosla Ventures, Cisco, Qualcomm, and Peter Thiel. He is also a board partner at Andreessen Horowitz. Dr. Srinivasan holds a BS, MS, and PhD in Electrical Engineering and an MS in Chemical Engineering from Stanford. He was an NDSEG, NSF, and VIGRE fellow, was named to the MIT TR35 and Founders Fund F50 lists, and taught data mining, statistics, and computational biology in the Department of Statistics at Stanford University.



Categories: Development

A week in the cloud…

Tim Hall - Mon, 2016-05-09 03:42

It’s been just over a week since I moved my website over to “the cloud”. Well, an EC2 virtual machine on Amazon Web Services, so it’s Infrastructure as a Service (IaaS), not really what I consider the cloud.

Fixing broken links to Oracle documentation… Again…

Tim Hall - Mon, 2016-05-09 02:59

With my recent website move I thought I better check for broken links, in case I had screwed anything up during the transfer. The last few times I’ve done this I’ve used SiteCrawl, which seems to do a decent job.

After the scan had finished I looked through the results and had a couple of broken internal links and 171 broken external links. Can you guess where the vast majority of the broken external links were pointing to? Yes, it was Oracle documentation. A quick search on my blog reveals about six rants I’ve posted about this over the years. There have been many more incidents of course.

Fixing this stuff is very time consuming and boring. What’s worse, it robs me of time I could spend on creating new content.

I’m guessing most content producers don’t go back and check for broken links. Oracle certainly don’t because I see them all the time in whitepapers, articles and forum entries. The result of this is a sea of helpful content produced by the community that is littered with broken links to Oracle content. It lowers the value of the community content, through no fault of the content producer.

Oracle. Please stop doing this! You are constantly devaluing the content produced by us in the community!

Cheers

Tim…

Update: You should probably read this by Kim Berg Hansen.


APEX in Azure

Pat Shuff - Mon, 2016-05-09 01:07
Today we are going to look at what it takes to get Schema as a Service running in the Microsoft Azure Cloud. Our last two entries looked at Schema as a Service, or Application Express, or APEX, running in the Oracle Public Cloud for free, $175, $900, or $2000/month or running in the Amazon RDS Service for $50 - $2700/month. The idea behind Schema as a Service is that you are leasing a database instance on a compute server in the cloud. You get automated backup, http and https interfaces into the database, tools to load and unload data, interfaces to run ad-hoc or scripted queries, and REST APIs to access our data. The pricing on the Oracle Public Cloud is based on how much storage you consume for your database. The cost on Amazon RDS is based on the compute power allocated to the database. Storage is charged separately and the more you consume, the more you get charged.

Azure uses the Amazon model for pricing in that it charges on a processor shape. Automated backup and the APEX interface/libraries are not included and require a separate step to install and configure the software on top of the Oracle database. The default operating installation is Windows Server and you are billed monthly for this shape as well. Azure does not offer backup or any other part of a managed service but only provides a virtual image with the Oracle database pre-installed on a Windows Server in the cloud. The pricing for this software is:

Shape                  Cores  RAM      Disk      Standard Edition       Enterprise Edition       Shape price
A0 Basic or Standard   0.25   0.75 GB  19 GB     $1.11/hr or $826/mo    $3.16/hr or $2,351/mo    $0.02/hr or $15/mo
A1 Basic or Standard   1      1.75 GB  224 GB    $1.11/hr or $826/mo    $3.16/hr or $2,351/mo    $0.09/hr or $67/mo
A2 Basic or Standard   2      3.5 GB   489 GB    $1.11/hr or $826/mo    $3.16/hr or $2,351/mo    $0.18/hr or $134/mo
A5 Standard            2      14 GB    489 GB    $1.11/hr or $826/mo    $3.16/hr or $2,351/mo    $0.33/hr or $246/mo
A3 Basic or Standard   4      7 GB     999 GB    $1.28/hr or $952/mo    $6.32/hr or $4,702/mo    $0.36/hr or $268/mo
A6 Standard            4      28 GB    999 GB    $1.28/hr or $952/mo    $6.32/hr or $4,702/mo
A4 Basic or Standard   8      14 GB    2,039 GB  $2.55/hr or $1,897/mo  $12.63/hr or $9,397/mo   $0.72/hr or $536/mo
A7 Standard            8      56 GB    2,039 GB  $2.55/hr or $1,897/mo  $12.63/hr or $9,397/mo   $1.32/hr or $982/mo
A8 Standard            8      56 GB    382 GB    $2.55/hr or $1,897/mo  $12.63/hr or $9,397/mo
A10 Standard           8      56 GB    382 GB    $2.55/hr or $1,897/mo  $12.63/hr or $9,397/mo
A9 Standard            16     112 GB   382 GB    $5.10/hr or $3,794/mo  $25.27/hr or $18,801/mo
A11 Standard           16     112 GB   382 GB    $5.10/hr or $3,794/mo  $25.27/hr or $18,801/mo
D1 Standard            1      3.5 GB   50 GB     $1.11/hr or $826/mo    $3.16/hr or $2,351/mo    $0.14/hr or $104/mo
D2 Standard            2      7 GB     100 GB    $1.11/hr or $826/mo    $3.16/hr or $2,351/mo    $0.28/hr or $208/mo
D11 Standard           2      14 GB    100 GB    $1.11/hr or $826/mo    $3.16/hr or $2,351/mo    $0.33/hr or $246/mo
D3 Standard            4      14 GB    200 GB    $1.28/hr or $952/mo    $6.32/hr or $4,702/mo    $0.56/hr or $417/mo
D12 Standard           4      28 GB    200 GB    $1.28/hr or $952/mo    $6.32/hr or $4,702/mo    $0.652/hr or $485/mo
D4 Standard            8      28 GB    400 GB    $2.55/hr or $1,897/mo  $12.63/hr or $9,397/mo   $1.12/hr or $833/mo
D14 Standard           16     112 GB   800 GB    $5.10/hr or $3,794/mo  $25.27/hr or $18,801/mo  $2.11/hr or $1,571/mo

Note that this is generic database pricing combined with compute shape pricing. It is not Schema as a Service pricing. We still need to layer APEX on top of the generic database installation. This provides the cheapest option for deploying Schema as a Service at $841/month or $1.13/hr for Standard Edition, or $2,366/month or $3.18/hr for Enterprise Edition. We can basically stop here. Running an Oracle database on a single core with less than 1 GB of memory is unusable. Going to the A1 Basic or Standard shape increases the memory to 1.75 GB, which might work for a single schema, and the 70 GB of disk will store enough tables for most use cases.

Let's walk through the creation of an Oracle database on Azure. First we go to the Azure portal and search for the Oracle virtual machine by clicking on New.

We are looking for either the Enterprise Edition or Standard Edition of the database.

We select the Standard Edition and ask that it be provisioned.

Once we select the product, we select the shape that we will run this instance on. The shape recommended is relatively large so we have to look at all shapes and we select the A1 shape primarily for cost reasons. We should look at what we are trying to do and load the right core count and memory footprint.

We select the network and storage models for this instance. We go with the defaults in this example rather than adding another storage instance since we are just looking for a small footprint.

Some things to note here. First, we have the option of a username and password or ssh to connect to the operating system. Second, we are never presented with any information about the operating system. Based on the documentation that we read earlier, I assumed that this would be Windows Server. It turns out that it is actually Oracle Enterprise Linux 6.7. Third, we are never asked for information about the database. We are not asked about the SID, the password for sys, or the ports to open up or use to connect to the database. It turns out that the database is installed in the /u01 directory but no database is created. You still need to run dbca to create a database instance and start a listener. There are other OS options available to install the database on. We theoretically could have selected Windows 2012 but these options did not come up with our search.

It took a few minutes to start up the database virtual machine. We can look up the details on the instance to find the ip address to connect to and use putty or ssh to connect to the instance.

When we log in we can look at the installation directory and notice that everything is installed in /u01. When we look at /etc/oratab we notice that nothing is configured or installed. We will need to run dbca to create a database instance. We will then need to download and install APEX to configure Schema as a Service.

In summary, we can install and run an Oracle database on Azure. The installation is lacking a bit. This is more of an Infrastructure as a Service with a binary pre-installed but not configured. This is not Database as a Service and is lacking when we compare it to Schema as a Service. To get Schema as a Service we need to download the software, install it, change the network configuration, and update the virtual machine network (we did not show this). Microsoft has a good tutorial on the steps needed to create the database once the virtual machine is installed. You do get root access to the operating system. You do get sys access to the database. You do get file system access through the operating system. The documentation says that the service is on Windows but it got provisioned on Linux. We might have done something wrong, looking back, or the documentation is wrong. The service is priced as a generic database starting at $800/month or more based on the shape you select. This installation is not DBaaS or PaaS. Backups are not automatically done for you. Patches are not configured or installed. You basically get an operating system with a binary installed. The database is not configured and ports are not configured to allow you to connect across the internet.

Additional CCB 2.5 benchmark information

Anthony Shorten - Sun, 2016-05-08 18:27

Recently I published a link to a summary report for the recent Oracle Utilities Customer Care and Billing 2.5 benchmark. Due to popular demand, we have released additional information about the benchmark including some configuration advice in a new additional whitepaper Oracle Utilities Customer Care and Billing V2.5 and 2.4 Comparison Benchmark Whitepaper (Doc Id: 2135359.1) now available from My Oracle Support.

This whitepaper was provided from our performance team and provides additional technical information about the benchmark setup as well as the results.

Join Oracle Service Cloud Customer Mazda at ICMI Contact Center Expo 2016

Linda Fishman Hoyle - Sun, 2016-05-08 15:02

This week is the ICMI Contact Center Expo 2016 in Long Beach, CA. It is a leading customer service industry event. Will you be there?

If you're on the fence, here's a good reason to attend. Mazda's Yvonne Burkhouse, IT director of applications development (pictured left), is speaking. Mazda, a Japanese automaker, is an Oracle Service Cloud customer. Burkhouse will present "Driving Beyond Loyalty and Exceptional Customer Engagement Center Experiences at Mazda." The company is live on Oracle Service Cloud and has a winning contact center solution story with convincing results.

Also, Oracle Service Cloud customers, including Kohl’s and WageWorks, are finalists for the ICMI Best Large Contact Center award. Register now. Hope to see you on May 12.