Feed aggregator

private cloud vs public cloud

Pat Shuff - Fri, 2016-04-29 02:19
Today is our last day to talk about infrastructure as a service. We are moving up the stack into platform as a service after this. The higher up the stack we get, the more value it has to the business and end users. It is interesting to talk about storage and compute in the cloud, but does this help us treat patients in a medical practice any better, find oil and gas faster, or deliver our manufactured product any cheaper? Not really, but not having them will negatively affect all of them. We need to make sure that these services are there so that we can perform higher functions without having to worry about triple redundant mirroring of a disk or load balancing compute servers to handle higher loads.

One of the biggest complaints with cloud services is the perception problem around security, latency, and governance. Why would I put my data on a computer that I don't control? There is also the noisy neighbor issue: I am renting part of a computer, part of a disk, part of a network. If someone wants to play heavy metal at the highest volume (remember our apartment example) while I am trying to file a monthly report or do some analytics, my resources will suffer and it will take me longer to finish my job due to something out of my control.

Many people have decided to go with private hosted cloud solutions like VCE Vblock systems, Cisco UCS clusters, and other products that provide raw compute, raw storage, and "hyper-converged" infrastructure to solve the cloud problem. I can create a golden master on my VMWare server and provision a database to my configuration as many times as I want. Yes, I run into a licensing issue. Yes, it is a really big licensing issue running Oracle on VMWare, but Microsoft is not far behind with SQL Server licensing on a per core basis either. Let's look at the economics of putting together one of these private cloud servers.

It is important to dive into the licensing issue. The Oracle Database and WebLogic servers are licensed either on a processor or a named user basis. Database licensing is detailed in a pdf on the Oracle site. The net of the license says that the database is licensed based on the core count of the processor running on the server. There is a multiplication factor (0.5 for x86) based on the chip type that factors into the license cost. A few years ago it was easy to do this calculation. If I have a dual core, dual socket system, that is a four core computer. The license price would be 4 cores x 0.5 (Intel x86 chip) x $47,500, for a total of $95K. Suddenly the core count of computers went to 8, 16, or 32 cores per chip. A single system could easily have 64 cores on a single board. If you aggregate multiple boards as is done in a Cisco UCS system, you can have 8 boards, or 256 cores, that you can use. There are very few applications that can take advantage of 256 cores, so a virtualization engine was placed on top of these cores so that you could sub-divide the system into smaller chunks. If you have a 4 core database problem, you can allocate 4 cores to it. If you need 8 cores, allocate 8 cores. Products like VMWare and HyperV took advantage of this and grew rapidly. These virtualization packages added features like live migration, dynamic sizing, and bursting utilization. If you allocate 4 cores and the processor goes to 90%, two more cores can be made available for a short burst. The big question then becomes how you license on a per core basis. If you can flex to more processors without rebooting, or live migrate from a 2 core to a 24 core system, which do you license for?
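To make that arithmetic concrete, here is a minimal shell sketch of the per-core math; the core count is an example, and the 0.5 core factor and $47,500 list price are the figures quoted above.

#!/bin/bash
# Rough perpetual license estimate using the list figures quoted above.
CORES=4               # physical cores to license
CORE_FACTOR="0.5"     # Intel x86 multiplier from the core factor table
LIST_PRICE=47500      # list price per processor license, USD

# processors = cores x core factor; license cost = processors x list price
COST=$(printf "%.0f" "$(echo "$CORES * $CORE_FACTOR * $LIST_PRICE" | bc)")
echo "Perpetual license estimate: \$$COST"    # 4 x 0.5 x 47500 = 95000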

Oracle took a different position from the rest of the industry. None of the Oracle products contain a license key. None of the products require that you go to a web site and get a token to allow you to run the software. The code is wide open and freely available to load and run on any system that you want. Unfortunately, companies don't do a good job of tracking utilization. If someone from the sales or purchasing department rolls out a golden master onto a new virtual machine, no one is really tracking that. People outside of IT can easily spin up more instances that consume licenses. They can provision a database in a cloud service and assume that the company has enough licenses to cover their project. After a while, licensing gets out of control and a license review is done to see what is actually being used and how it is being used. Named user licenses are great, but you have to have a ratio of users to cores to meet minimums. You can't, for example, buy a 5 user license and deploy it on a 64 core system. You have to maintain a typical ratio of 25 users to a core or 40 users to a core based on the product that you are using. You also need to make sure that you understand soft partitioning vs hard partitioning. Soft partitioning is the ability to flex or change the core count without having to reconfigure or reboot the system. A hard partition puts hard limits on the core count and does not allow you to exceed it. Products like OracleVM, Solaris, and AIX contain hard partition virtualization. Products like HyperV and VMWare contain soft partitions. With soft partitions, you need to pay for all of the cores in the cluster since in theory you can consume all of the cores. To be honest, most people don't understand this and get in trouble with license reviews.

When we talk about cloud services, licensing is also important to understand. Oracle has published cloud license rules that detail the limits and restrictions. The database is still licensed on a per core basis. The Linux operating system is licensed per server instance and is limited to 8 virtual cores. If you deploy the Oracle database or WebLogic server in AWS or Azure or any other cloud vendor, you have to own a perpetual license for the database using the formulas above. The license must correlate to the high water mark for the core count that you provision. If you provision a 4 core system, you need a 2 processor license. If you run the database for six months and shut it off, you still need to own the perpetual license. The only way to work around this is to purchase database as a service in the Oracle cloud. You can pay for the database license on an hourly or monthly basis with metered services or on an annual basis with non-metered services. This provides a great cost savings because if we only need a database for 6 months we only pay for 6 months x the number of cores x the database edition type. If, for example, we want just Database Enterprise Edition, it is $3K/core/month. If we want 4 cores, that is $12K per month, and for 6 months we get it for $72K. We can walk away from the license and not have to pay the 22% annual maintenance on the $95K. We save $23K the first year and $20K annually by only using the database in the cloud for six months. If we wanted to use the database for 9 months, it is cheaper to own the license and lease processor and storage. If we go up to the next level of database, Database High Performance Edition at $4K/core/month, it becomes even cheaper to use the cloud service because it contains so many options that individually cost around $15K/processor. Features like partitioning, compression, diagnostics, tuning, real application testing, and encryption are part of this cloud service. Suddenly the economics are in favor of hosting a database in the cloud rather than in a data center.
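As a hedged sketch of that break-even comparison (the rates are the list figures quoted above; everything else is an illustrative assumption):

#!/bin/bash
# Compare metered cloud Database EE against owning the perpetual license.
CORES=4
MONTHS=6
EE_RATE=3000          # Database EE, USD per core per month
PERPETUAL=95000       # 4 core (2 processor) perpetual license, USD
SUPPORT=$(printf "%.0f" "$(echo "$PERPETUAL * 0.22" | bc)")  # annual support

CLOUD=$((CORES * MONTHS * EE_RATE))
echo "Metered cloud for $MONTHS months: \$$CLOUD"            # 72000
echo "Perpetual: \$$PERPETUAL up front plus \$$SUPPORT/year" # 95000 + 20900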

Let's go back to the Cisco UCS and network attached storage discussion. If we purchase a UCS 8 blade server with 32 cores per blade we are looking at $150K or higher for the server. We then want to attach a 100 TB disk array to it at about $300K (remember the $3K/TB). We will then have to pay $300K for VMWare. If we add 10% annual support for hardware and 20% for software, we are at just over $1M for a base solution. With a three year amortization we are looking at about $330K per year just to have a compute and storage infrastructure. We have to have a VMWare admin who doles out processors and storage, loads operating systems, creates golden masters, and acts as a traffic cop to manage and allocate resources. We still need to pay for the Oracle database license, which is licensed on a per core basis. Unfortunately, with VMWare we must license all of the cores in the cluster, so we either have to sub-divide the cluster into one blade and license all 32 cores or end up paying for all 256 cores. At roughly $25K/core that gets expensive quickly. Yes, you can run OracleVM or Solaris on one of the blades and subdivide the database into two cores and only pay for two cores, since they both support hard partitioning, but you would be amazed at how many people fight this solution. You now have two virtualization engines that you need to support, with two different file formats. No one really wants two virtualization solutions just to solve a licensing issue.

Oracle has taken a radically different approach to this. Rather than purchasing hardware, storage, and a virtualization platform, run everything in the cloud and pay for it on a monthly basis. The biggest objection is that the cloud is in another city and security, latency, ... you get the picture. The new solution is to run this hardware in your data center with the Oracle Public Cloud Machine. The cost of this solution is roughly $260K/year with a three year commit. You get 200-plus cores and 100-ish TB of storage to use as you want. You don't manage it with vSphere but with the same web page that you use to manage the public cloud services. If you want to script everything, you can manage it with REST apis or Perl/Java/insert-your-language scripts. The key benefit is that you no longer need to worry about what virtualization engine you are using. You manage at the higher level and lease the database or WebLogic or SOA license on an hourly or monthly basis.
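As an illustration of that scripting style, a couple of curl calls against a REST endpoint might look like the sketch below; the hostname, identity domain, paths, and headers here are invented placeholders to show the shape of the calls, not the documented API.

# Authenticate against a (placeholder) REST endpoint, caching the session
# cookie, then reuse it to list instances. Names below are illustrative only.
IDENTITY_DOMAIN="mydomain"

curl -s -X POST \
     -H "Content-Type: application/json" \
     -d '{"user": "/Compute-mydomain/me@example.com", "password": "secret"}' \
     -c cookies.txt \
     https://api.example.oraclecloud.com/authenticate/

curl -s -b cookies.txt \
     -H "Accept: application/json" \
     https://api.example.oraclecloud.com/instance/Compute-$IDENTITY_DOMAIN/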

Next week we will move up the stack and look at database hosting. Today we talked about infrastructure choices and how they impact database license cost. Going with AWS or Azure still requires that you purchase the database license. Going with the Oracle public cloud or Public Cloud Machine allows you to not own the database license but effectively lease it on an hourly or monthly basis. It might not be the right solution for 7x24x365 operation, but it might be. It really is the right solution for bursty needs like holiday peak periods, student registration systems, and development and testing, where you only need a large footprint for a few weeks and don't want to buy for your high-water mark and run at 20% utilization the rest of the year.

Migrating Oracle Utilities products from On Premise to Oracle Public Cloud

Anthony Shorten - Thu, 2016-04-28 18:48

A while back Oracle Utilities announced that the latest releases of the Oracle Utilities Application Framework applications were supported on Platform As A Service (PaaS) on Oracle Public Cloud. As part of that support a new whitepaper has been released outlining the process of migrating an on-premise installation of the product to the relevant Platform As A Service offering on Oracle Public Cloud.

The whitepaper covers the following from a technical point of view:

  • The Oracle Cloud services needed to house the products, including the Oracle Java Cloud Service and Oracle Database As A Service with associated services.
  • Setup instructions on how to configure the services in preparation to house the product.
  • Instructions on how to prepare the software for transfer.
  • Instructions on how to transfer the product schema to an Oracle Database As A Service instance using various techniques.
  • Instructions on how to transfer the software and make configuration changes to realign the product installation for the cloud. The configuration must follow the instructions in the Native Installation Oracle Utilities Application Framework whitepaper (Doc Id: 1544969.1) available from My Oracle Support, which has also been updated to reflect the new process.
  • Basic instructions on using the native cloud facilities to manage your new PaaS instances. More information is available in the cloud documentation.

The whitepaper applies to the latest releases of the Oracle Utilities Application Framework-based products only. Customers and partners wanting to establish new environments (with no previous installation) can use the same process with the addition of actually running the installation on the cloud instance.

Customers and partners considering using Oracle Infrastructure As A Service can use the same process with the addition of installing the prerequisites.

The Migrating From On Premise To Oracle Platform As A Service (Doc Id: 2132081.1) whitepaper is available from My Oracle Support. This will be the first in a series of cloud based whitepapers.

Oracle Fusion Middleware : Concepts & Architecture : If I can do it, you can too

Online Apps DBA - Thu, 2016-04-28 16:43

If I can do it, you can too…and I truly believe this.      In this post I am going to cover what is Oracle Fusion Middleware (FMW), Why I learnt it and What & How you should learn it too.     Before I tell more about FMW, for those who don’t know me, 16 Years ago, […]

The post Oracle Fusion Middleware : Concepts & Architecture : If I can do it, you can too appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

A Practitioner’s Assessment: Digital Transformation

Pythian Group - Thu, 2016-04-28 13:49

 

Rohinee Mohindroo is a guest blogger on Pythian Business Insights.

 

trans·for·ma·tion/ noun: a thorough or dramatic change in form or appearance

The digital transformation rage continues into 2016 with GE, AT&T, GM, Domino’s, Flex, and Starbucks, to name a few. So what’s the big deal?

Technical advances continue to progress at a rapid rate. Digital transformation simply refers to the rate at which the technological trends are embraced by an individual, organization or team.

Organizational culture and vocabulary are leading indicators of the digital transformation maturity level.


Level 1: Business vs. Tech (us vs. them). Each party is fairly ignorant of the value and challenges of the other. Each blames the other for failures and takes credit for successes. Technology is viewed as a competency with a mandate to enable the business.

Level 2: Business and Tech (us and them). Each party is aware of the capability and challenges of the other. Credit for success is shared; failure is not discussed publicly or transparently. Almost everyone is perceived to be technically literate with a desire to deliver business differentiation.

Level 3: Business is Tech (us). Notable awareness of the business model and technology capabilities and opportunities throughout the organization. Success is expected and failure is an opportunity. The organization is relentlessly focused on learning from customers and partners with a shared goal to continually re-define the business.

Which level best describes you or your organization? Please share what inhibits your organization from moving to the next level.

 

Categories: DBA Blogs

Slides and demo script from my APEX command line scripting talk at APEX Connect 2016 in Berlin

Dietmar Aust - Thu, 2016-04-28 12:30
Hi everybody,

I just came back from the DOAG APEX Connect 2016 conference in Berlin ... very nice location, great content and the wonderful APEX community to hang out with ... always a pleasure. This time we felt a little pinkish ;)

As promised, you can download the slides and the demo script (as is) from my site. They are in German, but I will give the talk in English at KScope 2016 in Chicago in June.

Instructions are included.

See you at KScope in Chicago, #letswreckthistogether.

Cheers and enjoy!
~Dietmar. 


Slides and demo script from my ORDS talk at APEX Connect 2016 in Berlin

Dietmar Aust - Thu, 2016-04-28 12:25
Hi everybody,

I just came back from the DOAG APEX Connect 2016 conference in Berlin ... very nice location, great content and the wonderful APEX community to hang out with ... always a pleasure. This time we felt a little bit pink ;)

As promised, you can download the slides and the demo script (as is) from my site.

Instructions are included.

See you at KScope in Chicago, #letswreckthistogether.

Cheers and enjoy!
~Dietmar. 


Oracle Cloud – Querying Glassfish Memory Usage

John Scott - Thu, 2016-04-28 10:48

In a previous post I posted about a Java Heap memory issue we were having with Glassfish.

One of the ways we tracked down the cause of the issue, and worked out by how much to increase the memory, was to use the asadmin command to generate a memory report – which is a really useful way to see a snapshot of how your Glassfish instance is consuming memory.

You can run a command like

./asadmin generate-jvm-report --type memory

which should give output similar to

[oracle@ahi-dhh-prod bin]$ ./asadmin generate-jvm-report --type memory
Enter admin user name> admin
Enter admin password for user "admin">
The uptime of Java Virtual Machine: 47 Hours 2 Minutes 44 Seconds
Memory Pool Name: PS Eden Space
Memory that Java Virtual Machine initially requested to the Operating System: 122,683,392 Bytes
Memory that Java Virtual Machine is guaranteed to receive from the Operating System: 239,599,616 Bytes
Maximum Memory that Java Virtual Machine may get from the Operating System: 264,241,152 Bytes. Note that this is not guaranteed.
Memory that Java Virtual Machine uses at this time: 45,212,424 Bytes

Memory Pool Name: PS Survivor Space
Memory that Java Virtual Machine initially requested to the Operating System: 20,447,232 Bytes
Memory that Java Virtual Machine is guaranteed to receive from the Operating System: 2,097,152 Bytes
Maximum Memory that Java Virtual Machine may get from the Operating System: 2,097,152 Bytes. Note that this is not guaranteed.
Memory that Java Virtual Machine uses at this time: 312,976 Bytes

Memory Pool Name: Code Cache
Memory that Java Virtual Machine initially requested to the Operating System: 2,555,904 Bytes
Memory that Java Virtual Machine is guaranteed to receive from the Operating System: 17,956,864 Bytes
Maximum Memory that Java Virtual Machine may get from the Operating System: 50,331,648 Bytes. Note that this is not guaranteed.
Memory that Java Virtual Machine uses at this time: 17,726,208 Bytes

Memory Pool Name: PS Perm Gen
Memory that Java Virtual Machine initially requested to the Operating System: 67,108,864 Bytes
Memory that Java Virtual Machine is guaranteed to receive from the Operating System: 99,614,720 Bytes
Maximum Memory that Java Virtual Machine may get from the Operating System: 201,326,592 Bytes. Note that this is not guaranteed.
Memory that Java Virtual Machine uses at this time: 99,278,352 Bytes

Memory Pool Name: PS Old Gen
Memory that Java Virtual Machine initially requested to the Operating System: 326,631,424 Bytes
Memory that Java Virtual Machine is guaranteed to receive from the Operating System: 441,974,784 Bytes
Maximum Memory that Java Virtual Machine may get from the Operating System: 536,870,912 Bytes. Note that this is not guaranteed.
Memory that Java Virtual Machine uses at this time: 372,432,888 Bytes


Name of the Garbage Collector: PS MarkSweep
Number of collections occurred using this garbage collector: 244 Bytes
Garbage Collection Time: 37 Seconds 768 Milliseconds
Name of the Garbage Collector: PS Scavenge
Number of collections occurred using this garbage collector: 54,911 Bytes
Garbage Collection Time: 330 Seconds 761 Milliseconds

Heap Memory Usage:
Memory that Java Virtual Machine initially requested to the Operating System: 489,924,096 Bytes
Memory that Java Virtual Machine is guaranteed to receive from the Operating System: 683,671,552 Bytes
Maximum Memory that Java Virtual Machine may get from the Operating System: 716,177,408 Bytes. Note that this is not guaranteed.
Memory that Java Virtual Machine uses at this time: 418,149,376 Bytes

Non-heap Memory Usage:
Memory that Java Virtual Machine initially requested to the Operating System: 69,664,768 Bytes
Memory that Java Virtual Machine is guaranteed to receive from the Operating System: 117,571,584 Bytes
Maximum Memory that Java Virtual Machine may get from the Operating System: 251,658,240 Bytes. Note that this is not guaranteed.
Memory that Java Virtual Machine uses at this time: 117,004,560 Bytes

Approximate number of objects for which finalization is pending: 0


Command generate-jvm-report executed successfully.

As you can see there’s a lot of output here; the Heap Memory Usage section shows how much heap memory we’re currently using and what the maximums are set to. By regularly monitoring these values we were able to home in on the optimal amount of memory to allocate to the Glassfish instance.
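If you want to collect these values over time rather than eyeballing individual reports, a small wrapper loop will do it; this is a minimal sketch that assumes asadmin credentials have been cached with asadmin login so the command does not prompt.

#!/bin/bash
# Log the current heap usage every 5 minutes for later trend analysis.
while true; do
    USED=$(./asadmin generate-jvm-report --type memory | \
           grep -A4 "Heap Memory Usage:" | \
           grep "uses at this time" | grep -o '[0-9,]* Bytes')
    echo "$(date '+%F %T') heap used: $USED" >> heap-usage.log
    sleep 300
done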


Oracle Launches Latest Oracle Field Service Cloud Offering to Power a Completely Connected Customer Service Experience

Oracle Press Releases - Thu, 2016-04-28 10:35
Press Release
Oracle Launches Latest Oracle Field Service Cloud Offering to Power a Completely Connected Customer Service Experience
New field service capabilities can improve customer satisfaction and response times, while reducing service delivery costs

Redwood Shores, Calif.—Apr 28, 2016

Oracle today announced a comprehensive new release of Oracle Field Service Cloud, formerly known as TOA Technologies. The latest release, which is part of Oracle’s end-to-end cloud customer service offering, delivers extensive new field service enhancements focused on mobility, ease of use, and connecting contact center agents delivering service via the phone, email, and chat to field technicians providing in person support.

Field service is a customer service cornerstone for brands spanning a number of industries—including telecommunications, utilities, high tech and manufacturing. A field service appointment may be the only time a company engages with its customers face-to-face to deliver a differentiated customer experience. Leveraging time-based, predictive, self-learning technology, Oracle Field Service Cloud helps create the most efficient job schedules and travel routes for field resources to ensure the right field resource arrives to every job on-time, with the parts and knowledge to complete the work the first time.

New features and integrations available in the latest release of Oracle Field Service Cloud include:

  • Expanded Field Resource Manager: Using a mobile device, field supervisors can now manage all field activities in real time with additional insights into field service team capacity. These new capabilities enable the field workforce to quickly respond to changing customer needs, while streamlining collaboration between resources in the field.
  • Automated Urgent Work Assignment: Companies can now respond to emergency situations using Oracle Field Service Cloud. When an event occurs, the application automatically locates the field resource nearest to the job and immediately routes them to the urgent event. The ability to take action automatically, without requiring human intervention, will help companies ensure relevant resources are quickly deployed – which in emergency situations, can make all the difference.
  • Integration with Oracle’s E-Business Suite: This integration helps streamline job scheduling and inventory tracking capabilities. New integrations with Oracle Integration Cloud Service (ICS) help Oracle Field Service Cloud customers quickly and easily integrate with other cloud applications such as Oracle Sales Cloud.

“At Pella we continually strive to exceed our customer’s expectations and with the recent implementation of Oracle Field Service Cloud we are gaining new levels of customer service capabilities,” said Julia Neary, IT Manager, Fulfillment and Service, Pella Corporation. “We are already seeing a reduction in manual processes and improved management of our service technicians. More importantly we are able to leverage Oracle Field Service Cloud to respond faster, more reliably and more efficiently to our customer’s needs.”

Oracle Field Service Cloud is part of the Oracle Service Cloud portfolio which delivers a complete service experience across a wide range of modern contact center channels including web, phone, email, chat, social and the in-person field service interaction, all powered by the latest technology. This cross channel, connected and mobile service experience is something that competitive products fail to deliver.

Oracle believes that the latest release further cements Oracle Service Cloud’s leadership position cited by industry analyst firm Forrester Research in: The Forrester Wave™: Customer Service Solutions For Enterprise Organizations, Q4 2015 and The Forrester Wave™: Customer Service Solutions For Midsize Teams, Q4 2015. Both ranked Oracle Service Cloud as a Leader, with the highest current offering category scores in both reports. In addition, Oracle Service Cloud was noted among vendors that “deliver high-volume omnichannel service” and “have a foundational layer of knowledge management to deliver channel-specific answers to customer inquiries.”

Contact Info
Erik Kingham
Oracle
650-506-8298
erik.kingham@oracle.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Erik Kingham

  • 650-506-8298

Log Buffer #471: A Carnival of the Vanities for DBAs

Pythian Group - Thu, 2016-04-28 10:14

This Log Buffer Edition covers Oracle, SQL Server and MySQL blog posts of the week.

Oracle:

Improving PL/SQL performance in APEX

A utility to extract and present PeopleSoft Configuration and Performance Data

No, Oracle security vulnerabilities didn’t just get a whole lot worse this quarter.  Instead, Oracle updated the scoring metric used in the Critical Patch Updates (CPU) from CVSS v2 to CVSS v3.0 for the April 2016 CPU.  The Common Vulnerability Score System (CVSS) is a generally accepted method for scoring and rating security vulnerabilities.  CVSS is used by Oracle, Microsoft, Cisco, and other major software vendors.

Oracle Cloud – DBaaS instance down for no apparent reason

Using guaranteed restore points to navigate through time

SQL Server:

ANSI SQL with Analytic Functions on Snowflake DB

Exporting Azure Data Factory (ADF) into TFS Source Control

Getting started with Azure SQL Data Warehouse

Performance Surprises and Assumptions : DATEADD()

With the new security policy feature in SQL Server 2016 you can restrict write operations at the row level by defining a block predicate.

MySQL:

How to rename MySQL DB name by moving tables

MySQL 5.7 Introduces a JSON Data Type

Ubuntu 16.04 first stable distro with MySQL 5.7

MariaDB AWS Key Management Service (KMS) Encryption Plugin

MySQL Document Store versus Bug hunter

Categories: DBA Blogs

Oracle Buys Textura

Oracle Press Releases - Thu, 2016-04-28 07:00
Press Release
Oracle Buys Textura
Adds Leading Construction Contracts and Payment Management Cloud Services to the Oracle Engineering and Construction Industry Cloud Platform

Redwood Shores, Calif.—Apr 28, 2016

Oracle (NYSE: ORCL) today announced that it has entered into a definitive agreement to acquire Textura (NYSE: TXTR), a leading provider of construction contracts and payment management cloud services for $26.00 per share in cash. The transaction is valued at approximately $663 million, net of Textura’s cash.

Textura’s cloud services process $3.4 billion in payments for over 6,000 projects each month, helping keep projects on time and under budget while reducing risk for developers, contractors and subcontractors. Textura offers its cloud services in a consumption model preferred by the engineering and construction industry whereby the companies involved pay based on project activity. Further, usage of Textura’s cloud services creates a network effect that benefits all participants as more than 85,000 general and subcontractors are connected to the platform.

Oracle Primavera offers a complete suite of cloud solutions for project, cost, time and risk management. The Oracle Primavera flagship products have been completely re-architected for the Cloud, and the result is a set of cloud services that are growing rapidly as construction and engineering companies embrace digital transformation. Together, Oracle Primavera and Textura will form the Oracle Engineering and Construction Global Business Unit offering a comprehensive cloud-based project control and execution platform that manages all phases of engineering and construction projects.

“The increasingly global engineering and construction industry requires digital modernization in a way that automates manual processes and embraces the power of cloud computing to easily connect the construction job site, reduce cost overruns, and improve productivity,” said Mike Sicilia, SVP and GM, Engineering and Construction Global Business Unit, Oracle. “Together, Textura and Oracle Engineering and Construction will have the most comprehensive set of cloud services in the industry.”

“Textura’s mission is to bring workflow automation and transparency to complex construction projects while improving their financial performance and minimizing risks,” said David Habiger, Chief Executive Officer, Textura. “We are excited to join Oracle and bring our cloud-based capabilities to help extend the Oracle Engineering and Construction Industry Cloud Platform.”

The Board of Directors of Textura has unanimously approved the transaction. The transaction is expected to close in 2016, subject to Textura stockholders tendering 66 2/3% of Textura’s outstanding shares and derivative securities exercised prior to the closing (as required by Textura’s certificate of incorporation) in the tender offer, certain regulatory approvals and other customary closing conditions.

More information about this announcement is available at www.oracle.com/textura.

Contact Info
Deborah Hellinger
Oracle Corporate Communications
+1.212.508.7935
deborah.hellinger@oracle.com
Ken Bond
Oracle Investor Relations
+1.650.607.0349
ken.bond@oracle.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Cautionary Statement Regarding Forward-Looking Statements
This document contains certain forward-looking statements about Oracle and Textura, including statements that involve risks and uncertainties concerning Oracle's proposed acquisition of Textura, anticipated customer benefits and general business outlook. When used in this document, the words "anticipates", “can", “will”, "look forward to", "expected" and similar expressions and any other statements that are not historical facts are intended to identify those assertions as forward-looking statements. Any such statement may be influenced by a variety of factors, many of which are beyond the control of Oracle or Textura, that could cause actual outcomes and results to be materially different from those projected, described, expressed or implied in this document due to a number of risks and uncertainties. Potential risks and uncertainties include, among others, the possibility that the transaction will not close or that the closing may be delayed, the anticipated synergies of the combined companies may not be achieved after closing, the combined operations may not be successfully integrated in a timely manner, if at all, general economic conditions in regions in which either company does business, and the possibility that Oracle or Textura may be adversely affected by other economic, business, and/or competitive factors. Accordingly, no assurances can be given that any of the events anticipated by the forward-looking statements will transpire or occur, or if any of them do so, what impact they will have on the results of operations or financial condition of Oracle or Textura.

In addition, please refer to the documents that Oracle and Textura, respectively, file with the U.S. Securities and Exchange Commission (the “SEC”) on Forms 10-K, 10-Q and 8-K. These filings identify and address other important factors that could cause Oracle's and Textura’s respective operational and other results to differ materially from those contained in the forward-looking statements set forth in this document. You are cautioned to not place undue reliance on forward-looking statements, which speak only as of the date of this document. Neither Oracle nor Textura is under any duty to update any of the information in this document.

Oracle is currently reviewing the existing Textura product roadmap and will be providing guidance to customers in accordance with Oracle's standard product communication policies. Any resulting features and timing of release of such features as determined by Oracle's review of Textura’s product roadmap are at the sole discretion of Oracle. All product roadmap information, whether communicated by Textura or by Oracle, does not represent a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decision. It is intended for information purposes only, and may not be incorporated into any contract.

Additional Information about the Acquisition and Where to Find It
In connection with the proposed acquisition, Oracle will commence a tender offer for the outstanding shares of Textura. The tender offer has not yet commenced. This document is for informational purposes only and is neither an offer to purchase nor a solicitation of an offer to sell shares of Textura, nor is it a substitute for the tender offer materials that Oracle and its acquisition subsidiary will file with the SEC upon commencement of the tender offer. At the time the tender is commenced, Oracle and its acquisition subsidiary will file tender offer materials on Schedule TO, and Textura will file a Solicitation/Recommendation Statement on Schedule 14D-9 with the SEC with respect to the tender offer. The tender offer materials (including an Offer to Purchase, a related Letter of Transmittal and certain other tender offer documents) and the Solicitation/Recommendation Statement will contain important information. Holders of shares of Textura are urged to read these documents when they become available because they will contain important information that holders of Textura securities should consider before making any decision regarding tendering their securities. The Offer to Purchase, the related Letter of Transmittal and certain other tender offer documents, as well as the Solicitation/Recommendation Statement, will be made available to all holders of shares of Textura at no expense to them. The tender offer materials and the Solicitation/Recommendation Statement will be made available for free at the SEC’s web site at www.sec.gov.

In addition to the Offer to Purchase, the related Letter of Transmittal and certain other tender offer documents, as well as the Solicitation/Recommendation Statement, Oracle and Textura file annual, quarterly and special reports and other information with the SEC. You may read and copy any reports or other information filed by Oracle or Textura at the SEC public reference room at 100 F Street, N.E., Washington, D.C. 20549. Please call the Commission at 1-800-SEC-0330 for further information on the public reference room. Oracle’s and Textura’s filings with the SEC are also available to the public from commercial document-retrieval services and at the website maintained by the SEC at http://www.sec.gov.

Talk to a Press Contact

Deborah Hellinger

  • +1.212.508.7935

Ken Bond

  • +1.650.607.0349

Standard SQL ? – Oracle REGEXP_LIKE

The Anti-Kyte - Thu, 2016-04-28 06:55

Is there any such thing as ANSI Standard SQL ?
Lots of databases claim to conform to this standard. Recent experience tends to make me wonder whether it’s more just a basis for negotiation.
This view is partly the result of having to juggle SQL between three different SQL parsers in the Cloudera Hadoop infrastructure, each with their own “quirks”.
It’s worth remembering however, that SQL differs across established Relational Databases as well, as a recent question from Simon (Teradata virtuoso and Luton Town Season Ticket Holder) demonstrates :

Is there an Oracle equivalent of the Teradata LIKE ANY operator when you want to match against a list of patterns, for example :

like any ('%a%', '%b%')

In other words, can you do a string comparison, including wildcards, within a single predicate in Oracle SQL ?

The short answer is yes, but the syntax is a bit different….

The test table

We’ve already established that we’re not comparing apples with apples, but I’m on a bit of a health kick at the moment, so…

create table fruits as
    select 'apple' as fruit from dual
    union all
    select 'banana' from dual
    union all
    select 'orange' from dual
    union all
    select 'lemon' from dual
/
The multiple predicate approach

Traditionally the search statement would look something like :

select fruit
from fruits
where fruit like '%a%'
or fruit like '%b%'
/

FRUIT 
------
apple 
banana
orange

REGEXP_LIKE

Using REGEXP_LIKE takes a bit less typing and – unusually for a regular expression – uses fewer non-alphanumeric characters …

select fruit
from fruits
where regexp_like(fruit, '(a)|(b)')
/

FRUIT 
------
apple 
banana
orange

We can also search for multiple substrings in the same way :

select fruit
from fruits
where regexp_like(fruit, '(an)|(on)')
/

FRUIT 
------
banana
orange
lemon 

I know, it doesn’t feel like a proper regular expression unless we’re using the top row of the keyboard.

Alright then, if we just want to get records that start with ‘a’ or ‘b’ :

select fruit
from fruits
where regexp_like(fruit, '(^a)|(^b)')
/

FRUIT 
------
apple 
banana

If instead, we want to match the end of the string…

select fruit
from fruits
where regexp_like(fruit, '(ge$)|(on$)')
/

FRUIT
------
orange
lemon

…and if you want to combine searching for patterns at the start, end or anywhere in a string, in this case searching for records that

  • start with ‘o’
  • or contain the string ‘ana’
  • or end with the string ‘on’

select fruit
from fruits
where regexp_like(fruit, '(^o)|(ana)|(on$)')
/

FRUIT
------
banana
orange
lemon

Finally on this whistle-stop tour of REGEXP_LIKE, for a case insensitive search…

select fruit
from fruits
where regexp_like(fruit, '(^O)|(ANA)|(ON$)', 'i')
/

FRUIT
------
banana
orange
lemon

There’s quite a bit more to regular expressions in Oracle SQL.
For a start, here’s an example of using REGEXP_LIKE to validate a UK Post Code.
There’s also a comprehensive guide here on the PSOUG site.
Now I’ve gone through all that fruit I feel healthy enough for a quick jog… to the nearest pub.
I wonder if that piece of lime they put in the top of a bottle of beer counts as one of my five a day ?


Filed under: Oracle, SQL Tagged: regexp_like

Contemplating Upgrading to OBIEE 12c?

Rittman Mead Consulting - Thu, 2016-04-28 05:00
Where You Are Now

OBIEE 12c has been out for some time, and it seems like most folks are delaying upgrading to OBIEE 12c until the very last minute. Or at least until Oracle decides to put out another major version change of OBIEE, which is understandable. You’ve already spent time and money and devoted hundreds of resource hours to system monitoring, maintenance, testing, and development. Maybe you’ve invested in staff training to try to maximize your ROI in your existing OBIEE purchase. And now, after all this time and effort, you and your team have finally gotten things just right. Your BI engine is humming along, user adoption and stickiness are up, and you don’t have a lot of dead objects clogging up the Web Catalog. Your report hacks and work-arounds have been worked and reworked to become sustainable and maintainable business solutions. Everyone is getting what they want.

Sure, this scenario is part fantasy, but it doesn’t mean that as a BI team lead or member, you’re not always working toward this end. It would be nice to think that the people designing the tools with which we do this work understood the daily challenges and processes we must undergo in order to maintain the precarious homeostasis of our BI ecosystems. That’s where Rittman Mead comes in. If you’re considering upgrading to OBIEE 12c, or are even curious, keep reading. We’re here to help.

So Why Upgrade

Let’s get right down to it. Shoot over here and here to check out what our very own Mark Rittman had to say about the good, the bad, and the ugly of 12c. Our Silvia Rauton did a piece on lots of the nuts and bolts of 12c’s new front-end features. They’re all worth a read. Upgrading to OBIEE 12c offers many exciting new features that shouldn’t be ignored.

Heat Map

How Rittman Mead Can Help

We understand what it is to be presented with so many project challenges. Do you really want to risk the potential perils and pitfalls presented by upgrading to OBIEE 12c? We work both harder and smarter to make this stuff look good. And we get the most out of strategy and delivery via a number of in-house tools designed to keep your OBIEE deployment in tip top shape.

Maybe you want to make sure all your Catalog and RPD content gets ported over without issue? Instead of spending hours on testing every dashboard, report, and other catalog content post-migration, we’ve got the Automated Regression Testing package in our tool belt. We deploy this series of proprietary scripts and dashboards to ensure that everything will work just the way it did, if not better, from one version to the next.

Maybe you’d like to make sure your system will fire on all cylinders or you’d like to proactively monitor your OBIEE implementation. For that we’ve got the Performance Analytics Dashboards, built on the open source ELK stack to give you live, active monitoring of critical BI system stats and the underlying database and OS.

OBIEE Performance Analytics

On top of these tools, we’ve got the strategies and processes in place to not only guarantee the success of your upgrade, but to ensure that you and your team remain active and involved in the process.

What to Expect

You might be wondering what kinds of issues you can expect to experience during upgrading to OBIEE 12c (which is to say, nothing’s going to break, right?). Are you going to have to go through a big training curve? Does upgrading to OBIEE 12c mean you’re going to experience considerable resource downtime as your team, or an even an outside company, manages this process? To answer this question, I’m reminded of a quote from the movie Fight Club: “Choose your level of involvement.”

While we always prefer to work alongside your BI or IT team to facilitate the upgrade process, we also know that resource time is valuable and that your crew can’t stop what they’re doing until things wrap up. We often find that the more clients are engaged with the process, however, the easier the hand-off is because clients better understand best practices, and IT and BI teams are more empowered for the future.

Learning More about OBIEE 12c

But if you’re like many organizations, maybe you have to stay more hands off and get training after the upgrade is complete. Check out the link here to look over the agenda of our OBIEE 12c Bootcamp training course. Like our hugely popular 11g course, this program is five days of back-to-front instruction taught via a selection of seminars and hands-on labs, designed to impart most everything your team will need to know to continue or begin their successful BI practice.

What we often find is that, in addition to being a thorough and informative course, the Bootcamp is a great way to bring together teams or team members, often dispersed among different offices, under one roof to gain common understanding about how each person plays an important role as a member of the BI process. Whether they handle the ETL, data modeling, or report development, everyone can benefit from what often evolves from a training session into some impromptu team building.

Feel Empowered

If you’re still on the fence about whether or not to upgrade, as I said before, you’re not alone. There are lots of things you need to consider, and rightfully so. You might be thinking, “What does this mean for extra work on the plates of my resources? How can I ensure the success of my project? Is it worth it to do it now, or should I wait for the next release?” Whatever you may be mulling over, we’ve been there, know how to answer the questions, and have some neat tools in our utility belt to move the process along. In the end, I hope to have presented you with some bits to aid you in making a decision about upgrading to OBIEE 12c, or at least the impetus to start thinking about it.

If you’d like any more information or just want to talk more about the ins and outs of what an upgrade might entail, send over an email or give us a call.

The post Contemplating Upgrading to OBIEE 12c? appeared first on Rittman Mead Consulting.

Categories: BI & Warehousing

Server Problems : Update

Tim Hall - Thu, 2016-04-28 02:08

This is a follow-on from my server problems post from yesterday…

Regarding the general issue, misiaq came up with a great suggestion, which was to use watchdog. It’s not going to “fix” anything, but if I get a reboot when the general issue happens, that would be much better than having the server sit idle for 5 hours until I wake up.

Archive Storage Services vs Archive Cloud Services

Pat Shuff - Thu, 2016-04-28 02:07
Yesterday we started talking about the cost comparison for storage in the cloud. We briefly touched on the cost of long term archive in the cloud. How much does it cost to back up data for long term archive and what is the best way to do this? Years ago the default way of doing this was to copy your data on disk to a tape unit and put the tape in a box. The box was then put in an environmentally controlled room to extend the lifetime of the tape, and a person was put on staff to pull the data off the shelf when the data was needed. The data might be a backup of data on disk or a secondary copy just in case the disk failed. Tape was typically used to provide the separation of duties required by Sarbanes-Oxley to keep people who report on financial data separate from the financial data. It also allowed companies to take large volumes of data, like seismic data, and not keep it on spinning disk. The traces were reloaded when geophysicists wanted to look at the data.

The first innovation in this technology was to provide a robot to load and unload tapes as a tape unit gets full or needs to be reloaded. Magazines were created that could hold eight tapes, and the robots had bar code readers so that they could seek to the right tape in the magazine, pull it out of the series of tapes, and insert it into the tape unit for reading or writing. Management software got more advanced and understood the bar code values and could sequence the whopping 800 GB of data that could be written to an LTO4 tape. Again, technology gets updated and the industry moved to LTO5 and LTO6 tapes with significantly higher densities. A single LTO6 tape can hold 2.5 TB, and compression now allows us to store 6 TB on these tapes. If we go back to our 120 TB case that we talked about yesterday, this means that we will need 20 tapes (at $30-$45 for each tape) and $25K for a single tape drive unit. Most tape drive systems support 8 tapes per magazine, so we are talking about something that will support three magazines. To support three magazines we need a second shelf in our tape storage, so the price goes up by about $20K. We are sitting at about $55K to back up our 120 TB and $5.5K in annual support for the hardware. We also need about $1K in tapes for each of the full and incremental backups that we want, which comes to $20K for four months of retention before we recycle the tapes. These tapes are good for a dozen re-writes, so every three years we will need to repurchase tapes. If we spread the cost of the tape unit, tape drives, and tapes across three years, we are looking at about $2K/month to back up our 120 TB. We also need to factor in $60/week for tape pickup and storage fees at a service like Iron Mountain and a couple of $250 charges to retrieve tapes in the event of a catastrophic failure to drive tapes back to our data center from cold storage. This bumps the cost to $2.2K/month, which is significantly cheaper than the $10K/month for network storage in our data center or $3.6K/month for cloud storage services. Unfortunately, a tape unit requires someone to care for and feed it, and you will pay that person more than $600/month, though not the $7.8K/month that you would with the cloud or disk solutions.
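A quick sketch of that amortization math (all figures are the rough estimates above, not vendor quotes):

#!/bin/bash
# Amortize the on-site tape example over a three year life.
HARDWARE=55000        # tape unit plus second shelf, USD
TAPES=20000           # four months of full/incremental retention
OFFSITE_WEEKLY=60     # pickup and storage fee per week

MONTHLY=$(( (HARDWARE + TAPES) / 36 + OFFSITE_WEEKLY * 52 / 12 ))
echo "~\$$MONTHLY/month"   # about $2.3K/month, close to the figure above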

If you had a ton of data to archive you could purchase a tape silo that supported hundreds or thousands of magazines. Unfortunately, this expandability comes at a cost. The tape backup unit grows from an eighth of a rack to twenty full racks, and there isn't much in between. You can get an eighth-of-a-rack solution, a full rack solution, or a twenty full rack solution. The largest solution comes in at hundreds of thousands of dollars rather than tens of thousands.

Enter cloud solutions. Amazon and Oracle offer tape solutions in the cloud. Both companies operate the twenty full rack solution but only charge consumers for the capacity they use. Amazon Glacier charges $7/TB/month to store data. Oracle charges $1/TB/month for the same service. Both companies charge for data restoration and outbound transfer of data. The Amazon Glacier cost of writing 120 TB and reading back 10% of it comes in at $2,218/month. This is the same cost as having the tape unit on site. The key difference is that we can recover the data by requesting it from Amazon and get it back in less than four hours. There are no emergency recovery charges and no weekly pickup charges. We can expand the amount that we back up, and the bulk of this cost is reading back the data ($1,300). Storage is relatively cheap for our backups; we just need to plan for the cost of recovery and try to limit it, since it is the bulk of the cost.

We can drop this cost even more using the Oracle Archive Cloud Services. The price from Oracle is $1/TB/month, and the recovery and transmission charges are about the same. The same archive service with Oracle is $1,560/month, with roughly $1,300 of that being the charges for restoring and outbound transfer of the data. Unfortunately, Oracle does not offer an un-metered archive service, so we have to guesstimate how much we are going to restore on a monthly basis.

Both services use REST apis to write, restore, and read data. When a container (Oracle Archive) or bucket (Amazon Glacier) is created, a PUT call is made to the endpoint of the service. The first step required by both is authentication to provide credentials to the service. Below we show the Oracle authentication and creation process through the REST api.
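The original post showed these calls as screenshots; as a hedged reconstruction with curl, where the hostname and identity domain are placeholders, the flow looks roughly like this:

# 1. Authenticate: trade credentials for a short-lived token. The response
#    carries an X-Auth-Token header that is used on every later call.
curl -s -i \
     -H "X-Storage-User: Storage-mydomain:me@example.com" \
     -H "X-Storage-Pass: mypassword" \
     https://mydomain.storage.oraclecloud.com/auth/v1.0

# 2. Create the container with a PUT, marking it as archive (tape in the
#    cloud) rather than spinning disk via the storage class header.
curl -s -X PUT \
     -H "X-Auth-Token: $AUTH_TOKEN" \
     -H "X-Storage-Class: Archive" \
     https://mydomain.storage.oraclecloud.com/v1/Storage-mydomain/myArchive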

The important part of this is the archive header extension. This differentiates if the container is spinning disk or if it is tape in the cloud.

Amazon recommends using a Windows-based tool like S3 Browser or CloudBerry, or using a language like Java, .NET, or Ruby with their published SDKs. CloudBerry works for the Oracle Archive as well. When you create a container you have the option of selecting storage or archive as the container type.

Both services allow you to encrypt and compress the data as it is written, with HTTP headers changing the characteristics and parameters of the container. Both services require you to issue a PUT request to write the data to tape. Below we show the Oracle REST api.
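With the same placeholder names as before, the upload itself is a single authenticated PUT of the object into the archive container:

curl -s -X PUT \
     -H "X-Auth-Token: $AUTH_TOKEN" \
     -T backup-2016-04.tar.gz \
     https://mydomain.storage.oraclecloud.com/v1/Storage-mydomain/myArchive/backup-2016-04.tar.gz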

For CloudBerry and the other gui based tools, uploading is just a drag and drop from your local file system to the tape storage in the cloud.

Amazon details the readback procedure and job system that shows the status of the restore request. Oracle has a similarly defined retrieval policy as well as an archive tutorial. Both services offer a 4 hour window to allow for restoration. Below is an example of a restore request and checking on the job status of the job spawned to load the tape and transfer the data for reading. The file is ready to read when the completedPercentage is 100.
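Sketched with the same placeholder names (treat the exact restore verb and paths as assumptions to verify against the retrieval policy documentation), the cycle looks like:

# 1. Request that the archived object be staged for download.
curl -s -X POST \
     -H "X-Auth-Token: $AUTH_TOKEN" \
     https://mydomain.storage.oraclecloud.com/v1/Storage-mydomain/myArchive/backup-2016-04.tar.gz

# 2. Poll the object's metadata until completedPercentage reaches 100.
curl -s -I \
     -H "X-Auth-Token: $AUTH_TOKEN" \
     https://mydomain.storage.oraclecloud.com/v1/Storage-mydomain/myArchive/backup-2016-04.tar.gz

# 3. Once staged (within the four hour window), a normal GET downloads it.
curl -s \
     -H "X-Auth-Token: $AUTH_TOKEN" \
     -o backup-2016-04.tar.gz \
     https://mydomain.storage.oraclecloud.com/v1/Storage-mydomain/myArchive/backup-2016-04.tar.gz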

We can do the same thing with the S3 browser and Amazon Glacier. We need to request the restore, check the job status, then download the restored files. The files change color when they are ready to read.

In summary, we have looked at how to reduce the cost of archives and backups. We looked at using a secondary disk at our data center or another data center. We looked at using on site tape units. We looked at disk in the cloud. Today we looked at tape in the cloud. It is important to remember that no one of these solutions is the answer; a combination of any or all of them is needed. Daily and weekly backups should happen to a secondary disk locally. This data is the most likely to be restored on a regular basis. Once you get a full backup or two under your belt, move the data to another site. It might be spinning disk, it might be tape, but something needs to be offsite in the event of a true catastrophic failure like a communication link going out (think Dell PowerVault and a thunderstorm) and you lose your primary LUN and the secondary LUN that contains your backups. The whole idea of offsite backups is not restore but primarily insurance and regulatory compliance. If someone needs to see the old data, it is there. You are betting that you won't need to read it back, and the cloud vendors are counting on that. If you do read it back on a regular basis, you might want to significantly increase your budget, pass the charges on to the people who want to read the data back, or look for another solution. Tape storage in the cloud is a great way of archiving data for a long time at a low cost.

How to recover space from already deleted files

Pythian Group - Wed, 2016-04-27 14:15

Wait, what? Deleted files are gone, right? Well, not so if they’re currently in use, with an open file handle by an application. In the Windows world, you just can’t touch it, but under Linux (if you’ve got sufficient permissions), you can!

Often in the Systems Administration and Site Reliability Engineering world, we will encounter a disk space issue being reported, and there’s very little we can do to recover the space. Everything is critically important! We then check for deleted files and find massive amounts of space consumed when someone has previously deleted Catalina, Tomcat, or Weblogic log files while Java had them in use, and we can’t restart the processes to release the handles due to the critical nature of the service. Conundrum!

Here at Pythian, we Love Your Data, so I thought I’d share some of the ways we deal with situations like this.

How to recover

First, we grab a list of PIDs that still have deleted files open. Then we iterate over the open file handles and null them.

# Grab the PIDs of processes holding deleted files open (field 7 is SIZE/OFF)
PIDS=$(lsof | awk '/deleted/ { if ($7 > 0) { print $2 }; }' | uniq)
# List each process's deleted-but-open file descriptors
for PID in $PIDS; do ls -l /proc/$PID/fd | grep deleted; done

This could be scripted to null all deleted files automatically, though with great care.
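A rough sketch of such a script follows; note that it truncates every deleted-but-open file it finds, including things like PID files, so filter the lsof output before using anything similar in anger.

#!/bin/bash
# DANGEROUS: truncates every file that is deleted but still held open.
for PID in $(lsof | awk '/deleted/ { if ($7 > 0) { print $2 }; }' | uniq); do
    # Field 9 of ls -l output is the descriptor number in /proc/$PID/fd
    for FD in $(ls -l /proc/$PID/fd 2>/dev/null | awk '/deleted/ { print $9 }'); do
        echo "Nulling /proc/$PID/fd/$FD"
        : > "/proc/$PID/fd/$FD"
    done
done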

Worked example

1. Locating deleted files:

[root@importantserver1 usr]# lsof | head -n 1 ; lsof | grep -i deleted
 COMMAND   PID   USER   FD  TYPE DEVICE SIZE/OFF NODE   NAME
 vmtoolsd  2573  root   7u  REG  253,0  9857     65005  /tmp/vmware-root/appLoader-2573.log (deleted)
 zabbix_ag 3091  zabbix 3wW REG  253,0  4        573271 /var/tmp/zabbix_agentd.pid (deleted)
 zabbix_ag 3093  zabbix 3w  REG  253,0  4        573271 /var/tmp/zabbix_agentd.pid (deleted)
 zabbix_ag 3094  zabbix 3w  REG  253,0  4        573271 /var/tmp/zabbix_agentd.pid (deleted)
 zabbix_ag 3095  zabbix 3w  REG  253,0  4        573271 /var/tmp/zabbix_agentd.pid (deleted)
 zabbix_ag 3096  zabbix 3w  REG  253,0  4        573271 /var/tmp/zabbix_agentd.pid (deleted)
 zabbix_ag 3097  zabbix 3w  REG  253,0  4        573271 /var/tmp/zabbix_agentd.pid (deleted)
 java      23938 tomcat 1w  REG  253,0  0        32155  /opt/log/tomcat/catalina.out (deleted)
 java      23938 tomcat 2w  REG  253,0  45322216 32155  /opt/log/tomcat/catalina.out (deleted)
 java      23938 tomcat 9w  REG  253,0  174      32133  /opt/log/tomcat/catalina.2015-01-17.log (deleted)
 java      23938 tomcat 10w REG  253,0  57408    32154  /opt/log/tomcat/localhost.2016-02-12.log (deleted)
 java      23938 tomcat 11w REG  253,0  0        32156  /opt/log/tomcat/manager.2014-12-09.log (deleted)
 java      23938 tomcat 12w REG  253,0  0        32157  /opt/log/tomcat/host-manager.2014-12-09.log (deleted)
 java      23938 tomcat 65w REG  253,0  363069   638386 /opt/log/archive/athena.log.20160105-09 (deleted)

2. Grab the PIDs:

[root@importantserver1 usr]# lsof | awk '/deleted/ { if ($7 > 0) { print $2 }; }' | uniq
 2573
 3091
 3093
 3094
 3095
 3096
 3097
 23938

Show the deleted files that each process still has open (and the space they are still consuming):

[root@importantserver1 usr]# export PIDS=$(lsof | awk '/deleted/ { if ($7 > 0) { print $2 }; }' | uniq)
[root@importantserver1 usr]# for PID in $PIDS; do ll /proc/$PID/fd | grep deleted; done
 lrwx------ 1 root root 64 Mar 21 21:15 7 -> /tmp/vmware-root/appLoader-2573.log (deleted)
 l-wx------ 1 root root 64 Mar 21 21:15 3 -> /var/tmp/zabbix_agentd.pid (deleted)
 l-wx------ 1 root root 64 Mar 21 21:15 3 -> /var/tmp/zabbix_agentd.pid (deleted)
 l-wx------ 1 root root 64 Mar 21 21:15 3 -> /var/tmp/zabbix_agentd.pid (deleted)
 l-wx------ 1 root root 64 Mar 21 21:15 3 -> /var/tmp/zabbix_agentd.pid (deleted)
 l-wx------ 1 root root 64 Mar 21 21:15 3 -> /var/tmp/zabbix_agentd.pid (deleted)
 l-wx------ 1 root root 64 Mar 21 21:15 3 -> /var/tmp/zabbix_agentd.pid (deleted)
 l-wx------ 1 tomcat tomcat 64 Mar 21 21:15 1 -> /opt/log/tomcat/catalina.out (deleted)
 l-wx------ 1 tomcat tomcat 64 Mar 21 21:15 10 -> /opt/log/tomcat/localhost.2016-02-12.log (deleted)
 l-wx------ 1 tomcat tomcat 64 Mar 21 21:15 11 -> /opt/log/tomcat/manager.2014-12-09.log (deleted)
 l-wx------ 1 tomcat tomcat 64 Mar 21 21:15 12 -> /opt/log/tomcat/host-manager.2014-12-09.log (deleted)
 l-wx------ 1 tomcat tomcat 64 Mar 21 21:15 2 -> /opt/log/tomcat/catalina.out (deleted)
 l-wx------ 1 tomcat tomcat 64 Mar 21 21:15 65 -> /opt/log/archive/athena.log.20160105-09 (deleted)
 l-wx------ 1 tomcat tomcat 64 Mar 21 21:15 9 -> /opt/log/tomcat/catalina.2015-01-17.log (deleted)

Null the specific files (here, we target the catalina.out file):

[root@importantserver1 usr]# cat /dev/null > /proc/23938/fd/2

Alternative ending

Instead of deleting the contents to recover the space, you might be in the situation where you need to recover the contents of the deleted file. If the application still has the file descriptor open on it, you can then recover the entire file to another one (dd if=/proc/23938/fd/2 of=/tmp/my_new_file.log) – assuming you have the space to do it!
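
Building on the worked example above (PID 23938, file descriptor 2, the deleted catalina.out):

# the SIZE/OFF column showed ~45 MB, so check the target filesystem first
df -h /tmp
# copy the deleted-but-open file out through its /proc handle
dd if=/proc/23938/fd/2 of=/tmp/my_new_file.log bs=1M
ls -lh /tmp/my_new_file.log   # verify the recovered copy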

Conclusion

While it’s best not to get into the situation in the first place, you’ll sometimes find yourself cleaning up after someone else’s good intentions. Now, instead of trying to find a window of “least disruption” to the service, you can recover the situation nicely. Or, if the alternative ending is what you’re after, you’ve recovered a file that you thought was long since gone.

Categories: DBA Blogs

Deploy Docker containers using AWS Opsworks

Pythian Group - Wed, 2016-04-27 13:51
Introduction

This post is about how to deploy Docker containers on AWS using Opsworks and Docker Compose.
AWS and Docker need no introduction, so let’s quickly introduce Opsworks and Docker Compose.

Opsworks

Opsworks is a great tool provided by AWS that runs Chef recipes on your instances. For AWS instances you don’t pay anything to use Opsworks, and you can also manage instances outside of AWS for a flat cost, just by installing the agent and registering the instance with Opsworks.

Opsworks instance types

We have three different types of instances on Opsworks:

1. 24x7x365
Runs continuously, with no stop.

2. Time based
Runs during a predefined window, such as work hours.

3. Load based
Scales up and down according to preconfigured metrics.

You can find more details here.

Custom JSON

Opsworks provides Chef databags (variables to be used in your recipes) via the Custom JSON, and that’s the key to this solution. We will manage everything by changing just a JSON file, and this file can easily become part of your development pipeline.

Life cycle

Opsworks has five lifecycle events:
1. Setup
2. Configure
3. Deploy
4. Undeploy
5. Shutdown
We will use Setup, Deploy, and Shutdown. You can find more details about the Opsworks lifecycle here.

Docker Compose

Docker Compose was originally developed under the Fig project. Nowadays Fig is deprecated and docker-compose is an official Docker tool.
Using docker-compose, you can manage all containers and their attributes (links, shared volumes, etc.) on a Docker host. Docker-compose can only manage containers on the local host where it is deployed; it cannot orchestrate Docker containers between hosts.
All configuration is specified inside a YML file.
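
As a minimal sketch (the image name and port mapping are illustrative and not taken from the cookbook used later in this post), a one-service compose file and the two commands you will meet most often look like this:

# write a single-service compose file (2016-era v1 syntax) and start it
cat > docker-compose.yml <<'EOF'
web:
  image: httpd:2.4
  ports:
    - "80:80"
EOF
docker-compose up -d    # create and start the containers in the background
docker-compose ps       # list the containers managed by this file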

Chef recipes

Using Opsworks, you will manage all hosts with just one small Chef cookbook. All the magic is in translating the Custom JSON from Opsworks into the YML file used by docker-compose.
The cookbook installs all components (Docker, pip, and docker-compose), translates the Custom JSON to a YML file, and sends commands to docker-compose.

Hands on

Let’s stop talking and see things happen.

We can split it into five steps:

  1. Resources creation
    1. Opsworks Stack
        1. Log into your AWS account
        2. Go to Services -> Management Tools -> Opsworks
          Accessing Opsworks menu
        3. Click on Add stack (if you already have stacks on Opsworks) or Add your first stack (if it’s the first time you are creating stacks on opsworks)
        4. Select type Chef 12 stack
          Note: The Chef cookbook used in this example only supports Chef12
        5. Fill out stack information
          Note:
          – You can use any name as stack name
          – Make sure the VPC selected is properly configured
          – This solution supports Amazon Linux and Ubuntu
          – Repository URL https://bitbucket.org/tnache/opsworks-recipes.git
        6. Click on advanced if you want to change something. Changing “Use OpsWorks security groups” to No can be a good idea when you need to communicate with instances running outside of Opsworks
        7. Click on “Add stack”
    2. Opsworks layer
        1. Click on “Add a layer”
        2. Set Name, Short name and Security groups. I will use webserver

      Note:
      Use a simple name, because we will use it in later steps.
      The name web is reserved for AWS internal use.

        3. Click on “Add layer”


    3. Opsworks Instance
        1. Click on “Instances” on the left panel
        2. Click on “Add an instance”
        3. Select the size (instance type)
        4. Select the subnet
        5. Click on “Add instance”


  2. Resources configuration
    1. Opsworks stack
        1. Click on “Stack” on the left panel
        2. Click on “Stack Settings”
        3. Click on “Edit”
        4. Find the Custom JSON field and paste the content of the file below

      custom_json_1

      5. Click on “Save”
    2. Opsworks layer
        1. Click on “Layers” on the left panel
        2. Click on “Recipes”
        3. Type docker-compose in the Setup field and press Enter
        4. Type docker-compose::deploy in the Deploy field and press Enter
        5. Type docker-compose::stop in the Shutdown field and press Enter
        6. Click on “Save”


  3. Start
    1. Start instance
        1. Click on start


  4. Tests
    Note: Wait until the instance reaches the online state

      1. Open your browser and you should be able to see “It works!”
      2. Check the running containers (see the verification sketch after this list)


  5. Management
      1. Change the Custom JSON to the file below (see Resources configuration => Opsworks stack)

    custom_json_2

      2. Click on “Deployments” on the left panel
      3. Click on “Run Command”
      4. Select “Execute Recipes” as the “Command”
      5. Type “docker-compose::deploy” under “Recipes to execute”
      6. Click on “Execute Recipes”

    Note: Wait until the deployment finishes

      7. Check the running containers (see the verification sketch after this list)

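To check the running containers, you can also verify things on the instance itself. A minimal sketch, assuming SSH access to the Opsworks instance and that your containers publish the web server on port 80, as in this example:

docker ps                               # list the running containers
curl -s http://localhost/ | head -n 5   # expect the “It works!” page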

Categories: DBA Blogs

Oracle Celebrates 10th Anniversary of Markie Awards

Oracle Press Releases - Wed, 2016-04-27 09:32
Press Release
Oracle Celebrates 10th Anniversary of Markie Awards
The 2016 Markie Awards represent Oracle Marketing Cloud’s biggest award campaign in history, with 350 submissions highlighting modern marketing success

MODERN MARKETING EXPERIENCE—Las Vegas, NV—Apr 27, 2016

To promote best practices and celebrate the accomplishments of modern marketers around the world, Oracle has announced the winners of the 2016 Markie Awards. Celebrating its 10th anniversary, the annual Markie Awards recognize the top companies and leaders for creative, innovative, and effective modern marketing campaigns that delivered increased engagement, conversation and ROI. Past winners include notable companies such as The Dow Chemical Company, Intel, Marriott Rewards, DocuSign, Twitter and Rockwell Automation.

The Markie Awards highlight the best omnichannel success stories in an age where the customer experience is more important than ever before. This year marked the Markie Awards’ biggest campaign yet, with a total of 350 award submissions, up 80 percent over last year.

The 2016 Markie Awards featured 19 categories that reflect the evolving marketing landscape, including: Best Digital Marketing Ecosystem, Best Sales and Marketing Alignment, Best Mobile Experience, Rapid Transformation, Content is King, and Modern Marketing Leader of the Year.

“The 10th anniversary of the Markie Awards represents a decade of marketing excellence from top companies and leaders around the world,” said Kevin Akeroyd, senior vice president and general manager, Oracle Marketing Cloud. “This year’s winners have gone above and beyond, fully leveraging the most innovative marketing technologies to create campaigns that put the customer at the heart of their programs, which delivered outstanding results.”

Winners of the 2016 Markie Awards are:

  • Best Use of Audience Data: Dell, Inc.
  • Best Digital Marketing Ecosystem: Lenovo
  • Best Email Marketing Campaign: Jetstar
  • Best International Campaign: Tomtom
  • Best Mobile Experience: Zalora
  • Best Social Campaign: Family Share Network/Deseret Digital
  • Content is King: Wine.com.br
  • Best Customer Retention or Loyalty Program: Cetera Financial Group
  • Best Use of Data Analytics and Insights: Thomson Reuters (MarkMonitor)
  • Best Integrated Consumer Marketing Program: AGA
  • Rapid Transformation: BT Global Services
  • Best Sales and Marketing Alignment: Sierra
  • Best Testing and Optimization of the Customer Experience: YOTEL
  • Best Lead Management Program: Juniper
  • Best Emerging Company Marketing Program: Mobovida
  • Modern Marketing Leader of the Year: Blake Cahill, Royal Phillips
  • Best Web or Commerce Experience: SunPower
  • Most Creative Marketing Campaign: Eaton
  • People’s Choice Markie Award - Best Video: Cisco Systems

“At Oracle, we strive to put the customer at the forefront of everything we do,” said Catherine Blackmore, group vice president, customer success, Oracle Marketing Cloud. “Therefore, we believe the best Modern Marketers are all about the customer, too. Oracle is proud to recognize and honor these winners for their customer-centric campaigns, outstanding achievements, and their dedication to Modern Marketing.”

The 2016 Markie Awards were presented on Tuesday, April 26th, during Oracle Modern Marketing Experience in Las Vegas, Nevada.

Contact Info
Erik Kingham
Oracle
650-506-8298
erik.kingham@oracle.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Oracle Marketing Cloud Helps Simplify Data-Driven Marketing

Oracle Press Releases - Wed, 2016-04-27 07:40
Press Release
Oracle Marketing Cloud Helps Simplify Data-Driven Marketing
New innovations and integrations help direct-to-consumer marketers personalize the customer experience and optimize marketing spend

MODERN MARKETING EXPERIENCE – Las Vegas, NV—Apr 27, 2016

Extending its commitment to help consumer marketers target, convert and retain their ideal customers, Oracle has introduced new innovations and integrations for the Oracle Marketing Cloud. The latest additions empower marketers to use differentiated audience data for targeting on their paid media, orchestrate cross-channel interactions and further optimize the customer experience.

Marketers can now improve the customer experience through a new direct integration between Oracle Maxymiser and Oracle Responsys applications. The integration enables marketers to orchestrate Oracle Maxymiser-delivered messaging on a website in conjunction with messaging in email, SMS, MMS, push messaging, in-app messaging and display advertising. In addition, Oracle Maxymiser is now fully integrated with the Oracle Data Management Platform. The integration empowers marketers to tap into first, second and third party data and use it for audience testing and optimization initiatives.

To help marketers connect the right audience data and deliver a more consistent customer experience, new integrations with the Oracle Data Cloud have been introduced to the Oracle Marketing Cloud’s Data Management Platform. By leveraging Oracle Data Cloud’s enhanced modeling solution, the first integration gives marketers the ability to model their online audiences based on critical offline data, such as in-store purchase transactions. The second integration gives customers access to Oracle Data Cloud's AddThis Audience Discovery tool, which helps marketers better understand and target their ideal audience with relevant paid media campaigns. The powerful data-driven marketing solution uses natural language processing to look at keywords on billions of web pages across the web. It transforms those keywords into audience data that is based on the online behaviors of two billion unique users traversing more than 15 million websites around the world.

The latest additions to the Oracle Marketing Cloud also reduce the complexity of data-driven marketing by delivering greater insight into how customer engagement drives revenue. By leveraging the speed and scale of Oracle Business Intelligence, Oracle Marketing Cloud now provides marketers with advanced data visualization and investigation tools through an easy-to-use interface that delivers deep insights into performance across different channels. To further enhance usability, a new graphical audience segmentation feature allows B2C marketers to benefit from segmentation logic that is easier to visualize, test and reuse.

“Marketing has the opportunity to take the lead in driving a deeply personalized customer experience across all channels, touch points and interactions,” said Steve Krause, group vice president, Product Management, Oracle Marketing Cloud. “To help marketers capitalize on this opportunity, we have introduced a number of new integrations and innovations within the Oracle Marketing Cloud that help B2C marketers orchestrate and personalize the full customer experience. The latest additions include powerful data integrations that will help brands increase engagement and optimize marketing spend.” 

Contact Info
Erik Kingham
Oracle
650-506-8298
erik.kingham@oracle.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Oracle Marketing Cloud Helps Marketers Reduce Sales Cycles and Improve Conversion Rates

Oracle Press Releases - Wed, 2016-04-27 07:33
Press Release
Oracle Marketing Cloud Helps Marketers Reduce Sales Cycles and Improve Conversion Rates
Innovative new platform enables business-to-business marketers to scale and optimize account-based marketing programs

MODERN MARKETING EXPERIENCE - Las Vegas, NV—Apr 27, 2016

Oracle today announced new account-based marketing capabilities within the Oracle Marketing Cloud that help business-to-business (B2B) marketers increase lead generation, reduce sales cycles and improve conversion rates. By supporting the entire B2B buyers’ journey from acquisition and prospecting to closing new business, the new Oracle Marketing Cloud Account-Based Marketing capabilities enable B2B marketers to optimize targeting on paid media, more effectively engage target prospects and simplify data management and integration.   

Account-based marketing can be extremely complex and difficult to scale as marketers often have to personalize interactions with multiple IT and business stakeholders during the deal cycle. In order to simplify the process and help marketers accelerate deal cycles and maximize revenue potential, the new Oracle Marketing Cloud Account-Based Marketing capability provides one of the industry’s largest sets of curated B2B account audience data for targeting on paid media through a new direct integration between Oracle Marketing Cloud and Oracle Data Cloud. Oracle Data Cloud has data on more than one million US companies and over 60 million anonymous business profiles.  B2B marketers can now use this data within the Oracle Marketing Cloud to prospect for anonymous users based on account, online behavior and offline data such as past purchases or other behavioral data. The powerful insights delivered enable marketers to systematically drive account-based re-targeting, site and content optimization.

In addition, to empower B2B marketers to execute account-specific campaigns that increase conversion rates and decrease sales cycles, the new capabilities include unique account-based messaging that enables communications to be personalized at scale. Marketers can also easily append account data to new and existing leads, eliminating the need to present long forms for lead capture, while significantly improving data quality. The new account scoring and nurturing capabilities are powered by integrations with Demandbase and other Oracle Marketing AppCloud partners. It can be administered directly inside Oracle Eloqua, the Oracle Marketing Cloud’s marketing automation product. 

“Account-based marketing can be very powerful, but it has always been unnecessarily complex for B2B marketers,” said John Stetic, group vice president of products, Oracle Marketing Cloud. “With the latest enhancements to the Oracle Marketing Cloud, customers will benefit from a simpler and more innovative approach to account-based marketing through new integrations, data services and expanded capabilities. B2B marketers will be able to more efficiently and effectively execute account-based marketing campaigns that demonstrate a clear ROI.”

Contact Info
Erik Kingham
Oracle
650-506-8298
erik.kingham@oracle.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Stats History

Jonathan Lewis - Wed, 2016-04-27 07:09

From time to time we see a complaint on OTN about the stats history tables being the largest objects in the SYSAUX tablespace and growing very quickly, with requests about how to work around the (perceived) threat. The quick answer is: if you need to save space then stop holding on to the history for so long, and clean up the mess left by the history that you have already captured. On top of that you could stop gathering so many histograms, because you probably don’t need them; they often introduce instability to your execution plans, and they are often the largest single component of the history (unless you are using incremental stats on partitioned objects***).

For many databases it’s the histogram history – using the default Oracle automatic stats collection job – that takes the most space. Here’s a sample query that the SYS user can run to get some idea of how significant this history can be:


SQL> select table_name , blocks from user_tables where table_name like 'WRI$_OPTSTAT%HISTORY' order by blocks;

TABLE_NAME                           BLOCKS
-------------------------------- ----------
WRI$_OPTSTAT_AUX_HISTORY                 80
WRI$_OPTSTAT_TAB_HISTORY                244
WRI$_OPTSTAT_IND_HISTORY                622
WRI$_OPTSTAT_HISTHEAD_HISTORY          1378
WRI$_OPTSTAT_HISTGRM_HISTORY           2764

5 rows selected.

As you can see the “histhead” and “histgrm” tables (histogram header and histogram detail) are the largest stats history tables in this (admittedly very small) database.

Oracle gives us a couple of calls in the dbms_stats package to check and change the history setting, demonstrated as follows:


SQL> select dbms_stats.get_stats_history_retention from dual;

GET_STATS_HISTORY_RETENTION
---------------------------
                         31

1 row selected.

SQL> execute dbms_stats.alter_stats_history_retention(7)

PL/SQL procedure successfully completed.

SQL> select dbms_stats.get_stats_history_retention from dual;

GET_STATS_HISTORY_RETENTION
---------------------------
                          7

1 row selected.

Changing the retention period doesn’t reclaim any space, of course – it simply tells Oracle how much of the existing history to eliminate in the next “clean-up” cycle. This clean-up is controlled by a “savtime” column in each table:

SQL> select table_name from user_tab_columns where column_name = 'SAVTIME' and table_name like 'WRI$_OPTSTAT%HISTORY';

TABLE_NAME
--------------------------------
WRI$_OPTSTAT_AUX_HISTORY
WRI$_OPTSTAT_HISTGRM_HISTORY
WRI$_OPTSTAT_HISTHEAD_HISTORY
WRI$_OPTSTAT_IND_HISTORY
WRI$_OPTSTAT_TAB_HISTORY

5 rows selected.

If all you wanted to do was stop the tables from growing further you’ve probably done all you need to do. From this point onwards the automatic Oracle job will start deleting the oldest saved stats and re-using space in the existing table. But you may want to be a little more aggressive about tidying things up, and Oracle gives you a procedure to do this – and it might be sensible to use this procedure anyway at a time of your own choosing:


SQL> execute dbms_stats.purge_stats(sysdate - 7);

Basically this issues a series of delete statements (including a delete on the “stats operation log (wri$_optstat_opr)” table that I haven’t previously mentioned) – here’s an extract from an 11g trace file of a call to this procedure (output from a simple grep command):


delete /*+ dynamic_sampling(4) */ from sys.wri$_optstat_tab_history          where savtime < :1 and rownum <= NVL(:2, rownum)
delete /*+ dynamic_sampling(4) */ from sys.wri$_optstat_ind_history h        where savtime < :1 and rownum <= NVL(:2, rownum)
delete /*+ dynamic_sampling(4) */ from sys.wri$_optstat_aux_history          where savtime < :1 and rownum <= NVL(:2, rownum)
delete /*+ dynamic_sampling(4) */ from sys.wri$_optstat_opr                  where start_time < :1 and rownum <= NVL(:2, rownum)
delete /*+ dynamic_sampling(4) */ from sys.wri$_optstat_histhead_history     where savtime < :1 and rownum <= NVL(:2, rownum)
delete /*+ dynamic_sampling(4) */ from sys.wri$_optstat_histgrm_history      where savtime < :1 and rownum <= NVL(:2, rownum)

Two points to consider here: although the appearance of the rownum clause suggests that there’s a damage-limitation strategy built into the code, I only saw one commit after the entire delete cycle, and I never saw a limiting bind value being supplied. If you’ve got a large database with very large history tables you might want to delete one day (or even just a few hours) at a time; a sketch follows. The potential for a very long, slow delete is also why you might want to do a manual purge at a time of your choosing rather than letting Oracle do the whole thing on auto-pilot during some overnight operation.
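
A minimal sketch of that incremental approach, assuming roughly 90 days of history on hand and a 7-day target retention (adjust the loop bounds to suit):

sqlplus / as sysdba <<'EOF'
begin
        -- purge one day at a time, oldest first, to keep each delete small
        for i in reverse 8 .. 90 loop
                dbms_stats.purge_stats(sysdate - i);
        end loop;
end;
/
EOF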

Secondly, even though you may have deleted a lot of data from these tables you still haven’t reclaimed the space – so if you’re trying to find space in the SYSAUX tablespace you’re going to have to rebuild the tables and their indexes. Unfortunately a quick check of v$sysaux_occupants tells us that there is no official “move” procedure:


SQL> execute print_table('select occupant_desc, move_procedure, move_procedure_desc from v$sysaux_occupants where occupant_name = ''SM/OPTSTAT''')

OCCUPANT_DESC                 : Server Manageability - Optimizer Statistics History
MOVE_PROCEDURE                :
MOVE_PROCEDURE_DESC           : *** MOVE PROCEDURE NOT APPLICABLE ***

So we have to run a series of explicit calls to alter table move and alter index rebuild (preferably not while anyone is trying to gather stats on an object). Coding that up is left as an exercise for the reader, but it may be best to move the tables smallest first, rebuilding indexes as you go; a sketch that generates the statements follows.
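
A sketch that generates the statements rather than hard-coding object names (review the output before running it, and run the rebuilds after the moves, since a move leaves the indexes unusable):

sqlplus -s / as sysdba <<'EOF'
set pagesize 0 linesize 200 feedback off
rem  smallest history table first
select 'alter table ' || owner || '.' || table_name || ' move;'
from   dba_tables
where  table_name like 'WRI$_OPTSTAT%HISTORY'
order  by blocks;
rem  every index on a moved table must be rebuilt
select 'alter index ' || owner || '.' || index_name || ' rebuild;'
from   dba_indexes
where  table_name like 'WRI$_OPTSTAT%HISTORY';
EOF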

Footnote:

*** Incremental stats on partitioned objects: I tend to assume that sites which use partitioning are creating very large databases and have probably paid a lot more attention to the details of how to use statistics effectively and successfully; that’s why this note is aimed at sites which don’t use partitioning and therefore think that the space taken up by the stats history is significant.

