Feed aggregator

Managing My Amazon Web Services Redhat Instance

Yann Neuhaus - Tue, 2016-11-15 10:20

In a previous blog post I talked about how to create an AWS Linux instance. Natural follow-up questions include: how do I create a new user and connect as that user, how do I transfer files from my workstation, and how do I connect to my Oracle instance from my workstation?
In this blog I am going to deal with some basic but useful administration tasks.
Changing my hostname
One thing we will probably do is change the hostname, since the Linux instance is built with a generic one. Changing the hostname involves the following tasks.
Update /etc/hostname with the new hostname

[root@ip-172-31-47-219 etc]# vi hostname
[root@ip-172-31-47-219 etc]# cat /etc/hostname
primaserver.us-west-2.compute.internal
[root@ip-172-31-47-219 etc]#

Update /etc/hosts

[root@primaserver ORCL]# cat /etc/hosts
127.0.0.1   primaserver.us-west-2.compute.internal localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

Update our /etc/sysconfig/network with the HOSTNAME value

[root@ip-172-31-47-219 sysconfig]# cat network
HOSTNAME=primaserver.us-west-2.compute.internal
[root@ip-172-31-47-219 sysconfig]#

To keep the change permanent, we have to add the line preserve_hostname: true to the /etc/cloud/cloud.cfg file

[root@ip-172-31-47-219 cloud]# grep   preserve_hostname cloud.cfg
preserve_hostname: true
[root@ip-172-31-47-219 cloud]#

The last step is to reboot the server:

[root@ip-172-31-47-219 cloud]# reboot
Using username "ec2-user".
Authenticating with public key "imported-openssh-key"
Last login: Mon Nov 14 03:20:13 2016 from
[ec2-user@primaserver ~]$ hostname
primaserver.us-west-2.compute.internal
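The file edits above can be rehearsed as one script. The sketch below works on throwaway copies of the files so it needs no root; on a real instance you would drop the "$etc" prefix, run as root, and reboot at the end. The hostname is just our example name.

```shell
#!/bin/bash
# Rehearsal of the hostname change on throwaway copies of the files.
# On a real instance: drop the "$etc" prefix, run as root, and reboot at the end.
etc=$(mktemp -d)
mkdir -p "$etc/sysconfig" "$etc/cloud"
touch "$etc/sysconfig/network" "$etc/cloud/cloud.cfg"

NEW_HOST=primaserver.us-west-2.compute.internal    # our example hostname

echo "$NEW_HOST" > "$etc/hostname"                                 # 1. /etc/hostname
printf '127.0.0.1   %s localhost\n' "$NEW_HOST" > "$etc/hosts"     # 2. /etc/hosts
echo "HOSTNAME=$NEW_HOST" >> "$etc/sysconfig/network"              # 3. /etc/sysconfig/network
grep -q '^preserve_hostname: true' "$etc/cloud/cloud.cfg" ||
  echo 'preserve_hostname: true' >> "$etc/cloud/cloud.cfg"         # 4. keep it across reboots

cat "$etc/hostname"
```

On RHEL 7 the first step can also be done with hostnamectl set-hostname, which edits /etc/hostname for you.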

Creating a new user and connecting with it
User creation is done with useradd as usual, but a few extra steps are needed before we can connect as this user. Suppose the new user is oracle.
As oracle, we have to create a .ssh directory and adjust its permissions

[root@ip-172-31-33-57 ~]# su - oracle
[oracle@ip-172-31-33-57 ~]$ pwd
/home/oracle
[oracle@ip-172-31-33-57 ~]$ mkdir .ssh
[oracle@ip-172-31-33-57 ~]$ chmod 700 .ssh

And then let’s create an authorized_keys file

[oracle@ip-172-31-33-57 ~]$ touch .ssh/authorized_keys
[oracle@ip-172-31-33-57 ~]$ cd .ssh/
[oracle@ip-172-31-33-57 .ssh]$ vi authorized_keys
[oracle@ip-172-31-33-57 .ssh]$ chmod 600 authorized_keys

The last step is to copy the content of the public key we used for the ec2-user (remember, we created a key pair when we built our Linux box; see the corresponding blog post) into /home/oracle/.ssh/authorized_keys

cd /home/ec2-user/
cd .ssh/
cat authorized_keys >> /home/oracle/.ssh/authorized_keys

And now the connection from my workstation should work for the new user, using the public DNS and PuTTY.
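The whole sequence can be rehearsed without root by simulating the two home directories. In the sketch below, src stands in for /home/ec2-user/.ssh and dst for /home/oracle, and the key line is a fake stand-in; on the real instance you would also chown the files to oracle.

```shell
#!/bin/bash
# Simulate copying ec2-user's key into the new user's authorized_keys.
src=$(mktemp -d)   # stands in for /home/ec2-user/.ssh
dst=$(mktemp -d)   # stands in for /home/oracle
echo 'ssh-rsa AAAA...fake... imported-openssh-key' > "$src/authorized_keys"  # stand-in key

mkdir -m 700 "$dst/.ssh"                                  # mkdir .ssh; chmod 700 .ssh
cat "$src/authorized_keys" >> "$dst/.ssh/authorized_keys" # copy the public key over
chmod 600 "$dst/.ssh/authorized_keys"

stat -c '%a' "$dst/.ssh" "$dst/.ssh/authorized_keys"
```

sshd is strict about these modes: a group- or world-writable .ssh directory or authorized_keys file makes it refuse the key, which is why the 700/600 steps matter.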
Transferring files from my workstation to the AWS instance
A common need is to transfer files from our local workstation to our AWS instance. We can use WinSCP: we just import the PuTTY session we already use to connect into WinSCP, and then we can connect. Launch WinSCP and use the Tools option.

Then select the session we want to import, and we should be able to connect with WinSCP.
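As a command-line alternative to WinSCP (for example from a Linux or macOS workstation), scp with the same private key does the job. Both the key file name and the host below are made-up placeholders, so the actual transfer stays commented out.

```shell
#!/bin/bash
KEY=my-aws-key.pem                                  # hypothetical name of the downloaded key
HOST=ec2-52-0-0-1.us-west-2.compute.amazonaws.com   # hypothetical public DNS of the instance
target="ec2-user@${HOST}:/tmp/"

echo "would run: scp -i $KEY myfile.txt $target"
# scp -i "$KEY" myfile.txt "$target"   # requires the real key and a reachable instance
```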
Connecting to my Oracle instance from my workstation
I have installed the Oracle software, and my database and listener are running. How do we connect from the workstation? Just as we usually do; we only have to allow connections on the database port (here I am using the default, 1521). The Security Groups option is used for editing the inbound rules.
Using Add Rule, we can allow connections on port 1521. Of course, we can restrict the source of the access.

And using the public DNS of my instance, I can connect. For this example I am connecting to an Oracle Express (XE) instance.
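Assuming the security group now allows port 1521, the workstation-side connection is a standard EZConnect string built from the instance's public DNS. The host below is a made-up placeholder, so the sqlplus call itself stays commented out.

```shell
#!/bin/bash
HOST=ec2-52-0-0-1.us-west-2.compute.amazonaws.com   # hypothetical public DNS
conn="system@//${HOST}:1521/XE"                     # EZConnect: user@//host:port/service

echo "$conn"
# sqlplus "$conn"   # prompts for the password; needs an Oracle client and port 1521 open
```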

The AWS documentation is available here: https://aws.amazon.com/documentation/









The article Managing My Amazon Web Services Redhat Instance appeared first on Blog dbi services.

Live Webinar - Master Class - ADF Bindings Explained

Andrejus Baranovski - Tue, 2016-11-15 10:09
I will be running a free online webinar on Wed, Dec 7, 2016, 7:00 PM - 9:00 PM CET. Everyone who wants to learn more about ADF Bindings is welcome to join!

Registration URL: https://attendee.gotowebinar.com/register/3325820742563232258
Webinar ID: 806-309-947


Master Class - ADF Bindings Explained (Andrejus Baranovskis, Oracle ACE Director)


This two-hour webinar is targeted at ADF beginners, with the main goal of explaining the ADF bindings concept and its usage to the full potential. ADF Bindings is one of the most complex parts of ADF to learn, and every ADF developer should understand how ADF bindings work. The goal is an interactive session: participants can ask questions and get answers live. This live event is completely free. Join it on December 7th at 7:00 PM CET (Central European Time), which is 12:00 PM in New York and 10:00 AM in San Francisco.

In order to join the live webinar, you need to complete the registration form on GoToWebinar. The number of participants is limited, so don't wait: register now.

Topics to be covered: 

1. ADF Bindings overview: why ADF Bindings are required and how they are useful
2. Drill down into ADF Bindings: how a binding object is executed, from the UI fragment down to the Page Definition
3. ADF binding types: the different bindings generated when using JDeveloper wizards, and what happens with ADF Bindings when using the LOV, table, ADF Query and Task Flow wizards
4. Declarative ADF binding access with expressions
5. Programmatic ADF binding access from managed beans
6. ADF binding sharing and access from ADF Task Flows: how to create a binding layer for Task Flow method call or router activities
7. Best practices for ADF Bindings
8. Your questions

Linux Instance in Amazon Web Services (AWS)

Yann Neuhaus - Tue, 2016-11-15 03:19

In this article I will talk about how to create a Linux machine in the Amazon AWS cloud. For testing, a trial account can be created.
Once registered, we can connect by using the "Sign In to the Console" button.
To create an instance, let's click on EC2 under Compute,
and then let's use the Launch Instance button.

We can see the templates for building our machine. In our example we are going to use a Red Hat one.
We keep the default selected
We keep the default instance details
Below the storage details
The instance tag
We keep default values for the security group
After that, we have the instance review, which summarizes our configuration.
Before launching the instance, we have to create a key pair, and we have to save the private key, which we will use to connect (with PuTTY, for example).


We can finish the process now by clicking on Launch Instances

If we click on the Connect tab at the top, we get information on how to connect. One useful piece of information is the Public DNS, which we will use to connect.
Now that our instance is ready let’s see how to connect. I am using putty.
A few steps ago we created a key pair and kept the private key, with a .pem extension. Using this key, we will create a key in PuTTY's format (.ppk). For this we will use puttygen.
Just launch the PuTTY Key Generator, load the .pem key, and follow the instructions.
And now we can use PuTTY, load the .ppk private key, and connect as the built-in ec2-user, using the Public DNS.
Click Browse to load the .ppk file

Using username "ec2-user".
Authenticating with public key "imported-openssh-key"
[ec2-user@ip-172-31-33-57 ~]$ hostname
ip-172-31-33-57.us-west-2.compute.internal
[ec2-user@ip-172-31-33-57 ~]$

[ec2-user@ip-172-31-33-57 ~]$ cat /proc/meminfo | grep Mem
MemTotal:        1014976 kB
MemFree:          630416 kB
MemAvailable:     761716 kB
[ec2-user@ip-172-31-33-57 ~]$

[ec2-user@ip-172-31-33-57 ~]$ cat /proc/cpuinfo | grep proc
processor       : 0
[ec2-user@ip-172-31-33-57 ~]$

[ec2-user@ip-172-31-33-57 ~]$ cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.3 (Maipo)
[ec2-user@ip-172-31-33-57 ~]$
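As a side note, the .ppk conversion is only needed for PuTTY. A Linux or macOS workstation can use the downloaded .pem directly with OpenSSH, provided the file permissions are strict. In this sketch the key is a throwaway stand-in and the DNS is a placeholder, so the actual ssh call is commented out.

```shell
#!/bin/bash
KEY=$(mktemp --suffix=.pem)   # stands in for the downloaded private key file
chmod 400 "$KEY"              # ssh refuses keys that are group/world readable

stat -c '%a' "$KEY"
# ssh -i "$KEY" ec2-user@ec2-52-0-0-1.us-west-2.compute.amazonaws.com   # hypothetical DNS
```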





The article Linux Instance in Amazon Web Services (AWS) appeared first on Blog dbi services.

Fix for Big Data Lite 4.6

If you are using Big Data Lite 4.6, you will need to make a change to the /etc/fstab file: sudo gedit /etc/fstab <replace line 1 with line 2> ...

We share our skills to maximize your revenue!
Categories: DBA Blogs

DSTv28 Timezone Patches Available for E-Business Suite 12.1 and 12.2

Steven Chan - Tue, 2016-11-15 02:04
If your E-Business Suite Release environment is configured to support Daylight Saving Time (DST) or international time zones, it's important to keep your timezone definition files up-to-date. They were last changed in October 2016 and released as DSTv27.

DSTv28 is now available and certified with Oracle E-Business Suite Release 12.1 and 12.2. This update includes the timezone information from the IANA tzdata 2016g.  It is cumulative: it includes all previous Oracle DST updates. 

Is Your Apps Environment Affected?

When a country or region changes DST rules or their time zone definitions, your Oracle E-Business Suite environment will require patching if:

  • Your Oracle E-Business Suite environment is located in the affected country or region OR
  • Your Oracle E-Business Suite environment is located outside the affected country or region but you conduct business or have customers or suppliers in the affected country or region

The latest DSTv28 timezone definition file is cumulative and includes all DST changes released in earlier time zone definition files. DSTv28 includes changes to the following timezones since the DSTv24 release:

  • Asia/Rangoon
  • Asia/Istanbul
  • Europe/Istanbul
  • Turkey

What Patches Are Required?

In case you haven't been following our previous time zone or Daylight Saving Time (DST)-related articles, international timezone definitions for E-Business Suite environments are captured in a series of patches for the database and application tier servers in your environment. The actual scope and number of patches that need to be applied depend on whether you've applied previous DST or timezone-related patches. Some sysadmins have remarked to me that it generally takes more time to read the various timezone documents than it takes to apply these patches, but your mileage may vary.

Proactive backports of DST upgrade patches to all Oracle E-Business Suite tiers and platforms are not created and supplied by default. If you need this DST release and an appropriate patch is not currently available, raise a service request through support providing a business case with your version requirements.

The following Note identifies the various components in your E-Business Suite environment that may need DST patches:

What is the business impact of not applying these patches?

Timezone patches update the database and other libraries that manage time. They ensure that those libraries contain the correct dates and times for the changeover between Daylight Savings Time and non-Daylight Savings Time.

Time is used to record events, particularly financial transactions. Time is also used to synchronize transactions between different systems. Some organizations' business transactions are more sensitive to timezone changes than others.

If you do not apply a timezone patch, and do business with a region that has changed their timezone definitions, and record a transaction that occurs at the boundary between the “old” and the “new” time, then the transaction may be recorded incorrectly. That transaction's timestamp may be off by an hour. 

For example:

  • An order is placed for a customer in a country which changed their DST dates in DSTv27
  • The old Daylight Savings Time expiration date was Nov. 2
  • The new Daylight Savings Time expiration date is now October 31
  • An order is set to ship at 12am on November 1st
  • Under the old Daylight Savings Time rules, the revenue would be recorded for November
  • Under the new Daylight Savings Time rules, the revenue would be recorded for October
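A quick way to see this kind of shift is to render one absolute instant under different zone definitions using the system's tzdata. This sketch assumes GNU date and an installed tzdata; the instant is arbitrary, and the rendered Istanbul time depends on which tzdata version is installed.

```shell
#!/bin/bash
# One absolute instant (epoch seconds), rendered under two zone definitions.
epoch=1478000000                                               # 2016-11-01 11:33:20 UTC
utc=$(TZ=UTC date -u -d "@$epoch" '+%Y-%m-%d %H:%M')
ist=$(TZ=Europe/Istanbul date -d "@$epoch" '+%Y-%m-%d %H:%M')

echo "UTC:      $utc"
echo "Istanbul: $ist"   # +3 under tzdata 2016g, where Turkey stays on DST permanently
```

A database or application tier with an older tzdata would render the same instant an hour differently, which is exactly how the order in the example above slips from one month's revenue into another.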

Related Article

Oracle's Revenue Recognition rules prohibit us from discussing certification and release dates, but you're welcome to monitor or subscribe to this blog. I'll post updates here as soon as they're available.

Categories: APPS Blogs

Oracle TO_DATE Function Explained with Examples

Complete IT Professional - Mon, 2016-11-14 21:26
The Oracle TO_DATE function is one of the most common and useful conversion functions in Oracle, but it can be confusing. I’ll explain how to use the TO_DATE function in this article. Purpose of the Oracle TO_DATE Function The purpose of the TO_DATE function in Oracle is to convert a character value to a […]
Categories: Development

How to insert the data using sql*loader by CSV file which contain comma as separator and comma present at column value

Tom Kyte - Mon, 2016-11-14 20:06
Hi Connor, I have an issue with sql*loader during loading below CSV file I have a csv file with below data:- Column names:- empid,empname,address,salary,deptn0 CSV file data:- 1123,Swarup,PO Box 42,1407 Graymalkin Lane,Salem Center, N...
Categories: DBA Blogs

Data Guard Log Apply method

Tom Kyte - Mon, 2016-11-14 20:06
I have a primary and a standby database which is running in maximum performance mode and LGWR ASYNC has been set for the same in Primary. Platform - Linux and Version - 12c This is regarding the apply process in Standby Database 1. I do not ...
Categories: DBA Blogs

Sql statistics per execution

Tom Kyte - Mon, 2016-11-14 20:06
Hi Tom, Is there a way to find cpu_time, db_time, physical_read_requests, physical_write_requests...etc per execution basis ? Say I run a particular SQL multiple times with different bind values. I'm interested in seeing sql with bind variables ...
Categories: DBA Blogs

Oracle Tracing with Bind Variables

Tom Kyte - Mon, 2016-11-14 20:06
Hi , I enabled tracing on the particular session in oracle database by using "dbms_system.set_sql_trace_in_session" and i am not enabled to trace back the binding variables associated with insert statements . Below is the sample statement: i...
Categories: DBA Blogs

OGG Activity Logging Tracing (Doc ID 1204284.1)

Michael Dinh - Mon, 2016-11-14 19:54

I just came across MOS Doc for tracing OGG processes.

Just thought I would compare the old versus new.

You can find comparison and my preference here

Is it safe to move/recreate alertlog while the database is up and running

Learn DB Concepts with me... - Mon, 2016-11-14 19:00

Is it safe to move/recreate the alert log while the database is up and running?

It is totally safe to "mv" or rename it while the database is running. Since chopping part of it out would be a lengthy process, and there is a good chance the database would write to it while you are editing it, I would not advise trying to "chop" part off; just mv the whole thing and Oracle will start anew in another file.

If you want to keep the last N lines "online", after you mv the file, tail the last 100 lines into "alert_also.log" or something similar before you archive off the rest.
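The whole procedure can be rehearsed on a scratch copy before touching the real trace directory; in this sketch a fake 500-line file stands in for alert_orcl.log.

```shell
#!/bin/bash
# Rehearse the rotation on a scratch file instead of the real alert log.
dir=$(mktemp -d)
seq -f 'alert line %g' 1 500 > "$dir/alert_orcl.log"     # fake 500-line alert log

mv "$dir/alert_orcl.log" "$dir/alert_orcl_Pre_14Nov2016.log"           # rename it away
tail -100 "$dir/alert_orcl_Pre_14Nov2016.log" > "$dir/alert_also.log"  # keep last 100 lines online

wc -l < "$dir/alert_also.log"
```

On a real system you would run the mv and tail inside the ADR trace directory, and Oracle recreates alert_<sid>.log on the next write, as the listing below shows.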

[oracle@Linux03 trace]$ ls -ll alert_*
-rw-r-----. 1 oracle oracle 488012 Nov 14 10:23 alert_orcl.log

I will rename the existing alert log file:
[oracle@Linux03 trace]$ mv alert_orcl.log alert_orcl_Pre_14Nov2016.log

[oracle@Linux03 trace]$ ls -ll alert_*
-rw-r-----. 1 oracle oracle 488012 Nov 14 15:42 alert_orcl_Pre_14Nov2016.log

Now lets create some activity that will need to update the alertlog.

[oracle@Linux03 bin]$ sqlplus / as sysdba

SQL*Plus: Release Production on Mon Nov 14 16:23:02 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> alter system switch logfile;

System altered.

SQL> /

Let's see if a new alert log file has been created.

[oracle@Linux03 trace]$ ls -ll alert_*
-rw-r-----. 1 oracle oracle    249 Nov 14 16:23 alert_orcl.log
-rw-r-----. 1 oracle oracle 488012 Nov 14 15:42 alert_orcl_Pre_14Nov2016.log
Categories: DBA Blogs

Version Control for PL/SQL

Gerger Consulting - Mon, 2016-11-14 15:47
We are hosting a free webinar to talk about how to manage PL/SQL code bases. Attend and learn how you can use Gitora, our new product that links Oracle Database to Git, to manage your PL/SQL source code.

170+ people have already signed up! :-) Register at this link.

Categories: Development

Oracle Data Integrator 12c: Getting Started - Developer's Quickstart

Rittman Mead Consulting - Mon, 2016-11-14 13:18

I’ve decided that it’s time for a refresher on Oracle Data Integrator 12c. This week in the “Oracle Data Integrator 12c: Getting Started” series: getting a quick start on mapping development. Several objects must be in place before a single bit of ETL can even be created, and for those who are new to the product, as many readers of this series will be, that can be frustrating. Those objects are as follows:

  • Data Server: This object is the connection to your data source. Created under one of the many technologies available in ODI, this is where the JDBC URL, username, password, and other properties are all created and stored.
  • Physical Schema: Underneath the Data Server you’ll find the Physical Schema. This object, when connecting to a relational database, represents the database schema where the tables reside that you wish to access in ODI.
  • Logical Schema: Here’s where it can sometimes get a bit tricky for folks new to Oracle Data Integrator. One of the great features in ODI is how it abstracts the physical connection and schema from the logical objects. The Logical Schema is mapped to the Physical Schema by an object called a Context. This allows development of mappings and other objects to occur against the Logical Schema, shielding the physical side from the developers. When promoting code to the next environment, nothing must change in the developed objects for the connection.
  • Model: Once you have the Topology set up (Data Server, Physical Schema, Logical Schema), you can then create your Model. This is where the logical Datastores are grouped for a given schema. There are many other functions of the Model object, such as journalizing (CDC) setup, but we’ll save those features for another day.
  • Datastore: The Datastore is a logical representation of a table, file, XML element, or other physical object. Stored in the form of a table, the Datastore has columns and constraints. This is the object that will be used as a source or target in your ODI Mappings.

Now you can create your mapping. Whew!

Over the years, Oracle has worked to make the process of getting started a lot easier. Back in ODI 11g, the Oracle Data Integrator QuickStart was a 10-step checklist, where each step led to another section in the documentation. A nice gesture by Oracle, but by no means “quick”. There was also a great tool, the ODI Accelerator Launchpad, built in Groovy by David Allan of the Oracle DI team. Now we were getting closer to something “quick”, but this was simply a script that you had to run, not an integrated part of the ODI Studio platform. Finally, with the release of ODI 12.1.3, the QuickStart was introduced. The New Model and Topology Objects wizard allows you to create everything you need in order to reverse engineer tables into ODI Datastore objects and begin creating your first mappings.

ODI 12c New Model and Topology Objects wizard

Going through the wizard is much simpler than manually setting up the Topology objects and Model for folks just getting started with Oracle Data Integrator. The blog post from Oracle linked above can walk you through the process and I’ve added a demonstration video below that does the same. As a bonus in my demo, I’ve added a tip to help you get your initial load mappings created in an instant. Have a look:

There you have it: a quick and easy way to get started with Oracle Data Integrator 12c and create your first source-to-target Mapping. If you have further questions and would like a more detailed answer, you can always join one of the Rittman Mead ODI bootcamps to learn more from one of our data integration experts. Up next in the Getting Started series, we’ll look at enhancing the ODI metadata by adding constraints and other options.

Categories: BI & Warehousing

Bulgarian Oracle User Group (BGOUG) 2016 : Pravets to Birmingham

Tim Hall - Mon, 2016-11-14 13:05

A group of us were being picked up by a minibus at 09:50 for the trip back to the airport. Timo Raitalaakso and Gianni Ceresa were on the same flight as me for the first leg. We said our goodbyes to everyone in the hotel lobby, then it was off to Sofia airport.

The airport was very quiet when we arrived. We checked in and dropped off our bags, then walked straight through security. It really doesn’t get easier than that. Timo, Gianni and myself then sat and chatted until it was time to board.

The flight to Munich was listed as a two hour flight, but I have no idea how long it actually took. I was reading a novel written by one of my friends during the trip. The guy a couple of seats along was snoring so loud it kept making me laugh. I’m not sure how anyone could sleep in the same house as him!

We arrived at Munich, where I said goodbye to Timo and Gianni, before trudging around for quite some time trying to find my gate. It was a 1:40 layover for me, so it wasn’t a rush.

The flight from Munich to Birmingham was another two hour flight. I spent the journey reading again, so I didn’t really notice much about the flight.

Back in Birmingham, I got my case and took a taxi home, while continuing to read my book. By the time I got home I was feeling quite drained, so I went to bed early, ready to start the working week!

That marked the end of my last international event of the year and I’m looking forward to spending some time at home in a single timezone. The last few months have been a killer!



Bulgarian Oracle User Group (BGOUG) 2016 : Pravets to Birmingham was first posted on November 14, 2016 at 8:05 pm.

Oracle Marketing Cloud Helps B2B Marketers Accelerate Lead Generation

Oracle Press Releases - Mon, 2016-11-14 12:29
Press Release
Oracle Marketing Cloud Helps B2B Marketers Accelerate Lead Generation New marketing automation and content marketing capabilities break down marketing silos and simplify cross-device content creation

Redwood Shores, Calif.—Nov 14, 2016

Oracle today announced enhancements to the marketing automation and content marketing capabilities within the Oracle Marketing Cloud that simplify digital marketing and empower marketers to deliver a truly personalized cross-channel customer experience. The latest additions to Oracle Eloqua and Oracle Content Marketing enable marketers to create and distribute content, and easily transform data in order to rapidly adapt to customer behavior and needs. This helps customers to increase sales and marketing collaboration, build stunning cross-device content and accelerate lead generation.

Ever-increasing customer expectations are forcing B2B organizations to rethink established marketing processes in order to break down internal silos between marketing, sales and other customer-facing departments and prevent a fragmented customer experience. To address this challenge, Oracle has introduced a new Content Portal, Program Canvas and Responsive Content Editor within the Oracle Marketing Cloud. These innovative new capabilities give marketers the power to align sales and marketing by making approved marketing content easy to find, track and share.

“The very nature of B2B marketing is changing and as a result marketers need to rethink content strategies and work closer than ever with sales to generate and convert leads,” said Stephen Streich, senior director of product management, Oracle Marketing Cloud. “With the latest additions to Oracle Eloqua and Oracle Content Marketing, Oracle is empowering marketers to quickly create and share compelling content across their organization and reduce the number of steps required to identify and pursue new leads. This will drive efficiency across the marketing and sales process and ultimately help marketers improve the customer experience and lead generation.”

The new enhancements to Oracle Eloqua and Oracle Content Marketing enable marketers to:

  • Find, track and share content across the organization: The new Content Portal allows cross-functional sales and marketing teams to find and utilize the right content at the right time using search criteria including Sales Stage, Buyer Persona, Content Type and custom fields. Individual users can subscribe to content that is relevant for them which then triggers automated notifications as soon as new content assets become available. Easily embedded into any third-party application or page, the new Content Portal also allows users to work in the tools they know enriched with capabilities they need to easily find, track and share marketing approved content.
  • Improve speed to lead and data normalization: The new Program Canvas empowers marketers to quickly set up data transformations and data normalization workflows. A new Listener Framework makes the data workflows dramatically faster and more responsive by listening to lead scoring models, forms and new contact creation events to give marketers the ability to be more responsive to critical prospect behaviors. In addition, next-generation application integration capabilities significantly reduce the number of workflow steps necessary to manage and maintain data, improving speed to lead while keeping databases maintained.
  • Streamline and simplify responsive content creation: Slated for planned release in early CY 2017, the new Responsive Content Editor helps marketers make content more meaningful and responsive by removing technical roadblocks.

The Oracle Marketing Cloud is part of the Oracle Customer Experience (CX) Cloud, one of the industry’s most complete CX solutions. Oracle CX Cloud empowers organizations to improve experiences, enhance loyalty, differentiate their brands, and drive measurable results by creating consistent, connected, and personalized brand experiences across all channels and devices.

Contact Info
Simon Jones
PR for Oracle
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.


Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Simon Jones

  • +1.415.856.5155

Small and Medium-Sized Businesses Scale with Oracle Cloud Platform

Oracle Press Releases - Mon, 2016-11-14 09:56
Press Release
Small and Medium-Sized Businesses Scale with Oracle Cloud Platform Oracle Enables Companies of All Sizes to Succeed with Easy-to-Deploy Cloud Solutions

Redwood Shores, Calif.—Nov 14, 2016

Oracle Corporation announced today that small and medium-sized businesses (SMBs) turn to Oracle for strategic collaboration on their journey to the Cloud. SMBs are utilizing the benefits of enterprise-grade capabilities delivered by Oracle Cloud Platform for rapid application development, fast and predictable performance, reliable security, and elastic scalability to support their growth.

Oracle Cloud Platform offers an integrated offering across Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) to enable developers, IT and line of business leaders to leverage the Cloud. These offerings allow SMBs to develop, deploy and manage any application or workload. Oracle offers a strategic platform to help solve business challenges and ensure time-to-value with the cloud. This offering allows SMBs to utilize the same best-in-class features that enterprises leverage.

MCH Data Taps Oracle Database Cloud Service for Seamless Transition into the Cloud

MCH Strategic Data is a data services company headquartered in Sweet Springs, Missouri that specializes in collecting and compiling data from institutions in the education, healthcare, government and religious sectors. With its nimble 100-person staff, MCH relies on third-party vendors to manage many of its technology systems. When one of MCH Data’s vendors needed to be replaced, they standardized on Oracle as other vendors failed to meet their needs. Oracle Database Cloud Service provided an easy and fast transition to the Cloud as MCH Data was able to provision a complete Oracle Database Cloud Service environment in a few minutes and get started immediately without upfront costs.

“The power of data in the Cloud is in the ability to rapidly leverage it for business insights that drive our customers’ success,” said Brian Vogelsmeier, director of IT, MCH Strategic Data. “As collecting data is paramount to our business, we needed to pick a vendor that has best-in-class database Cloud technology that makes the back up to the database seamless, hence our choice to leverage Oracle’s services.”  

Pragmatyxs Standardized on Oracle Database Cloud Service, PaaS and IaaS for Efficiency, Security and 24/7 Availability

Pragmatyxs, a company that works with Fortune 500 clients to ensure smooth communications between their supply chain and finance systems, as well as the barcode and product labels mandated by regulatory bodies, also leverages Oracle. All of Pragmatyxs’ clients’ product labels must comply with industry and regulatory standards across dozens of countries. Pragmatyxs had an Oracle Database on premises, in addition to using Java for product development. They chose to migrate both to the Cloud in order to reduce the amount of time spent on maintenance and support, without upfront hardware costs, without needing to know backup and recovery commands, and without having to perform complex tasks such as database software upgrades and patching.

“We launched our Cloud-based label printing service for our partners and remote facilities, which was a strategic initiative for our business. We have 15 employees, so we needed a solution that was the most efficient and secure, with 24/7 availability,” said Paul Van Hout, CEO and Founder, Pragmatyxs. “Pragmatyxs utilizes Oracle’s Database Cloud Service, Java Cloud Service, Infrastructure as a Service, and Messaging Cloud Service. We are maximizing the efficiency promised by the Cloud while giving our customers a better, more configurable product. With Oracle, we can stay ahead of the industry and compete like we never could before.”

IQMS ERP Selects Oracle Cloud Platform to Offer Customers Cloud Backup and Industry-Leading Security

After 27 years of offering manufacturing operations applications entirely under a licensed, on-premises model, IQMS ERP, a comprehensive manufacturing MES and ERP software system, now offers the choice to subscribe to the software as a service running on Oracle Cloud Platform. This allows the company to offer customers capabilities they didn’t have before, such as Oracle Business Intelligence Cloud Service alongside its IQMS application, providing the tools for companies to analyze and visualize the extensive Internet of Things (IoT) data that IQMS collects on factory-floor machine performance. Similarly, IQMS has started offering Oracle’s Cloud backup service for both the Cloud and on-premises versions of IQMS. Since IQMS applications run on the Oracle Database, the Cloud backup service drew immediate interest from customers.

“For its reliability, price point, and strong security, the real value of Oracle Cloud Platform is being able to offer customers more services along with added simplicity,” said Gary Nemmers, CEO and President, IQMS. “If any component fails, factories stop, and that can’t happen. With Oracle Cloud Platform, our customers have benefits like more extensive data encryption than they run in house, and some have even freed up resources previously tied up with on-premises datacenter maintenance.”

“We are proud to be helping businesses of all sizes build, grow and compete in the Cloud,” said Ashish Mohindroo, Vice President, Oracle Cloud. “With a single, connected cloud, Oracle offers SMBs more than just subscription software. We are proud to be a strategic partner that helps explain how all of the pieces of our cloud work together to solve business challenges, by helping SMBs build innovative applications and solutions for everything from sales, marketing, finance and reporting to talent and recruitment drives.”

In addition to subscription pricing that’s SMB-friendly, Oracle has improved the buying experience to help customers succeed with their cloud purchases. Oracle created The Accelerated Buying Experience to make purchasing cloud services fast and simple. Rather than taking weeks to execute a transaction, customers can now complete their purchases in a matter of hours or a few days.

Key features include:

  • Data Management
    • MySQL Cloud Service
    • Oracle Database Cloud Service
    • Exadata Express Cloud Service
    • Big Data Cloud Service
  • Cloud Native Application Development
    • Application Builder Cloud Service
    • Application Container Cloud Service
    • Java Cloud Service
    • Mobile Cloud Service
  • Integration
    • Application Integration Cloud Service
    • SOA Cloud Service
  • Application and Cloud Management
    • Application Performance Monitoring Cloud Service
    • Log Analytics Cloud Service

To learn more about Oracle’s SMB offerings, please visit us at oracle.com/smb.

To learn more about the Oracle Cloud Platform, please visit us at cloud.oracle.com.

Contact Info
Sarah Fraser
Oracle PR
+1 (650) 743.0660
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.


Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 


Flashback Database -- 2 : Flashback Requires Redo (ArchiveLog)

Hemant K Chitale - Mon, 2016-11-14 09:03
Although Flashback Logs support the ability to execute a FLASHBACK DATABASE command, the actual Flashback also requires Redo to be applied.  This is because the Flashback resets the images of blocks but doesn't guarantee that all transactions are reset to the same point in time (any one block can contain one or more active, uncommitted transactions, and there can be multiple blocks with active transactions at any point in time).  Therefore, since Oracle must revert the database to a consistent image, it needs to be able to apply redo as well (just as it would do for a roll-forward recovery from a backup).

Here's a quick demo of what happens if the redo is not available.

SQL> alter session set nls_date_format='DD-MON-RR HH24:MI:SS';

Session altered.

SQL> select sysdate, l.oldest_flashback_scn, l.oldest_flashback_time
2 from v$flashback_database_log l;

SYSDATE            OLDEST_FLASHBACK_SCN OLDEST_FLASHBACK_T
------------------ -------------------- ------------------
14-NOV-16 22:51:37              7246633 14-NOV-16 22:39:43


sh-4.1$ pwd
sh-4.1$ date
Mon Nov 14 22:52:29 SGT 2016
sh-4.1$ rm *

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.

Total System Global Area 1068937216 bytes
Fixed Size 2260088 bytes
Variable Size 750781320 bytes
Database Buffers 310378496 bytes
Redo Buffers 5517312 bytes
Database mounted.

SQL> flashback database to timestamp to_date('14-NOV-16 22:45:00','DD-MON-RR HH24:MI:SS');
flashback database to timestamp to_date('14-NOV-16 22:45:00','DD-MON-RR HH24:MI:SS')
ERROR at line 1:
ORA-38754: FLASHBACK DATABASE not started; required redo log is not available
ORA-38762: redo logs needed for SCN 7246634 to SCN 7269074
ORA-38761: redo log sequence 70 in thread 1, incarnation 5 could not be accessed

SQL> l
1 select sequence#, first_change#, first_time
2 from v$archived_log
3 where resetlogs_time=(select resetlogs_time from v$database)
4 and sequence# between 60 and 81
5* order by 1
SQL> /

 SEQUENCE# FIRST_CHANGE# FIRST_TIME
---------- ------------- ------------------
        60       7245238 14-NOV-16 22:27:35
        61       7248965 14-NOV-16 22:40:46
        62       7250433 14-NOV-16 22:40:52
        63       7251817 14-NOV-16 22:41:04
        64       7253189 14-NOV-16 22:41:20
        65       7254583 14-NOV-16 22:41:31
        66       7255942 14-NOV-16 22:41:44
        67       7257317 14-NOV-16 22:41:59
        68       7258689 14-NOV-16 22:42:10
        69       7260094 14-NOV-16 22:42:15
        70       7261397 14-NOV-16 22:42:22
        71       7262843 14-NOV-16 22:42:28
        72       7264269 14-NOV-16 22:42:32
        73       7265697 14-NOV-16 22:42:37
        74       7267121 14-NOV-16 22:42:43
        75       7269075 14-NOV-16 22:48:05
        76       7270476 14-NOV-16 22:48:11
        77       7271926 14-NOV-16 22:48:17
        78       7273370 14-NOV-16 22:48:23
        79       7274759 14-NOV-16 22:48:32
        80       7276159 14-NOV-16 22:48:39
        81       7277470 14-NOV-16 22:48:43

22 rows selected.


Note how the error message states that Redo (Archive) Log Sequence#70 is required, yet provides a range of SCNs that spans Sequence#60 to Sequence#74 !

Bottom Line : Flashback Logs alone aren't adequate to flash back the database.  You also need the corresponding Redo.
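Before relying on a flashback window, it helps to confirm that the archived logs covering it are still usable. A minimal sketch (the SCN values here are the ones from the ORA-38762 message above; substitute your own range) would be to ask RMAN:

```sql
RMAN> crosscheck archivelog all;
-- marks archived logs missing from disk as EXPIRED

RMAN> restore archivelog from scn 7246634 until scn 7269074 preview;
-- lists the backups (if any) that could restore the logs for that SCN range
```

If the crosscheck marks the needed logs EXPIRED and the PREVIEW finds no usable backups, the FLASHBACK DATABASE will fail exactly as demonstrated above.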

Just to confirm that I can continue with the current (non-Flashbacked Database) state (in spite of the failed Flashback)  :

SQL> shutdown;
ORA-01109: database not open

Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 1068937216 bytes
Fixed Size 2260088 bytes
Variable Size 750781320 bytes
Database Buffers 310378496 bytes
Redo Buffers 5517312 bytes
Database mounted.
Database opened.
SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 89
Next log sequence to archive 90
Current log sequence 90
SQL> select current_scn from v$database;



Bottom Line : *Before* you attempt a FLASHBACK DATABASE to the OLDEST_FLASHBACK_TIME (or SCN) from V$FLASHBACK_DATABASE_LOG, ensure that you *do* have the "nearby" Archive/Redo Logs !
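One way to make that check from SQL*Plus (a sketch, assuming a single-thread database; adapt the predicates to your environment) is to join V$FLASHBACK_DATABASE_LOG to V$ARCHIVED_LOG and inspect the DELETED column:

```sql
-- List the archived logs covering the flashback window,
-- restricted to the current incarnation.
select a.sequence#, a.first_change#, a.next_change#, a.deleted, a.name
from   v$archived_log a, v$flashback_database_log f
where  a.resetlogs_time = (select resetlogs_time from v$database)
and    a.next_change#   > f.oldest_flashback_scn
order  by a.sequence#;
```

Any row with DELETED = 'YES' (or a NAME that no longer exists on disk) in that range means the flashback window reported by V$FLASHBACK_DATABASE_LOG cannot actually be used.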
Categories: DBA Blogs

12cR2 Single-Tenant: Multitenant Features for All Editions

Yann Neuhaus - Mon, 2016-11-14 09:00

Now that 12.2 is there, in the Oracle Public Cloud Service, I can share the slides of the presentation I made for Oracle Open World:

I’ll give the same session in French, in Geneva on November 23rd at Oracle Switzerland. Ask me if you want an invitation.

The basic idea is that non-CDB is deprecated, and not available in the Oracle Public Cloud. If you don’t purchase the Multitenant Option, then you will use ‘Single-Tenant’. And in 12.2 there are interesting features coming with it. Don’t fear it. Learn it and benefit from it.


In addition to that, I’ll detail

  • The 12.2 new security feature coming with multitenant: at DOAG 2016
  • The internals of multitenant architecture: at UKOUG TECH16

And don’t hesitate to come at the dbi services booth for questions and/or demos about Multitenant.
There’s also the book I co-authored: Oracle Database 12c Release 2 Multitenant (Oracle Press) which should be available within a few weeks.


Cet article 12cR2 Single-Tenant: Multitenant Features for All Editions est apparu en premier sur Blog dbi services.

Adding Reserved command in SQLcl

Kris Rice - Mon, 2016-11-14 08:27
I saw Stephen's example of checking reserved words in the database from Veterans Day and figured I'd do the same in SQLcl. #595 #plsql Is it a reserved word? PL/SQL procedure to help you sort that out. Dyn PLSQL example! @oraclelivesql https://t.co/M10kVnsQ3y pic.twitter.com/XFFHOVzNCK — Steven Feuerstein (@sfonplsql) November 11, 2016 Checking if something is reserved seems like a nice add


Subscribe to Oracle FAQ aggregator