The Oracle Big Data Lite VM, available on Oracle Technology Network, provides a pre-built environment for learning about a number of key Oracle products, including Oracle Database 12c, Big Data Discovery and Data Integrator, as well as the Cloudera Distribution of Apache Hadoop (CDH 5.8.0).
The download ultimately delivers an OVA “appliance” file for use with Oracle VirtualBox, but with a bit of effort there is nothing to stop you running it as a VM on Proxmox 4, as follows.
NOTE – Things to read which can help with this process:
- Oracle Big Data Lite Deployment Guide.
- How to upload an OVA to proxmox guide by James Coyle: https://www.jamescoyle.net/how-to/1218-upload-ova-to-proxmox-kvm
- Converting to RAW and pushing to a raw lvm partition: https://www.nnbfn.net/2011/03/convert-kvm-qcow2-to-lvm-raw-partition/
- Firstly download the files that make up the OVA from here.
- Follow the instructions on the download page to convert the multiple files into one single OVA file.
- For Oracle VirtualBox, simply follow the rest of the instructions in the Deployment Guide.
- For Proxmox, where I was running LVM storage for the virtual machines, first rename the single OVA file to .ISO, then upload that file (BigDataLite460.iso) to a storage area on your Proxmox host; in my case it was called “data”. You can upload the file through the Proxmox GUI, or manually via the command line. My file was uploaded through the GUI and ended up in “/mnt/pve-data/template/iso”.
- Now, bring up a shell and navigate to the ISO directory and then unpack the ISO file by running “tar xvf BigDataLite460.iso”. This should create five files which include one OVF file (Open Virtualisation Format) and four VMDK files (Virtual Machine Disk).
root@HP20052433:/mnt/pve-data/template/iso# ls -l
total 204127600
-rw------- 1 root root  8680527872 Oct 25 02:43 BigDataLite460-disk1.vmdk
-rw------- 1 root root  1696855040 Oct 25 02:45 BigDataLite460-disk2.vmdk
-rw------- 1 root root 23999689216 Oct 25 03:11 BigDataLite460-disk3.vmdk
-rw------- 1 root root      220160 Oct 25 03:11 BigDataLite460-disk4.vmdk
-rw-r--r-- 1 root root 34377315328 Nov 14 10:59 BigDataLite460.iso
-rw------- 1 root root       20056 Oct 25 02:31 BigDataLite460.ovf
- Now, create a new VM in Proxmox via the GUI or manually. The VM I created had the required memory and CPUs per the deployment guide, together with four hard disks – mine were all on the SCSI interface and were initially set to 10G in size – this will change later.
- The hard disks were using a storage area on Proxmox that was defined as type LVM.
- Now convert the VMDK files to RAW files which we’ll then push to the LVM Hard Disks as follows:
qemu-img convert -f vmdk BigDataLite460-disk1.vmdk -O raw BigDataLite460-disk1.raw
qemu-img convert -f vmdk BigDataLite460-disk2.vmdk -O raw BigDataLite460-disk2.raw
qemu-img convert -f vmdk BigDataLite460-disk3.vmdk -O raw BigDataLite460-disk3.raw
qemu-img convert -f vmdk BigDataLite460-disk4.vmdk -O raw BigDataLite460-disk4.raw
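Since the four conversions differ only in the disk number, they can be generated in a loop; a small sketch (it only prints the commands – pipe the output to `sh` on the host to actually run them):

```shell
# Sketch: generate the four qemu-img conversion commands.
# Piping to "sh" would execute them; printing keeps this safe to run anywhere.
for i in 1 2 3 4; do
  echo "qemu-img convert -f vmdk BigDataLite460-disk${i}.vmdk -O raw BigDataLite460-disk${i}.raw"
done
```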
- Now list those raw files, so we can see their sizes:
root@HP20052433:/mnt/pve-data/template/iso# ls -l *.raw
-rw-r--r-- 1 root root 104857600000 Nov 16 07:58 BigDataLite460-disk1.raw
-rw-r--r-- 1 root root 214748364800 Nov 16 08:01 BigDataLite460-disk2.raw
-rw-r--r-- 1 root root 128849018880 Nov 16 08:27 BigDataLite460-disk3.raw
-rw-r--r-- 1 root root  32212254720 Nov 16 08:27 BigDataLite460-disk4.raw
- Now resize the LVM hard disks to the corresponding sizes (the ID of my Proxmox VM was 106 and my hard disks were SCSI):
qm resize 106 scsi0 104857600000
qm resize 106 scsi1 214748364800
qm resize 106 scsi2 128849018880
qm resize 106 scsi3 32212254720
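Rather than copying the byte counts by hand, the resize commands can be generated from the raw file sizes with `stat`. A sketch, using dummy sparse files so it is self-contained; on the Proxmox host you would glob the real BigDataLite460-disk*.raw files and pipe the output to `sh`:

```shell
# Sketch: derive "qm resize" commands from raw image sizes.
# demo-disk*.raw are dummy sparse files standing in for the converted images.
VMID=106
truncate -s 1073741824 demo-disk1.raw    # 1 GiB dummy
truncate -s 2147483648 demo-disk2.raw    # 2 GiB dummy
i=0
for f in demo-disk1.raw demo-disk2.raw; do
  size=$(stat -c%s "$f")                 # size in bytes
  echo "qm resize $VMID scsi$i $size"
  i=$((i+1))
done
```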
- Now copy over the content of the raw files to the corresponding lvm hard disks:
dd if=BigDataLite460-disk1.raw of=/dev/vm_storage_group/vm-106-disk-1
dd if=BigDataLite460-disk2.raw of=/dev/vm_storage_group/vm-106-disk-2
dd if=BigDataLite460-disk3.raw of=/dev/vm_storage_group/vm-106-disk-3
dd if=BigDataLite460-disk4.raw of=/dev/vm_storage_group/vm-106-disk-4
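After each dd it is worth confirming the copy landed intact. A minimal checksum sketch, with plain files standing in for the LVM volumes so it is safe to run anywhere:

```shell
# Sketch: verify a dd copy by comparing source and target checksums.
# demo.lv stands in for /dev/vm_storage_group/vm-106-disk-1 here.
printf 'raw image contents' > demo.raw
dd if=demo.raw of=demo.lv 2>/dev/null
src=$(md5sum demo.raw | cut -d' ' -f1)
dst=$(md5sum demo.lv  | cut -d' ' -f1)
[ "$src" = "$dst" ] && echo "copy verified"
```

Against a real LVM volume, checksum only the first `stat -c%s file.raw` bytes of the device (`head -c SIZE /dev/... | md5sum`), since the logical volume is usually larger than the image.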
- Now start the VM and, hey presto, there it is.
- You could stop there, as it’s a self-contained environment, but you can also do further networking configuration to make it visible on your network.
The E-Business Suite Technology Codelevel Checker (ETCC) tool helps you identify missing application tier or database bugfixes that need to be applied to your E-Business Suite Release 12.2 system. ETCC maps missing bugfixes to the default corresponding patches and displays them in a patch recommendation summary.
ETCC was recently updated to include bug fixes and patching combinations for the following:
- October 2016 WebLogic Server (WLS) Patch Set Update (PSU)
- October 2016 Database Patch Set Update and Bundle Patch
- July 2016 Database Patch Set Update and Bundle Patch
- July 2016 Database Cloud Service (DBCS) / Exadata Cloud Service (ExaCS)
Obtaining ETCC
We recommend always using the latest version of ETCC, as new bugfixes will not be checked by older versions of the utility. The latest version of the ETCC tool can be downloaded via Patch 17537119 from My Oracle Support.
- Identifying Missing App Tier and Database Tier Patches for EBS 12.2
- ETCC Tool Enhanced for Finding Mandatory EBS 12.2 Patches
- Oracle E-Business Suite Release 12.2: Consolidated List of Patches and Technology Bug Fixes (Doc ID 1594274.1)
- Database Patch Set Update Overlay Patches Required for Use with PSUs (Doc ID 1147107.1)
- Database Patches Required by Oracle E-Business Suite on Oracle Engineered Systems: Exadata Database Machines and SuperClusters (Doc ID 1392527.1)
In a previous blog I talked about how to create an AWS Linux instance. Some common follow-up questions are: how to create a new user and connect with it, how to transfer files from my workstation, how to connect to my Oracle instance from my workstation, and so on.
In this blog I am going to deal with some basic but useful administration tasks.
Changing my hostname
One of the first things we will probably do is change the hostname, since the Linux instance is built with a generic one. Changing the hostname includes the following tasks.
Update /etc/hostname with the new hostname
[root@ip-172-31-47-219 etc]# vi hostname
[root@ip-172-31-47-219 etc]# cat /etc/hostname
primaserver.us-west-2.compute.internal
[root@ip-172-31-47-219 etc]#
Update /etc/hosts with the new hostname
[root@primaserver ORCL]# cat /etc/hosts
127.0.0.1 primaserver.us-west-2.compute.internal localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
Update our /etc/sysconfig/network with the HOSTNAME value
[root@ip-172-31-47-219 sysconfig]# cat network
NETWORKING=yes
NOZEROCONF=yes
HOSTNAME=primaserver.us-west-2.compute.internal
[root@ip-172-31-47-219 sysconfig]#
To make the change permanent, we have to add the line preserve_hostname: true to the /etc/cloud/cloud.cfg file
[root@ip-172-31-47-219 cloud]# grep preserve_hostname cloud.cfg
preserve_hostname: true
[root@ip-172-31-47-219 cloud]
The last step is to reboot the server.
[root@ip-172-31-47-219 cloud]# reboot

Using username "ec2-user".
Authenticating with public key "imported-openssh-key"
Last login: Mon Nov 14 03:20:13 2016 from 220.127.116.11.static.wline.lns.sme.cust.swisscom.ch
[ec2-user@primaserver ~]$ hostname
primaserver.us-west-2.compute.internal
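For repeatability, the file edits above can be scripted. A minimal sketch that writes to copies in the current directory so it is safe to run anywhere; on the instance you would point the paths at the real files under /etc (and run as root):

```shell
# Sketch of the hostname edits as one script.
# Real paths are noted in the comments; local copies are used here.
NEWHOST=primaserver.us-west-2.compute.internal
echo "$NEWHOST" > hostname                          # /etc/hostname
printf 'NETWORKING=yes\nNOZEROCONF=yes\n' > network
echo "HOSTNAME=$NEWHOST" >> network                 # /etc/sysconfig/network
touch cloud.cfg
grep -q '^preserve_hostname' cloud.cfg || \
  echo 'preserve_hostname: true' >> cloud.cfg       # /etc/cloud/cloud.cfg
cat hostname
```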
Creating a new user and connecting with it
User creation is done with useradd as usual, but to be able to connect with this user we have to perform a few extra tasks. Suppose the new user is oracle.
As oracle, we first create a .ssh directory and adjust its permissions
[root@ip-172-31-33-57 ~]# su - oracle
[oracle@ip-172-31-33-57 ~]$ pwd
/home/oracle
[oracle@ip-172-31-33-57 ~]$ mkdir .ssh
[oracle@ip-172-31-33-57 .ssh]$ chmod 700 .ssh
And then let’s create an authorized_keys file
[oracle@ip-172-31-33-57 ~]$ touch .ssh/authorized_keys
[oracle@ip-172-31-33-57 ~]$ cd .ssh/
[oracle@ip-172-31-33-57 .ssh]$ vi authorized_keys
[oracle@ip-172-31-33-57 .ssh]$ chmod 600 authorized_keys
The last step is to copy the content of our public key (the one we used for the ec2-user user – remember that we created a key pair when we built our Linux box, see the corresponding blog) into /home/oracle/.ssh/authorized_keys.
cd /home/ec2-user/
cd .ssh/
cat authorized_keys >> /home/oracle/.ssh/authorized_keys
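The steps above can be sketched as one script. A scratch directory stands in for /home/oracle so this runs anywhere, and the key material is a placeholder – on the instance you would append the real ec2-user public key:

```shell
# Sketch: set up .ssh for a new user with the permissions sshd requires.
# ORACLE_HOME_DIR stands in for /home/oracle; key content is a placeholder.
ORACLE_HOME_DIR=$(mktemp -d)
mkdir -m 700 "$ORACLE_HOME_DIR/.ssh"                 # sshd insists on 700
echo 'ssh-rsa AAAA...placeholder imported-openssh-key' \
  > "$ORACLE_HOME_DIR/.ssh/authorized_keys"
chmod 600 "$ORACLE_HOME_DIR/.ssh/authorized_keys"    # and 600 on the key file
stat -c '%a %n' "$ORACLE_HOME_DIR/.ssh/authorized_keys"
```

Getting these permissions wrong is the most common reason key-based login silently fails for a new user.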
And now the connection works with my new user from my workstation, using the public DNS and PuTTY.
Transferring files from my workstation to the AWS instance
One common need is to transfer files from our local workstation to our AWS instance. We can use WinSCP: we just have to import the PuTTY session we already used to connect into WinSCP, and then we can connect. Launch WinSCP and use the Tools option.
Then select the session we want to import, after which we can connect with WinSCP.
Connecting to my oracle instance from my workstation
I have installed my Oracle software, and my database and listener are running. How do I connect from my workstation? It is just like we usually do: we only have to allow connections on the database port (here I am using the default, 1521). The Security Groups option is used for editing the inbound rules.
Using Add Rule, we can allow connections on port 1521. Of course, we can restrict the source of the access.
Registration URL: https://attendee.gotowebinar.com/register/3325820742563232258
Webinar ID: 806-309-947
Master Class - ADF Bindings Explained (Andrejus Baranovskis, Oracle ACE Director)
This two-hour webinar is targeted at ADF beginners, with the main goal of explaining the ADF bindings concept and its usage to its full potential. ADF Bindings is one of the most complex parts of ADF to learn, and every ADF developer should understand how ADF bindings work. The goal is to have an interactive session in which participants can ask questions and get answers live. This live event is completely free – join it on December 7th at 7:00 PM CET (Central European Time), which is respectively 12:00 PM in New York and 10:00 AM in San Francisco on December 7th.
In order to join the live webinar, you need to complete the registration form on GoToWebinar. The number of participants is limited, so don't wait – register now.
Topics to be covered:
1. ADF Bindings overview. Why ADF Bindings are required and how they are useful
2. Drill down into ADF Bindings. An explanation of how a binding object is executed, from the UI fragment down to the Page Definition
3. ADF Binding types explained. Information about the different bindings generated when using JDeveloper wizards, and what happens with ADF Bindings when using the LOV, table, ADF Query and Task Flow wizards
4. Declarative ADF binding access with expressions
5. Programmatic ADF binding access from managed beans
6. ADF binding sharing and access from ADF Task Flows. How to create binding layer for Task Flow method call or router activities.
7. Best practices for ADF Bindings
8. Your questions
In this article I will talk about how to create a Linux machine in the Amazon AWS cloud. For testing, a trial account can be created.
Once registered, we can connect by using the “Sign In to the Console” button
To create an instance, let’s click on EC2 under Compute
And then let’s use the Launch Instance button
We can see the templates for building our machine. In our example we are going to use a Red Hat one.
We keep the default selected
We keep the default instance details
Below the storage details
The instance tag
We keep default values for the security group
After that we have the instance review, which summarizes our configuration
Before launching the instance, we have to create a key pair, and we have to store the private key, which we will use to connect with PuTTY, for example.
If we click on the Connect tab at the top, we get information on how to connect. One useful piece of information is the Public DNS, which we will use to connect.
Now that our instance is ready let’s see how to connect. I am using putty.
A few steps ago we created a key pair and kept the private key with a .pem extension. Using this key, we will create a key in PuTTY's format (.ppk). For this we will use PuTTYgen.
Just launch the PuTTY Key Generator, load the .pem key and follow the instructions.
And now we can use PuTTY, loading the .ppk private key, to connect as the built-in ec2-user user via the Public DNS.
Click Browse to load the .ppk file
Using username "ec2-user".
Authenticating with public key "imported-openssh-key"
[ec2-user@ip-172-31-33-57 ~]$ hostname
ip-172-31-33-57.us-west-2.compute.internal
[ec2-user@ip-172-31-33-57 ~]$ cat /proc/meminfo | grep Mem
MemTotal:        1014976 kB
MemFree:          630416 kB
MemAvailable:     761716 kB
[ec2-user@ip-172-31-33-57 ~]$ cat /proc/cpuinfo | grep proc
processor       : 0
[ec2-user@ip-172-31-33-57 ~]$ cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.3 (Maipo)
[ec2-user@ip-172-31-33-57 ~]$
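As an aside, on a Linux or macOS workstation the downloaded .pem key can be used directly with OpenSSH, with no .ppk conversion needed. A sketch (key file and host name are examples; `touch` creates a placeholder so the permissions step is demonstrable):

```shell
# Sketch: connect with OpenSSH using the .pem key directly.
# mykey.pem and the host name below are examples, not real values.
PEM=mykey.pem
touch "$PEM"
chmod 400 "$PEM"    # ssh refuses private keys with loose permissions
echo "ssh -i $PEM ec2-user@ec2-52-0-0-1.us-west-2.compute.amazonaws.com"
```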
DSTv28 is now available and certified with Oracle E-Business Suite Release 12.1 and 12.2. This update includes the timezone information from the IANA tzdata 2016g. It is cumulative: it includes all previous Oracle DST updates.
Is Your Apps Environment Affected?
If a country or region changes its DST rules or time zone definitions, your Oracle E-Business Suite environment will require patching if:
- Your Oracle E-Business Suite environment is located in the affected country or region OR
- Your Oracle E-Business Suite environment is located outside the affected country or region but you conduct business or have customers or suppliers in the affected country or region
The latest DSTv28 timezone definition file is cumulative and includes all DST changes released in earlier time zone definition files. DSTv28 includes changes to the following timezones since the DSTv27 release:
What Patches Are Required?
In case you haven't been following our previous time zone or Daylight Saving Time (DST)-related articles, international timezone definitions for E-Business Suite environments are captured in a series of patches for the database and application tier servers in your environment. The actual scope and number of patches that need to be applied depend on whether you've applied previous DST or timezone-related patches. Some sysadmins have remarked to me that it generally takes more time to read the various timezone documents than it takes to apply these patches, but your mileage may vary.
Proactive backports of DST upgrade patches to all Oracle E-Business Suite tiers and platforms are not created and supplied by default. If you need this DST release and an appropriate patch is not currently available, raise a service request through Support, providing a business case with your version requirements.
The following Note identifies the various components in your E-Business Suite environment that may need DST patches:
- Complying with Daylight Saving Time (DST) and Time Zone Rule Changes in E-Business Suite 12 (Note 563019.1)
What is the business impact of not applying these patches?
Timezone patches update the database and other libraries that manage time. They ensure that those libraries contain the correct dates and times for the changeover between Daylight Saving Time and non-Daylight Saving Time.
Time is used to record events, particularly financial transactions. Time is also used to synchronize transactions between different systems. Some organizations’ business transactions are more-sensitive to timezone changes than others.
If you do not apply a timezone patch, and you do business with a region that has changed its timezone definitions, and you record a transaction that occurs at the boundary between the “old” and the “new” time, then the transaction may be recorded incorrectly: its timestamp may be off by an hour.
- An order is placed for a customer in a country that changed its DST dates in DSTv27
- The old Daylight Savings Time expiration date was Nov. 2
- The new Daylight Savings Time expiration date is now October 31
- An order is set to ship at 12am on November 1st
- Under the old Daylight Savings Time rules, the revenue would be recorded for November
- Under the new Daylight Savings Time rules, the revenue would be recorded for October
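The example above is plain offset arithmetic, and can be demonstrated with GNU date (the ship timestamp and `Etc/GMT` zones are illustrative; note that POSIX inverts the sign, so Etc/GMT+5 means UTC-5):

```shell
# Sketch: the same instant lands in different months depending on
# whether the DST offset is still in force - the revenue hazard above.
SHIP='2016-11-01 04:30:00 UTC'
TZ='Etc/GMT+4' date -d "$SHIP" '+DST still on : %Y-%m-%d %H:%M'   # November 1st
TZ='Etc/GMT+5' date -d "$SHIP" '+DST ended    : %Y-%m-%d %H:%M'   # October 31st
```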
- DSTv27 Timezone Patches Available for E-Business Suite 12.1
- DSTv27 Timezone Patches Available for E-Business Suite 12.2
Oracle's Revenue Recognition rules prohibit us from discussing certification and release dates, but you're welcome to monitor or subscribe to this blog. I'll post updates here as soon as they're available.
How do you load data with SQL*Loader from a CSV file that uses comma as the separator when commas also appear inside column values?
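The usual answer is to quote such values in the CSV and use SQL*Loader's OPTIONALLY ENCLOSED BY clause. A sketch (table and column names are made up for illustration, as are the credentials in the sqlldr command, which is only printed here):

```shell
# Sketch: control file for a CSV whose quoted values may contain commas.
cat > emp.ctl <<'EOF'
LOAD DATA
INFILE 'emp.csv'
INTO TABLE emp
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(empno, ename, address)
EOF
# Sample row: the quoted address contains a comma but loads as one field
printf '%s\n' '1,Smith,"12 High St, Leeds"' > emp.csv
echo 'sqlldr userid=scott/tiger control=emp.ctl log=emp.log'   # example invocation
```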
I just came across a MOS document on tracing OGG processes.
Just thought I would compare the old versus new.
You can find comparison and my preference here
Is it safe to move/recreate the alert log while the database is up and running?
It is totally safe to "mv" or rename it while we are running. Since chopping part of it out would be a lengthy process, there is a good chance we would write to it while you are editing it, so I would not advise trying to "chop" part off -- just mv the whole thing and we'll start anew in another file.
If you want to keep the last N lines "online", after you mv the file, tail the last 100 lines to "alert_also.log" or something before you archive off the rest.
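That move-then-keep-the-tail approach can be sketched as a script. A dummy log stands in for the real alert_orcl.log in the diag trace directory, so this is safe to run anywhere:

```shell
# Sketch: rotate the alert log, keeping the last 100 lines "online".
ALERT=alert_orcl.log
seq 1 500 > "$ALERT"                        # dummy alert log for illustration
STAMP=$(date +%d%b%Y)
mv "$ALERT" "alert_orcl_Pre_${STAMP}.log"   # database opens a fresh file on next write
tail -100 "alert_orcl_Pre_${STAMP}.log" > alert_also.log
wc -l < alert_also.log                      # prints 100
```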
[oracle@Linux03 trace]$ ls -ll alert_*
-rw-r-----. 1 oracle oracle 488012 Nov 14 10:23 alert_orcl.log
I will rename the existing alert log file:
[oracle@Linux03 trace]$ mv alert_orcl.log alert_orcl_Pre_14Nov2016.log
[oracle@Linux03 trace]$ ls -ll alert_*
-rw-r-----. 1 oracle oracle 488012 Nov 14 15:42 alert_orcl_Pre_14Nov2016.log
Now let's create some activity that will require an update to the alert log.
[oracle@Linux03 bin]$ sqlplus / as sysdba
SQL*Plus: Release 18.104.22.168.0 Production on Mon Nov 14 16:23:02 2016
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Oracle Database 12c Enterprise Edition Release 22.214.171.124.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
SQL> alter system switch logfile;
Let's see if a new alert log file has been created.
[oracle@Linux03 trace]$ ls -ll alert_*
-rw-r-----. 1 oracle oracle 249 Nov 14 16:23 alert_orcl.log
-rw-r-----. 1 oracle oracle 488012 Nov 14 15:42 alert_orcl_Pre_14Nov2016.log
170+ people have already signed up! :-) Register at this link.
I’ve decided that it’s time for a refresher on Oracle Data Integrator 12c. This week in the “Oracle Data Integrator 12c: Getting Started” series: getting a quick start on mapping development. Several objects must be created before a single bit of ETL can be developed, and for those who are new to the product, as many readers of this series will be, that can be frustrating. The objects that must be in place are as follows:
- Data Server This object is the connection to your data source. Created under one of the many technologies available in ODI, this is where the JDBC url, username, password, and other properties are all created and stored.
- Physical Schema Underneath the Data Server you’ll find the Physical Schema. This object, when connecting to a relational database, represents the database schema where the tables reside that you wish to access in ODI.
- Logical Schema Here’s where it can sometimes get a bit tricky for folks new to Oracle Data Integrator. One of the great features in ODI is how it abstracts the physical connection and schema from the logical objects. The Logical Schema is mapped to the Physical Schema by an object called a Context. This allows development of mappings and other objects to occur against the Logical Schema, shielding the physical side from the developers. When promoting code to the next environment, nothing needs to change in the developed objects to switch the connection.
- Model Once you have the Topology setup (Data Server, Physical Schema, Logical Schema), you can then create your Model. This is going to be where the logical Datastores are grouped for a given schema. There are many other functions of the Model object, such as journalizing (CDC) setup, but we’ll save those features for another day.
- Datastore The Datastore is a logical representation of a table, file, XML element, or other physical object. Stored in the form of a table, the Datastore has columns and constraints. This is the object that will be used as a source or target in your ODI Mappings.
Now you can create your mapping. Whew!
Over the years, Oracle has worked to make the process of getting started a lot easier. Back in ODI 11g, the Oracle Data Integrator QuickStart was a 10 step checklist, where each step leads to another section in the documentation. A nice gesture by Oracle but by no means “quick”. There was also a great tool, the ODI Accelerator Launchpad, built in Groovy by David Allan of the Oracle DI team. Now we were getting closer to something “quick”. But this was simply a script that you had to run, not an integrated part of the ODI Studio platform. Finally, with the release of ODI 12.1.3, the Quickstart was introduced. The New Model and Topology Objects wizard allows you to create everything you need in order to reverse engineer tables into ODI Datastore objects and begin creating your first mappings.
Going through the wizard is much simpler than manually setting up the Topology objects and Model for folks just getting started with Oracle Data Integrator. The blog post from Oracle linked above can walk you through the process and I’ve added a demonstration video below that does the same. As a bonus in my demo, I’ve added a tip to help you get your initial load mappings created in an instant. Have a look:
There you have it, a quick and easy way to get started with Oracle Data Integrator 12c and create your first source-to-target Mapping. If you have further questions and would like a more detailed answer, you can always join one of the Rittman Mead ODI bootcamps to learn more from one of our data integration experts. Up next in the Getting Started series, we’ll look at enhancing the ODI metadata by adding constraints and other options.