Feed aggregator

PostgreSQL Inheritance in Oracle

Tom Kyte - Fri, 2017-07-14 20:26
Hi Tom, How do I implement inheritance this way in oracle? create table requests(); create table requests_new() inherits (requests); create table requests_old() inherits (requests); I should be able to query the child tables independentl...
Categories: DBA Blogs

help

Tom Kyte - Fri, 2017-07-14 20:26
For which constraint does the Oracle Server implicitly create a unique index? a)PRIMARY KEY b)NOT NULL c)FOREIGN KEY d)CHECK Which tablespace can NOT be recovered with the database open? a)USERS b)TOOLS c)DATA d)SYSTEM ...
Categories: DBA Blogs

Query Returns via SQL*Plus - but not via ODP.net Driver

Tom Kyte - Fri, 2017-07-14 20:26
We have a database with some partitioned tables (main table by value, the children by reference). We have a query that includes a function call in the where clause. Select bunch_of_columns, package.function(parameter) as column18 from ta...
Categories: DBA Blogs

SQL*Loader save filename into table column

Tom Kyte - Fri, 2017-07-14 20:26
I need to import different csv-files into 1 table. I need to use the sqlloader. (Oracle Version 12.1.0.2) This is my control-file: load data append into table SAMP_TABLE fields terminated by ',' OPTIONALLY ENCLOSED BY '"' AND '"' traili...
Categories: DBA Blogs

SqlPlus spool extract getting truncated

Tom Kyte - Fri, 2017-07-14 20:26
Hi, I have a script to extract data from the Oracle EBS database. I have provided an extract of the script below for reference: ---------------------------------------------------------------------- set feedback off set trims on set li...
Categories: DBA Blogs

How to get rowid by chain row

Tom Kyte - Fri, 2017-07-14 20:26
Hi Tom, I create a table, like this create table pan0.t (c1 int, c2 varchar2(4000), c3 varchar2(4000), c4 varchar2(4000)); and then, insert data to generate a chain row insert into pan0.t values(7, rpad('`',4000,'`'), rpad('!',4000,...
Categories: DBA Blogs

How to group timestamps in 10 minute buckets and aggregate

Tom Kyte - Fri, 2017-07-14 20:26
I've been grappling with this problem for almost a week. Other threads on this site have gotten me close but not all the way. I created a table and populated it using sqlldr. The queries for that are below this question. I want dateValidFrom ...
Categories: DBA Blogs

Bash: enabling Eclipse for Bash Programming | Plugin basheclipse (debugging) part 1

Dietrich Schroff - Fri, 2017-07-14 14:45
In my last posting I showed how to install the plugin shelled and its configuration options.

Now the next step is to install basheclipse, which enables eclipse in conjunction with shelled to debug bash scripts.

Step 1: Download basheclipse

Step 2: Do not copy the jar file to the plugins directory if you are using Eclipse Neon; copy it to the dropins directory.

Step 3: Restart Eclipse (and wait; I had to wait nearly 5 minutes with a CPU usage of 100%)

Step 4: Change the Shell interpreter in shelled (->window->preferences->shell script->interpreters):

Step 5: Create a new Shell Script Project:

Step 6: Create a Shell Script File:

Step 7: Copy _DEBUG.sh into your project directory
Step 8: Add _DEBUG.sh at the beginning of your file with the fully qualified directory name:
 ~/devel/workspace/MyShellScriptProjekct/_DEBUG.sh

Step 9: Go to ->run->debug configurations and create a new configuration inside "bash script"
 You have to select your script file here:
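
For reference, with Step 8 applied the script itself starts like this (a minimal sketch; the shebang line and the echo are just example content):

#!/bin/bash
# Source the basheclipse debug hook first, with the fully qualified path (Step 8):
. ~/devel/workspace/MyShellScriptProjekct/_DEBUG.sh

echo "the rest of the script follows here"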
But after doing all these steps I was not able to set breakpoints.
I tried:
  • Java 1.5 with Eclipse Juno, but this did not work.
  • Java 1.9 with Eclipse Neon, which did not work either (this posting).

But: Java 1.6 with Eclipse Luna works!


Read the next posting to see how to debug a bash script in Eclipse...

EDIT: After changing from OpenJDK to Oracle's JDK it even works with Eclipse Neon.

AWS – Build your own Oracle Linux 7 AMI in the Cloud

Amis Blog - Fri, 2017-07-14 14:37

I always like to know what is installed in the servers that I need to use for database or WebLogic installs, whether it is in the Oracle Cloud or in any other cloud. One way to know is to build your own image that will be used to start your instances. My latest post was about building my own image for the Oracle Cloud (IAAS), but I could only get it to work with Linux 6. Whatever I tried with Linux 7, it wouldn't start in a way that I could log on to it, and there was no way to see what was wrong, not even when mounting the boot disk to another instance after a test boot. My trial ran out before I could get it to work, and a new trial had other problems.

Since we have an AWS account, I could try to do the same in AWS EC2 when I had some spare time. A few years back I had built Linux 6 AMIs via a process that felt a bit complicated, but it worked for a PV kernel. For Linux 7 I couldn't find any examples on the web on how to do that with enough detail to really get it working. But while I was studying for my Oracle VM 3.0 for x86 Certified Implementation Specialist exam, I realized what must have been the problem. Therefore, below are my notes on how to build my own Oracle Linux 7.3 AMI for EC2.

General Steps:
  1. Create a new Machine in VirtualBox
  2. Install Oracle Linux 7.3 on it
  3. Configure it and install some extra packages
  4. Clean your soon-to-be AMI
  5. Export your VirtualBox machine as an OVA
  6. Create an S3 bucket and upload your OVA
  7. Use aws cli to import your image
  8. Start an instance from your new AMI, install the UEKR3 kernel.
  9. Create a new AMI from that instance in order to give it a sensible name

The nitty-gritty details: Ad 1) Create a new Machine in VirtualBox

Create a new VirtualBox machine and start typing the name as "OL", which sets the type to Linux and the version to Oracle (64 bit). Pick a name you like; I chose OL73. I kept the memory as it was (1024M). Create a hard disk; 10GB dynamically allocated (VDI) worked for me. I disabled the audio, as I had no use for it, and made sure one network interface was available. I selected the NatNetwork type because that gives the VM access to the network and lets me access it via a forwarding rule on just one interface. You need to log on via the VirtualBox console first to get the IP address; then you can use another preferred terminal to log in. I like putty.

Attach the DVD with the Linux you want to use, I like Oracle Linux (https://otn.oracle.com), and start the VM.

Ad 2) Install Oracle Linux 7.3 on it

When you get the installation screen do not choose “Install Oracle Linux 7.3” but use TAB to add “ net.ifnames=0” to the boot parameters (note the extra space) and press enter.

Choose the language you need; English (United States) with a US keyboard layout works for me. Go to the next screen.

Before you edit “Date & Time” edit the network connection (which is needed for NTP).

Notice that the interface has the name eth0 and is disconnected. Turn eth0 on by flipping the switch

And notice the IP address etc. get populated:

Leave the host name as it is (localhost.localdomain) because your cloud provider will change anything you set here anyway, and press the configure button. Then choose the General tab to check “Automatically connect to this network when it is available”, keep the settings on the Ethernet tab as they are, the same for 802.1X Security tab, DCB tab idem. On the IPv4 Settings tab, leave “Method” on Automatic (DHCP) and check “Require IPv4 addressing for this connection to complete”. On the IPv6 Settings tab change “Method” to Ignore and press the “Save” button and then press “Done”.

Next change the “Date & Time” settings to your preferred settings and make sure that “Network Time” is on and configured. Then press “Done”.

Next you have to press “Installation Destination”

Now if the details are in accordance with what you want press “Done”.

Your choice here has an impact on what you can expect from the "cloud-init" tools.

For example: later on you can launch an instance with this soon-to-be AMI and start it with, let's say, a 20GiB disk instead of the 10GiB disk this image now has. The extra 10GiB can be used via a new partition and adding that to an LVM pool, which requires manual actions. But if you expect the cloud-init tools to resize your partition to make use of the extra 10GiB and extend the filesystem at first launch, then you need to change a few things.

Then press "Done" and you get guided through another menu:

Change LVM to “Standard Partition”

And then create the mount points you need by pressing “+” or click the blue link:

Now what you get are 3 partitions on your disk (/dev/sda). Notice that "/" is sda3 and is the last partition. When you choose this in your image, the cloud-init utils will resize that partition to use the extra 10GiB and extend the filesystem on it as well. It makes sense that it can only resize the last partition of the disk. This means that your swap size is fixed between these partitions and can only be increased on a different disk (or volume, as it is called in EC2) that you need to add to your instance when launching (or afterwards), leaving you with a gap of 1024MiB that is not very useful.

You might know what kind of memory size instances you want to use this image for and create the necessary swap up front (and maybe increase the disk from 10GiB to a size that caters for the extra needed swap).

I like LVM, so I chose automatic partitioning and will use the LVM utils to claim the extra space by creating a third partition.
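
To sketch what those manual actions would look like later on an instance with the bigger disk (assuming the OL7 default volume group name "ol", an XFS root filesystem, and the disk showing up as /dev/xvda in EC2):

fdisk /dev/xvda                      # create a third partition from the new space (n, 3, defaults, t, 3, 8e, w)
partprobe /dev/xvda                  # make the kernel re-read the partition table
pvcreate /dev/xvda3                  # turn the new partition into an LVM physical volume
vgextend ol /dev/xvda3               # add it to the existing volume group
lvextend -l +100%FREE /dev/ol/root   # grow the root logical volume
xfs_growfs /                         # grow the XFS filesystem to match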

The other options I kept default:

And press “Begin Installation”. You then will see:

Set the root password to something you will remember; later I will disable it via cloud-init, and there is no need to create another user. Cloud-init will also take care of that.

I ignored the message and pressed "Done" again.

Press the "Reboot" button when you are asked to, and when restarting select the standard kernel (not UEK). This is needed for the Amazon VMImport tool. You have less than 5 seconds to stop the default kernel (UEK) from booting.

If you missed it just restart the VM.

Ad 3) Configure it and install some extra packages

Log in with your preferred terminal program via NatNetwork (make sure you have a forwarding rule for the IP you wrote down for ssh)

or use the VirtualBox console. If you forgot to write the IP down you can still find it via the VirtualBox console session:

You might have noticed that my IP address changed. That is because I forgot to set the network in VirtualBox to NatNetwork when making the screenshots. As you can see the interface name is eth0 as expected. If you forgot to set the boot parameter above you need to do some extra work in the Console to make sure that eth0 is used.

Check the grub settings:

cat /etc/default/grub

Look at GRUB_CMDLINE_LINUX (check that net.ifnames=0 is in there) and at GRUB_TIMEOUT. You might want to raise the timeout from 5 seconds to give yourself a bit more time. The AWS VMImport tool will change it to 30 seconds.
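
For reference, the relevant lines in /etc/default/grub should then look roughly like this (a sketch; the rd.lvm.lv values match the LVM layout chosen earlier and may differ in your setup):

GRUB_TIMEOUT=5
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=ol/root rd.lvm.lv=ol/swap rhgb quiet net.ifnames=0"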

If you made some changes, you need to rebuild grub via:

grub2-mkconfig -o /boot/grub2/grub.cfg

Change the network interface settings:

vi /etc/sysconfig/network-scripts/ifcfg-eth0
Make it look like this:

TYPE=Ethernet
BOOTPROTO=dhcp
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
NAME=eth0
DEVICE=eth0
ONBOOT=yes
PEERDNS=yes
PEERROUTES=yes

Change dracut.conf *** this is very important. In VirtualBox the XEN drivers do not get installed in the initramfs image and that will prevent your AMI from booting in AWS if it is not fixed ***

vi /etc/dracut.conf

adjust the following two lines:

# additional kernel modules to the default
#add_drivers+=""

to:

# additional kernel modules to the default
add_drivers+="xen-blkfront xen-netfront"

Temporarily change default kernel:

(AWS VMImport has issues when the UEK kernels are installed or even present)

vi /etc/sysconfig/kernel

change:

DEFAULTKERNEL=kernel-uek

to:

DEFAULTKERNEL=kernel

Remove the UEK kernel:

yum erase -y kernel-uek kernel-uek-firmware

Check the saved_entry setting of grub:

cat /boot/grub2/grubenv
or: grubby --default-kernel

If needed set it to the RHCK (RedHat Compatible Kernel) via:

grub2-set-default <nr>

Find the <nr> to use via:

grubby --info=ALL

Use the <nr> of index=<nr> where kernel=/xxxx lists the RHCK (not a UEK kernel).

Rebuild initramfs to contain the xen drivers for all the installed kernels:

rpm -qa kernel | sed 's/^kernel-//' | xargs -I {} dracut -f /boot/initramfs-{}.img {}

Verify that the xen drivers are indeed available:

rpm -qa kernel | sed 's/^kernel-//' | xargs -I {} lsinitrd -k {} | grep -i xen

Yum repo adjustments:

vi /etc/yum.repos.d/public-yum-ol7.repo

Disable: ol7_UEKR4 and ol7_UEKR3.
You don’t want to get those kernels back with a yum update just yet.
Enable: ol7_optional_latest, ol7_addons
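
The same adjustments can be scripted with yum-config-manager instead of editing the repo file by hand (a sketch, assuming the yum-utils package that provides the tool is installed):

yum install -y yum-utils
yum-config-manager --disable ol7_UEKR4 ol7_UEKR3
yum-config-manager --enable ol7_optional_latest ol7_addons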

Install deltarpm, system-storage-manager and wget:

yum install -y deltarpm system-storage-manager wget

(Only wget is really necessary to enable/download the EPEL repo. The others are useful)

Change to a directory where you can store the rpm and install it. For example:

cd ~
wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
rpm -Uvh epel-release-latest-7.noarch.rpm

Install rlwrap (useful tool) and the necessary cloud tools:

yum install -y rlwrap cloud-init cloud-utils-growpart

Check your Firewall settings (SSH should be enabled!):

firewall-cmd --get-default-zone
firewall-cmd --zone=public --list-all

You should see something like this for your default zone:
interfaces: eth0
services: dhcpv6-client ssh

Change SELinux to permissive (might not be really needed, but I haven’t tested it without this):

vi /etc/selinux/config
change: SELINUX=enforcing
to: SELINUX=permissive
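
The same change as a one-liner, plus switching the running system to permissive right away (setenforce only lasts until the next reboot):

sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
setenforce 0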

Edit cloud.cfg:

vi /etc/cloud/cloud.cfg
change: ssh_deletekeys:    0
to: ssh_deletekeys:   1
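
Or scripted (a sketch that assumes the spacing of the shipped cloud.cfg):

sed -i 's/^ssh_deletekeys:.*/ssh_deletekeys:   1/' /etc/cloud/cloud.cfg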

change:

system_info:
  default_user:
    name: cloud-user

to:

system_info:
  default_user:
    name: ec2-user

Now cloud.cfg should look like this:

users:
 - default

disable_root: 1
ssh_pwauth:   0

mount_default_fields: [~, ~, 'auto', 'defaults,nofail', '0', '2']
resize_rootfs_tmp: /dev
ssh_deletekeys:   1
ssh_genkeytypes:  ~
syslog_fix_perms: ~

cloud_init_modules:
 - migrator
 - bootcmd
 - write-files
 - growpart
 - resizefs
 - set_hostname
 - update_hostname
 - update_etc_hosts
 - rsyslog
 - users-groups
 - ssh

cloud_config_modules:
 - mounts
 - locale
 - set-passwords
 - yum-add-repo
 - package-update-upgrade-install
 - timezone
 - puppet
 - chef
 - salt-minion
 - mcollective
 - disable-ec2-metadata
 - runcmd

cloud_final_modules:
 - rightscale_userdata
 - scripts-per-once
 - scripts-per-boot
 - scripts-per-instance
 - scripts-user
 - ssh-authkey-fingerprints
 - keys-to-console
 - phone-home
 - final-message

system_info:
  default_user:
    name: ec2-user
    lock_passwd: true
    gecos: Oracle Linux Cloud User
    groups: [wheel, adm, systemd-journal]
    sudo: ["ALL=(ALL) NOPASSWD:ALL"]
    shell: /bin/bash
  distro: rhel
  paths:
    cloud_dir: /var/lib/cloud
    templates_dir: /etc/cloud/templates
  ssh_svcname: sshd

# vim:syntax=yaml

With this cloud.cfg you will get new ssh host keys when you deploy a new instance and a user "ec2-user" with passwordless sudo rights to root; direct ssh to root is disabled, as is password authentication for ssh.

**** Remember, when you reboot now cloud-init will kick in and only console access to root will be available. Ssh to root is disabled ****
**** because you do not have an http server running serving ssh keys for the new ec2-user that cloud-init can use ****
**** It might be prudent to validate that your cloud.cfg is a valid yaml file via http://www.yamllint.com/ ****
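
You can also validate the file locally instead of via the website (a sketch, assuming the python yaml module is installed):

python -c 'import yaml; yaml.safe_load(open("/etc/cloud/cloud.cfg")); print("valid yaml")'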

Check for the latest packages and update:

yum check-update
yum update -y

Ad 4) Clean your soon-to-be AMI

You might want to clean the VirtualBox machine of logfiles and executed commands etc:

rm -rf  /var/lib/cloud/
rm -rf /var/log/cloud-init.log
rm -rf /var/log/cloud-init-output.log

yum -y clean packages
rm -rf /var/cache/yum
rm -rf /var/lib/yum

rm -rf /var/log/messages
rm -rf /var/log/boot.log
rm -rf /var/log/dmesg
rm -rf /var/log/dmesg.old
rm -rf /var/log/lastlog
rm -rf /var/log/yum.log
rm -rf /var/log/wtmp

find / -name .bash_history -exec rm -rf {} +
find / -name .Xauthority -exec rm -rf {} +
find / -name authorized_keys -exec rm -rf {} +

history -c
shutdown -h now

Ad 5) Export your VirtualBox machine as an OVA

In VirtualBox Manager:

And select the Virtual Machine you had just shutdown:

If needed, change the location of the OVA to be created:

Ad 6) Create an S3 bucket and upload your OVA

Log in to your AWS console, choose the region where you want your AMI to be created, and create a bucket there (or re-use one that you already have):

https://console.aws.amazon.com/s3/home?region=eu-west-1

(I used the region eu-west-1)

Set the properties you want; I kept the default properties and permissions:

Then press:

Ad 7) Use aws cli to import your image

Before you can import the OVA file you need to put it in the created bucket. You can upload it via the browser or use “aws cli” to do that. I prefer the aws cli because that always works and the browser upload gave me problems.

How to install the command line interface is described here: http://docs.aws.amazon.com/cli/latest/userguide/installing.html

On an Oracle Linux 7 machine it comes down to:

yum install -y python34.x86_64 python34-pip.noarch
pip3 install --upgrade pip
pip install --upgrade awscli
aws --version

Then it is necessary to configure it, which is basically (http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html):

aws configure

And answer the questions by supplying your credentials and your preferences (the credentials below are fake):

AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: eu-west-1
Default output format [None]: json

The answers will be saved in two files:

~/.aws/credentials
~/.aws/config

To test the access, try listing your bucket:

aws s3 ls s3://amis-share

Uploading the generated OVA file is then as simple as:

aws s3 cp /file_path/OL73.ova s3://amis-share

The time it takes depends on your upload speed.

Create the necessary IAM role and policy (http://docs.aws.amazon.com/vm-import/latest/userguide/vmimport-image-import.html):

Create a trust-policy.json file:

vi trust-policy.json
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Effect": "Allow",
         "Principal": { "Service": "vmie.amazonaws.com" },
         "Action": "sts:AssumeRole",
         "Condition": {
            "StringEquals":{
               "sts:Externalid": "vmimport"
            }
         }
      }
   ]
}

Create the IAM role:

aws iam create-role --role-name vmimport --assume-role-policy-document file:///home/ec2-user/trust-policy.json

Create the role-policy.json file:

Change the file to use your S3 bucket (amis-share/*).

vi role-policy.json
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Effect": "Allow",
         "Action": [
            "s3:ListBucket",
            "s3:GetBucketLocation"
         ],
         "Resource": [
            "arn:aws:s3:::amis-share"
         ]
      },
      {
         "Effect": "Allow",
         "Action": [
            "s3:GetObject"
         ],
         "Resource": [
            "arn:aws:s3:::amis-share/*"
         ]
      },
      {
         "Effect": "Allow",
         "Action":[
            "ec2:ModifySnapshotAttribute",
            "ec2:CopySnapshot",
            "ec2:RegisterImage",
            "ec2:Describe*"
         ],
         "Resource": "*"
      }
   ]
}

aws iam put-role-policy --role-name vmimport --policy-name vmimport --policy-document file:///home/ec2-user/role-policy.json

Now you should be able to import the OVA.

Prepare a json file with the following contents (adjust to your own situation):

cat imp_img.json 
{
    "DryRun": false,
    "Description": "OL73 OVA",
    "DiskContainers": [
        {
            "Description": "OL73 OVA",
            "Format": "ova",
            "UserBucket": {
                "S3Bucket": "amis-share",
                "S3Key": "OL73.ova"
            }
        }
    ],
    "LicenseType": "BYOL",
    "Hypervisor": "xen",
    "Architecture": "x86_64",
    "Platform": "Linux",
    "ClientData": {
        "Comment": "OL73"
    }
}

Then start the actual import job:

aws ec2 import-image --cli-input-json file:///home/ec2-user/imp_img.json

The command returns the name of the import job, which you can then use to get the progress:
aws ec2 describe-import-image-tasks --import-task-ids import-ami-fgotr2g7

Or in a loop:

while true; do sleep 60; date; aws ec2 describe-import-image-tasks --import-task-ids import-ami-fgotr2g7; done
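
The --query option can trim the output down to just the fields you care about (a sketch using the same task id; the JMESPath expression is my own):

aws ec2 describe-import-image-tasks --import-task-ids import-ami-fgotr2g7 \
  --query 'ImportImageTasks[0].[Status,Progress,StatusMessage]' --output text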

Depending on the size of your OVA it takes some time to complete. An example output is:

{
    "ImportImageTasks": [
        {
            "StatusMessage": "converting",
            "Status": "active",
            "LicenseType": "BYOL",
            "SnapshotDetails": [
                {
                    "DiskImageSize": 1470183936.0,
                    "Format": "VMDK",
                    "UserBucket": {
                        "S3Bucket": "amis-share",
                        "S3Key": "OL73.ova"
                    }
                }
            ],
            "Platform": "Linux",
            "ImportTaskId": "import-ami-fgotr2g7",
            "Architecture": "x86_64",
            "Progress": "28",
            "Description": "OL73 OVA"
        }
    ]
}

Example of an error:

{
    "ImportImageTasks": [
        {
            "SnapshotDetails": [
                {
                    "DiskImageSize": 1357146112.0,
                    "UserBucket": {
                        "S3Key": "OL73.ova",
                        "S3Bucket": "amis-share"
                    },
                    "Format": "VMDK"
                }
            ],
            "StatusMessage": "ClientError: Unsupported kernel version 3.8.13-118.18.4.el7uek.x86_64",
            "ImportTaskId": "import-ami-fflnx4fv",
            "Status": "deleting",
            "LicenseType": "BYOL",
            "Description": "OL73 OVA"
        }
    ]
}

Once the import is successful you can find your AMI in your EC2 Console:

Unfortunately, no matter what Description or Comment you supply in the json file, the AMI is only recognizable via the name of the import job: import-ami-fgotr2g7. As I want to use the UEK kernel, I need to start an instance from this AMI and use that as a new AMI. Via that process (Step 9) I can supply a better name. Make a note of the snapshots and volumes that have been created by this import job; you might want to remove those later to prevent storage costs for something you don't need anymore.
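
One way to find them back is to filter on the description the VMImport service puts on its snapshots (a sketch; the wildcard match on the task id is an assumption about that description):

aws ec2 describe-snapshots --owner-ids self \
  --filters "Name=description,Values=*import-ami-fgotr2g7*" \
  --query 'Snapshots[].[SnapshotId,VolumeSize,Description]' --output text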

Ad 8) Start an instance from your new AMI, install the UEKR3 kernel

I want an AMI to run Oracle software and want the UEK kernel that has support. UEKR4 wasn't supported for some of the software I recently worked with, so that left me with the UEKR3 kernel.

Login to your new instance as the ec2-user with your preferred ssh tool and use sudo to become root:

sudo su -

Enable Yum Repo UEKR3

vi /etc/yum.repos.d/public-yum-ol7.repo
and in the ol7_UEKR3 section change:
enabled=0
to:
enabled=1

Change the default kernel back to UEK:

vi /etc/sysconfig/kernel
change:
DEFAULTKERNEL=kernel
To:
DEFAULTKERNEL=kernel-uek

Update the kernel:

yum check-update
yum install kernel-uek.x86_64

Notice the changes in GRUB_CMDLINE_LINUX that were made by the import process:

cat /etc/default/grub

Notice some changes:

GRUB_TIMEOUT=30
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=ol/root rd.lvm.lv=ol/swap rhgb quiet net.ifnames=0 console=ttyS0"
GRUB_DISABLE_RECOVERY="true"

To verify which kernel will be booted next time you can use:

cat /boot/grub2/grubenv
grubby --default-kernel
grubby --default-index
grubby --info=ALL

Clean the instance again and shut it down in order to create a new AMI:

rm -rf  /var/lib/cloud/
rm -rf /var/log/cloud-init.log
rm -rf /var/log/cloud-init-output.log

yum -y clean packages
rm -rf /var/cache/yum
rm -rf /var/lib/yum

rm -rf /var/log/messages
rm -rf /var/log/boot.log
rm -rf /var/log/dmesg
rm -rf /var/log/dmesg.old
rm -rf /var/log/lastlog
rm -rf /var/log/yum.log
rm -rf /var/log/wtmp

find / -name .bash_history -exec rm -rf {} +
find / -name .Xauthority -exec rm -rf {} +
find / -name authorized_keys -exec rm -rf {} +

history -c
shutdown -h now

Ad 9) Create a new AMI from that instance in order to give it a sensible name

Use the instance id of the instance that you just shut down (i-050357e3ecce863e2) to create a new AMI.

To generate a skeleton json file:

aws ec2 create-image --instance-id i-050357e3ecce863e2 --generate-cli-skeleton

Edit the file to your needs or liking:

vi cr_img.json
{
    "DryRun": false,
    "InstanceId": "i-050357e3ecce863e2",
    "Name": "OL73 UEKR3 LVM",
    "Description": "OL73 UEKR3 LVM 10GB disk with swap and root on LVM thus expandable",
    "NoReboot": true
}

And create the AMI:

aws ec2 create-image --cli-input-json file:///home/ec2-user/cr_img.json
{
    "ImageId": "ami-27637b41"
}

It takes a few minutes for the AMI to be visible in the web console of AWS EC2.

Don't forget to (a sketch of the matching CLI calls follows this list):

  • Deregister the AMI generated by VMImport
  • Delete the corresponding snapshot
  • Terminate the instance you used to create the new AMI
  • Delete the volumes of that instance (if they are not deleted on termination) (expand the info box in AWS you see when you terminate the instance to see which volume it is. E.g.: The following volumes are not set to delete on termination: vol-0150ca9702ea0fa00)
  • Remove the OVA from your S3 bucket if you don't need it for something else.
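
A sketch of the matching CLI calls; the ami- and snap- ids are placeholders you need to replace with your own, while the instance and volume ids are the ones from this walkthrough:

aws ec2 deregister-image --image-id ami-xxxxxxxx                 # the AMI created by VMImport (placeholder id)
aws ec2 delete-snapshot --snapshot-id snap-xxxxxxxx              # its backing snapshot (placeholder id)
aws ec2 terminate-instances --instance-ids i-050357e3ecce863e2   # the staging instance
aws ec2 delete-volume --volume-id vol-0150ca9702ea0fa00          # its leftover volume
aws s3 rm s3://amis-share/OL73.ova                               # the uploaded OVA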

Launch an instance of your new AMI and start to use it.

Useful documentation:

The post AWS – Build your own Oracle Linux 7 AMI in the Cloud appeared first on AMIS Oracle and Java Blog.

OpenJDK 9: Jshell - how to load scripts and how to save/persist finished snippets

Dietrich Schroff - Fri, 2017-07-14 13:07
In my last posting I showed the built-in jshell commands and how to start working with the Java shell.

What about loading and saving scripts?

I created a file myjshell.txt with this content:
class MyClass {
 private int a;
 public MyClass(){a=0;}
 int getA() {return a;};
 void setA(int var) {a=var; return;}
}
MyClass ZZ;
ZZ = new MyClass();
ZZ.setA(200);
The help shows the following:
-> /help /open

|  /open

|  Open a file and read its contents as snippets and commands.

|  /open <file>
|      Read the specified file as jshell input.
So I tried this one:
-> /open myjshell.txt
Hmmm. No feedback inside jshell. But no news is good news:
-> /list

   1 : class MyClass {
        private int a;
        public MyClass(){a=0;}
        int getA() {return a;};
        void setA(int var) {a=var; return;}
       }
   2 : MyClass ZZ;
   3 : ZZ = new MyClass();
   4 : ZZ.setA(200);
 Saving your work is quite easy:
-> /help /save

|  /save

|  Save the specified snippets and/or commands to the specified file.

|  /save <file>
|      Save the source of current active snippets to the file.

|  /save all <file>
|      Save the source of all snippets to the file.
|      Includes source including overwritten, failed, and start-up code.

|  /save history <file>
|      Save the sequential history of all commands and snippets entered since jshell was launched.

|  /save start <file>
|      Save the default start-up definitions to the file.
And also no news is good news:
-> /save myjshell2.txt
And as expected:
$ cat myjshell2.txt
class MyClass {
 private int a;
 public MyClass(){a=0;}
 int getA() {return a;};
 void setA(int var) {a=var; return;}
}
MyClass ZZ;
ZZ = new MyClass();
ZZ.setA(200);
But what about this /save start?
-> /save start myjshell3.txt
And the content of this file is:
$ cat myjshell3.txt

import java.util.*;
import java.io.*;
import java.math.*;
import java.net.*;
import java.util.concurrent.*;
import java.util.prefs.*;
import java.util.regex.*;
void printf(String format, Object... args) { System.out.printf(format, args); }
To load a script on startup just type:
jshell myjshell.txt


Importance Of Executive Dashboards For Insurance Companies

Nilesh Jethwa - Fri, 2017-07-14 12:15

Have there been times when you were handicapped because data was not readily available? You demanded reports for your perusal, but the teams were still scrambling for data from files.

This is not happening only to you; it is a common problem among insurance company managers.

What is being sacrificed when this happens?

  • Companies incur revenue losses.
  • Customer trust erodes.
  • Management loses time in meetings without concrete data available.
  • Tension builds up between the higher management and low-ranking teams.
  • Current problems are not resolved and are allowed to fester as days go by.
  • Moods flare up.

This has to stop. The solution that prevents such problems from occurring again is to build a KPI executive dashboard. With a KPI dashboard incorporated into your system, you will have access to real-time performance data.

Access to data is always at your fingertips. With a few clicks, important data is flashed on the screen for your immediate and timely decision-making.

Benefits for the Insurance Industry

Performance dashboards can benefit insurance companies in several ways. Strict regulatory guidelines control many industries such as the insurance sector and the financial services sector.

In these sectors, two things are prominent:

  • Higher need for innovative customer support and service.
  • High exposure to risk

It is therefore important for these sectors to have accurate and up-to-date information for quick detection of potential problems and to seize new market opportunities. Key Performance Indicator or KPI metrics, presented through clear graphic images, can help in determining the right steps for managers and executives to take to quickly achieve their goals.

sketchnote of Gael Colas on ‘Devops’

Matt Penny - Fri, 2017-07-14 10:56


Categories: DBA Blogs
