Feed aggregator

RMAN Backup script

DBA Scripts and Articles - Wed, 2017-11-08 03:54

This is a sample backup script I use; it already includes a lot of options. Feel free to make any modification you want, and if you add a good enhancement, let me know so I can publish it here and everybody can benefit from it.
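The original script was embedded in a code block that no longer renders here. As a placeholder, here is a minimal, hedged sketch of what such a script typically looks like (not the author's original script); the SID, paths and options are examples to adapt to your environment.

#!/bin/bash
# Minimal RMAN full backup sketch: adapt ORACLE_SID, ORACLE_HOME and the backup destination.
export ORACLE_SID=ORCL                                        # placeholder instance name
export ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1    # placeholder Oracle home
export PATH=$ORACLE_HOME/bin:$PATH
BACKUP_DIR=/backup/rman/$ORACLE_SID                           # placeholder backup destination
LOG=/backup/rman/logs/backup_$(date +%Y%m%d_%H%M%S).log

rman target / log=$LOG <<EOF
CONFIGURE CONTROLFILE AUTOBACKUP ON;
RUN {
  BACKUP AS COMPRESSED BACKUPSET DATABASE
    FORMAT '$BACKUP_DIR/db_%d_%T_%U.bkp' TAG 'FULL_DB';
  BACKUP ARCHIVELOG ALL
    FORMAT '$BACKUP_DIR/arch_%d_%T_%U.bkp' DELETE INPUT;
}
DELETE NOPROMPT OBSOLETE;
EOF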

The post RMAN Backup script appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs

Automate OVM deployment for a production ready Oracle RAC 12.2 architecture – (part 01)

Yann Neuhaus - Wed, 2017-11-08 02:30

After having worked with OVM on various architectures, I can say that it is a good technology for easily building virtualized environments for production applications. Because it is based on Xen and offers simple ways to deal with existing storage (FC, iSCSI, NFS, …) and networking solutions (bond, LACP, …), it is a robust and convenient way to virtualize IT infrastructures while keeping near bare-metal performance.
Besides, it is a hard partitioning technology, which is compliant with Oracle's licensing policies for partitioned environments when it comes to counting CPUs for licensing.

The aim of this post is to demonstrate how simple it is to build an HA virtualized architecture with the OVM Manager command line tool only (doc link). We will create one VM on each server, including all the OS, network and storage requirements needed to run Oracle RAC 12.2.

Initial state:

  • 2 physical servers installed with Oracle VM Server 3.4 (namely OVS, installation procedure here) to host VMs, including:
    • 5 NICs on each (no bonding in this example, but it is recommended for production systems)
      • eth0: administrative network connected to the organization’s administrative network
      • eth1: application network dedicated to application traffic
      • eth2: storage network cabled to the storage for iSCSI LUN and NFS access
      • eth3: cabled between both OVS servers for RAC interconnect link #1/2
      • eth4: cabled between both OVS servers for RAC interconnect link #2/2
  • 1 server with Oracle VM Manager 3.4 already installed (installation procedure here)
    • eth0: administrative network connected to the organization’s administrative network
  • 1 storage system (here we are going to use a ZFS Storage Appliance)
  • 1 OVM template from Oracle Corp. (available here)

Summary


Step 0: Connect to the OVM Manager client

Because the client connects over the SSH protocol (default port 10000, user admin), you can reach the OVM Manager client from anywhere that has network connectivity to the OVM Manager server.
OVMCLI is a separate service from OVM Manager, running on the OVM Manager server. Here I check the OVMCLI service status and connect from within the OVM Manager server itself.

service ovmcli status

ssh -l admin localhost -p 10000

OVM>                                            # --> prompt for OVM
OVM> ?                                          # --> to show which action can be done
OVM> list ?                                     # --> to show which options are available for command "list"
OVM> set OutputMode={ Verbose | Sparse | Xml }  # --> to make output matching your automation style
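Since everything goes through SSH, commands can also be run non-interactively, which is what makes the whole procedure easy to script. A hedged sketch (the host name is an example; check the OVM CLI guide for the scripting options supported by your release):

# Run a single OVM CLI command non-interactively from a shell script
ssh -l admin -p 10000 ovmmanager "list Server"

# Or feed a batch of commands through a here-document
ssh -l admin -p 10000 ovmmanager <<'EOF'
set OutputMode=Sparse
list ServerPool
list Vm
EOF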


Step 1: discover OVS servers

discoverServer ipAddress=192.168.56.101 password=oracle takeOwnership=Yes
discoverServer ipAddress=192.168.56.102 password=oracle takeOwnership=Yes


Step 2: Discover file server

In this example I am going to store the server pool file systems on NFS shares from the ZFS Storage Appliance, but any NFS technology would do, or they could be stored directly on iSCSI/FC LUNs.

OVM> list FileServerPlugin
Command: list FileServerPlugin
Status: Success
Time: 2017-10-19 14:51:31,311 CEST
Data:
id:oracle.ocfs2.OCFS2.OCFS2Plugin (0.1.0-47.5)  name:Oracle OCFS2 File system
id:oracle.generic.NFSPlugin.GenericNFSPlugin (1.1.0)  name:Oracle Generic Network File System

OVM> create FileServer plugin="Oracle Generic Network File System" accessHost=192.168.238.10 adminServers=ovs001,ovs002 name=zfsstorage
Command: create FileServer plugin="Oracle Generic Network File System" accessHost=192.168.238.10 adminServers=ovs001,ovs002 name=zfsstorage
Status: Success
Time: 2017-10-19 14:58:46,411 CEST
JobId: 1508417926209
Data:
id:0004fb00000900004801ecf9996f1d43  name:zfsstorage

OVM> refreshAll
Command: refreshAll
Status: Success
Time: 2017-10-19 16:26:58,705 CEST
JobId: 1508422976145

OVM> list FileSystem
Command: list FileSystem
Status: Success
Time: 2017-10-19 17:41:35,737 CEST
Data:
id:75734f6d-704d-48ee-9853-f6cc09b5af65  name:nfs on 192.168.238.10:/export/RepoOracle
id:3f81dcad-e1ce-41b9-b0f3-3222b3816b17  name:nfs on 192.168.238.10:/export/ServerPoolProd01

OVM> refresh FileSystem id=75734f6d-704d-48ee-9853-f6cc09b5af65
Command: refresh FileSystem id=75734f6d-704d-48ee-9853-f6cc09b5af65
Status: Success
Time: 2017-10-19 17:42:28,516 CEST
JobId: 1508427714903

OVM> refresh FileSystem id=3f81dcad-e1ce-41b9-b0f3-3222b3816b17
Command: refresh FileSystem id=3f81dcad-e1ce-41b9-b0f3-3222b3816b17
Status: Success
Time: 2017-10-19 17:43:02,144 CEST
JobId: 1508427760257


Step 3: Discover the iSCSI storage array

OVM> list StorageArrayPlugin
Command: list StorageArrayPlugin
Status: Success
Time: 2017-10-19 15:28:23,932 CEST
Data:
id:oracle.s7k.SCSIPlugin.SCSIPlugin (2.1.2-3)  name:zfs_storage_iscsi_fc
id:oracle.generic.SCSIPlugin.GenericPlugin (1.1.0)  name:Oracle Generic SCSI Plugin

OVM> create StorageArray plugin=zfs_storage_iscsi_fc name=zfsstorage storageType=ISCSI accessHost=192.168.238.10 accessPort=3260 adminHost=192.168.238.10 adminUserName=ovmuser adminPassword=oracle pluginPrivateData="OVM-iSCSI,OVM-iSCSI-Target"
Command: create StorageArray plugin=zfs_storage_iscsi_fc name=zfsstorage storageType=ISCSI accessHost=192.168.238.10 accessPort=3260 adminHost=192.168.238.10 adminUserName=ovmuser adminPassword=***** pluginPrivateData="OVM-iSCSI,OVM-iSCSI-Target"
Status: Success
Time: 2017-10-19 15:48:00,761 CEST
JobId: 1508420880565
Data:
id:0004fb0000090000c105d1003f051fbd  name:zfsstorage

OVM> addAdminServer StorageArray name=zfsstorage server=ovs001
Command: addAdminServer StorageArray name=zfsstorage server=ovs001
Status: Success
Time: 2017-10-19 16:11:32,448 CEST
JobId: 1508422292175

OVM> addAdminServer StorageArray name=zfsstorage server=ovs002
Command: addAdminServer StorageArray name=zfsstorage server=ovs002
Status: Success
Time: 2017-10-19 16:11:35,424 CEST
JobId: 1508422295266

OVM> validate StorageArray name=zfsstorage
Command: validate StorageArray name=zfsstorage
Status: Success
Time: 2017-10-19 16:10:04,937 CEST
JobId: 1508422128777

OVM> refreshAll
Command: refreshAll
Status: Success
Time: 2017-10-19 16:26:58,705 CEST
JobId: 1508422976145


Step 4: Creation of a server pool

OVM needs to group its physical servers into a logical space called a server pool. A server pool uses at least two storage spaces:

  • a storage space for the cluster configuration and the disk heartbeat (at least 10 GB according to the OVM 3.4 recommendations); it is better to give this storage space a separate network access path in order to avoid unwanted cluster evictions.
  • a storage space for the server pool itself, in which we can store VM configuration files, templates, ISOs and so on.
OVM> list FileSystem
Command: list FileSystem
Status: Success
Time: 2017-10-19 17:41:35,737 CEST
Data:
id:75734f6d-704d-48ee-9853-f6cc09b5af65  name:nfs on 192.168.238.10:/export/RepoOracle
id:3f81dcad-e1ce-41b9-b0f3-3222b3816b17  name:nfs on 192.168.238.10:/export/ServerPoolProd01

OVM> create ServerPool clusterEnable=yes filesystem=3f81dcad-e1ce-41b9-b0f3-3222b3816b17 name=prod01 description='Server pool for production 001' startPolicy=CURRENT_SERVER
Command: create ServerPool clusterEnable=yes filesystem=3f81dcad-e1ce-41b9-b0f3-3222b3816b17 name=prod01 description='Server pool for production 001' startPolicy=CURRENT_SERVER
Status: Success
Time: 2017-10-19 17:15:11,431 CEST
Data:
id:0004fb0000020000c6b2c32fc58646e7  name:prod01


Step 5: Add servers to the server pool

OVM> list server
Command: list server
Status: Success
Time: 2017-10-19 17:15:28,111 CEST
Data:
id:65:72:21:77:7b:0d:47:47:bc:43:e5:1f:64:3d:56:d9  name:ovs002
id:bb:06:3c:3e:a4:76:4b:e2:9c:bc:65:69:4e:35:28:b4  name:ovs001

OVM> add Server name=ovs001 to ServerPool name=prod01
Command: add Server name=ovs001 to ServerPool name=prod01
Status: Success
Time: 2017-10-19 17:17:55,131 CEST
JobId: 1508426260895

OVM> add Server name=ovs002 to ServerPool name=prod01
Command: add Server name=ovs002 to ServerPool name=prod01
Status: Success
Time: 2017-10-19 17:18:21,439 CEST
JobId: 1508426277115


Step 6: Creation of a repository to store the VMs’ configuration files and to import the Oracle template

OVM> list filesystem
Command: list filesystem
Status: Success
Time: 2017-10-19 17:44:23,811 CEST
Data:
id:0004fb00000500009cbc79dde9b6649e  name:Server Pool File System
id:75734f6d-704d-48ee-9853-f6cc09b5af65  name:nfs on 192.168.238.10:/export/RepoOracle
id:3f81dcad-e1ce-41b9-b0f3-3222b3816b17  name:nfs on 192.168.238.10:/export/ServerPoolProd01

OVM> create Repository name=RepoOracle on FileSystem name="nfs on 192.168.238.10://export//RepoOracle"
Command: create Repository name=RepoOracle on FileSystem name="nfs on 192.168.238.10://export//RepoOracle"
Status: Success
Time: 2017-10-19 17:45:22,346 CEST
JobId: 1508427888238
Data:
id:0004fb0000030000f1c8182390a36c8c  name:RepoOracle

OVM> add ServerPool name=prod01 to Repository name=RepoOracle
Command: add ServerPool name=prod01 to Repository name=RepoOracle
Status: Success
Time: 2017-10-19 17:53:08,020 CEST
JobId: 1508428361049

OVM> refresh Repository name=RepoOracle
Command: refresh Repository name=RepoOracle
Status: Success
Time: 2017-10-19 17:53:40,922 CEST
JobId: 1508428394212

OVM> importTemplate Repository name=RepoOracle url="ftp:////192.168.56.200//pub//OVM_OL7U4_X86_64_12201DBRAC_PVHVM//OVM_OL7U4_X86_64_12201DBRAC_PVHVM-1of2.tar.gz,ftp:////192.168.56.200//pub//OVM_OL7U4_X86_64_12201DBRAC_PVHVM//OVM_OL7U4_X86_64_12201DBRAC_PVHVM-2of2.tar.gz"
Command: importTemplate Repository name=RepoOracle url="ftp:////192.168.56.200//pub//OVM_OL7U4_X86_64_12201DBRAC_PVHVM//OVM_OL7U4_X86_64_12201DBRAC_PVHVM-1of2.tar.gz,ftp:////192.168.56.200//pub//OVM_OL7U4_X86_64_12201DBRAC_PVHVM//OVM_OL7U4_X86_64_12201DBRAC_PVHVM-2of2.tar.gz"
Status: Success
Time: 2017-11-02 12:05:29,341 CET
JobId: 1509619956729
Data:
id:0004fb00001400005f68a4067eda1e6b  name:OVM_OL7U4_X86_64_12201DBRAC_PVHVM-1of2.tar.gz


Step 7: Create VMs called rac001 and rac002 for my 2-node RAC

Here we create the VMs by cloning the Oracle template OVM_OL7U4_X86_64_12201DBRAC_PVHVM.

OVM> list vm
Command: list vm
Status: Success
Time: 2017-11-02 12:07:06,077 CET
Data:
id:0004fb00001400005f68a4067eda1e6b  name:OVM_OL7U4_X86_64_12201DBRAC_PVHVM-1of2.tar.gz

OVM> edit vm id=0004fb00001400005f68a4067eda1e6b name=OVM_OL7U4_X86_64_12201DBRAC_PVHVM
Command: edit vm id=0004fb00001400005f68a4067eda1e6b name=OVM_OL7U4_X86_64_12201DBRAC_PVHVM
Status: Success
Time: 2017-11-02 12:07:30,392 CET
JobId: 1509620850142

OVM> list vm
Command: list vm
Status: Success
Time: 2017-11-02 12:07:36,282 CET
Data:
id:0004fb00001400005f68a4067eda1e6b  name:OVM_OL7U4_X86_64_12201DBRAC_PVHVM

OVM> clone Vm name=OVM_OL7U4_X86_64_12201DBRAC_PVHVM destType=Vm destName=rac001 serverPool=prod01
Command: clone Vm name=OVM_OL7U4_X86_64_12201DBRAC_PVHVM destType=Vm destName=rac001 serverPool=prod01
Status: Success
Time: 2017-11-02 12:31:31,798 CET
JobId: 1509622291342
Data:
id:0004fb0000060000d4819629ebc0687f  name:rac001

OVM> clone Vm name=OVM_OL7U4_X86_64_12201DBRAC_PVHVM destType=Vm destName=rac002 serverPool=prod01
Command: clone Vm name=OVM_OL7U4_X86_64_12201DBRAC_PVHVM destType=Vm destName=rac002 serverPool=prod01
Status: Success
Time: 2017-11-02 13:57:34,125 CET
JobId: 1509627453634
Data:
id:0004fb0000060000482c8e4790b7081a  name:rac002

OVM> list vm
Command: list vm
Status: Success
Time: 2017-11-02 15:23:54,077 CET
Data:
id:0004fb00001400005f68a4067eda1e6b  name:OVM_OL7U4_X86_64_12201DBRAC_PVHVM
id:0004fb0000060000d4819629ebc0687f  name:rac001
id:0004fb0000060000482c8e4790b7081a  name:rac002

OVM> edit vm name=rac001 memory=2048 memoryLimit=2048
Command: edit vm name=rac001 memory=2048 memoryLimit=2048
Status: Success
Time: 2017-11-02 17:14:45,542 CET
JobId: 1509639285374

OVM> edit vm name=rac002 memory=2048 memoryLimit=2048
Command: edit vm name=rac002 memory=2048 memoryLimit=2048
Status: Success
Time: 2017-11-02 17:14:59,458 CET
JobId: 1509639299301


Step 8: Network definition for RAC interconnect and application network

create Network roles=VIRTUAL_MACHINE name=Application-Network
create Network roles=VIRTUAL_MACHINE name=Interco-Network-01
create Network roles=VIRTUAL_MACHINE name=Interco-Network-02

OVM> list network
Command: list network
Status: Success
Time: 2017-10-17 00:31:53,673 CEST
Data:
id:108572a7ca  name:Application-Network
id:10922ff6d7  name:Interco-Network-01
id:106765828d  name:Interco-Network-02

 

Next, we attach the physical OVS NICs to the corresponding networks.

OVM> list port
Command: list port
Status: Success
Time: 2017-11-02 16:03:40,026 CET
Data:
id:0004fb00002000007667fde85d2a2944  name:eth0 on ovs002
id:0004fb00002000001fa791c597d71947  name:eth1 on ovs002
id:0004fb00002000003842bd1f3acb476b  name:eth2 on ovs002
id:0004fb000020000031652acb25248275  name:eth3 on ovs002
id:0004fb00002000006fb524dac1f2319c  name:eth4 on ovs001
id:0004fb0000200000748a37db41f80fb2  name:eth4 on ovs002
id:0004fb00002000000178e5cefb3c0161  name:eth3 on ovs001
id:0004fb000020000020373da7c0cdf4cf  name:eth2 on ovs001
id:0004fb0000200000b0e747714aa822b7  name:eth1 on ovs001
id:0004fb00002000002787de2e68f61ecd  name:eth0 on ovs001

add Port id=0004fb0000200000b0e747714aa822b7 to Network name=Application-Network
add Port id=0004fb00002000000178e5cefb3c0161 to Network name=Interco-Network-01
add Port id=0004fb00002000006fb524dac1f2319c to Network name=Interco-Network-02

add Port id=0004fb00002000001fa791c597d71947 to Network name=Application-Network
add Port id=0004fb000020000031652acb25248275 to Network name=Interco-Network-01
add Port id=0004fb0000200000748a37db41f80fb2 to Network name=Interco-Network-02

 

Then we create the virtual NICs for the virtual machines (the order matters, as the first vNIC created fills the first slot of the VM).

OVM> list vnic
Command: list vnic
Status: Success
Time: 2017-11-02 15:25:54,571 CET
Data:
id:0004fb00000700001fe86897bfb0ecd4  name:Template Vnic
id:0004fb00000700005351eb55314ab34e  name:Template Vnic

create Vnic name=rac001_vnic_admin network=Admin-Network on Vm name=rac001
create Vnic name=rac001_vnic_application network=Application-Network on Vm name=rac001
create Vnic name=rac001_vnic_interconnect network=Interco-Network-01 on Vm name=rac001
create Vnic name=rac001_vnic_interconnect network=Interco-Network-02 on Vm name=rac001

create Vnic name=rac002_vnic_admin network=Admin-Network on Vm name=rac002
create Vnic name=rac002_vnic_application network=Application-Network on Vm name=rac002
create Vnic name=rac002_vnic_interconnect network=Interco-Network-01 on Vm name=rac002
create Vnic name=rac002_vnic_interconnect network=Interco-Network-02 on Vm name=rac002

OVM> list vnic
Command: list vnic
Status: Success
Time: 2017-11-02 15:27:34,642 CET
Data:
id:0004fb00000700005631bb2fbbeed53c  name:rac002_vnic_interconnect
id:0004fb00000700005e93ec7e8cf529b6  name:rac001_vnic_interconnect
id:0004fb0000070000c091c9091b464846  name:rac002_vnic_admin
id:0004fb00000700001fe86897bfb0ecd4  name:Template Vnic
id:0004fb00000700009430b0a26566d6e3  name:rac002_vnic_application
id:0004fb0000070000c4113fb1d9375791  name:rac002_vnic_interconnect
id:0004fb00000700005351eb55314ab34e  name:Template Vnic
id:0004fb0000070000e1abd7e572bffc3a  name:rac001_vnic_admin
id:0004fb000007000079bb1fbf1d1942c9  name:rac001_vnic_application
id:0004fb000007000085d8a41dc8fd768c  name:rac001_vnic_interconnect


Step 9: Shared disk attachment to the VMs for RAC ASM

Thanks to the storage plugin available for the ZFS appliance, we can create LUNs directly from the OVM CLI. You may find a plugin for your storage vendor on the Oracle website: https://www.oracle.com/virtualization/storage-connect-partner-program.html.
The storage plugin needs to be installed on each OVS server, and the OVS servers need to be rediscovered after the change.
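As a reminder of that prerequisite, here is a hedged sketch of the plugin installation on an OVS server (the RPM name is only an example; use the package delivered by your storage vendor), followed by a refresh from the OVM CLI so the plugin is picked up:

# On each OVS server: install the vendor storage plugin (example package name)
rpm -ivh osc-oracle-s7k-*.rpm

# Back in the OVM CLI: refresh so the new plugin and its storage are visible
OVM> refreshAll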

create PhysicalDisk size=5 shareable=yes thinProvision=yes userFriendlyName=clu001dgclu001 name=clu001dgclu001 on VolumeGroup  name=data/local/OracleTech
create PhysicalDisk size=5 shareable=yes thinProvision=yes userFriendlyName=clu001dgclu002 name=clu001dgclu002 on VolumeGroup  name=data/local/OracleTech
create PhysicalDisk size=5 shareable=yes thinProvision=yes userFriendlyName=clu001dgclu003 name=clu001dgclu003 on VolumeGroup  name=data/local/OracleTech
create PhysicalDisk size=5 shareable=yes thinProvision=yes userFriendlyName=clu001dgclu004 name=clu001dgclu004 on VolumeGroup  name=data/local/OracleTech
create PhysicalDisk size=5 shareable=yes thinProvision=yes userFriendlyName=clu001dgclu005 name=clu001dgclu005 on VolumeGroup  name=data/local/OracleTech
create PhysicalDisk size=5 shareable=yes thinProvision=yes userFriendlyName=clu001dgdata001 name=clu001dgdata001 on VolumeGroup  name=data/local/OracleTech
create PhysicalDisk size=5 shareable=yes thinProvision=yes userFriendlyName=clu001dgdata002 name=clu001dgdata002 on VolumeGroup  name=data/local/OracleTech
create PhysicalDisk size=5 shareable=yes thinProvision=yes userFriendlyName=clu001dgfra001 name=clu001dgfra001 on VolumeGroup  name=data/local/OracleTech
create PhysicalDisk size=5 shareable=yes thinProvision=yes userFriendlyName=clu001dgfra002 name=clu001dgfra002 on VolumeGroup  name=data/local/OracleTech

OVM> list PhysicalDisk
Command: list PhysicalDisk
Status: Success
Time: 2017-11-02 11:44:41,624 CET
Data:
id:0004fb0000180000ae02df42a4c8e582  name:clu001dgclu004
id:0004fb0000180000d91546f7d1a09cfb  name:clu001dgclu005
id:0004fb0000180000ab0030fb540a55b9  name:clu001dgclu003
id:0004fb0000180000d20bb1d7d50d6875  name:clu001dgfra001
id:0004fb00001800009e39a0b8b1edcf90  name:clu001dgfra002
id:0004fb00001800003742306aa30bfdd4  name:clu001dgdata001
id:0004fb00001800006131006a7a9fd266  name:clu001dgdata002
id:0004fb0000180000a5177543a1ef0464  name:clu001dgclu001
id:0004fb000018000035bd38c6f5245f66  name:clu001dgclu002

create vmdiskmapping slot=10 physicalDisk=clu001dgclu001 name=asm_disk_cluster_rac001_clu001dgclu001 on Vm name=rac001
create vmdiskmapping slot=11 physicalDisk=clu001dgclu002 name=asm_disk_cluster_rac001_clu001dgclu002 on Vm name=rac001
create vmdiskmapping slot=12 physicalDisk=clu001dgclu003 name=asm_disk_cluster_rac001_clu001dgclu003 on Vm name=rac001
create vmdiskmapping slot=13 physicalDisk=clu001dgclu004 name=asm_disk_cluster_rac001_clu001dgclu004 on Vm name=rac001
create vmdiskmapping slot=14 physicalDisk=clu001dgclu005 name=asm_disk_cluster_rac001_clu001dgclu005 on Vm name=rac001
create vmdiskmapping slot=15 physicalDisk=clu001dgdata001 name=asm_disk_cluster_rac001_clu001dgdata001 on Vm name=rac001
create vmdiskmapping slot=16 physicalDisk=clu001dgdata002 name=asm_disk_cluster_rac001_clu001dgdata002 on Vm name=rac001
create vmdiskmapping slot=17 physicalDisk=clu001dgfra001 name=asm_disk_cluster_rac001_clu001dgfra001 on Vm name=rac001
create vmdiskmapping slot=18 physicalDisk=clu001dgfra002 name=asm_disk_cluster_rac001_clu001dgfra002 on Vm name=rac001

create vmdiskmapping slot=10 physicalDisk=clu001dgclu001 name=asm_disk_cluster_rac002_clu001dgclu001 on Vm name=rac002
create vmdiskmapping slot=11 physicalDisk=clu001dgclu002 name=asm_disk_cluster_rac002_clu001dgclu002 on Vm name=rac002
create vmdiskmapping slot=12 physicalDisk=clu001dgclu003 name=asm_disk_cluster_rac002_clu001dgclu003 on Vm name=rac002
create vmdiskmapping slot=13 physicalDisk=clu001dgclu004 name=asm_disk_cluster_rac002_clu001dgclu004 on Vm name=rac002
create vmdiskmapping slot=14 physicalDisk=clu001dgclu005 name=asm_disk_cluster_rac002_clu001dgclu005 on Vm name=rac002
create vmdiskmapping slot=15 physicalDisk=clu001dgdata001 name=asm_disk_cluster_rac002_clu001dgdata on Vm name=rac002
create vmdiskmapping slot=16 physicalDisk=clu001dgdata002 name=asm_disk_cluster_rac002_clu001dgdata on Vm name=rac002
create vmdiskmapping slot=17 physicalDisk=clu001dgfra001 name=asm_disk_cluster_rac002_clu001dgfra001 on Vm name=rac002
create vmdiskmapping slot=18 physicalDisk=clu001dgfra002 name=asm_disk_cluster_rac002_clu001dgfra002 on Vm name=rac002

 

#Output of an attachment:
OVM> create vmdiskmapping slot=51 physicalDisk=clu001dgfra002 name=asm_disk_cluster_rac002_clu001dgfra002 on Vm name=rac002
Command: create vmdiskmapping slot=51 physicalDisk=clu001dgfra002 name=asm_disk_cluster_rac002_clu001dgfra002 on Vm name=rac002
Status: Success
Time: 2017-11-02 15:49:44,573 CET
JobId: 1509634184144
Data:
id:0004fb0000130000d1a3ecffefcc0b5b  name:asm_disk_cluster_rac002_clu001dgfra002

 

OVM> list vmdiskmapping
Command: list vmdiskmapping
Status: Success
Time: 2017-11-02 15:50:05,117 CET
Data:
id:0004fb0000130000a2e52668e38d24f0  name:Mapping for disk Id (0004fb00001200008e5043cea31e4a1c.img)
id:0004fb00001300000b0202b6af4254b1  name:asm_disk_cluster_rac002_clu001dgclu003
id:0004fb0000130000f573415ba8af814d  name:Mapping for disk Id (0004fb0000120000073fd0cff75c5f4d.img)
id:0004fb0000130000217c1b6586d88d98  name:asm_disk_cluster_rac002_clu001dgclu002
id:0004fb00001300007c8f1b4fd9e845c4  name:asm_disk_cluster_rac002_clu001dgclu001
id:0004fb00001300009698cf153f616454  name:asm_disk_cluster_rac001_clu001dgfra002
id:0004fb0000130000c9caf8763df6bfe0  name:asm_disk_cluster_rac001_clu001dgfra001
id:0004fb00001300009771ff7e2a1bf965  name:asm_disk_cluster_rac001_clu001dgdata002
id:0004fb00001300003aed42abb7085053  name:asm_disk_cluster_rac001_clu001dgdata001
id:0004fb0000130000ac45b70bac2cedf7  name:asm_disk_cluster_rac001_clu001dgclu005
id:0004fb000013000007069008e4b91b9d  name:asm_disk_cluster_rac001_clu001dgclu004
id:0004fb0000130000a8182ada5a07d7cd  name:asm_disk_cluster_rac001_clu001dgclu003
id:0004fb00001300009edf25758590684b  name:asm_disk_cluster_rac001_clu001dgclu002
id:0004fb0000130000a93c8a73900cbf80  name:asm_disk_cluster_rac002_clu001dgfra001
id:0004fb0000130000c8c35da3ad0148c4  name:asm_disk_cluster_rac001_clu001dgclu001
id:0004fb0000130000d1a3ecffefcc0b5b  name:asm_disk_cluster_rac002_clu001dgfra002
id:0004fb0000130000ff84c64175d7e6c1  name:asm_disk_cluster_rac002_clu001dgdata
id:0004fb00001300009c08b1803928536d  name:Mapping for disk Id (dd3c390b29af49809caba202f234a443.img)
id:0004fb0000130000e85ace19b45c0ad6  name:Mapping for disk Id (0004fb00001200002aa671facc8a1307.img)
id:0004fb0000130000e595c3dc5788b87a  name:Mapping for disk Id (0004fb000012000087341e27f9faaa17.img)
id:0004fb0000130000c66fe2d0d66b7276  name:asm_disk_cluster_rac002_clu001dgdata
id:0004fb00001300009c85bca66c400366  name:Mapping for disk Id (46da481163424b739feeb08b4d22c1b4.img)
id:0004fb0000130000768a2af09207e659  name:asm_disk_cluster_rac002_clu001dgclu004
id:0004fb000013000092836d3ee569e6ac  name:asm_disk_cluster_rac002_clu001dgclu005

OVM> add StorageInitiator name=iqn.1988-12.com.oracle:1847e1b91b5b to AccessGroup name=cluster001
Command: add StorageInitiator name=iqn.1988-12.com.oracle:1847e1b91b5b to AccessGroup name=cluster001
Status: Success
Time: 2017-11-02 16:59:32,116 CET
JobId: 1509638311277

OVM> add StorageInitiator name=iqn.1988-12.com.oracle:a5c84f2c8798 to AccessGroup name=cluster001
Command: add StorageInitiator name=iqn.1988-12.com.oracle:a5c84f2c8798 to AccessGroup name=cluster001
Status: Success
Time: 2017-11-02 16:57:31,703 CET
JobId: 1509638191228

add PhysicalDisk name=clu001dgclu001 to AccessGroup name=cluster001
add PhysicalDisk name=clu001dgclu002 to AccessGroup name=cluster001
add PhysicalDisk name=clu001dgclu003 to AccessGroup name=cluster001
add PhysicalDisk name=clu001dgclu004 to AccessGroup name=cluster001
add PhysicalDisk name=clu001dgclu005 to AccessGroup name=cluster001
add PhysicalDisk name=clu001dgdata001 to AccessGroup name=cluster001
add PhysicalDisk name=clu001dgdata002 to AccessGroup name=cluster001
add PhysicalDisk name=clu001dgfra001 to AccessGroup name=cluster001
add PhysicalDisk name=clu001dgfra002 to AccessGroup name=cluster001

#Output of an Access addition:
OVM> add PhysicalDisk name=clu001dgclu001 to AccessGroup name=cluster001
Command: add PhysicalDisk name=clu001dgclu001 to AccessGroup name=cluster001
Status: Success
Time: 2017-11-02 17:10:13,636 CET
JobId: 1509639013463
OVM> refreshStorageLayer Server name=ovs001
Command: refreshStorageLayer Server name=ovs001
Status: Success
Time: 2017-11-02 16:42:26,230 CET
JobId: 1509637330270

OVM> refreshStorageLayer Server name=ovs002
Command: refreshStorageLayer Server name=ovs002
Status: Success
Time: 2017-11-02 16:42:51,296 CET
JobId: 1509637355423

 

Final state: 2 VMs hosted on 2 different servers, with the OS, network and storage requirements to run RAC 12.2.

This concludes this part and demonstrates how easy it is to automate those commands and deploy many different architectures.
The next part will describe how to deploy RAC 12.2 on top of this infrastructure with the Oracle DeployCluster tool in a few commands…

I hope this helps; please do not hesitate to contact us if you have any questions or require further information.

 

The article Automate OVM deployment for a production ready Oracle RAC 12.2 architecture – (part 01) appeared first on Blog dbi services.

Error handling behavior change according to PLSQL_OPTIMIZE_LEVEL

Tom Kyte - Tue, 2017-11-07 21:26
We faced a case in our application where an error message disappears depending on PLSQL_OPTIMIZE_LEVEL. I have isolated the problem in a simple script. Run this script and you will see that at the first execution of the procedure "test_error_proc#" the error i...
Categories: DBA Blogs

SQL Query based on performance

Tom Kyte - Tue, 2017-11-07 21:26
Hi Tom, There is a table, say t_tab, with columns a, b and c. Data in the table is huge (more than a million rows). You run the following three statements: 1. select * from t_tab 2. select a,b,c from t_tab 3. select b,c,a from t_tab Will there be a diffe...
Categories: DBA Blogs

Fuzzy Matching in SQL

Tom Kyte - Tue, 2017-11-07 21:26
Is there any SQL construct that does fuzzy matching? As an example, if I have the values Monroe, Monroe Twp, Monroe Township, "Monroe Twp,NJ", I would like to consider them as one value.
Categories: DBA Blogs

Subquery with Select statement works in 12C but not on 11g.

Tom Kyte - Tue, 2017-11-07 21:26
Hi, I am trying to run a select query which has subqueries. It runs fine in the 12c environment but it throws an error in 11g. Could you please help me with this? Thanks, Kumar
Categories: DBA Blogs

Index creation on empty column on Large Table

Tom Kyte - Tue, 2017-11-07 21:26
Quite often we face a situation where we have a large table with hundreds of millions of records (sometimes even billions of records), and we might need to add a column to that table and then add an index on that new column. We have absolute control over...
Categories: DBA Blogs

ORA-29284: file read error for a few lines

Tom Kyte - Tue, 2017-11-07 21:26
Hi Experts, Thanks for taking the time out to read my question. I am receiving a file from a third party as a flat file, with lines of different lengths. The first two characters of each line represent what data that line will hav...
Categories: DBA Blogs

using connect by without relationship using parent_id

Tom Kyte - Tue, 2017-11-07 21:26
Hi, I have information about fathers, mothers and children, but there is no relationship between the rows using a parent_id, as follows: drop table tbl_family; create table tbl_family ( father nvarchar2(50) , mother nvarchar2(50) , ...
Categories: DBA Blogs

database migration from AIX to Linux

Tom Kyte - Tue, 2017-11-07 21:26
Hello Tom, We are planning to migrate a database from AIX to Linux. Because of the different endianness we can't build a standby. Our production databases have 30-40 TB of data, and some tables alone have 1-5 TB of data. What is the best way...
Categories: DBA Blogs

Replication of multiple source databases to a single read-only database

Tom Kyte - Tue, 2017-11-07 21:26
Dears, hope you are fine. I have a distributed database with about 20 branches; each database has the same schema structure. We need a centralized report that reads from only four tables. Currently we take a dump file from each branch and impo...
Categories: DBA Blogs

SunMoon Food Company Implements NetSuite OneWorld to Support Rapid Global Expansion

Oracle Press Releases - Tue, 2017-11-07 20:00
Press Release
SunMoon Food Company Implements NetSuite OneWorld to Support Rapid Global Expansion
Fresh produce distributor increases productivity and efficiency, saves S$20,000 in just five months

Singapore—Nov 8, 2017

Oracle NetSuite, one of the world’s leading providers of cloud-based financials / ERP, HR, Professional Services Automation (PSA) and omnichannel commerce software suites, announced today that SunMoon Food Company Limited (SunMoon), a global distributor of fruit and food products, has deployed NetSuite OneWorld to support its global growth, enabling dramatically increased overall productivity and efficiency. In just five months, NetSuite OneWorld facilitated 900 transactions, saving SunMoon 150 hours and an estimated S$20,000.

Live since April 2017, SunMoon is leveraging OneWorld for financials, inventory and order management, financial consolidation across three subsidiaries in China, Indonesia, and the US, and multicurrency transactions in 11 different currencies – Australian, Canadian, Hong Kong, Singapore and US Dollar, Euro, Indonesian Rupiah, Malaysian Ringgit, Renminbi and Thai Baht. It also supports English, Chinese, and Bahasa Malay.

Established in 1983, SunMoon distributes a wide range of fresh and sustainable produce, from premium frozen durians to ready-to-eat sweet corn. The produce is directly sourced from over 200 carefully selected and certified suppliers according to the ‘SunMoon Quality Assurance’ standard, a critical checklist of freshness, quality, safety and traceability. It is then distributed to health-conscious consumers globally, across various ecommerce channels, major supermarkets and SunMoon’s own franchise outlets.

Prior to deploying NetSuite OneWorld, SunMoon primarily used emails to correspond with its farmers, suppliers and customers for stock taking, order management, invoicing and billing. This required significant manual coordination, making it extremely difficult to track orders and compare quotes, greatly impacting productivity and the company’s growth potential.

NetSuite OneWorld empowers SunMoon’s suppliers to enter expiry dates, packaging sizes and other details from any internet-connected device into the cloud-based system. Based on this information, SunMoon can easily create a quote for its customers, which they can accept with just one click. NetSuite OneWorld then automatically sends a PO to farmers and generates an invoice once the order has been fulfilled.

“SunMoon offers over 200 products through more than 11,000 points of sales to 169 customers in 20 countries, and these numbers are growing daily,” said Gary Loh, Deputy Chairman and CEO of SunMoon Food Company Limited. “With NetSuite OneWorld, we’ve been able to move our products seamlessly from farm to fork on a global scale much faster and more efficiently. Using NetSuite OneWorld’s integrated capabilities helps us transform SunMoon into an asset-light and customer-centric enterprise.”

NetSuite OneWorld also supports SunMoon’s aggressive expansion plans. “Thanks to NetSuite OneWorld, we can enter new markets more easily,” continued Loh. “Its multi-language and multi-currency features put us on the world map, empowering us to further expand our operations in Indonesia, the US and Southeast Asia. And best of all, we won’t even need an overseas IT department to support these countries. Our Singapore team can provide support remotely as NetSuite OneWorld is completely cloud based.”

Zakir Ahmed, General Manager, Oracle NetSuite Asia, commented: “With Asia Pacific accounting for nearly 60 percent of the global population, an efficient food supply chain and distribution network is even more critical here than anywhere else in the world. We are committed to giving forward-looking businesses the tools to innovate. SunMoon is a great example of a business that harnesses technology to digitise and transform this traditional market.”

NetSuite OneWorld supports 190 currencies, 20 languages, automated tax calculation and reporting in more than 100 countries, and customer transactions in more than 200 countries and territories. These global financial capabilities give SunMoon real-time organisation-wide visibility and new insights for its three subsidiaries via supplier, customer and other transaction data.

Contact Info
Suzanne Myerson
Oracle NetSuite
+61 414 101 583
suzanne.myerson@oracle.com
Mizu Chitra
Text100 Singapore
+65 6603 9000
SGNetSuite@text100.com.sg
About Oracle NetSuite

Oracle NetSuite pioneered the Cloud Computing revolution in 1998, establishing the world's first company dedicated to delivering business applications over the internet. Today, it provides a suite of cloud-based financials / Enterprise Resource Planning (ERP), HR and omnichannel commerce software that runs the business of companies in more than 100 countries. For more information, please visit http://www.netsuite.com.sg.

Follow NetSuite's Cloud blog, Facebook page and @NetSuite Twitter handle for real-time updates.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

About SunMoon Food Company Limited (www.sunmoonfood.com)

SunMoon Food Company Limited (“SunMoon”) is a global distributor and marketer of nutritious fresh fruits, vegetables, and products, delivered to the health-conscious consumer in the most convenient way.

Started in 1983, SunMoon has grown its product offering to over 200 product types, including fresh fruits, vegetables, freeze-dried fruit snacks, nuts, fruit cups, fruit sticks, juices, sorbets, frozen fruits and assorted water packaged under its own brand.

With an extensive sales network of over 11,000 points of sales globally, SunMoon's offering of quality, premium products are distributed via supermarkets, convenience stores, online and wholesale channels, airlines, food services as well as SunMoon's franchise outlets in Singapore.

Since 2015, the company has shifted towards an asset-light consumer-centric and brand-focused business model by tapping on its strong brand equity and the 'Network x Geography x Product' operational model. Instead of owning farms, SunMoon works with farmers to ensure they meet our quality standards.

SunMoon's products come with the SunMoon Quality Assurance, backed by internationally recognised accreditations such as HACCP; Good Manufacturing Practice (GMP); AIB (Excellent), ISO 22000, Halal and Kosher Certification.

SunMoon was listed in 1997 on the Mainboard of the Singapore Exchange. 

Talk to a Press Contact

Suzanne Myerson

  • +61 414 101 583

Mizu Chitra

  • +65 6603 9000

Oracle SOA Suite 12c: rcu fails on Oracle Linux

Dietrich Schroff - Tue, 2017-11-07 15:39
The next step after setting up a database is running the rcu script to create the SOA Suite schemas inside the database. But this step fails with an ugly exception:

[oracle@localhost bin]$ pwd
/mnt/Middleware/Oracle_Home/oracle_common/bin
[oracle@localhost bin]$ ./rcu

    RCU-Logdatei: /tmp/RCU2017-10-07_18-13_966788282/logs/rcu.log

Exception in thread "main" java.lang.ExceptionInInitializerError
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71)
    at sun.reflect.GeneratedMethodAccessor63.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:275)
    at javax.swing.UIDefaults.getUI(UIDefaults.java:769)
    at javax.swing.UIManager.getUI(UIManager.java:1016)
    at javax.swing.JComboBox.updateUI(JComboBox.java:266)
    at javax.swing.JComboBox.init(JComboBox.java:231)
    at javax.swing.JComboBox.(JComboBox.java:183)
    at oracle.help.DefaultNavigatorPanel$MinimumSizedComboBox.(DefaultNavigatorPanel.java:791)
    at oracle.help.DefaultNavigatorPanel.(DefaultNavigatorPanel.java:106)
    at oracle.help.Help._initHelpSystem(Help.java:1045)
    at oracle.help.Help.(Help.java:353)
    at oracle.help.Help.(Help.java:307)
    at oracle.help.Help.(Help.java:271)
    at oracle.help.Help.(Help.java:146)
    at oracle.sysman.assistants.rcu.ui.InteractiveRCUModel.initializeHelp(InteractiveRCUModel.java:261)
    at oracle.sysman.assistants.rcu.ui.InteractiveRCUModel.(InteractiveRCUModel.java:151)
    at oracle.sysman.assistants.rcu.Rcu.execute(Rcu.java:360)
    at oracle.sysman.assistants.rcu.Rcu.main(Rcu.java:433)
Caused by: java.lang.ArrayIndexOutOfBoundsException: 0
    at sun.font.CompositeStrike.getStrikeForSlot(CompositeStrike.java:75)
    at sun.font.CompositeStrike.getFontMetrics(CompositeStrike.java:93)
    at sun.font.FontDesignMetrics.initMatrixAndMetrics(FontDesignMetrics.java:359)
    at sun.font.FontDesignMetrics.(FontDesignMetrics.java:350)
    at sun.font.FontDesignMetrics.getMetrics(FontDesignMetrics.java:302)
    at sun.swing.SwingUtilities2.getFontMetrics(SwingUtilities2.java:1113)
    at javax.swing.JComponent.getFontMetrics(JComponent.java:1626)
    at javax.swing.text.PlainView.calculateLongestLine(PlainView.java:639)
    at javax.swing.text.PlainView.updateMetrics(PlainView.java:209)
    at javax.swing.text.PlainView.updateDamage(PlainView.java:527)
    at javax.swing.text.PlainView.insertUpdate(PlainView.java:451)
    at javax.swing.text.FieldView.insertUpdate(FieldView.java:293)
    at javax.swing.plaf.basic.BasicTextUI$RootView.insertUpdate(BasicTextUI.java:1610)
    at javax.swing.plaf.basic.BasicTextUI$UpdateHandler.insertUpdate(BasicTextUI.java:1869)
    at javax.swing.text.AbstractDocument.fireInsertUpdate(AbstractDocument.java:201)
    at javax.swing.text.AbstractDocument.handleInsertString(AbstractDocument.java:748)
    at javax.swing.text.AbstractDocument.insertString(AbstractDocument.java:707)
    at javax.swing.text.PlainDocument.insertString(PlainDocument.java:130)
    at javax.swing.text.AbstractDocument.replace(AbstractDocument.java:669)
    at javax.swing.text.JTextComponent.setText(JTextComponent.java:1669)
    at javax.swing.JTextField.(JTextField.java:243)
    at javax.swing.JTextField.(JTextField.java:183)
    at com.jgoodies.looks.plastic.PlasticComboBoxUI.(PlasticComboBoxUI.java:88)
    ... 25 more
Hmmm. Not that good.
(I was running this from a shared VirtualBox folder.)
The next step was to install the Middleware home on my Oracle Linux. But this fails too:
[oracle@localhost mnt]$ java -jar fmw_12.2.1.0.0_soa_quickstart.jar
Launcher-Logdatei ist /tmp/OraInstall2017-10-07_06-29-37PM/launcher2017-10-07_06-29-37PM.log.
Dateien werden extrahiert.......................................................
Oracle Universal Installer wird gestartet

Es wird geprüft, ob CPU-Geschwindigkeit über 300 MHz liegt   Tatsächlich 2904.000 MHz    Erfolgreich
Monitor wird geprüft: muss so konfiguriert sein, dass mindestens 256 Farben angezeigt werden   Tatsächlich 16777216    Erfolgreich
Swap-Bereich wird geprüft: muss größer sein als 512 MB   Tatsächlich 3967 MB    Erfolgreich
Es wird geprüft, ob diese Plattform eine 64-Bit-JVM erfordert   Tatsächlich 64    Erfolgreich (64-Bit nicht erforderlich)
Temporärer Speicherplatz wird geprüft: muss größer sein als 300 MB   Tatsächlich 16325 MB    Erfolgreich


Vorbereitung für das Starten von Oracle Universal Installer aus /tmp/OraInstall2017-10-07_06-29-37PM
Log: /tmp/OraInstall2017-10-07_06-29-37PM/install2017-10-07_06-29-37PM.log
java.lang.ArrayIndexOutOfBoundsException: 0
    at sun.font.CompositeStrike.getStrikeForSlot(CompositeStrike.java:75)
    at sun.font.CompositeStrike.getFontMetrics(CompositeStrike.java:93)
    at sun.font.FontDesignMetrics.initMatrixAndMetrics(FontDesignMetrics.java:359)
    at sun.font.FontDesignMetrics.(FontDesignMetrics.java:350)
    at sun.font.FontDesignMetrics.getMetrics(FontDesignMetrics.java:302)
    at sun.swing.SwingUtilities2.getFontMetrics(SwingUtilities2.java:1113)
    at javax.swing.JComponent.getFontMetrics(JComponent.java:1626)
    at javax.swing.text.GlyphPainter1.sync(GlyphPainter1.java:226)
    at javax.swing.text.GlyphPainter1.getSpan(GlyphPainter1.java:59)
    at javax.swing.text.GlyphView.getPreferredSpan(GlyphView.java:592)
    at javax.swing.text.FlowView$LogicalView.getPreferredSpan(FlowView.java:732)
    at javax.swing.text.FlowView.calculateMinorAxisRequirements(FlowView.java:233)
    at javax.swing.text.ParagraphView.calculateMinorAxisRequirements(ParagraphView.java:717)
    at javax.swing.text.html.ParagraphView.calculateMinorAxisRequirements(ParagraphView.java:157)
    at javax.swing.text.BoxView.checkRequests(BoxView.java:935)
    at javax.swing.text.BoxView.getMinimumSpan(BoxView.java:568)
    at javax.swing.text.html.ParagraphView.getMinimumSpan(ParagraphView.java:270)
    at javax.swing.text.BoxView.calculateMinorAxisRequirements(BoxView.java:903)
    at javax.swing.text.html.BlockView.calculateMinorAxisRequirements(BlockView.java:146)
    at javax.swing.text.BoxView.checkRequests(BoxView.java:935)
    at javax.swing.text.BoxView.getMinimumSpan(BoxView.java:568)
    at javax.swing.text.html.BlockView.getMinimumSpan(BlockView.java:378)
    at javax.swing.text.BoxView.calculateMinorAxisRequirements(BoxView.java:903)
    at javax.swing.text.html.BlockView.calculateMinorAxisRequirements(BlockView.java:146)
    at javax.swing.text.BoxView.checkRequests(BoxView.java:935)
    at javax.swing.text.BoxView.getPreferredSpan(BoxView.java:545)
    at javax.swing.text.html.BlockView.getPreferredSpan(BlockView.java:362)
    at javax.swing.plaf.basic.BasicHTML$Renderer.(BasicHTML.java:383)
    at javax.swing.plaf.basic.BasicHTML.createHTMLView(BasicHTML.java:67)
    at javax.swing.plaf.basic.BasicHTML.updateRenderer(BasicHTML.java:207)
    at javax.swing.plaf.basic.BasicLabelUI.propertyChange(BasicLabelUI.java:417)
    at oracle.bali.ewt.olaf2.OracleLabelUI.propertyChange(OracleLabelUI.java:53)
    at java.beans.PropertyChangeSupport.fire(PropertyChangeSupport.java:335)
    at java.beans.PropertyChangeSupport.firePropertyChange(PropertyChangeSupport.java:327)
    at java.beans.PropertyChangeSupport.firePropertyChange(PropertyChangeSupport.java:263)
    at java.awt.Component.firePropertyChange(Component.java:8428)
    at javax.swing.JLabel.setText(JLabel.java:330)
    at oracle.as.install.engine.modules.presentation.ui.common.label.ModifiedJLabel.setText(ModifiedJLabel.java:183)
    at oracle.as.install.engine.modules.presentation.ui.screens.WelcomeWindow.jbInit(WelcomeWindow.java:303)
    at oracle.as.install.engine.modules.presentation.ui.screens.WelcomeWindow.(WelcomeWindow.java:112)
    at oracle.as.install.engine.modules.presentation.action.LaunchWelcomeWindowAction.execute(LaunchWelcomeWindowAction.java:86)
    at oracle.as.install.engine.modules.presentation.util.ActionQueue.run(ActionQueue.java:70)
    at oracle.as.install.engine.modules.presentation.PresentationModule.prepareAndRunActions(PresentationModule.java:281)
    at oracle.as.install.engine.modules.presentation.PresentationModule.launchModule(PresentationModule.java:235)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at oracle.as.install.engine.InstallEngine.launchModule(InstallEngine.java:580)
    at oracle.as.install.engine.InstallEngine.processAndLaunchModules(InstallEngine.java:522)
    at oracle.as.install.engine.InstallEngine.startOperation(InstallEngine.java:471)
    at oracle.sysman.oio.oioc.OiocOneClickInstaller.main(OiocOneClickInstaller.java:717)
java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at oracle.as.install.engine.InstallEngine.launchModule(InstallEngine.java:580)
    at oracle.as.install.engine.InstallEngine.processAndLaunchModules(InstallEngine.java:522)
    at oracle.as.install.engine.InstallEngine.startOperation(InstallEngine.java:471)
    at oracle.sysman.oio.oioc.OiocOneClickInstaller.main(OiocOneClickInstaller.java:717)
Caused by: java.lang.ExceptionInInitializerError
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71)
    at sun.reflect.GeneratedMethodAccessor28.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:275)
    at javax.swing.UIDefaults.getUI(UIDefaults.java:769)
    at javax.swing.UIManager.getUI(UIManager.java:1016)
    at javax.swing.JComboBox.updateUI(JComboBox.java:266)
    at javax.swing.JComboBox.init(JComboBox.java:231)
    at javax.swing.JComboBox.(JComboBox.java:183)
    at oracle.help.DefaultNavigatorPanel$MinimumSizedComboBox.(DefaultNavigatorPanel.java:791)
    at oracle.help.DefaultNavigatorPanel.(DefaultNavigatorPanel.java:106)
    at oracle.help.Help._initHelpSystem(Help.java:1045)
    at oracle.help.Help.(Help.java:243)
    at oracle.help.Help.(Help.java:200)
    at oracle.help.Help.(Help.java:125)
    at oracle.as.install.engine.modules.presentation.ui.common.help.WizardHelpManager.configure(WizardHelpManager.java:76)
    at oracle.as.install.engine.modules.presentation.action.WizardHelpConfigAction.execute(WizardHelpConfigAction.java:228)
    at oracle.as.install.engine.modules.presentation.util.ActionQueue.run(ActionQueue.java:70)
    at oracle.as.install.engine.modules.presentation.PresentationModule.prepareAndRunActions(PresentationModule.java:281)
    at oracle.as.install.engine.modules.presentation.PresentationModule.launchModule(PresentationModule.java:235)
    ... 8 more
Caused by: java.lang.ArrayIndexOutOfBoundsException: 0
    at sun.font.CompositeStrike.getStrikeForSlot(CompositeStrike.java:75)
    at sun.font.CompositeStrike.getFontMetrics(CompositeStrike.java:93)
    at sun.font.FontDesignMetrics.initMatrixAndMetrics(FontDesignMetrics.java:359)
    at sun.font.FontDesignMetrics.(FontDesignMetrics.java:350)
    at sun.font.FontDesignMetrics.getMetrics(FontDesignMetrics.java:302)
    at sun.swing.SwingUtilities2.getFontMetrics(SwingUtilities2.java:1113)
    at javax.swing.JComponent.getFontMetrics(JComponent.java:1626)
    at javax.swing.text.PlainView.calculateLongestLine(PlainView.java:639)
    at javax.swing.text.PlainView.updateMetrics(PlainView.java:209)
    at javax.swing.text.PlainView.updateDamage(PlainView.java:527)
    at javax.swing.text.PlainView.insertUpdate(PlainView.java:451)
    at javax.swing.text.FieldView.insertUpdate(FieldView.java:293)
    at javax.swing.plaf.basic.BasicTextUI$RootView.insertUpdate(BasicTextUI.java:1610)
    at javax.swing.plaf.basic.BasicTextUI$UpdateHandler.insertUpdate(BasicTextUI.java:1869)
    at javax.swing.text.AbstractDocument.fireInsertUpdate(AbstractDocument.java:201)
    at javax.swing.text.AbstractDocument.handleInsertString(AbstractDocument.java:748)
    at javax.swing.text.AbstractDocument.insertString(AbstractDocument.java:707)
    at javax.swing.text.PlainDocument.insertString(PlainDocument.java:130)
    at javax.swing.text.AbstractDocument.replace(AbstractDocument.java:669)
    at javax.swing.text.JTextComponent.setText(JTextComponent.java:1669)
    at javax.swing.JTextField.(JTextField.java:243)
    at javax.swing.JTextField.(JTextField.java:183)
    at com.jgoodies.looks.plastic.PlasticComboBoxUI.(PlasticComboBoxUI.java:88)
    ... 33 more
[ERROR]: Installer has encountered an internal Error. Contact Oracle support with details
[EXCEPTION]:java.lang.reflect.InvocationTargetException
So there is a problem with running the SOA Suite installer on Oracle Linux...
The installation worked fine on my Ubuntu (see here) and the rcu starts without any problem on my Ubuntu:
schroff@zerberus:/home/data/opt/oracle/Middleware/Oracle_Home/oracle_common/bin$ ./rcu

    RCU-Logdatei: /tmp/RCU2017-10-14_22-36_851447466/logs/rcu.log


October 2017 Update to E-Business Suite Technology Codelevel Checker (ETCC)

Steven Chan - Tue, 2017-11-07 11:22

The E-Business Suite Technology Codelevel Checker (ETCC) tool helps you identify application or database tier overlay patches that need to be applied to your Oracle E-Business Suite Release 12.2 system. ETCC maps missing overlay patches to the default corresponding Database Patch Set Update (PSU) patches, and displays them in a patch recommendation summary.

What’s New

ETCC has been updated to include bug fixes and patching combinations for the recommended versions of the following updates:

  • Oracle Database Proactive BP 12.1.0.2.171017
  • Oracle Database PSU 12.1.0.2.171017
  • Oracle JavaVM Component Database PSU 12.1.0.2.171017
  • Oracle Database Patch for Exadata BP 11.2.0.4.171017
  • Oracle Database PSU 11.2.0.4.171017
  • Oracle JavaVM Component Database PSU 11.2.0.4.171017
  • Microsoft Windows Database BP 12.1.0.2.170228
  • Oracle JavaVM Component 12.1.0.2.170228 on Windows
  • Microsoft Windows Database BP 11.2.0.4.170418
  • Oracle JavaVM Component 11.2.0.4.170418 on Windows

Obtaining ETCC

We recommend always using the latest version of ETCC, as new bugfixes will not be checked by older versions of the utility. The latest version of the ETCC tool can be downloaded via Patch 17537119 from My Oracle Support.
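Once downloaded and unzipped, ETCC is run separately on the database tier and on the application tier. A hedged sketch of a typical invocation (the patch file name and directories are placeholders):

# Database tier: check the database ORACLE_HOME for missing bugfixes
unzip -o p17537119_R12_GENERIC.zip -d /u01/etcc
cd /u01/etcc
./checkDBpatch.sh

# Application (middle) tier: run from the run edition application tier file system
./checkMTpatch.sh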

References

Related Articles

Categories: APPS Blogs

SQL Server Tips: Deactivate the Customer Experience Improvement Program (CEIP)

Yann Neuhaus - Tue, 2017-11-07 09:17

Before SQL Server 2016, during the installation you had the option to check the box “Send Windows and SQL Server Error Reports…” if you wanted to be part of the Customer Experience Improvement Program (CEIP).
In SQL Server 2016, all of the CEIP services are automatically turned on after the installation.

Why?

SQL Server and SQL Azure share the same code now. On Azure, this service has existed for a long time. It collects a large amount of data to automate various tasks and keep the system functional, including support for the following:

  • Incident Management (CRIs, LSIs)
  • Alert management (proactive approach)
  • Automated management via bots (based on alerts)
  • Machine learning / data science
  • Investigating potential new features that can benefit a maximum of clients

As you can see, the idea of integrating the CEIP service with SQL Server 2016 is to extend this ability to collect “useful” data for Microsoft, in order to maximize the impact on future developments.

My Thinking

In this article, I do not want to start a discussion about whether to leave this service active or not.
With the guarantees given by Microsoft on the information collected, it is not a question of security either.
The SQL Server team has published an explicit policy that spells out what data is collected and when: https://www.microsoft.com/EN-US/privacystatement/SQLServer/Default.aspx
As a lot of servers have no internet access, this service is often useless (the data cannot be sent anyway).
In previous versions, I did not install the CEIP on production environments, so, following the same logic, I deactivate this service.

How to Deactivate the CEIP

To disable this service, we need 2 steps. I use PowerShell commands for both.
The first step is to deactivate all CEIP services.

Deactivate all CEIP services

CEIP is present for 3 SQL server services:

  • For SQL Server Engine, you have a SQL Server CEIP service
  • For SQL Server Analysis Service (SSAS), you have a SQL Analysis Services CEIP
  • For SQL Server Integration Service (SSIS), you have a SQL Server Integration Services CEIP service 13.0

[Figure: CEIP01]
As you can see in this picture, we have one CEIP service per instance and per service: one each for the Engine and SSAS, and just one for SSIS (a shared component).
If you have a look at each service, the naming patterns are the same:

  • For the SQL Server CEIP service, you have SQLTELEMETRY$<InstanceName>
  • For the SQL Analysis Services CEIP service, you have SSASTELEMETRY$<InstanceName>
  • For the SQL Server Integration Services CEIP service 13.0, you have just SSISTELEMETRY130

[Figure: CEIP02]
I run PowerShell as Administrator and use the following commands to get the status of these services:

Get-WMiObject win32_service |? name -Like "SQLTELEMETRY*" | Format-Table name,startname,startmode,state
Get-WMiObject win32_service |? name -Like "SSASTELEMETRY*" | Format-Table name,startname,startmode,state
Get-WMiObject win32_service |? name -Like "SSISTELEMETRY*" | Format-Table name,startname,startmode,state

[Figure: CEIP03]
We can also be more generic and use this command:

Get-WMiObject win32_service |? name -Like "*TELEMETRY*" | Format-Table name,startname,startmode,state

[Figure: CEIP04]
To disable these services, I proceed in two steps: first stop the services, then disable them:

  • Stop services
    Get-WMiObject win32_service |? name -Like "*TELEMETRY*" | ? state -eq "running" | Stop-Service
  • Disable services
    Get-WMiObject win32_service |? name -Like "*TELEMETRY*" | Set-Service -StartMode Disabled

Here is the “global” script:

##################################################
# Disable CEIP services  #
##################################################
Get-WMiObject win32_service |? name -Like "*TELEMETRY*" | Format-Table name,startname,startmode,state
# Stop all CEIP services
Get-WMiObject win32_service |? name -Like "*TELEMETRY*" | ? state -eq "running" | Stop-Service
Get-WMiObject win32_service |? name -Like "*TELEMETRY*" | Format-Table name,startname,startmode,state
# Disable all CEIP services
Get-WMiObject win32_service |? name -Like "*TELEMETRY*" | Set-Service -StartMode Disabled
Get-WMiObject win32_service |? name -Like "*TELEMETRY*" | Format-Table name,startname,startmode,state
##################################################

[Figure: CEIP05]
All CEIP services are now stopped and disabled. Good job, Stéphane 8-), but we are not finished yet: there is a second step to do…
The second step is to set all the CEIP registry keys to 0.

Set all CEIP registry keys to 0

This step is more complex because we have a lot of registry keys. Two parameters have to be set to 0:

  • CustomerFeedback
  • EnableErrorReporting

The first registry key is HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft SQL Server\130\
[Figure: CEIP06]
The second registry key is HKEY_LOCAL_MACHINE\Software\Wow6432Node\Microsoft\Microsoft SQL Server\130\
[Figure: CEIP07]
The other registry keys are per instance and per service (Engine, SSAS and SSRS):
HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft SQL Server\MSSQL**.<instance>\CPE\
[Figure: CEIP08]
HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft SQL Server\MSAS**.<instance>\CPE\
[Figure: CEIP09]
HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft SQL Server\MSRS**.<instance>\CPE\
[Figure: CEIP10]
To set all these keys to 0, I simply use PowerShell commands:

##################################################
#  Deactivate CEIP registry keys #
##################################################
# Set all CustomerFeedback & EnableErrorReporting in the key directory HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server to 0
# Set HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft SQL Server\***\CustomerFeedback=0
# Set HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft SQL Server\***\EnableErrorReporting=0
# *** --> Version of SQL Server (100,110,120,130,140,...)
# For the Engine
# Set HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft SQL Server\MSSQL**.<instance>\CPE\CustomerFeedback=0
# Set HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft SQL Server\MSSQL**.<instance>\CPE\EnableErrorReporting=0
# For SQL Server Analysis Server (SSAS)
# Set HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft SQL Server\MSAS**.<instance>\CPE\CustomerFeedback=0
# Set HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft SQL Server\MSAS**.<instance>\CPE\EnableErrorReporting=0
# For Server Reporting Server (SSRS)
# Set HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft SQL Server\MSRS**.<instance>\CPE\CustomerFeedback=0
# Set HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft SQL Server\MSRS**.<instance>\CPE\EnableErrorReporting=0
# ** --> Version of SQL Server (10,11,12,13,14,...)
##################################################
$Key = 'HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server'
$FoundKeys = Get-ChildItem $Key -Recurse | Where-Object -Property Property -eq 'EnableErrorReporting'
foreach ($SqlFoundKey in $FoundKeys)
{
    $SqlFoundKey | Set-ItemProperty -Name EnableErrorReporting -Value 0
    $SqlFoundKey | Set-ItemProperty -Name CustomerFeedback -Value 0
}
##################################################
# Set HKEY_LOCAL_MACHINE\Software\Wow6432Node\Microsoft\Microsoft SQL Server\***\CustomerFeedback=0
# Set HKEY_LOCAL_MACHINE\Software\Wow6432Node\Microsoft\Microsoft SQL Server\***\EnableErrorReporting=0
# *** --> Version of SQL Server(100,110,120,130,140,...)
##################################################
$WowKey = "HKLM:\SOFTWARE\Wow6432Node\Microsoft\Microsoft SQL Server"
$FoundWowKeys = Get-ChildItem $WowKey | Where-Object -Property Property -eq 'EnableErrorReporting'
foreach ($SqlFoundWowKey in $FoundWowKeys)
{
    $SqlFoundWowKey | Set-ItemProperty -Name EnableErrorReporting -Value 0
    $SqlFoundWowKey | Set-ItemProperty -Name CustomerFeedback -Value 0
}

As you can see, I use only the EnableErrorReporting property in the Where-Object clause to find the impacted keys. After running this script, all CEIP registry keys are set to 0…
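If you want to verify the result afterwards, here is a minimal, read-only sketch (it assumes the same registry locations as above and simply lists both values for every key found):

# Check the current CustomerFeedback & EnableErrorReporting values in both hives (0 = deactivated)
$Keys = 'HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server',
        'HKLM:\SOFTWARE\Wow6432Node\Microsoft\Microsoft SQL Server'
Get-ChildItem $Keys -Recurse -ErrorAction SilentlyContinue |
Where-Object -Property Property -eq 'EnableErrorReporting' |
ForEach-Object { Get-ItemProperty -Path $_.PSPath -Name CustomerFeedback, EnableErrorReporting -ErrorAction SilentlyContinue } |
Format-Table PSPath, CustomerFeedback, EnableErrorReporting -AutoSize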
Et voila, CEIP is totally deactivated!

To finish, I would like to thank all my SQL Server colleagues for their help in getting a good overview of this tricky subject. It also led to a good internal discussion in our SQL Server Expert Team to define what to do for each customer!   :-)

 

The post SQL Server Tips: Deactivate the Customer Experience Improvement Program (CEIP) appeared first on Blog dbi services.

Why We Chose Mindbreeze for Enterprise Search: Fishbowl’s Competitive Analysis Across Search Platforms

Comparing Mindbreeze to Google Cloud Search, Coveo, Lucidworks, Yippy, Elasticsearch, and Solr

Last month we discussed replacing the Google Search Appliance (GSA) and the Top 5 Reasons We Chose Mindbreeze. In this follow-up, we’ll explore the other vendors who made our shortlist and how they all stack up. In case you missed the last post, here’s a recap of the key requirements against which we were evaluating each solution:

  • Options for searching on-premise content
  • Connectors and connector frameworks for indexing non-web data sources
  • Support for public and secure use cases
  • Tools and APIs for search interface integration
  • Minimal development efforts and ongoing administration required
Mindbreeze vs. Google Cloud Search

As a Google Premier Partner and GSA implementer, we naturally looked to Google for GSA replacement options. At the time of our evaluation, Google Cloud Search did not have any features available to address indexing on-premise content or serving that content through websites or web applications other than their own cloud search interface. In addition, the status of their security integration options and administration experience remained widely unknown. While it was always clear that Google’s new enterprise search index would be cloud-based, the options for pushing enterprise content from on-premise repositories into that index remain unclear. The initial product direction for Google Cloud Search (previously referred to as Springboard) focused on indexing Google’s G Suite data sources such as Gmail, Google Calendar, and Google Drive. Google has since changed their directional statements to reemphasize their intention to implement indexing mechanisms for on-premise content, but even at the time of this writing, that technology is yet to be released.

Our decision to pursue solutions other than Google, and ultimately partner with Mindbreeze, largely came down to the fact that we couldn’t confidently assure our customers that Google would have a replacement ready (and able to meet the aforementioned requirements) in time for the GSA’s end of life. While I continue to be impressed with Google’s cloud innovations and hope those eventually materialize into enterprise search options, Google Cloud Search remains in its infancy.

Mindbreeze vs. Coveo

As a leader in the enterprise search and knowledge management space, Coveo has ranked well for the past several years among the analyst reports for this market. They have a mature product which made our short list of possible solutions. Two primary concerns surrounded Coveo when compared to Mindbreeze and other vendors. First, their product direction is heavily cloud-focused, available only on Amazon Web Services, with a decreasing investment in on-premise search. Our customer base has a strong need to index on-premise content, along with a reasonable number of customers who prefer the search engine itself be available on premise for governance reasons.

The other concern surrounding Coveo was price. By their own admission, it is one of the most expensive solutions on the market. Mindbreeze was able to meet our requirements as well as or better than Coveo, while providing a stronger commitment to on-premise indexing at a more attractive price point.

Mindbreeze vs. Lucidworks

Lucidworks offers enterprise support for the open source search platform Apache Solr. Their flagship product, Lucidworks Fusion, builds on Solr to add enterprise search features, including connectors and administration interfaces. Our primary reasons for preferring Mindbreeze over Lucidworks concern the ease and speed of both deployment and ongoing administration. While the Fusion platform goes a long way in creating a productized layer on top of Solr, the solution still requires comparatively more work to size, provision, configure, and maintain than Mindbreeze.

Another concern during evaluation was the less-flexible security model available with Lucidworks when compared to Mindbreeze. Mindbreeze supports ACL inheritance from container objects which means if a new user is granted access to a folder containing 50,000 items, only one item (the folder container) must be reindexed to apply the new permissions. Lucidworks applies permissions to each document, so all 50,000 documents would need to be reindexed. While Lucidworks was able to meet our indexing requirements, we felt Mindbreeze offered a shorter time to value, easier ongoing administration, and more flexible security options.

Mindbreeze vs. Yippy

The Yippy Search Appliance attempts to offer close feature parity to the GSA and is available as a cloud solution or an on-premise appliance. Our biggest concern with Yippy, when compared to Mindbreeze, was its immaturity as an enterprise search product. Born out of the Yippy metasearch engine, the Yippy Search Appliance was introduced in 2016 specifically in response to the GSA’s end of life.

The solution is notably absent from consideration by both Forrester and Gartner in their respective 2017 market reports which base inclusion criteria on factors such as referenceable enterprise customer base and proven market presence. The solution also lacks interfaces for customers and partners to create custom connectors to proprietary data sources, an important requirement for many of our customers. As a search appliance, we felt Mindbreeze offered a lower risk solution with a longer history, large reference customer base, and mature feature set.

What about open source options?

Open source options were considered during our evaluation but quickly eliminated due to the vastly greater amount of development time and steeper customer learning curve associated with their implementation. For these reasons, we felt open source search solutions were not a good fit for our customers. Due to the high volume of questions we get regarding these options, I felt it worthwhile to include a few comments on the most popular open source search tools.

Elasticsearch

Elasticsearch is a popular open source search and analytics project created by Elastic.co. Elastic itself doesn’t claim to be an enterprise search solution, but they do offer enterprise analytics solutions, and the Elasticsearch technology is often embedded into enterprise applications to provide search functionality. It’s easy to see the confusion this can create. Gartner did not include Elastic in their 2017 Magic Quadrant for Insight Engines. Elastic was included in the Forrester Wave on Cognitive Search and Knowledge Discovery as a nonparticipating vendor where Forrester stated, “Elastic says that it is not in the enterprise search market, but many enterprise customers ask Forrester about Elasticsearch, so we have included Elastic…” As a search tool, we found Elastic was better suited to log analytics than enterprise search as it lacks many enterprise search features including security, connectors, and pre-built search apps.

Solr

Apache Solr is a widely used open source search project. Many contributions to the project are made by Lucidworks (mentioned above) whose Fusion platform extends this core technology. Standalone Solr is a framework for creating a custom search engine implementation. While powerful and often used to build highly specialized search tools, it is missing out-of-the-box enterprise features including connectors, administration interfaces, and mechanisms to support secure search.

Lucene

Apache Lucene is a popular open source search engine framework. It’s a low-level library which implements indexing and search functionality and must be integrated into another application for use. Lucene provides the base search engine behind both Solr and Elasticsearch.

Finding Success with Mindbreeze

After undergoing our evaluation last winter and joining the Mindbreeze partner network, we continue to find Mindbreeze offers an excellent combination of built-in features with tools for extending capabilities when necessary. In the past year we’ve released our Oracle WebCenter Content Connector for Mindbreeze, had ten employees complete the Mindbreeze Expert Certification and helped a long-time customer migrate from GSA to Mindbreeze. If you have any questions about our experience with Mindbreeze or would like to know more, please contact us or leave a comment below.

Time running out on your GSA?

Our expert team knows both GSA and Mindbreeze. We’ll help you understand your options and design a migration plan to fit your needs.

Contact Us

The post Why We Chose Mindbreeze for Enterprise Search: Fishbowl’s Competitive Analysis Across Search Platforms appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

Oracle EBS Suite blank Login page - Post Exadata migration

Syed Jaffar - Tue, 2017-11-07 08:35
As part of EBS database migration on Exadata, we recently deployed a brand new Exadata X5-2L (eighth rack), and migrated an Oracle EBS Database from the typical storage/server technologies.

Below are the environment details:

EBS Suite 12.1.3 (running on two application servers with a hardware load balancer)
Database: 11.2.0.4, RAC database with 2 instances

After the database migration, the buildxml and autoconfig procedures went well on both the application and database tiers. However, when the EBS login page was launched, it came up as just a blank page, and the apps password could not be changed through the typical procedure. We wondered what went wrong, as none of the procedures gave any significant failure indications; everything went fine and we could see the successful completion messages.

After a quick initial investigation, we found that there was an issue with the GUEST user, and also that the profile was not loaded when autoconfig was run on the application server. In the autoconfig log file, we could see the process failed to update the password (ORACLE). We then tried all the workarounds recommended on Oracle Support and other websites. Unfortunately, none of them helped us.

After spending almost a whole day investigating and analyzing the root cause, we looked at the DB components and their status through dba_registry. We found the JServer JAVA Virtual Machine component INVALID. I then realized the issue happened during the Exadata software deployment: there was an error during the DB software installation while applying a patch, due to a conflict between patches, because of which catbundle was never executed.

Without wasting a single second, we ran the catbundle script, followed by utlrp.sql.
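For reference, here is a minimal sketch of the checks and commands involved, run as SYSDBA in SQL*Plus (the catbundle arguments below assume a PSU bundle on 11.2; adapt them to the bundle that was actually applied):

-- Check component status before and after the fix
SELECT comp_id, comp_name, status FROM dba_registry ORDER BY comp_id;
-- Re-run the bundle patch post-install steps that were skipped
@?/rdbms/admin/catbundle.sql psu apply
-- Recompile invalid objects
@?/rdbms/admin/utlrp.sql
-- Confirm the JServer JAVA Virtual Machine component is VALID again
SELECT comp_name, status FROM dba_registry WHERE comp_name LIKE '%JAVA%';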

Guess what, all the issues disappeared. We ran autoconfig on the application servers, after which we could change the apps user password and we could see the login page too.

It was quite a nice experience.


Chartis Names Oracle RiskTech100 Leader

Oracle Press Releases - Tue, 2017-11-07 08:00
Press Release
Chartis Names Oracle RiskTech100 Leader Oracle wins four best-in-category awards at annual RiskTech event

Redwood Shores Calif—Nov 7, 2017

Oracle Financial Services Analytical Applications (OFSAA) has been named a leader on this year’s Chartis: RiskTech100 list. Adding to this success, Oracle brought home four category awards including: Risk Data Aggregation & Reporting (3rd year in a row), Balance Sheet Risk Management, Banking (Industry Category) and Core Technology (Chartis Category). This marks the second consecutive year that Oracle Financial Services has ranked at the top of the Chartis RiskTech100.
 
“For the second year in a row we are delighted to see our OFSAA suite recognized as an industry leader on the RiskTech100 list,” said Vice President of the Oracle Financial Services Global Business Unit, Ambreesh Khanna. “Our integrated architecture with a broad array of Risk, Finance, Compliance and Front Office applications continue to help banks manage their Risk, deploy Capital more efficiently, reduce operating costs and improve overall profitability. Our unified data architecture which allows banks to source data once and use multiple times is unique in the market. Our relentless focus on using modern technology within our platform allows us to scale our applications to meet the demands of the largest banks while significantly lowering operating costs.”
 
The Chartis RiskTech100 ranking is acknowledged globally as a comprehensive independent study of major players in risk and compliance technology. Companies featured in the RiskTech100 are drawn from a range of RiskTech specialisms and meet the needs of both financial and non-financial organizations. Rankings are determined by a focus on solutions, industry segments and success factors. Only companies that sell their own risk management software products and solutions are included within the report.
 
“Based on our extensive methodology, Oracle has once again garnered strong placement on the RiskTech100,” said Rob Stubbs, Head of Research at Chartis Research. “As the risk and financial crime landscape has evolved, Oracle Financial Services continues to keep pace with the industry to deliver strong solutions and continued growth.”
 
RiskTech100 evaluates the fast-moving RiskTech market with a considered methodology that has seen a greater focus on financial crime in 2018. The report rankings reflect Chartis analyst expert opinions along with research into market trends, participants, expenditure patterns and best practices. Chartis began accumulating data for this study in January 2017 and validated the analysis through several phases of independent verification. Individual ranking assessment criteria comprises six equally weighted categories: functionality, core technology, strategy, customer satisfaction, market presence and innovation. Oracle scored high marks across the board, re-affirming their position at the forefront of RiskTech.
 
Download an executive summary of the 2018 Chartis RiskTech100 ranking here.
Contact Info
Alex Moriconi
Oracle
+1-650-607-6598
alex.moriconi@oracle.com
About Oracle
The Oracle Cloud delivers hundreds of SaaS applications and enterprise-class PaaS and IaaS services to customers in more than 195 countries and territories while processing 55 billion transactions a day. For more information about Oracle (NYSE:ORCL), please visit us at http://cloud.oracle.com.
 
Trademarks
Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.
 
About Chartis
Chartis is the leading provider of research and analysis on the global market for risk technology, and is part of InfoPro Digital. Chartis's goal is to support enterprises as they drive business performance through better risk management, corporate governance and compliance, and to help clients make informed technology and business decisions by providing in-depth analysis and actionable advice on virtually all aspects of risk technology.
 
RiskTech Quadrant®, RiskTech100® and FinTech Quadrant™ are registered trademarks of Chartis Research (http://www.chartis-research.com).
 
Talk to a Press Contact

Alex Moriconi

  • +1-650-607-6598

Taking KSQL for a Spin Using Real-time Device Data

Rittman Mead Consulting - Tue, 2017-11-07 06:41
Taking KSQL for a Spin Using Real-time Device Data

Evaluating KSQL has been high on my to-do list ever since it was released back in August. I wanted to experiment with it using an interesting, high velocity, real-time data stream that would allow me to analyse events at the millisecond level, rather than seconds or minutes. Finding such a data source, that is free of charge and not the de facto Twitter stream, is tricky. So, after some pondering, I decided that I'd use my Thrustmaster T300RS Steering Wheel/Pedal Set gaming device as a data source.


The idea being that the data would be fed into Kafka, processed in real-time using KSQL and visualised in Grafana.

This is the end to end pipeline that I created...


...and this is the resulting real-time dashboard running alongside a driving game and a log of the messages being sent by the device.

This article will explain how the above real-time dashboard was built using only KSQL...and a custom Kafka producer.

I'd like to point out that although the device I'm using for testing is unconventional, when considered in the wider context of IoT, autonomous driving, smart automotive systems or any device for that matter, it will be clear to see that the low latency, high throughput of Apache Kafka, coupled with Confluent's KSQL, can be a powerful combination.

I'd also like to point out that this article is not about driving techniques, driving games or telemetry analysis. However, seeing as the data source I'm using is intrinsically tied to those subjects, the concepts will be discussed to add context. I hope you like motorsports!

Writing a Kafka Producer for a T300RS

The T300RS is attached to my Windows PC via a USB cable, so the first challenge was to try and figure out how I could get steering, braking and accelerator inputs pushed to Kafka. Unsurprisingly, a source connector for a "T300RS Steering Wheel and Pedal Set" was not listed on the Kafka Connect web page - a custom producer was the only option.

To access the data being generated by the T300RS, I had 2 options: I could either use an existing telemetry API from one of my racing games, or I could access it directly using the Windows DirectX API. I didn't want to have to have a game running in the background in order to generate data, so I decided to go down the DirectX route. This way, the data is raw and available, with or without an actual game engine running.

The producer was written using the SharpDX .NET wrapper and Confluent's .NET Kafka Client. The SharpDX DirectInput API allows you to poll an attached input device (mouse, keyboard, game controllers etc.) and read its buffered data. The buffered data returned within each polling loop is serialized into JSON and sent to Kafka using the .NET Kafka Client library.

A single message is sent to a topic in Kafka called raw_axis_inputs every time the state of one of the device's axes changes. The device has several axes; in this article I am only interested in the Wheel, the Accelerator, the Brake and the X button.

{  
    "event_id":4300415,         // Event ID unique over all axis state changes
    "timestamp":1508607521324,  // The time of the event
    "axis":"Y",                 // The axis this event belongs to
    "value":32873.0             // the current value of the axis
}

This is what a single message looks like. In the above message the Brake axis state was changed, i.e. it moved to a new position with value 32873.

You can see below which inputs map to each reported axis of the device.

(Axis mapping: X = steering wheel, Y = brake pedal, RotationZ = accelerator pedal, Buttons5 = the X button.)

Here is a sample from the producer's log file.

{"event_id":4401454,"timestamp":1508687373018,"axis":"X","value":33007.0}
{"event_id":4401455,"timestamp":1508687373018,"axis":"RotationZ","value":62515.0}
{"event_id":4401456,"timestamp":1508687373018,"axis":"RotationZ","value":62451.0}
{"event_id":4401457,"timestamp":1508687373018,"axis":"X","value":33011.0}
{"event_id":4401458,"timestamp":1508687373018,"axis":"RotationZ","value":62323.0}
{"event_id":4401459,"timestamp":1508687373018,"axis":"RotationZ","value":62258.0}
{"event_id":4401460,"timestamp":1508687373034,"axis":"X","value":33014.0}
{"event_id":4401461,"timestamp":1508687373034,"axis":"X","value":33017.0}
{"event_id":4401462,"timestamp":1508687373065,"axis":"RotationZ","value":62387.0}
{"event_id":4401463,"timestamp":1508687373081,"axis":"RotationZ","value":62708.0}
{"event_id":4401464,"timestamp":1508687373081,"axis":"RotationZ","value":62901.0}
{"event_id":4401465,"timestamp":1508687373081,"axis":"RotationZ","value":62965.0}
{"event_id":4401466,"timestamp":1508687373097,"axis":"RotationZ","value":64507.0}
{"event_id":4401467,"timestamp":1508687373097,"axis":"RotationZ","value":64764.0}
{"event_id":4401468,"timestamp":1508687373097,"axis":"RotationZ","value":64828.0}
{"event_id":4401469,"timestamp":1508687373097,"axis":"RotationZ","value":65021.0}
{"event_id":4401470,"timestamp":1508687373112,"axis":"RotationZ","value":65535.0}
{"event_id":4401471,"timestamp":1508687373268,"axis":"X","value":33016.0}
{"event_id":4401472,"timestamp":1508687373378,"axis":"X","value":33014.0}
{"event_id":4401473,"timestamp":1508687377972,"axis":"Y","value":65407.0}
{"event_id":4401474,"timestamp":1508687377987,"axis":"Y","value":64057.0}
{"event_id":4401475,"timestamp":1508687377987,"axis":"Y","value":63286.0}

You can tell by looking at the timestamps that it's possible to have multiple events generated within the same millisecond; unfortunately, I was unable to get microsecond precision from the device. When the "X", "Y" and "RotationZ" axes are being moved quickly at the same time (a bit like a child driving one of those coin-operated car rides you find at the seaside), the device generates approximately 500 events per second.

Creating a Source Stream

Now that we have data streaming to Kafka from the device, it's time to fire up KSQL and start analysing it. The first thing we need to do is create a source stream. The saying "Every River Starts with a Single Drop" is quite fitting here, especially in the context of stream processing. The raw_axis_inputs topic is our "Single Drop" and we need to create a KSQL stream on top of it.

CREATE STREAM raw_axis_inputs ( \  
     event_id BIGINT, \
     timestamp BIGINT, \
     axis VARCHAR, \
     value DOUBLE ) \
 WITH (kafka_topic = 'raw_axis_inputs', value_format = 'JSON');

With the stream created, we can now query it. I'm using the default auto.offset.reset = latest as I have the luxury of being able to blip the accelerator whenever I want to generate new data, a satisfying feeling indeed.

ksql> SELECT * FROM raw_axis_inputs;  
1508693510267 | null | 4480290 | 1508693510263 | RotationZ | 65278.0  
1508693510269 | null | 4480291 | 1508693510263 | RotationZ | 64893.0  
1508693510271 | null | 4480292 | 1508693510263 | RotationZ | 63993.0  
1508693510273 | null | 4480293 | 1508693510263 | RotationZ | 63094.0  
1508693510275 | null | 4480294 | 1508693510279 | RotationZ | 61873.0  
1508693510277 | null | 4480295 | 1508693510279 | RotationZ | 60716.0  
1508693510279 | null | 4480296 | 1508693510279 | RotationZ | 60267.0  
Derived Streams

We now have our source stream created and can start creating some derived streams from it. The first derived stream we are going to create filters out one event type. When the X button is pressed it emits a value of 128; when it's released it emits a value of 0.


To simplify this input, I'm filtering out the release event. We'll see what the X button is used for later in the article.

CREATE STREAM axis_inputs WITH (kafka_topic = 'axis_inputs') AS \  
SELECT  event_id, \  
        timestamp, \
        axis, \
        value \
FROM    raw_axis_inputs \  
WHERE   axis != 'Buttons5' OR value != 0.0;  

From this stream we are going to create 3 further streams, one for the brake, one for the accelerator and one for the wheel.

All 3 axes emit values in the range of 0-65535 across their full range of travel. The wheel emits a value of 0 when rotated fully left, a value of 65535 when rotated fully right and 32767 when dead centre. The wheel itself is configured to rotate 900 degrees lock-to-lock, so it would be nice to report its last state change in degrees, rather than as a value from a predetermined integer range. For this we can create a new stream that includes only messages where the axis = 'X', with the axis values translated into the range of -450 degrees to 450 degrees. With this new value translation, maximum rotation left now equates to 450 degrees, maximum rotation right equates to -450 degrees, and 0 is now dead centre.

CREATE STREAM steering_inputs WITH (kafka_topic = 'steering_inputs') AS \  
  SELECT  axis, \
          event_id, \
          timestamp, \
          (value / (65535.0 / 900.0) - 900 / 2) * -1 as value \
  FROM    axis_inputs \
  WHERE   axis = 'X';

If we now query our new stream and move the wheel slowly around dead centre, we get the following results

ksql> select timestamp, value from steering_inputs;

1508711287451 | 0.6388888888889142  
1508711287451 | 0.4305555555555429  
1508711287451 | 0.36111111111108585  
1508711287451 | 0.13888888888891415  
1508711287451 | -0.0  
1508711287467 | -0.041666666666685614  
1508711287467 | -0.26388888888891415  
1508711287467 | -0.3333333333333144  
1508711287467 | -0.5277777777777715  
1508711287467 | -0.5972222222222285  

The same query while the wheel is rotated fully left

1508748345943 | 449.17601281757845  
1508748345943 | 449.3270771343557  
1508748345943 | 449.5330739299611  
1508748345943 | 449.67040512703136  
1508748345959 | 449.8214694438087  
1508748345959 | 449.95880064087896  
1508748345959 | 450.0  

And finally, rotated fully right.

1508748312803 | -449.3408102540627  
1508748312803 | -449.4369420920119  
1508748312818 | -449.67040512703136  
1508748312818 | -449.7390707255665  
1508748312818 | -449.9725337605859  
1508748312818 | -450.0  

Here's the data plotted in Grafana.


We now need to create 2 more derived streams to handle the accelerator and the brake pedals. This time, we want to translate the values to the range 0-100. When a pedal is fully depressed it should report a value of 100 and when fully released, a value of 0.

CREATE STREAM accelerator_inputs WITH (kafka_topic = 'accelerator_inputs') AS \  
SELECT  axis, \  
        event_id, \
        timestamp, \
        100 - (value / (65535.0 / 100.0)) as value \
FROM    axis_inputs \  
WHERE   axis = 'RotationZ';  

Querying the accelerator_inputs stream while fully depressing the accelerator pedal displays the following. (I've omitted many records in the middle to keep it short)

ksql> SELECT timestamp, value FROM accelerator_inputs;  
1508749747115 | 0.0  
1508749747162 | 0.14198473282442592  
1508749747193 | 0.24122137404580712  
1508749747209 | 0.43664122137404604  
1508749747225 | 0.5343511450381726  
1508749747287 | 0.6335877862595396  
1508749747318 | 0.7312977099236662  
1508749747318 | 0.8290076335877927  
1508749747334 | 0.9267175572519051  
1508749747381 | 1.0259541984732863  
...
...
1508749753943 | 98.92519083969465  
1508749753959 | 99.02290076335878  
1508749753959 | 99.1206106870229  
1508749753959 | 99.21832061068702  
1508749753975 | 99.31603053435114  
1508749753975 | 99.41374045801527  
1508749753975 | 99.5114503816794  
1508749753990 | 99.60916030534351  
1508749753990 | 99.70687022900763  
1508749753990 | 99.80458015267176  
1508749754006 | 100.0

...and displayed in Grafana


Finally, we create the brake stream, which has the same value translation as the accelerator stream, so I won't show the query results this time around.

CREATE STREAM brake_inputs WITH (kafka_topic = 'brake_inputs') AS \  
SELECT  axis, \  
        event_id, \
        timestamp, \
        100 - (value / (65535 / 100)) as value \
FROM    axis_inputs \  
WHERE   axis = 'Y';  

Braking inputs in Grafana.


Smooth is Fast

It is a general rule of thumb in motorsports that "Smooth is Fast", the theory being that the less steering, accelerator and braking inputs you can make while still keeping the car on the desired racing line, results in a faster lap time. We can use KSQL to count the number of inputs for each axis over a Hopping Window to try and capture overall smoothness. To do this, we create our first KSQL table.

CREATE TABLE axis_events_hopping_5s_1s \  
WITH (kafka_topic = 'axis_events_hopping_5s_1s') AS \  
SELECT  axis, \  
        COUNT(*) AS event_count \
FROM    axis_inputs \  
WINDOW HOPPING (SIZE 5 SECOND, ADVANCE BY 1 SECOND) \  
GROUP BY axis;  

A KSQL table is basically a view over an existing stream or another table. When a table is created from a stream, it needs to contain an aggregate function and group by clause. It's these aggregates that make a table stateful, with the underpinning stream updating the table's current view in the background. If you create a table based on another table you do not need to specify an aggregate function or group by clause.

The table we created above specifies that data is aggregated over a Hopping Window. The size of the window is 5 seconds and it will advance or hop every 1 second. This means that at any one time, there will be 5 open windows, with new data being directed to each window based on the key and the record's timestamp.

You can see below, when we query the table, that we have 5 open windows per axis, with each window 1 second apart.

ksql> SELECT * FROM axis_events_hopping_5s_1s;  
1508758267000 | X : Window{start=1508758267000 end=-} | X | 56  
1508758268000 | X : Window{start=1508758268000 end=-} | X | 56  
1508758269000 | X : Window{start=1508758269000 end=-} | X | 56  
1508758270000 | X : Window{start=1508758270000 end=-} | X | 56  
1508758271000 | X : Window{start=1508758271000 end=-} | X | 43  
1508758267000 | Y : Window{start=1508758267000 end=-} | Y | 25  
1508758268000 | Y : Window{start=1508758268000 end=-} | Y | 25  
1508758269000 | Y : Window{start=1508758269000 end=-} | Y | 25  
1508758270000 | Y : Window{start=1508758270000 end=-} | Y | 32  
1508758271000 | Y : Window{start=1508758271000 end=-} | Y | 32  
1508758267000 | RotationZ : Window{start=1508758267000 end=-} | RotationZ | 67  
1508758268000 | RotationZ : Window{start=1508758268000 end=-} | RotationZ | 67  
1508758269000 | RotationZ : Window{start=1508758269000 end=-} | RotationZ | 67  
1508758270000 | RotationZ : Window{start=1508758270000 end=-} | RotationZ | 67  
1508758271000 | RotationZ : Window{start=1508758271000 end=-} | RotationZ | 39  

This data is going to be pushed into InfluxDB and therefore needs a timestamp column. We can create a new table for this that includes all the columns from our current table, plus the rowtime.

CREATE TABLE axis_events_hopping_5s_1s_ts \  
WITH (kafka_topic = 'axis_events_hopping_5s_1s_ts') AS \  
SELECT  rowtime AS timestamp, * \  
FROM    axis_events_hopping_5s_1s;  

And now, when we query this table we can see we have all the columns we need.

ksql> select timestamp, axis, event_count from axis_events_hopping_5s_1s_ts;  
1508761027000 | RotationZ | 61  
1508761028000 | RotationZ | 61  
1508761029000 | RotationZ | 61  
1508761030000 | RotationZ | 61  
1508761031000 | RotationZ | 61  
1508761028000 | Y | 47  
1508761029000 | Y | 47  
1508761030000 | Y | 47  
1508761031000 | Y | 47  
1508761032000 | Y | 47  
1508761029000 | X | 106  
1508761030000 | X | 106  
1508761031000 | X | 106  
1508761032000 | X | 106  
1508761033000 | X | 106  

This is the resulting graph in Grafana, with each axis stacked on top of the others, giving a visual representation of the total number of events overall and the total per axis. The idea here being that if you can drive a lap with fewer overall inputs or events, then the lap time should be faster.


Calculating Lap Times

To calculate lap times, I needed a way of capturing the time difference between 2 separate events in a stream. Remember that the raw data is coming directly from the device and has no concept of a lap; lap data is handled by a game engine.
I needed a way to inject an event into the stream when I crossed the start/finish line of any given race track. To achieve this, I modified the custom producer to increment a counter every time the X button was pressed and added a new field to the JSON message called lap_number.
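For illustration, a message from the modified producer now looks like this (the timestamp, lap_number, axis and value are taken from the query output further down; the event_id is made up):

{  
    "event_id":4560102,         // Event ID unique over all axis state changes (illustrative)
    "timestamp":1508762511506,  // The time of the event
    "lap_number":6,             // Incremented each time the X button is pressed
    "axis":"X",                 // The axis this event belongs to
    "value":32906.0             // The current value of the axis
}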


I then needed to recreate my source stream and my initial derived stream to include this new field

New source stream

CREATE STREAM raw_axis_inputs ( \  
     event_id BIGINT, \
     timestamp BIGINT, \
     lap_number BIGINT, \
     axis VARCHAR, \
     value DOUBLE ) \
 WITH (kafka_topic = 'raw_axis_inputs', value_format = 'JSON');

New derived stream.

CREATE STREAM axis_inputs WITH (kafka_topic = 'axis_inputs') AS \  
SELECT  event_id, \  
        timestamp, \
        lap_number, \
        axis, \
        value \
FROM    raw_axis_inputs \  
WHERE   axis != 'Buttons5' OR value != 0.0;  

Now when I query the axis_inputs stream and press the X button a few times we can see an incrementing lap number.

ksql> SELECT timestamp, lap_number, axis, value FROM axis_inputs;  
1508762511506 | 6 | X | 32906.0  
1508762511553 | 6 | X | 32907.0  
1508762511803 | 6 | X | 32909.0  
1508762512662 | 7 | Buttons5 | 128.0  
1508762513178 | 7 | X | 32911.0  
1508762513256 | 7 | X | 32913.0  
1508762513318 | 7 | X | 32914.0  
1508762513381 | 7 | X | 32916.0  
1508762513459 | 7 | X | 32918.0  
1508762513693 | 7 | X | 32919.0  
1508762514584 | 8 | Buttons5 | 128.0  
1508762515021 | 8 | X | 32921.0  
1508762515100 | 8 | X | 32923.0  
1508762515209 | 8 | X | 32925.0  
1508762515318 | 8 | X | 32926.0  
1508762515678 | 8 | X | 32928.0  
1508762516756 | 8 | X | 32926.0  
1508762517709 | 9 | Buttons5 | 128.0  
1508762517756 | 9 | X | 32925.0  
1508762520381 | 9 | X | 32923.0  
1508762520709 | 9 | X | 32921.0  
1508762520881 | 10 | Buttons5 | 128.0  
1508762521396 | 10 | X | 32919.0  
1508762521568 | 10 | X | 32918.0  
1508762521693 | 10 | X | 32916.0  
1508762521803 | 10 | X | 32914.0  

The next step is to calculate the time difference between each "Buttons5" event (the X button). This required 2 new tables. The first table below captures the latest values using the MAX() function from the axis_inputs stream where the axis = 'Buttons5'

CREATE TABLE lap_marker_data WITH (kafka_topic = 'lap_marker_data') AS \  
SELECT  axis, \  
        MAX(event_id) AS lap_start_event_id, \
        MAX(timestamp) AS lap_start_timestamp, \ 
        MAX(lap_number) AS lap_number \
FROM    axis_inputs \  
WHERE   axis = 'Buttons5' \  
GROUP BY axis;  

When we query this table, a new row is displayed every time the X button is pressed, reflecting the latest values from the stream.

ksql> SELECT axis, lap_start_event_id, lap_start_timestamp, lap_number FROM lap_marker_data;  
Buttons5 | 4692691 | 1508763302396 | 15  
Buttons5 | 4693352 | 1508763306271 | 16  
Buttons5 | 4693819 | 1508763310037 | 17  
Buttons5 | 4693825 | 1508763313865 | 18  
Buttons5 | 4694397 | 1508763317209 | 19  

What we can now do is join this table to a new stream.

CREATE STREAM lap_stats WITH (kafka_topic = 'lap_stats') AS \  
SELECT  l.lap_number as lap_number, \  
        l.lap_start_event_id, \
        l.lap_start_timestamp, \
        a.timestamp AS lap_end_timestamp, \
        (a.event_id - l.lap_start_event_id) AS lap_events, \
        (a.timestamp - l.lap_start_timestamp) AS laptime_ms \
FROM       axis_inputs a LEFT JOIN lap_marker_data l ON a.axis = l.axis \  
WHERE   a.axis = 'Buttons5';    

 Message
----------------
Stream created

ksql> describe lap_stats;

 Field               | Type
---------------------------------------
 ROWTIME             | BIGINT
 ROWKEY              | VARCHAR (STRING)
 LAP_NUMBER          | BIGINT
 LAP_START_EVENT_ID  | BIGINT
 LAP_START_TIMESTAMP | BIGINT
 LAP_END_TIMESTAMP   | BIGINT
 LAP_EVENTS          | BIGINT
 LAPTIME_MS          | BIGINT

This new stream is again based on the axis_inputs stream where the axis = 'Buttons5'. We are joining it to our lap_marker_data table which results in a stream where every row includes the current and previous values at the point in time when the X button was pressed.

A quick query should illustrate this (I've manually added column headings to make it easier to read)

ksql> SELECT lap_number, lap_start_event_id, lap_start_timestamp, lap_end_timestamp, lap_events, laptime_ms FROM lap_stats;

LAP  START_EV  START_TS        END_TS          TOT_EV  LAP_TIME_MS  
36 | 4708512 | 1508764549240 | 1508764553912 | 340   | 4672  
37 | 4708852 | 1508764553912 | 1508764567521 | 1262  | 13609  
38 | 4710114 | 1508764567521 | 1508764572162 | 1174  | 4641  
39 | 4711288 | 1508764572162 | 1508764577865 | 1459  | 5703  
40 | 4712747 | 1508764577865 | 1508764583725 | 939   | 5860  
41 | 4713686 | 1508764583725 | 1508764593475 | 2192  | 9750  
42 | 4715878 | 1508764593475 | 1508764602318 | 1928  | 8843

We can now see the time difference, in milliseconds (LAP_TIME_MS), between each press of the X button. This data can now be displayed in Grafana.


The data is also being displayed along the top of the dashboard, aligned above the other graphs, as a ticker to help visualize lap boundaries across all axes.


Anomaly Detection

A common use case when performing real-time stream analytics is Anomaly Detection, the act of detecting unexpected events, or outliers, in a stream of incoming data. Let's see what we can do with KSQL in this regard.

Driving Like a Lunatic?

As mentioned previously, Smooth is Fast, so it would be nice to be able to detect some form of erratic driving. When a car oversteers, the rear end of the car starts to rotate around a corner faster than you'd like; to counteract this motion, quick steering inputs are required to correct it. On a smooth lap you will only need a small part of the total range of the steering wheel to safely navigate all corners; when you start oversteering you will need to make quick but wider use of the total range of the wheel to keep the car on the track and prevent crashing.

To try and detect oversteer we need to create another KSQL table, this time based on the steering_inputs stream. This table counts steering events across a very short hopping window. Events are counted only if the rotation exceeds 180 degrees (sharp left rotation) or is less than -180 degrees (sharp right rotation)

CREATE TABLE oversteer WITH (kafka_topic = 'oversteer') AS \  
SELECT  axis, \  
        COUNT(*) \
FROM    steering_inputs \  
WINDOW HOPPING (SIZE 100 MILLISECONDS, ADVANCE BY 10 MILLISECONDS) \  
WHERE   value > 180 or value < -180 \  
GROUP by axis;  

We now create another table that includes the timestamp for InfluxDB.

CREATE TABLE oversteer_ts WITH (kafka_topic = 'oversteer_ts') AS \  
SELECT rowtime AS timestamp, * \  
FROM oversteer;  

If we query this table, while quickly rotating the wheel in the range value > 180 or value < -180, we can see multiple windows, 10ms apart, with a corresponding count of events.

ksql> SELECT * FROM oversteer_ts;  
1508767479920 | X : Window{start=1508767479920 end=-} | 1508767479920 | X | 5  
1508767479930 | X : Window{start=1508767479930 end=-} | 1508767479930 | X | 10  
1508767479940 | X : Window{start=1508767479940 end=-} | 1508767479940 | X | 15  
1508767479950 | X : Window{start=1508767479950 end=-} | 1508767479950 | X | 20  
1508767479960 | X : Window{start=1508767479960 end=-} | 1508767479960 | X | 25  
1508767479970 | X : Window{start=1508767479970 end=-} | 1508767479970 | X | 30  
1508767479980 | X : Window{start=1508767479980 end=-} | 1508767479980 | X | 35  
1508767479990 | X : Window{start=1508767479990 end=-} | 1508767479990 | X | 40  
1508767480000 | X : Window{start=1508767480000 end=-} | 1508767480000 | X | 45  
1508767480010 | X : Window{start=1508767480010 end=-} | 1508767480010 | X | 50  
1508767480020 | X : Window{start=1508767480020 end=-} | 1508767480020 | X | 50  
1508767480030 | X : Window{start=1508767480030 end=-} | 1508767480030 | X | 50  
1508767480040 | X : Window{start=1508767480040 end=-} | 1508767480040 | X | 50  
1508767480050 | X : Window{start=1508767480050 end=-} | 1508767480050 | X | 50  
1508767480060 | X : Window{start=1508767480060 end=-} | 1508767480060 | X | 47  
1508767480070 | X : Window{start=1508767480070 end=-} | 1508767480070 | X | 47  
1508767480080 | X : Window{start=1508767480080 end=-} | 1508767480080 | X | 47  
1508767480090 | X : Window{start=1508767480090 end=-} | 1508767480090 | X | 47  
1508767480100 | X : Window{start=1508767480100 end=-} | 1508767480100 | X | 47  

This data is plotted on the Y axis (we're talking graphs now) on the "Steering inputs" panel in Grafana. The oversteer metric can be seen in red and will spike when steering input exceeds 180 degrees in either direction.


Braking too Hard?

Another anomaly I'd like to detect is when maximum brake pressure is applied for too long. Much like the brake pedal in a real car, the brake pedal I'm using has a very progressive feel; a fair amount of force from your foot is required to hit maximum pressure. If you do hit maximum pressure, it shouldn't be for long, as you will most likely lock the wheels and skid off the race track, very embarrassing indeed.

The first thing to do is to create a table that will store the last time maximum brake pressure was applied. This table is based on the brake_inputs stream and filters where the value = 100

CREATE TABLE max_brake_power_time \  
WITH (kafka_topic = 'max_brake_power_time') AS \  
SELECT  axis, \  
        MAX(timestamp) as last_max_brake_ts \
FROM    brake_inputs \  
WHERE     value = 100 \  
GROUP by axis;  

A query of this table displays a new row each time maximum brake pressure is hit.

ksql> SELECT axis, last_max_brake_ts FROM max_brake_power_time;  
 Y | 1508769263100
 Y | 1508769267881
 Y | 1508769271568

Something worth mentioning is that if I hold my foot on the brake pedal at the maximum pressure for any period of time, only one event is found in the stream. This is because the device only streams data when the state of an axis changes. If I keep my foot still, no new events will appear in the stream. I'll deal with this in a minute.

Next we'll create a new stream based on the brake_inputs stream and join it to our max_brake_power_time table.

CREATE STREAM brake_inputs_with_max_brake_power_time \  
WITH ( kafka_topic = 'brake_inputs_with_max_brake_power_time') AS \  
SELECT  bi.value, \  
        bi.timestamp, \
        mb.last_max_brake_ts, \
        bi.timestamp - mb.last_max_brake_ts AS time_since_max_brake_released \
FROM    brake_inputs bi LEFT JOIN max_brake_power_time mb ON bi.axis = mb.axis;  

For each row in this stream we now have access to all columns in the brake_inputs stream plus a timestamp telling us when max brake power was last reached. With this data we create a new derived column bi.timestamp - mb.last_max_brake_ts AS time_since_max_brake_released which gives a running calculation of the difference between the current record timestamp and the last time maximum brake pressure was applied

For example, when we query the stream we can see that maximum pressure was applied at timestamp 1508772739115 with a value of 100.0. It's the row immediately after this row that we are interested in: 99.90234225 | 1508772740803 | 1508772739115 | 1688.

Again, I've manually added column headings to make it easier to read.

ksql> SELECT value, timestamp, last_max_brake_ts, time_since_max_brake_released FROM brake_inputs_with_max_brake_power_time;

BRAKE VALUE | TIMESTAMP     | LAST MAX BRAKE TIME | TIME SINCE MAX BRAKE RELEASED  
98.53513389 | 1508772739100 | 1508772733146       | 5954  
98.82810711 | 1508772739100 | 1508772733146       | 5954  
99.02342259 | 1508772739115 | 1508772733146       | 5969  
99.51171129 | 1508772739115 | 1508772733146       | 5969  
99.70702677 | 1508772739115 | 1508772733146       | 5969  
100.0       | 1508772739115 | 1508772733146       | 5969  
99.90234225 | 1508772740803 | 1508772739115       | 1688  
99.51171129 | 1508772740818 | 1508772739115       | 1703  
99.12108033 | 1508772740818 | 1508772739115       | 1703  
97.65621423 | 1508772740818 | 1508772739115       | 1703  
96.58197909 | 1508772740818 | 1508772739115       | 1703  
95.41008621 | 1508772740818 | 1508772739115       | 1703  
94.43350881 | 1508772740818 | 1508772739115       | 1703  
93.65224689 | 1508772740818 | 1508772739115       | 1703  
93.35927367 | 1508772740818 | 1508772739115       | 1703  
92.87098496 | 1508772740834 | 1508772739115       | 1719  
92.38269626 | 1508772740834 | 1508772739115       | 1719  
91.11314564 | 1508772740834 | 1508772739115       | 1719  
90.62485694 | 1508772740834 | 1508772739115       | 1719  
90.42954146 | 1508772740834 | 1508772739115       | 1719  
89.35530632 | 1508772740834 | 1508772739115       | 1719  
87.89044022 | 1508772740834 | 1508772739115       | 1719  
87.40215152 | 1508772740850 | 1508772739115       | 1735  
86.52323186 | 1508772740850 | 1508772739115       | 1735  

Remember that while an axis is held at the same value, 100.0 in this case, no more events will appear in the stream until the value changes again. This is why we are interested in the row immediately following the maximum-pressure row, as it tells us how long the value of 100.0 was applied for. In this case the time it was held for was 1688 milliseconds. Notice that on subsequent rows the value increases, but we are not interested in those rows. In order to isolate what we want, we need another table. This new table takes our previously created stream, brake_inputs_with_max_brake_power_time, and groups it by the last_max_brake_ts column. For each grouping we then get the MIN(time_since_max_brake_released).

CREATE TABLE hard_braking WITH ( kafka_topic = 'hard_braking') AS \  
SELECT  last_max_brake_ts, \  
        MIN(time_since_max_brake_released) AS time_spent_at_max_brake_ms \
FROM    brake_inputs_with_max_brake_power_time \  
GROUP BY last_max_brake_ts;  

When we query this table, while stepping hard on the brake pedal for a few seconds at a time, we get the information we want. We can see the timestamp for when maximum brake pressure was reached and for how long it was sustained.

ksql> SELECT last_max_brake_ts, time_spent_at_max_brake_ms FROM hard_braking;  
1508775178693 | 1360  
1508775178693 | 1360  
1508775183334 | 1000  
1508775183334 | 1000  
1508775187709 | 422  
1508775187709 | 422  
1508775187709 | 422  
1508775187709 | 422  
1508775187709 | 422  
1508775187709 | 422  
1508775187709 | 422  
1508775191256 | 1344  
1508775191256 | 1344  
1508775191256 | 1344  
1508775195850 | 1687  
1508775195850 | 1687  
1508775195850 | 1687  
1508775200662 | 1922  
1508775200662 | 1922  
1508775200662 | 1922  
1508775200662 | 1922  

Here's what the above data looks like when visualised in Grafana. The bottom graph shows when maximum brake pressure was hit and for how long it was sustained. I've set a threshold of 1 second against the graph so any extreme braking is clearly identifiable - if you're that hard on the brakes for that long, you're probably going to end up in the scenery.


The Tale of 2 Laps

After putting it all together, it's time to take to the track and see how it looks. This video shows 2 complete laps onboard in the Caterham Seven 620R around Brands Hatch in the UK. The first lap is a relatively smooth one and the second is quite ragged. Notice that the first lap (lap 68) is quicker overall than the second (lap 69). On lap 69, I start to drive more aggressively and oversteer spikes start to appear in the steering input graph. Lap 69 also has significantly more events overall than lap 68 as a result of my more exuberant (slower) driving style. You'll also notice that maximum brake pressure is reached a couple of times on each lap, but for no longer than the threshold of 1 second on each occurrence.

Summary

KSQL is awesome! Although it's only a developer preview at this point, it's impressive what you can get done with it. As it evolves over time and mirrors more of the functionality of the underlying Streams API it will become even more powerful, lowering the barrier to entry for real-time stream processing further and further. Take a look at the road map to see what may be coming next.

Oh, and I recently discovered on the #KSQL community Slack group that you can execute KSQL in Embedded Mode right inside your Java code, allowing you to mix the native Streams API with KSQL - very nice indeed!

Categories: BI & Warehousing

Guaranteed Way to Pass AWS Cloud Architect Certification Exam

Pakistan's First Oracle Blog - Tue, 2017-11-07 06:00
Today, and for some time to come, one of the hottest IT certifications to hold is the AWS Cloud Architect certification. There are various reasons for that:



  • If you pass it, it really means you know the stuff properly
  • AWS is the cloud platform of choice the world over and it's not going anywhere
  • There is literally a mad rush out there as companies scramble to shift or extend their infrastructure to the cloud to stay relevant and to cut costs.
  • There is a huge shortage of professionals with theoretical and hands-on know-how of the cloud, and this shortage is growing alarmingly.
So it's not surprising that sysadmins, developers, DBAs and other IT professionals are really yearning to achieve cloud credentials, and there is no better way to do that than getting AWS certified.

So is there any guaranteed way to pass the AWS Cloud Architect certification exam?

I say Yes and here is the way:

Read the AWS documentation about the following AWS services. Read about these services and read them again and again, and then again. Learn them like you know your own name. Get a free account and then play with these services. When you feel comfortable enough with these services and can recite them to anyone inside out, then go ahead, sit the exam and you will pass it for sure. So read and learn all services under these sections:


  • Compute
  • Storage
  • Database 
  • Network & Content Delivery
  • Messaging
  • Identity and Access Management
Also make sure to read the FAQs of all the above services. Also read and remember what AWS Kinesis, WAF, Data Pipeline, EMR and WorkSpaces are. No details are necessary for these ones, just what they stand for and what they do.

Best of Luck.
Categories: DBA Blogs

Pages

Subscribe to Oracle FAQ aggregator