Feed aggregator

Manually refresh materialized view in trigger

Tom Kyte - Tue, 2018-11-13 13:46
Hello, I have a set of MVs that are dependent on each other and on master tables. All are eligible to be fast-refreshed, but for a reason I don't know, in some tables after a delete or update the fast refresh takes longer than the complete one. So ...
Categories: DBA Blogs

pragma autonomous in exception

Tom Kyte - Tue, 2018-11-13 13:46
Can you create pragma autonomous in exception handling?
Categories: DBA Blogs

Database security

Tom Kyte - Tue, 2018-11-13 13:46
Hello, I'd like a suggestion about the use case below. We have a database with 2 schemas; for schemas 1 and 2 every object is fully granted: <code>grant all on "object_name" to public</code>. Each user has the default role <code>"Resourc...
Categories: DBA Blogs

November 2018 Update to E-Business Suite Technology Codelevel Checker (ETCC)

Steven Chan - Tue, 2018-11-13 13:24

The E-Business Suite Technology Codelevel Checker (ETCC) tool helps you identify application or database tier overlay patches that need to be applied to your Oracle E-Business Suite Release 12.2 system. ETCC maps missing overlay patches to the default corresponding Database Patch Set Update (PSU) patches, and displays them in a patch recommendation summary.

What’s New

ETCC has been updated to include bug fixes and patching combinations for the following recommended updates:

  • Oracle Database Proactive BP 12.1.0.2.181016
  • Oracle Database PSU 12.1.0.2.181016
  • Oracle JavaVM Component Database PSU 12.1.0.2.181016
  • Oracle Database Patch for Exadata BP 12.1.0.2.181016
  • Microsoft Windows Database BP 12.1.0.2.181016
  • Oracle JavaVM Component 12.1.0.2.181016 on Windows

Obtaining ETCC

We recommend always using the latest version of ETCC, as new bugfixes will not be checked by older versions of the utility. The latest version of the ETCC tool can be downloaded via Patch 17537119 from My Oracle Support.


Categories: APPS Blogs

Amazon SageMaker Model Endpoint Access from Oracle JET

Andrejus Baranovski - Tue, 2018-11-13 10:54
If you are implementing a machine learning model with Amazon SageMaker, you will obviously want to know how to access the trained model from the outside. There is a good article posted on the AWS Machine Learning Blog related to this topic - Call an Amazon SageMaker model endpoint using Amazon API Gateway and AWS Lambda. I went through the described steps and implemented a REST API for my own module. I then went one step further and tested the API call from a JavaScript application implemented with Oracle JET, the free and open source JavaScript toolkit.

I will not go deep into the machine learning part in this post; I will focus exclusively on the AWS SageMaker endpoint. I'm using the Jupyter notebook from Chapter 2 of the book Machine Learning for Business. At the end of the notebook, when the machine learning model is created, we initialize an AWS endpoint (name: order-approval). Think of it as a kind of access point. Through this endpoint we can call the prediction function:
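As an illustration only (my own minimal sketch, not the notebook's code), the prediction function behind the order-approval endpoint can be called with boto3; the CSV payload below is made up:

import boto3

# client for invoking deployed SageMaker endpoints
runtime = boto3.client('sagemaker-runtime')

response = runtime.invoke_endpoint(
    EndpointName='order-approval',
    ContentType='text/csv',
    Body='0,1,0,0,1')   # illustrative encoded purchase-order features only

# the model answer comes back as a streaming body
print(response['Body'].read().decode('utf-8'))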


Wait around 5 minutes until the endpoint starts. Then you should see the endpoint entry in SageMaker:
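Instead of waiting a fixed time, the endpoint status can also be polled programmatically; a small sketch (my own, using the standard boto3 waiter) is:

import boto3

sm = boto3.client('sagemaker')

# block until the endpoint reports InService, then print its status
sm.get_waiter('endpoint_in_service').wait(EndpointName='order-approval')
print(sm.describe_endpoint(EndpointName='order-approval')['EndpointStatus'])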


How do you expose the endpoint so that it is accessible from outside? Through AWS Lambda and AWS API Gateway.

AWS Lambda

Go to the AWS Lambda service and create a new function. I already have a function, with Python 3.6 set as the runtime. AWS Lambda acts as a proxy function between the endpoint and the API. This is the place where we can prepare input data and parse the response before returning it to the API:


The function must be granted a role that allows access to SageMaker resources:


This is the function implementation. The endpoint name is moved out into an environment variable. The function gets the input, calls the SageMaker endpoint and does some minimal processing of the response:
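A hedged sketch of such a proxy function, in the spirit of the referenced AWS blog post rather than the author's exact code; the 'data' key used to read the payload from the event is an assumption:

import os
import json
import boto3

# endpoint name is moved out into an environment variable
ENDPOINT_NAME = os.environ['ENDPOINT_NAME']
runtime = boto3.client('sagemaker-runtime')

def lambda_handler(event, context):
    # extract the CSV-encoded purchase-order features from the request
    data = json.loads(json.dumps(event))
    payload = data['data']

    # call the SageMaker endpoint
    response = runtime.invoke_endpoint(
        EndpointName=ENDPOINT_NAME,
        ContentType='text/csv',
        Body=payload)

    # minimal processing: return the raw prediction to API Gateway
    return response['Body'].read().decode('utf-8')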


We can test the Lambda function by providing a test payload. This is the test payload I'm using: an encoded list of parameters for the machine learning model. The parameters describe a purchase order. The model decides whether manual approval is required or not. The decision rule: if the PO was raised by someone not from IT, but they order an IT product, manual approval is required. Read more about it in the book mentioned above. Test payload data:


Run the test execution; the model responds that manual approval for the PO is required:


AWS API Gateway

The final step is to define the API Gateway. The client will call the Lambda function through the API:


I have defined a REST resource and a POST method for the API Gateway. The client request will go through the API and then be directed to the Lambda function, which will call SageMaker for a prediction based on the client input data:


The POST method is set to call the Lambda function (a function with this name was created above):


Once the API is deployed, we get a URL. Make sure to add the REST resource name at the end. From Oracle JET we can use a simple jQuery call to execute the POST method. Once the asynchronous response is received, we display a notification message:


Oracle JET displays the prediction received from SageMaker - manual review is required for the current PO:


Download the Oracle JET sample application with the AWS SageMaker API call from my GitHub repo.

Partitioning -- 9 : System Partitioning

Hemant K Chitale - Tue, 2018-11-13 08:59
System Partitioning, introduced in 11g, unlike all the traditional Partitioning methods, requires that all DML specify the Target Partition.  For a System Partitioned Table, the RDBMS does not use a "high value" rule to determine the Target Partition but leaves it to (actually requires) the application code (user) to specify the Partition.

In my opinion, this seems like the precursor to Oracle Database Sharding.

SQL> create table sys_part_table
2 (id_column number,
3 data_element_1 varchar2(50),
4 data_element_2 varchar2(50),
5 entry_date date)
6 partition by SYSTEM
7 (partition PART_A tablespace PART_TBS_A,
8 partition PART_B tablespace PART_TBS_B,
9 partition PART_C tablespace PART_TBS_C)
10 /

Table created.

SQL>


Notice that I did not specify a Partition Key (column).  The Partitions are not mapped to specific values / range of values in a Key column.

Any DML must specify the Target Partition.

SQL> insert into sys_part_table
2 values (1, 'First Row','A New Beginning',sysdate)
3 /
insert into sys_part_table
*
ERROR at line 1:
ORA-14701: partition-extended name or bind variable must be used for DMLs on
tables partitioned by the System method


SQL>
SQL> !oerr ora 14701
14701, 00000, "partition-extended name or bind variable must be used for DMLs on tables partitioned by the System method"
// *Cause: User attempted not to use partition-extended syntax
// for a table partitioned by the System method
// *Action: Must use of partition-extended syntax in contexts mentioned above.

SQL>
SQL> insert into sys_part_table partition (PART_A)
2 values (1, 'First Row','A New Beginning',sysdate)
3 /

1 row created.

SQL> insert into sys_part_table partition (PART_B)
2 values (2,'Second Row','And So It Continues',sysdate)
3 /

1 row created.

SQL>


I have to specify the Target Partition for my INSERT statement. This, obviously, also applies to DELETE and UPDATE statements.   However, I can run a SELECT statement without filtering (pruning) to any Target Partition(s) -- i.e. a SELECT statement that does not use the PARTITION clause will span across all the Partitions.

SQL> select * from sys_part_table;

 ID_COLUMN DATA_ELEMENT_1        DATA_ELEMENT_2        ENTRY_DAT
---------- --------------------- --------------------- ---------
         1 First Row             A New Beginning       13-NOV-18
         2 Second Row            And So It Continues   13-NOV-18


SQL>


With Tablespaces assigned to the Partitions (see the CREATE table statement above),  I  can have each Partition mapped to a different underlying Disk / Disk Group.
.
.
.
Categories: DBA Blogs

Patching a virtualized ODA to patch 12.2.1.4.0

Yann Neuhaus - Tue, 2018-11-13 02:24

This article describes patching a virtualized Oracle Database Appliance (ODA) containing only an ODA_BASE virtual machine.

Do this patching on test machines first, because it cannot be guaranteed that all causes of failure on single-VM ODAs are covered in this article. In my experience, the precheck for ODA patches does not detect some failure conditions, which may lead to an unusable ODA.

Overview:
Patch first to 12.1.2.12.0
After that patch to 12.2.1.4.0

Procedure for both patches:

Preparation:

Apply all files of the patch to the repository on all nodes as user root:

oakcli unpack -package /directory_name/file_name

Verify patch and parts to be patched on all servers:

[root@xx1 ~]# oakcli update -patch 12.2.1.4.0 --verify
INFO: 2018-09-24 08:32:52: Reading the metadata file now...
Component Name Installed Version Proposed Patch Version
--------------- ------------------ -----------------
Controller_INT 4.650.00-7176 Up-to-date
Controller_EXT 13.00.00.00 Up-to-date
Expander 0291 0306
SSD_SHARED {
[ c1d20,c1d21,c1d22, A29A Up-to-date
c1d23 ] [ c1d0,c1d1,c1d2,c1d A29A Up-to-date
3,c1d4,c1d5,c1d6,c1d
7,c1d8,c1d9,c1d10,c1
d11,c1d12,c1d13,c1d1
4,c1d15,c1d16,c1d17,
c1d18,c1d19 ] }
SSD_LOCAL 0R3Q Up-to-date
ILOM 3.2.9.23 r116695 4.0.2.26.a r123797
BIOS 38070200 38100300
IPMI 1.8.12.4 Up-to-date
HMP 2.3.5.2.8 2.4.1.0.11
OAK 12.1.2.12.0 12.2.1.4.0
OL 6.8 6.9
OVM 3.4.3 3.4.4
GI_HOME 12.1.0.2.170814(2660 12.2.0.1.180417(2767
9783,26609945) 4384,27464465)
DB_HOME {
[ OraDb12102_home1 ] 12.1.0.2.170814(2660 12.1.0.2.180417(2733
9783,26609945) 8029,27338020)
[ OraDb11204_home2 ] 11.2.0.4.170418(2473 11.2.0.4.180417(2733
2075,23054319) 8049,27441052)
}

Validate the whole ODA (not during peak load):

oakcli validate -a

Show versions of all installed components (example is after patching):

[root@xx1 ~]# oakcli show version -detail
Reading the metadata. It takes a while...
System Version Component Name Installed Version Supported Version
-------------- --------------- ------------------ -----------------
12.2.1.4.0
Controller_INT 4.650.00-7176 Up-to-date
Controller_EXT 13.00.00.00 Up-to-date
Expander 0306 Up-to-date
SSD_SHARED {
[ c1d20,c1d21,c1d22, A29A Up-to-date
c1d23 ] [ c1d0,c1d1,c1d2,c1d A29A Up-to-date
3,c1d4,c1d5,c1d6,c1d
7,c1d8,c1d9,c1d10,c1
d11,c1d12,c1d13,c1d1
4,c1d15,c1d16,c1d17,
c1d18,c1d19 ] }
SSD_LOCAL 0R3Q Up-to-date
ILOM 4.0.2.26.a r123797 Up-to-date
BIOS 38100300 Up-to-date
IPMI 1.8.12.4 Up-to-date
HMP 2.4.1.0.11 Up-to-date
OAK 12.2.1.4.0 Up-to-date
OL 6.9 Up-to-date
OVM 3.4.4 Up-to-date
GI_HOME 12.2.0.1.180417(2767 Up-to-date
4384,27464465)
DB_HOME 11.2.0.4.170418(2473 11.2.0.4.180417(2733
2075,23054319) 8049,27441052)

To perform a dry run of ospatch (this does not work for any component other than ospatch):

[root@xx1 ~]# oakcli validate -c ospatch -ver 12.2.1.4.0
INFO: Validating the OS patch for the version 12.2.1.4.0
INFO: 2018-09-25 08:34:28: Performing a dry run for OS patching
INFO: 2018-09-25 08:34:52: There are no conflicts. OS upgrade could be successful

All packages which are reported as incompatible must be removed before patching. Somebody who can install and configure compatible versions of these packages properly after patching should be available, and compatible versions of these packages should be prepared beforehand.

Before applying patch:
In dataguard installations, set state to APPLY-OFF for all standby databases
Disable all jobs which use Grid Infrastructure or databases
Set all ACFS replications to “pause”.
Unmount all ACFS filesystems
Stop all agents on all ODA nodes
Remove all resources from Grid Infrastructure which depend on ACFS filesystems (srvctl remove)
These resources can be determined with:

crsctl stat res -dependency | grep -i acfs

Remove all packages which were found to be incompatible with the patch.

Note:
The scripts of both patches cannot unmount ACFS filesystems (at least filesystems mounted via the registry), and usage of Grid Infrastructure files by mounted ACFS filesystems causes both patches to fail. The check scripts of both patches do not seem to check for this condition. In Grid Infrastructure, all resources on which other resources have dependencies must exist; otherwise their configuration must be saved and the resources must be removed from GI.

Use the UNIX tool screen when applying the patch, because any network interruption causes the patch to fail.

Patching:
Only the server and storage should be patched with the oakcli script; databases should be patched manually. At least 10 GB of free disk space must exist in the / filesystem and at least 15 GB in /u01.

All commands have to be executed on the primary ODA node as user root. The HTTP server error at the end of server patching can be ignored.


[root@xx1 ~]# screen
[root@xx1 ~]# oakcli update -patch 12.2.1.4.0 --server
INFO: DB, ASM, Clusterware may be stopped during the patch if required
INFO: Both Nodes may get rebooted automatically during the patch if required
Do you want to continue: [Y/N]?: Y
INFO: User has confirmed for the reboot
INFO: Patch bundle must be unpacked on the second Node also before applying the patch
Did you unpack the patch bundle on the second Node? : [Y/N]? : Y
INFO: All the VMs except the ODABASE will be shutdown forcefully if needed
Do you want to continue : [Y/N]? : Y
INFO: Running pre-install scripts
INFO: Running prepatching on node 0
INFO: Running prepatching on node 1
INFO: Completed pre-install scripts
INFO: Patching server component (rolling)
INFO: Stopping VMs, repos and OAKD on both nodes...
INFO: Stopped Oakd
...
INFO: Patching the server on node: xx2
INFO: it may take upto 60 minutes. Please wait
INFO: Infrastructure patching summary on node: xx1
INFO: Infrastructure patching summary on node: xx2
SUCCESS: 2018-09-25 09:42:24: Successfully upgraded the HMP
SUCCESS: 2018-09-25 09:42:24: Successfully updated the OAK
SUCCESS: 2018-09-25 09:42:24: Successfully updated the JDK
INFO: 2018-09-25 09:42:24: IPMI is already upgraded
SUCCESS: 2018-09-25 09:42:24: Successfully upgraded the OS
SUCCESS: 2018-09-25 09:42:24: Successfully updated the device OVM
SUCCESS: 2018-09-25 09:42:24: Successfully upgraded the HMP on Dom0
INFO: 2018-09-25 09:42:24: Local storage patching summary on Dom0...
SUCCESS: 2018-09-25 09:42:24: Successfully upgraded the local storage
SUCCESS: 2018-09-25 09:42:24: Successfully updated the device Ilom
SUCCESS: 2018-09-25 09:42:24: Successfully updated the device BIOS
INFO: 2018-09-25 09:42:24: Some of the components patched on node
INFO: 2018-09-25 09:42:24: require node reboot. Rebooting the node
INFO: 2018-09-25 09:42:24: rebooting xx2 via /tmp/dom0reboot...
..........
INFO: 2018-09-25 09:48:03: xx2 is rebooting...
INFO: 2018-09-25 09:48:03: Waiting for xx2 to reboot...
........
INFO: 2018-09-25 09:55:24: xx2 has rebooted...
INFO: 2018-09-25 09:55:24: Waiting for processes on xx2 to start...
..
INFO: Patching server component on node: xx1
INFO: 2018-09-25 09:59:31: Patching ODABASE Server Components (including Grid software)
INFO: 2018-09-25 09:59:31: ------------------Patching HMP-------------------------
SUCCESS: 2018-09-25 10:00:26: Successfully upgraded the HMP
INFO: 2018-09-25 10:00:26: creating /usr/lib64/sun-ssm symlink
INFO: 2018-09-25 10:00:27: ----------------------Patching OAK---------------------
SUCCESS: 2018-09-25 10:00:59: Successfully upgraded OAK
INFO: 2018-09-25 10:01:02: ----------------------Patching JDK---------------------
SUCCESS: 2018-09-25 10:01:12: Successfully upgraded JDK
INFO: 2018-09-25 10:01:12: ----------------------Patching IPMI---------------------
INFO: 2018-09-25 10:01:12: IPMI is already upgraded or running with the latest version
INFO: 2018-09-25 10:01:13: ------------------Patching OS-------------------------
INFO: 2018-09-25 10:01:36: Removed kernel-uek-firmware-4.1.12-61.44.1.el6uek.noarch
INFO: 2018-09-25 10:01:52: Removed kernel-uek-4.1.12-61.44.1.el6uek.x86_64
INFO: 2018-09-25 10:02:03: Clusterware is running on local node
INFO: 2018-09-25 10:02:03: Attempting to stop clusterware and its resources locally
SUCCESS: 2018-09-25 10:03:22: Successfully stopped the clusterware on local node
SUCCESS: 2018-09-25 10:07:36: Successfully upgraded the OS
INFO: 2018-09-25 10:07:40: ------------------Patching Grid-------------------------
INFO: 2018-09-25 10:07:45: Checking for available free space on /, /tmp, /u01
INFO: 2018-09-25 10:07:50: Attempting to upgrade grid.
INFO: 2018-09-25 10:07:50: Executing /opt/oracle/oak/pkgrepos/System/12.2.1.4.0/bin/GridUpgrade.pl...
SUCCESS: 2018-09-25 10:55:07: Grid software has been updated.
INFO: 2018-09-25 10:55:07: Patching DOM0 Server Components
INFO: 2018-09-25 10:55:07: Attempting to patch OS on Dom0...
INFO: 2018-09-25 10:55:16: Clusterware is running on local node
INFO: 2018-09-25 10:55:16: Attempting to stop clusterware and its resources locally
SUCCESS: 2018-09-25 10:56:45: Successfully stopped the clusterware on local node
SUCCESS: 2018-09-25 11:02:19: Successfully updated the device OVM to 3.4.4
INFO: 2018-09-25 11:02:19: Attempting to patch the HMP on Dom0...
SUCCESS: 2018-09-25 11:02:26: Successfully updated the device HMP to the version 2.4.1.0.11 on Dom0
INFO: 2018-09-25 11:02:26: Attempting to patch the IPMI on Dom0...
INFO: 2018-09-25 11:02:27: Successfully updated the IPMI on Dom0
INFO: 2018-09-25 11:02:30: Attempting to patch the local storage on Dom0...
INFO: 2018-09-25 11:02:30: Stopping clusterware on local node...
INFO: 2018-09-25 11:02:37: Disk : c0d0 is already running with MS4SC2JH2ORA480G 0R3Q
INFO: 2018-09-25 11:02:38: Disk : c0d1 is already running with MS4SC2JH2ORA480G 0R3Q
INFO: 2018-09-25 11:02:40: Controller : c0 is already running with 0x005d 4.650.00-7176
INFO: 2018-09-25 11:02:41: Attempting to patch the ILOM on Dom0...
SUCCESS: 2018-09-25 11:27:49: Successfully updated the device Ilom to 4.0.2.26.a r123797
SUCCESS: 2018-09-25 11:27:49: Successfully updated the device BIOS to 38100300
INFO: Infrastructure patching summary on node: xxxx1
SUCCESS: 2018-09-25 11:27:54: Successfully upgraded the HMP
SUCCESS: 2018-09-25 11:27:54: Successfully updated the OAK
SUCCESS: 2018-09-25 11:27:54: Successfully updated the JDK
INFO: 2018-09-25 11:27:54: IPMI is already upgraded
SUCCESS: 2018-09-25 11:27:54: Successfully upgraded the OS
SUCCESS: 2018-09-25 11:27:54: Successfully upgraded GI
SUCCESS: 2018-09-25 11:27:54: Successfully updated the device OVM
SUCCESS: 2018-09-25 11:27:54: Successfully upgraded the HMP on Dom0
INFO: 2018-09-25 11:27:54: Local storage patching summary on Dom0...
SUCCESS: 2018-09-25 11:27:54: Successfully upgraded the local storage
SUCCESS: 2018-09-25 11:27:54: Successfully updated the device Ilom
SUCCESS: 2018-09-25 11:27:54: Successfully updated the device BIOS
INFO: Infrastructure patching summary on node: xxxx2
SUCCESS: 2018-09-25 11:27:54: Successfully upgraded the HMP
SUCCESS: 2018-09-25 11:27:54: Successfully updated the OAK
SUCCESS: 2018-09-25 11:27:54: Successfully updated the JDK
INFO: 2018-09-25 11:27:54: IPMI is already upgraded
SUCCESS: 2018-09-25 11:27:54: Successfully upgraded the OS
SUCCESS: 2018-09-25 11:27:54: Successfully updated the device OVM
SUCCESS: 2018-09-25 11:27:54: Successfully upgraded the HMP on Dom0
INFO: 2018-09-25 11:27:54: Local storage patching summary on Dom0...
SUCCESS: 2018-09-25 11:27:54: Successfully upgraded the local storage
SUCCESS: 2018-09-25 11:27:54: Successfully updated the device Ilom
SUCCESS: 2018-09-25 11:27:54: Successfully updated the device BIOS
SUCCESS: 2018-09-25 11:27:54: Successfully upgraded GI
INFO: Running post-install scripts
INFO: Running postpatch on node 1...
INFO: Running postpatch on node 0...
...
...
INFO: Started Oakd
INFO: 2018-09-25 11:32:26: Some of the components patched on node
INFO: 2018-09-25 11:32:26: require node reboot. Rebooting the node
INFO: Rebooting Dom0 on node 0
INFO: 2018-09-25 11:32:26: Running /tmp/dom0reboot on node 0
INFO: 2018-09-25 11:33:10: Clusterware is running on local node
INFO: 2018-09-25 11:33:10: Attempting to stop clusterware and its resources locally
SUCCESS: 2018-09-25 11:35:52: Successfully stopped the clusterware on local node
INFO: 2018-09-25 11:38:54: RPC::XML::Client::send_request: HTTP server error: read timeout
[root@xx1 ~]#
Broadcast message from root@xx1
(unknown) at 11:39 ...
The system is going down for power off NOW!

[root@xx1 ~]# oakcli update -patch 12.2.1.4.0 --storage
INFO: DB, ASM, Clusterware may be stopped during the patch if required
INFO: Both Nodes may get rebooted automatically during the patch if required
Do you want to continue: [Y/N]?: Y
INFO: User has confirmed for the reboot
INFO: Running pre-install scripts
INFO: Running prepatching on node 0
INFO: Running prepatching on node 1
INFO: Completed pre-install scripts
INFO: Shared Storage components need to be patched
INFO: Stopping OAKD on both nodes...
INFO: Stopped Oakd
INFO: Attempting to shutdown clusterware (if required)..
INFO: 2018-09-25 12:07:13: Clusterware is running on one or more nodes of the cluster
INFO: 2018-09-25 12:07:13: Attempting to stop clusterware and its resources across the cluster
SUCCESS: 2018-09-25 12:07:59: Successfully stopped the clusterware
INFO: Patching storage on node xx2
INFO: Patching storage on node xx1
INFO: 2018-09-25 12:08:23: ----------------Patching Storage-------------------
INFO: 2018-09-25 12:08:23: ....................Patching Shared SSDs...............
INFO: 2018-09-25 12:08:23: Disk : d0 is already running with : HSCAC2DA2SUN1.6T A29A
INFO: 2018-09-25 12:08:23: Disk : d1 is already running with : HSCAC2DA2SUN1.6T A29A
INFO: 2018-09-25 12:08:23: Disk : d2 is already running with : HSCAC2DA2SUN1.6T A29A
INFO: 2018-09-25 12:08:24: Disk : d3 is already running with : HSCAC2DA2SUN1.6T A29A
INFO: 2018-09-25 12:08:24: Disk : d4 is already running with : HSCAC2DA2SUN1.6T A29A
INFO: 2018-09-25 12:08:24: Disk : d5 is already running with : HSCAC2DA2SUN1.6T A29A
INFO: 2018-09-25 12:08:25: Disk : d6 is already running with : HSCAC2DA2SUN1.6T A29A
INFO: 2018-09-25 12:08:25: Disk : d7 is already running with : HSCAC2DA2SUN1.6T A29A
INFO: 2018-09-25 12:08:25: Disk : d8 is already running with : HSCAC2DA2SUN1.6T A29A
INFO: 2018-09-25 12:08:26: Disk : d9 is already running with : HSCAC2DA2SUN1.6T A29A
INFO: 2018-09-25 12:08:26: Disk : d10 is already running with : HSCAC2DA2SUN1.6T A29A
INFO: 2018-09-25 12:08:26: Disk : d11 is already running with : HSCAC2DA2SUN1.6T A29A
INFO: 2018-09-25 12:08:27: Disk : d12 is already running with : HSCAC2DA2SUN1.6T A29A
INFO: 2018-09-25 12:08:27: Disk : d13 is already running with : HSCAC2DA2SUN1.6T A29A
INFO: 2018-09-25 12:08:27: Disk : d14 is already running with : HSCAC2DA2SUN1.6T A29A
INFO: 2018-09-25 12:08:28: Disk : d15 is already running with : HSCAC2DA2SUN1.6T A29A
INFO: 2018-09-25 12:08:28: Disk : d16 is already running with : HSCAC2DA2SUN1.6T A29A
INFO: 2018-09-25 12:08:28: Disk : d17 is already running with : HSCAC2DA2SUN1.6T A29A
INFO: 2018-09-25 12:08:29: Disk : d18 is already running with : HSCAC2DA2SUN1.6T A29A
INFO: 2018-09-25 12:08:29: Disk : d19 is already running with : HSCAC2DA2SUN1.6T A29A
INFO: 2018-09-25 12:08:30: Disk : d20 is already running with : HSCAC2DA6SUN200G A29A
INFO: 2018-09-25 12:08:30: Disk : d21 is already running with : HSCAC2DA6SUN200G A29A
INFO: 2018-09-25 12:08:30: Disk : d22 is already running with : HSCAC2DA6SUN200G A29A
INFO: 2018-09-25 12:08:31: Disk : d23 is already running with : HSCAC2DA6SUN200G A29A
INFO: 2018-09-25 12:08:31: ....................Patching Shared HDDs...............
INFO: 2018-09-25 12:08:31: ....................Patching Expanders...............
INFO: 2018-09-25 12:08:31: Updating the Expander : c0x0 with the Firmware : DE3-24C 0306
SUCCESS: 2018-09-25 12:09:24: Successfully updated the Firmware on Expander : c0x0 to DE3-24C 0306
INFO: 2018-09-25 12:09:24: Updating the Expander : c1x0 with the Firmware : DE3-24C 0306
SUCCESS: 2018-09-25 12:10:16: Successfully updated the Firmware on Expander : c1x0 to DE3-24C 0306
INFO: 2018-09-25 12:10:16: ..............Patching Shared Controllers...............
INFO: 2018-09-25 12:10:16: Controller : c0 is already running with : 0x0097 13.00.00.00
INFO: 2018-09-25 12:10:17: Controller : c1 is already running with : 0x0097 13.00.00.00
INFO: 2018-09-25 12:10:17: ------------ Completed Storage Patching------------
INFO: 2018-09-25 12:10:17: Completed patching of shared_storage
INFO: Patching completed for component Storage
INFO: Running post-install scripts
INFO: Running postpatch on node 1...
INFO: Running postpatch on node 0...
INFO: 2018-09-25 12:10:28: Some of the components patched on node
INFO: 2018-09-25 12:10:28: require node reboot. Rebooting the node
INFO: 2018-09-25 12:10:28: Running /tmp/pending_actions on node 1
INFO: Node will reboot now.
INFO: Please check reboot progress via ILOM interface
INFO: This session may appear to hang, press ENTER after reboot
INFO: 2018-09-25 12:12:53: Rebooting Dom1 on node 0
INFO: Running /tmp/pending_actions on node 0
Broadcast message from oracle@xx1
(/dev/pts/0) at 12:13 ...
The system is going down for reboot NOW!

After successful patching:

Install and configure compatible versions of all previously removed packages
Mount all ACFS filesystems
Recreate all deleted Grid Infrastructure resources and start them
Re-enable all jobs that were disabled before
Resume all ACFS replications
Set state of all dataguard standby databases to APPLY-ON
Check ACFS replications
Check dataguard status
Check whether everything works as before

The post Patching a virtualized ODA to patch 12.2.1.4.0 appeared first on Blog dbi services.

DC/OS: Install Marathon-LB with keepalived

Yann Neuhaus - Tue, 2018-11-13 01:32

After the minimal setup of DC/OS in my previous articles, I wanted to extend my DC/OS and add a load balancer.
There are two options for load balancing in DC/OS:
1. Marathon-LB (a layer 7 load balancer, used for external requests, based on HAProxy)
2. Named VIPs (a layer 4 load balancer used for internal TCP traffic)

In this article we will use Marathon-LB. In case you want to read more about the VIP solution, just visit the DC/OS Documentation.

I also want to configure keepalived, which will automatically provide unicast-based failover for high availability.

Preparation

To use a load balancer, I had to extend the DC/OS I built before in Deploy DC/OS using Ansible (Part 1). The new DC/OS has the following structure:

Implementation of marathon-lb

I used the DC/OS CLI, but you can also install the marathon-lb package using the catalog on the web interface.

[root@dcos-master ~]# dcos package install marathon-lb
By Deploying, you agree to the Terms and Conditions https://mesosphere.com/catalog-terms-conditions/#community-services
We recommend at least 2 CPUs and 1GiB of RAM for each Marathon-LB instance.

*NOTE*: For additional ```Enterprise Edition``` DC/OS instructions, see https://docs.mesosphere.com/administration/id-and-access-mgt/service-auth/mlb-auth/
Continue installing? [yes/no] yes
Installing Marathon app for package [marathon-lb] version [1.12.3]
Marathon-lb DC/OS Service has been successfully installed!
See https://github.com/mesosphere/marathon-lb for documentation.

Create a keepalived configuration

To implement keepalived on both public agents, create two JSON files (you can find the GitHub guidance here): one for the master and one for the backup.
Adapt the IPs to your environment. Be sure not to mix up the IPs for the master and the backup. You also have to adapt KEEPALIVED_VIRTUAL_IPADDRESS_1.

[root@dcos-master ~]# cd /etc/
[root@dcos-master etc]# mkdir keepalived
[root@dcos-master etc]# cat keepalived-master.json
{
  "id": "/keepalived-master",
  "acceptedResourceRoles": [
    "slave_public"
  ],
  "constraints": [
    [
      "hostname",
      "LIKE",
      "192.168.22.104"
    ]
  ],
  "container": {
    "type": "DOCKER",
    "volumes": [],
    "docker": {
      "image": "arcts/keepalived",
      "forcePullImage": false,
      "privileged": false,
      "parameters": [
        {
          "key": "cap-add",
          "value": "NET_ADMIN"
        }
      ]
    }
  },
  "cpus": 0.5,
  "disk": 0,
  "env": {
    "KEEPALIVED_AUTOCONF": "true",
    "KEEPALIVED_VIRTUAL_IPADDRESS_1": "192.168.22.150/24",
    "KEEPALIVED_STATE": "MASTER",
    "KEEPALIVED_UNICAST_PEER_0": "192.168.22.106",
    "KEEPALIVED_INTERFACE": "en0ps8",
    "KEEPALIVED_UNICAST_SRC_IP": "192.168.22.104"
  },
  "instances": 1,
  "maxLaunchDelaySeconds": 3600,
  "mem": 100,
  "gpus": 0,
  "networks": [
    {
      "mode": "host"
    }
  ],
  "portDefinitions": [],
  "requirePorts": true,
  "upgradeStrategy": {
    "maximumOverCapacity": 1,
    "minimumHealthCapacity": 1
  },
  "killSelection": "YOUNGEST_FIRST",
  "unreachableStrategy": {
    "inactiveAfterSeconds": 0,
    "expungeAfterSeconds": 0
  },
  "healthChecks": [],
  "fetch": []
}
[root@dcos-master keepalived]# cat keepalived-backup.json
{
  "id": "/keepalived-backup",
  "acceptedResourceRoles": [
    "slave_public"
  ],
  "constraints": [
    [
      "hostname",
      "LIKE",
      "192.168.22.106"
    ]
  ],
  "container": {
    "type": "DOCKER",
    "volumes": [],
    "docker": {
      "image": "arcts/keepalived",
      "forcePullImage": false,
      "privileged": false,
      "parameters": [
        {
          "key": "cap-add",
          "value": "NET_ADMIN"
        }
      ]
    }
  },
  "cpus": 0.5,
  "disk": 0,
  "env": {
    "KEEPALIVED_AUTOCONF": "true",
    "KEEPALIVED_VIRTUAL_IPADDRESS_1": "192.168.22.150/24",
    "KEEPALIVED_STATE": "BACKUP",
    "KEEPALIVED_UNICAST_PEER_0": "192.168.22.104",
    "KEEPALIVED_INTERFACE": "en0ps8",
    "KEEPALIVED_UNICAST_SRC_IP": "192.168.22.106"
  },
  "instances": 1,
  "maxLaunchDelaySeconds": 3600,
  "mem": 124,
  "gpus": 0,
  "networks": [
    {
      "mode": "host"
    }
  ],
  "portDefinitions": [],
  "requirePorts": true,
  "upgradeStrategy": {
    "maximumOverCapacity": 1,
    "minimumHealthCapacity": 1
  },
  "killSelection": "YOUNGEST_FIRST",
  "unreachableStrategy": {
    "inactiveAfterSeconds": 0,
    "expungeAfterSeconds": 0
  },
  "healthChecks": [],
  "fetch": []
}

Add the keepalived apps to DC/OS
[root@dcos-master keepalived]# dcos marathon app add keepalived-master.json
Created deployment b41b4526-86ae-4b70-a254-429c3c212ce3
[root@dcos-master keepalived]# dcos marathon app add keepalived-backup.json
Created deployment f854a621-f402-4e72-9f46-150b11c6a7c8

Does everything work as expected?

You can easily check if everything works as expected by either using the CLI

[root@dcos-master keepalived]# dcos marathon app list
ID                  MEM  CPUS  TASKS  HEALTH  DEPLOYMENT  WAITING  CONTAINER  CMD
/keepalived-backup  124  0.5    1/1    N/A       ---      False      DOCKER   N/A
/keepalived-master  100  0.5    1/1    N/A       ---      False      DOCKER   N/A
/marathon-lb        800   1     2/2    2/2       ---      False      DOCKER   N/A

Or the web interface. In this case I prefer the web interface; I think it offers a great overview. You can check the health of the services, the IP addresses, and so on.

Select the keepalived-master to verify its state

Select the log tab and make sure the master is in “MASTER STATE”

Do the same for the keepalived-backup. The log should show the backup in “BACKUP STATE”

The post DC/OS: Install Marathon-LB with keepalived appeared first on Blog dbi services.

[Blog] Compartment In Oracle Cloud Infrastructure (OCI): Everything You Must Know

Online Apps DBA - Tue, 2018-11-13 01:26

The first thing you pick when you do anything on Oracle Cloud Infrastructure (OCI) is the Compartment that will host your resources (Compute, Storage, Network, Database, etc.). Compartments are a component of Identity & Access Management in OCI and are a must-know for all DBAs, Apps DBAs or Architects working on the Cloud. Visit https://k21academy.com/oci21 and learn all about… […]

The post [Blog] Compartment In Oracle Cloud Infrastructure (OCI): Everything You Must Know appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Strange behaviour with execute immediate.

Tom Kyte - Mon, 2018-11-12 19:26
Hi, I had problems with SQLLive (500 response), therefore the examples are here. I have a strange behaviour with execute immediate, where it behaves differently from within a PL/SQL procedure than it does when running it standalone. Here is th...
Categories: DBA Blogs

Difference between named sequence and system auto-generated

Tom Kyte - Mon, 2018-11-12 19:26
Hello, guys. A new DB (12c) will have lots of tables with a sequence used as the PK. What is the difference between a named sequence and a system auto-generated one in Oracle 12c? What would be the best approach?
Categories: DBA Blogs

Row Chaining

Tom Kyte - Mon, 2018-11-12 19:26
Hi Tom, What is row chaining/migration? What are the consequences of row chaining/migration? How can I find whether it is present in my database or not? And if it is there, what is the solution to get rid of it? Thanks in advance, Howie
Categories: DBA Blogs

Run procedures or functions in parallel

Tom Kyte - Mon, 2018-11-12 19:26
Hello, I have a web app which is using procedures and functions inside packages most of the time. In some cases, procedure and function execution takes a long time to return data (as SYS_REFCURSOR). The problem is that when other users execute other p...
Categories: DBA Blogs

Export Tables from Oracle-12c to Oracle-10g

Tom Kyte - Mon, 2018-11-12 19:26
Why is the following table not being exported from Oracle-12c to Oracle-10g? Table: <code>create table stock(ModID varchar(20) primary key, Name varchar(30), Type varchar(15) ,mQty number, cmpID number, price number, Warranty number);</code> ...
Categories: DBA Blogs

Counting specific days between two dates

Tom Kyte - Mon, 2018-11-12 19:26
Hi Tom, I have a case where I need to count specific days between two dates. For example, I have a table that contains contract startdate, enddate and a specific date like 15. 15 means every 15th day of the month. I need to count specific dates. for exa...
Categories: DBA Blogs

User_dump_dest is inconsistent with the actual trace path

Tom Kyte - Mon, 2018-11-12 19:26
If my question is too simple or meaningless, you can ignore it. Why does my user_dump_dest parameter get a different path than the actual path? I run this example: <code>EODA@muphy>select c.value || '/' || d.instance_name || '_ora_' || a.spi...
Categories: DBA Blogs

copy partition table stats

Tom Kyte - Mon, 2018-11-12 19:26
Hi Team, as per the requirement from the application team, we need to copy table stats from one table to another table. Both the source and destination tables are partitioned tables. Here are the steps we tested out on a local system: 1. created dum...
Categories: DBA Blogs

Join like (1=1)

Tom Kyte - Mon, 2018-11-12 19:26
Hi All, I am sorry if this is pretty basic, but it is intriguing me a bit. I saw a join written like Inner Join table B on (1=1). Why should a join be written like this, and under what scenario? Thanks in advance.
Categories: DBA Blogs

Conceptions of Fudo Myoo in Esoteric Buddhism

Greg Pavlik - Mon, 2018-11-12 17:14
Admittedly, this is an esoteric topic altogether - my own interest in understanding Fudo Myoo in Mahayana Buddhism has largely stemmed from an interest in Japanese art in the Edo woodblock tradition - but I thought this a rather interesting exploration of esoteric Buddhism and, by implication, currents of Japanese culture.

https://tricycle.org/magazine/evil-in-esoteric-japanese-buddhism/

On Education

Greg Pavlik - Mon, 2018-11-12 17:07
'We study to get diplomas and degrees and certifications, but imagine a life devoted to study for no other purpose than to be educated. Being educated is not the same as being informed or trained. Education is an "eduction", a drawing out of one's own genius, nature, and heart. The manifestation of one's essence, the unfolding of one's capacities, the revelation of one's heretofore hidden possibilities - these are the goals of study from the point of view of the person. From another side, study amplifies the speech and song of the world so that it's more palpably present.

Education in soul leads to the enchantment of the world and the attunement of self.'

Thomas Moore, 'Meditations'
