Feed aggregator

Oracle 18c preinstall RPM on RedHat RHEL

Yann Neuhaus - Fri, 2018-08-03 17:03

The Linux prerequisites for Oracle Database are all documented, but using the preinstall RPM makes everything easier. Before 18c, this was easy on Oracle Enterprise Linux (OEL) but not so easy on RedHat (RHEL), where the RPM had many dependencies on OEL and UEK.
Now that 18c is available for download, there is also an 18c preinstall RPM, and the good news is that it can also be used on RHEL without modification.

This came to my attention on Twitter:

On the other hand, you may not have noticed that it no longer requires Oracle Linux specific RPMs. It can now be used on RHEL and all its derivatives.

— Avi Miller (@AviAtOracle) July 29, 2018

And of course this is fully documented:
https://docs.oracle.com/en/database/oracle/oracle-database/18/cwlin/about-the-oracle-preinstallation-rpm.html#GUID-C15A642B-534D-4E4A-BDE8-6DC7772AA9C8

To test it, I quickly created a CentOS instance on the Oracle Cloud:

I’ve downloaded the RPM from the OEL7 repository:

[root@instance-20180803-1152 opc]# curl -o oracle-database-preinstall-18c-1.0-1.el7.x86_64.rpm https://yum.oracle.com/repo/OracleLinux/OL7/latest/x86_64/getPackage/oracle-database-preinstall-18c-1.0-1.el7.x86_64.rpm
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 18244  100 18244    0     0  63849      0 --:--:-- --:--:-- --:--:-- 63790

then ran the installation:

[root@instance-20180803-1152 opc]# yum -y localinstall oracle-database-preinstall-18c-1.0-1.el7.x86_64.rpm

 
It automatically installs all dependencies:
Installed:
oracle-database-preinstall-18c.x86_64 0:1.0-1.el7
 
Dependency Installed:
compat-libcap1.x86_64 0:1.10-7.el7 compat-libstdc++-33.x86_64 0:3.2.3-72.el7 glibc-devel.x86_64 0:2.17-222.el7 glibc-headers.x86_64 0:2.17-222.el7
gssproxy.x86_64 0:0.7.0-17.el7 kernel-headers.x86_64 0:3.10.0-862.9.1.el7 keyutils.x86_64 0:1.5.8-3.el7 ksh.x86_64 0:20120801-137.el7
libICE.x86_64 0:1.0.9-9.el7 libSM.x86_64 0:1.2.2-2.el7 libXext.x86_64 0:1.3.3-3.el7 libXi.x86_64 0:1.7.9-1.el7
libXinerama.x86_64 0:1.1.3-2.1.el7 libXmu.x86_64 0:1.1.2-2.el7 libXrandr.x86_64 0:1.5.1-2.el7 libXrender.x86_64 0:0.9.10-1.el7
libXt.x86_64 0:1.1.5-3.el7 libXtst.x86_64 0:1.2.3-1.el7 libXv.x86_64 0:1.0.11-1.el7 libXxf86dga.x86_64 0:1.1.4-2.1.el7
libXxf86misc.x86_64 0:1.0.3-7.1.el7 libXxf86vm.x86_64 0:1.1.4-1.el7 libaio-devel.x86_64 0:0.3.109-13.el7 libbasicobjects.x86_64 0:0.1.1-29.el7
libcollection.x86_64 0:0.7.0-29.el7 libdmx.x86_64 0:1.1.3-3.el7 libevent.x86_64 0:2.0.21-4.el7 libini_config.x86_64 0:1.3.1-29.el7
libnfsidmap.x86_64 0:0.25-19.el7 libpath_utils.x86_64 0:0.2.1-29.el7 libref_array.x86_64 0:0.1.5-29.el7 libstdc++-devel.x86_64 0:4.8.5-28.el7_5.1
libverto-libevent.x86_64 0:0.2.5-4.el7 nfs-utils.x86_64 1:1.3.0-0.54.el7 psmisc.x86_64 0:22.20-15.el7 xorg-x11-utils.x86_64 0:7.5-22.el7
xorg-x11-xauth.x86_64 1:1.0.9-1.el7

Note that the limits are stored in limits.d which has priority over limits.conf:

[root@instance-20180803-1152 opc]# cat /etc/security/limits.d/oracle-database-preinstall-18c.conf
 
# oracle-database-preinstall-18c setting for nofile soft limit is 1024
oracle soft nofile 1024
 
# oracle-database-preinstall-18c setting for nofile hard limit is 65536
oracle hard nofile 65536
 
# oracle-database-preinstall-18c setting for nproc soft limit is 16384
# refer orabug15971421 for more info.
oracle soft nproc 16384
 
# oracle-database-preinstall-18c setting for nproc hard limit is 16384
oracle hard nproc 16384
 
# oracle-database-preinstall-18c setting for stack soft limit is 10240KB
oracle soft stack 10240
 
# oracle-database-preinstall-18c setting for stack hard limit is 32768KB
oracle hard stack 32768
 
# oracle-database-preinstall-18c setting for memlock hard limit is maximum of 128GB on x86_64 or 3GB on x86 OR 90 % of RAM
oracle hard memlock 134217728
 
# oracle-database-preinstall-18c setting for memlock soft limit is maximum of 128GB on x86_64 or 3GB on x86 OR 90% of RAM
oracle soft memlock 134217728

Note that memlock is set to 128 GB here, but it can be higher on machines with huge RAM (up to 90% of RAM).
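As a quick sanity check, the memlock value in limits.d is expressed in KiB, so a little shell arithmetic confirms that 134217728 KiB is exactly 128 GiB:

```shell
# memlock in /etc/security/limits.d is expressed in KiB;
# 134217728 KiB / 1024 / 1024 should be exactly 128 GiB
memlock_kib=134217728
memlock_gib=$((memlock_kib / 1024 / 1024))
echo "memlock = ${memlock_gib} GiB"   # prints: memlock = 128 GiB
```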

And for information, here is what is set in /etc/sysctl.conf:

fs.file-max = 6815744
kernel.sem = 250 32000 100 128
kernel.shmmni = 4096
kernel.shmall = 1073741824
kernel.shmmax = 4398046511104
kernel.panic_on_oops = 1
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
net.ipv4.conf.all.rp_filter = 2
net.ipv4.conf.default.rp_filter = 2
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500
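For what it's worth, several of these values are internally consistent, which a couple of lines of shell arithmetic can verify: kernel.shmall is expressed in 4 KiB pages and multiplies out to the kernel.shmmax byte value, and in kernel.sem the product SEMMSL x SEMMNI equals SEMMNS:

```shell
# kernel.shmall is in 4 KiB pages; pages * page size should equal kernel.shmmax (bytes)
page_size=4096
shmall=1073741824
shmmax=$((shmall * page_size))
echo "shmall * page size = $shmmax bytes"   # 4398046511104 bytes = 4 TiB

# kernel.sem = SEMMSL SEMMNS SEMOPM SEMMNI; SEMMSL * SEMMNI should equal SEMMNS
semmsl=250
semmni=128
echo "SEMMSL * SEMMNI = $((semmsl * semmni))"   # 32000, matching SEMMNS
```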

Besides that, the preinstall RPM disables NUMA and transparent huge pages (as boot options in GRUB), and it creates the oracle user (user id 54321, belonging to the groups oinstall, dba, oper, backupdba, dgdba, kmdba, racdba).
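For reference, disabling NUMA and transparent huge pages at boot typically means kernel parameters along these lines appended to GRUB_CMDLINE_LINUX in /etc/default/grub (a hypothetical excerpt assuming the usual parameter names; check your own grub configuration after installing the RPM):

```shell
# /etc/default/grub (hypothetical excerpt -- verify on your own system)
GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet numa=off transparent_hugepage=never"
```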

 

The article Oracle 18c preinstall RPM on RedHat RHEL first appeared on Blog dbi services.

Documentum – Silent Install – Things to know, binaries & JMS installation

Yann Neuhaus - Fri, 2018-08-03 13:55

Documentum introduced silent installations for its software some time ago already. The way to use them changed a little bit over time, but it seems they finally found their way. This blog will be the first of a series presenting how to work with silent installations in Documentum, because it is true that they are not really well documented and probably not much used at the moment.

We are using this where possible for our customers, and it is true that it really helps avoid human errors and install components more quickly. Be aware that it isn't perfect! There are some parameters with typos and some that are really not self-explanatory, so you will need some time to understand everything, but in the end it is still helpful.

Using the silent installation is a first step, but you will still need a lot of manual interventions to execute it as well as to actually make your environment work. It only replaces the GUI installers, so everything you were doing around them is still needed (preparation of files/folders/environment, custom start/stop scripts, service setup, Java Method Server (JMS) configuration, security baselines, SSL setup, and so on). That's why we also developed scripts or playbooks internally (Ansible, for example) to perform everything around the Documentum silent installations. In this blog, and more generally in this series, I will only talk about the silent installations provided by Documentum.

Let's start with the basics:

  1. Things you need to know
  2. Documentum Content Server installation (binaries & JMS)

 

1. Things you need to know
  • Each and every component installation needs its own properties file, which the installer uses to know what to install and how to install it; that's basically all you need to provide.
  • As I mentioned above, there are some typos in a few parameters in the properties files, like “CONGINUE” instead of “CONTINUE”. These aren't errors in my blogs; the parameters really are like that. All the properties files I'm showing here have been tested and validated in a lot of environments, including PROD ones in High Availability.
  • To know more about the silent installation, you can check the installation documentation. There isn’t much to read about it but still some potentially interesting information.
  • The Documentum documentation does NOT contain any description of the parameters you can/should use; that's why I will try, in each blog, to describe them as much as possible.
  • You can potentially do several things at once using a single silent properties file; the only restriction is that they all need to use the same installer. Therefore, you could install a docbroker/connection broker, a docbase/repository and configure/enable a licence using a single properties file, but you wouldn't be able to do the silent installation of the binaries as well, because that needs another installer. That's definitely not what I'm doing, because I find it messy; I really prefer to separate things, so I know I'm using only the parameters that I need for a specific component and nothing else.
  • There are examples provided when you install Documentum. Look at the folder “$DM_HOME/install/silent/templates” and you will see some properties files. In these files, you will usually find most of the parameters that you can use but, from what I remember, a few are missing. Be aware that some files are for Windows and some are for Linux; they aren't always the same, because some parameters are specific to a certain OS:
    • linux_ files are for Linux obviously
    • win_ files are for Windows obviously
    • cfs_ files are for a CFS/Remote Content Server installation (to provide High Availability to your docbases/repositories)
  • If you look at the folder “$DM_HOME/install/silent/silenttool”, you will see that there is a utility to generate silent files based on your current installation. You need to provide a silent installation file for a Content Server, and it will generate for you a CFS/Remote CS silent installation file with most of the parameters that you need. Do not rely 100% on this file: there might still be some parameters missing, but the ones present should be correct. I will write a blog on the CFS/Remote CS as well, to provide an example.
  • You can generate silent properties file by running the Documentum installers with the following command: “<installer_name>.<sh/bin> -r <path>/<file_name>.properties”. This will write the parameters you selected/enabled/configured into the <file_name>.properties file so you can re-use it later.
  • To install an additional JMS, you can use the jmsConfig.sh script or jmsStandaloneSetup.bin for an IJMS (Independent JMS – Documentum 16.4 only). It won’t be in the blogs because I’m only showing the default one created with the binaries.
  • The following components/features can be installed using the silent mode (it is possible that I’m missing some, these are the ones I know):
    • CS binaries + JMS
    • JMS/IJMS
    • Docbroker/connection broker
    • Licences
    • Docbase/repository (CS + CFS/RCS + DMS + RKM)
    • D2
    • Thumbnail

 

2. Documentum Content Server installation (binaries & JMS)

Before starting, you need to have the Documentum environment variables defined ($DOCUMENTUM, $DM_HOME, $DOCUMENTUM_SHARED); that doesn't change. Once that is done, you need to extract the installer package (below I used the package for a CS 7.3 on Linux with an Oracle DB):

[dmadmin@content_server_01 ~]$ cd /tmp/dctm_install/
[dmadmin@content_server_01 dctm_install]$ tar -xvf Content_Server_7.3_linux64_oracle.tar
[dmadmin@content_server_01 dctm_install]$
[dmadmin@content_server_01 dctm_install]$ chmod 750 serverSetup.bin
[dmadmin@content_server_01 dctm_install]$ rm Content_Server_7.3_linux64_oracle.tar

 

Then prepare the properties file:

[dmadmin@content_server_01 dctm_install]$ vi CS_Installation.properties
[dmadmin@content_server_01 dctm_install]$ cat CS_Installation.properties
### Silent installation response file for CS binary
INSTALLER_UI=silent
KEEP_TEMP_FILE=true

### Installation parameters
APPSERVER.SERVER_HTTP_PORT=9080
APPSERVER.SECURE.PASSWORD=adm1nP4ssw0rdJMS

### Common parameters
COMMON.DO_NOT_RUN_DM_ROOT_TASK=true

[dmadmin@content_server_01 dctm_install]$

 

A short description of these properties:

  • INSTALLER_UI: The mode to use for the installation, here it is obviously silent
  • KEEP_TEMP_FILE: Whether or not you want to keep the temporary files created by the installer. These files are generated under the /tmp folder. I usually keep them because I want to be able to check them if something went wrong
  • APPSERVER.SERVER_HTTP_PORT: The port to be used by the JMS that will be installed
  • APPSERVER.SECURE.PASSWORD: The password of the “admin” account of the JMS. Yes, you need to put all passwords in clear text in the silent installation properties files, so add them just before starting the installation and remove them right after
  • COMMON.DO_NOT_RUN_DM_ROOT_TASK: Whether or not you want to run the dm_root_task in the silent installation. I usually set it to true, so it is NOT executed, because the Installation Owner I'm using does not have root access, for security reasons
  • On Windows, you would also need to provide the Installation Owner's password as well as the path where you want to install Documentum ($DOCUMENTUM). On Linux, the first isn't needed and the second needs to be in the environment before starting.
  • You could also potentially add more properties to this file: SERVER.LOCKBOX_FILE_NAMEx and SERVER.LOCKBOX_PASSPHRASE.PASSWORDx (where x is a number starting at 1 and incrementing in case you have several lockboxes). These parameters would be used for existing lockbox files that you want to load. Honestly, these parameters are of little use: you will need to provide the lockbox information during the docbase/repository creation anyway, and you will need to specify whether you want a new lockbox, an existing lockbox or no lockbox at all, so specifying it here is rather pointless…

 

Once the properties file is ready, you can install the Documentum binaries and the JMS in silent using the following command:

[dmadmin@content_server_01 dctm_install]$ ./serverSetup.bin -f CS_Installation.properties

 

This concludes the first blog of this series about Documentum silent installations. Stay tuned for more soon.

 

The article Documentum – Silent Install – Things to know, binaries & JMS installation first appeared on Blog dbi services.

Authenticate proxy user from windows credentials

Tom Kyte - Fri, 2018-08-03 07:46
I am trying to work out how to connect using a proxy but passing a windows credential in - like this: SQL> CONN proxy_user[domain\windows_user]/proxy_pass So far it doesn't seem possible. Do you know how this can happen? Thanks
Categories: DBA Blogs

Upgrade Oracle Grid Infrastructure from 12.1.0.2.0 to 12.2.0.1.0

Yann Neuhaus - Fri, 2018-08-03 03:26

The following blog will provide the necessary steps to upgrade the Grid Infrastructure from 12.1 to 12.2, for a Standalone Server.
One of the new features of GI 12.2 is the use of AFD (Oracle ASM Filter Driver).

Assumptions :

 You have installed Oracle GI 12.1 as grid user
 You have installed Oracle Database 12.1 as oracle user
 You have configured the groups asmadmin,asmoper,asmdba
 You installed oracle-rdbms-server-12cr2-preinstall rpm
 You patched your Oracle GI to PSU July 2017 (combo patch 25901062 to patch Oracle stack 12.1 , GI & RDBMS)
 [root]mkdir /u01/app/grid/product/12.2.0/grid/
 [root]chown -R grid:oinstall /u01/app/grid/product/12.2.0/grid/
 --stop all dbs that are using ASM
 [oracle]srvctl stop database -d ORCL

Installation : Tasks

[grid]cd /u01/app/grid/product/12.2.0/grid/
[grid]unzip /stage/linuxx64_12201_grid_home.zip
[grid]./gridSetup.sh
	Choose Upgrade Oracle Grid Infrastructure option.
	Confirm that all Oracle DBs using ASM are stopped.
	Check :
        Oracle base : /u01/app/grid/  
        Software Location : /u01/app/grid/product/12.2.0/grid/
		
	Uncheck "Automatically run configuration scripts". This is not what Oracle recommends, but if you let the installer
run the scripts automatically, it is quite possible that your upgrade process dies without any output.
	So, at the right moment, you will be asked to run rootUpgrade.sh manually.
	Click Next and validate that all the prerequisites are met.
	Monitor the progress and run the script rootUpgrade.sh when prompted.
	Once the upgrade has completed successfully:
[grid@dbisrv04 ~]$ . oraenv
ORACLE_SID = [grid] ? +ASM
The Oracle base has been set to /u01/app/grid

[grid@dbisrv04 ~]$ crsctl query has softwareversion
Oracle High Availability Services version on the local node is [12.2.0.1.0]

Migrating ASM disks from ASMlib to AFD : Tasks

Oracle ASM Filter Driver (Oracle ASMFD) simplifies the configuration and management of disk devices by eliminating the need to rebind disk devices used with Oracle ASM each time the system is restarted.
Oracle ASM Filter Driver (Oracle ASMFD) is a kernel module that resides in the I/O path of the Oracle ASM disks. Oracle ASM uses the filter driver to validate write I/O requests to Oracle ASM disks.

Step1:

[grid@dbisrv04 ~]$ asmcmd dsget
parameter:
profile:

[grid@dbisrv04 ~]$ asmcmd dsset '/dev/xvda*','ORCL:*','AFD:*'

[grid@dbisrv04 ~]$ asmcmd dsget
parameter:/dev/xvda*, ORCL:*, AFD:*
profile:/dev/xvda*,ORCL:*,AFD:*

Step2:

[root]export ORACLE_HOME=/u01/app/grid/product/12.2.0/grid/
[root]$GRID_HOME/bin/crsctl stop has -f

Step3:

root@dbisrv04 ~]# $ORACLE_HOME/bin/asmcmd afd_configure

ASMCMD-9524: AFD configuration failed 'ERROR: ASMLib deconfiguration failed'
Cause: acfsload is running. To configure AFD, oracleasm and acfsload must be stopped.
Solution: stop acfsload and rerun asmcmd afd_configure

[root@dbisrv04 ~]# oracleasm exit
[root@dbisrv04 ~]# $ORACLE_HOME/bin/acfsload stop

root@dbisrv04 ~]# $ORACLE_HOME/bin/asmcmd afd_configure
AFD-627: AFD distribution files found.
AFD-634: Removing previous AFD installation.
AFD-635: Previous AFD components successfully removed.
AFD-636: Installing requested AFD software.
AFD-637: Loading installed AFD drivers.
AFD-9321: Creating udev for AFD.
AFD-9323: Creating module dependencies - this may take some time.
AFD-9154: Loading 'oracleafd.ko' driver.
AFD-649: Verifying AFD devices.
AFD-9156: Detecting control device '/dev/oracleafd/admin'.
AFD-638: AFD installation correctness verified.
Modifying resource dependencies - this may take some time.

Step4:

[grid@dbisrv04 ~]$ $ORACLE_HOME/bin/asmcmd afd_state
ASMCMD-9526: The AFD state is 'LOADED' and filtering is 'ENABLED' on host 'dbisrv04.localdomain'

Step5:

[root]$ORACLE_HOME/bin/crsctl stop has

Step6:

[grid@dbisrv04 ~]$ $ORACLE_HOME/bin/asmcmd afd_refresh
[grid@dbisrv04 ~]$ $ORACLE_HOME/bin/asmcmd afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
DISK01                      ENABLED   /dev/sdf1
DISK02                      ENABLED   /dev/sdg1
DISK03                      ENABLED   /dev/sdh1
DISK04                      ENABLED   /dev/sdi1
DISK05                      ENABLED   /dev/sdj1
DISK06                      ENABLED   /dev/sdk1
DISK07                      ENABLED   /dev/sdl1
DISK08                      ENABLED   /dev/sdm1
DISK09                      ENABLED   /dev/sdn1

Step7:

[grid@dbisrv04 ~]$ $ORACLE_HOME/bin/asmcmd afd_dsset '/dev/sd*'

Step8:

[root]$ORACLE_HOME/bin/crsctl stop has -f
[root]$GRID_HOME/bin/asmcmd afd_scan
[root]$GRID_HOME/bin/asmcmd afd_refresh

Step9:

[root@dbisrv04 ~]# /u01/app/grid/product/12.2.0/grid/bin/asmcmd afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
DISK01                      ENABLED   /dev/sdf1
DISK02                      ENABLED   /dev/sdg1
DISK03                      ENABLED   /dev/sdh1
DISK04                      ENABLED   /dev/sdi1
DISK05                      ENABLED   /dev/sdj1
DISK06                      ENABLED   /dev/sdk1
DISK07                      ENABLED   /dev/sdl1
DISK08                      ENABLED   /dev/sdm1
DISK09                      ENABLED   /dev/sdn1

Step10:

SQL> select name, label, path from v$asm_disk;

NAME       LABEL                PATH
---------- -------------------- --------------------
DISK04     DISK04               AFD:DISK04
DISK03     DISK03               AFD:DISK03
DISK02     DISK02               AFD:DISK02
DISK01     DISK01               AFD:DISK01
DISK07     DISK07               AFD:DISK07
DISK05     DISK05               AFD:DISK05
DISK06     DISK06               AFD:DISK06
DISK09     DISK09               AFD:DISK09
DISK08     DISK08               AFD:DISK08

Step11: Confirm your AFD is loaded

[root@dbisrv04 ~]# /u01/app/grid/product/12.2.0/grid/bin/crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       dbisrv04                 STABLE
ora.DATA2.dg
               ONLINE  ONLINE       dbisrv04                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       dbisrv04                 STABLE
ora.RECO.dg
               ONLINE  ONLINE       dbisrv04                 STABLE
ora.asm
               ONLINE  ONLINE       dbisrv04                 Started,STABLE
ora.ons
               OFFLINE OFFLINE      dbisrv04                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        ONLINE  ONLINE       dbisrv04                 STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.driver.afd
      1        ONLINE  ONLINE       dbisrv04                 STABLE
ora.evmd
      1        ONLINE  ONLINE       dbisrv04                 STABLE
ora.orcl.db
      1        ONLINE  ONLINE       dbisrv04                 Open,HOME=/u01/app/o
                                                             racle/product/12.2.0
                                                             /dbhome_1,STABLE

--------------------------------------------------------------------------------

Step 11b: Introduce new disks with AFD

[root]. oraenv
[root]+ASM
[root@dbisrv04 ~]# asmcmd afd_label DSIK10 /dev/sdo1 --init
ASMCMD-9521: AFD is already configured
[root@dbisrv04 ~]# asmcmd afd_label DSIK10 /dev/sdo1
[root@dbisrv04 ~]# asmcmd afd_lslbl

Step 12: Erase Oracle ASMLib

[root] yum erase oracleasm-support.x86_64
[root] yum erase oracleasmlib.x86_64
 

The article Upgrade Oracle Grid Infrastructure from 12.1.0.2.0 to 12.2.0.1.0 first appeared on Blog dbi services.

Fishbowl Solutions Leverages Oracle WebCenter to Create Enterprise Employee Portal Solution for National Insurance Company

An insurance company that specializes in business insurance and risk management services for select industries was struggling to provide their 2,300 employees with an employee portal system that kept users engaged and informed. They desired to provide their employees with a much more modern employee portal that leveraged the latest web technologies while making it easier for business users to contribute content. With the ability for business stakeholders to own and manage content on the site, the company believed the new portal would be updated more frequently, which would make it stickier and keep users coming back.

Business Objective

The company had old intranet infrastructure that included 28 Oracle Site Studio Sites. The process for the company’s various business units to contribute content to the site basically involved emailing Word documents to the company’s IT department. IT would then get them checked into their old WebCenter Content system that supported the SiteStudio system. Once the documents were converted to a web-viewable format, it would appear on the site. Since IT did not have a dedicated administrator for the portal, change requests typically took days and sometimes even weeks. With the company’s rapid growth, disseminating information to employees quickly and effectively became a priority. The employee portal was seen as the single place where employees could access company, department and role-specific information – on their desktop or mobile device. The company needed a new portal solution backed by strong content management capabilities to make this possible. Furthermore, Oracle Site Studio was being sunsetted, so the company needed to move off an old and unsupported system and onto a modern portal platform that had a development roadmap to support their business needs now and into the future. The company chose Oracle WebCenter Content and Portal 12c as this new system.

The company’s goals for the new employee portal were:

  • Expand what the business can do without IT involvement
  • Better engage and inform employees
  • Less manual, more dynamic portal content
  • Improve overall portal usability
  • Smart navigation – filter menus by department and role
  • Mobile access

Because of several differentiators and experience, the insurance company chose Fishbowl Solutions to help them meet their goals. The company really liked that Fishbowl offered a packaged solution that they felt would enable them to go to market faster with their new portal. Effectively, the company was looking for a portal framework that included the majority of what they needed – navigation, page templates, taskflows, etc. – that could be achieved with less coding and more configuration. This solution is called Portal Solution Accelerator.

Oracle WebCenter Paired with Fishbowl’s Portal Solution Accelerator

After working together to evaluate the problems, goals, strategy, and timeline, Fishbowl created a plan to help them create their desired Portal. Fishbowl offered software and services for rapid deployment and portal set up by user experience and content integration. Fishbowl upgraded the company’s portal from SiteStudio to Oracle WebCenter Portal and Content 12c. Fishbowl’s Portal Solution Accelerator includes portal and content bundles consisting of a collection of code, pages, assets, content and templates. PSA also offers content integration, single-page application (SPA) task flows, and built-in personalization. These foundational benefits for the portal resulted in a reduction in time-to-market, speed and performance, and developer-friendly design.

Results

After implementing the new portal and various changes, content publishing time was reduced by 90 percent, as changes and updates now occur in hours instead of days or weeks, which encourages users to publish content. The new framework allows new portals to be created with little work from IT. Additionally, the in-place editor makes it easy for business users to edit their content and see changes in real time. Role-based contribution and system-managed workflows streamline content governance. The new mega-menu provided by the SPA offers faster, more intuitive navigation to intranet content. This navigation is overlaid with Google Search integration, further ensuring that users can find the content they need. Most of the components used in the intranet are reusable and easy to modify for unique cases, so the company can stay up to date with minimal effort. Finally, the portal has phone, tablet, and desktop support, making the intranet more accessible and ensuring repeat visits.

Overall, the national insurance company has seen an immense change in content publishing time reduction, ease of editing content, and managing and governing the portal since working with Fishbowl. The solutions that Fishbowl created and implemented helped decrease weekly internal support calls from twenty to one.

The post Fishbowl Solutions Leverages Oracle WebCenter to Create Enterprise Employee Portal Solution for National Insurance Company appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

The plan ignore my index

Tom Kyte - Thu, 2018-08-02 13:26
Good Afternoon, I have a table for the generation of a report by year and week but when I execute the query a TAF is marked. I tried to force the indexes but the execution plan ignores it. What can I do to take the index?, Is it necessary to cha...
Categories: DBA Blogs

How to grant v_$Session to a normal user, If we do not have access to sys user

Tom Kyte - Thu, 2018-08-02 13:26
How to grant v_$Session to a normal user, in a normal user we are using in a stored procedure. And we dont have access to sys user. By using select any dictionary privilege we can access but they do not want grant select any dictionary privilege to a...
Categories: DBA Blogs

PL/SQL query with NULL variables

Tom Kyte - Thu, 2018-08-02 13:26
What is the best way to handle a query with multiple variables, and some of the variables can be null, like: <code>FUNCTION GET_RECIPE(P_RECIPE_LIST IN VARCHAR2, P_OWNER_LIST IN VARCHAR2, ...
Categories: DBA Blogs

Global temporary table error

Tom Kyte - Thu, 2018-08-02 13:26
Hi AskTom, Can you please help me with this issue. Our application uses lot of global temporary table (GTT) has on commit preserve rows option. <code> CREATE GLOBAL TEMPORARY TABLE "ODR"."GTT_POINT" ( "POINT_ID" NUMBER(10,0) NOT NULL ENAB...
Categories: DBA Blogs

How to find all Mondays between two dates?

Tom Kyte - Thu, 2018-08-02 13:26
I have to find all mondays between two date range which can be parameterized or coming from two different columns of a table. Also need to generate a sql to get next 20 mondays from sysdate. can you please help me to get sql query for these 2 r...
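(The question asks for SQL; in Oracle the usual pattern is a calendar row generator, e.g. CONNECT BY LEVEL over the day count, filtered to Mondays. As a language-neutral sanity check of that day-of-week logic, here is a small sketch using GNU date over a sample range; the dates are illustrative only.)

```shell
# List every Monday between two dates (inclusive), using GNU date.
# %u prints the ISO day of week, where 1 = Monday.
list_mondays() {
  d=$1
  while [ "$(date -d "$d" +%s)" -le "$(date -d "$2" +%s)" ]; do
    [ "$(date -d "$d" +%u)" -eq 1 ] && echo "$d"
    d=$(date -d "$d + 1 day" +%F)
  done
}

list_mondays 2018-08-01 2018-09-30   # 2018-08-06 ... 2018-09-24 (8 Mondays)
```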
Categories: DBA Blogs

Extended Histograms – 2

Jonathan Lewis - Thu, 2018-08-02 08:13

Following on from the previous posting which raised the idea of faking a frequency histogram for a column group (extended stats), this is just a brief demonstration of how you can do this. It’s really only a minor variation of something I’ve published before, but it shows how you can use a query to generate a set of values for the histogram and it pulls in a detail about how Oracle generates and stores column group values.

We’ll start with the same table as we had before – two columns which hold only the combinations (‘Y’, ‘N’) or (‘N’, ‘Y’) in a very skewed way, with a requirement to ensure that the optimizer provides an estimate of 1 if a user queries for (‘N’,’N’) … and I’m going to go the extra mile and create a histogram that does the same when the query is for the final possible combination of (‘Y’,’Y’).

Here’s the starting code that generates the data, and creates histograms on all the columns (I’ve run this against 12.1.0.2 and 12.2.0.1 so far):


rem
rem     Script:         histogram_hack_2a.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jul 2018
rem
rem     Last tested 
rem             12.2.0.1
rem             12.1.0.2
rem             11.2.0.4
rem

create table t1
as
select 'Y' c2, 'N' c3 from all_objects where rownum <= 71482 -- > comment to deal with wordpress format issue.
union all
select 'N' c2, 'Y' c3 from all_objects where rownum <= 1994 -- > comment to deal with wordpress format issue.
;

variable v1 varchar2(128)

begin
        :v1 := dbms_stats.create_extended_stats(null,'t1','(c2,c3)');
        dbms_output.put_line(:v1);
end;
/

execute dbms_stats.gather_table_stats(null, 't1', method_opt=>'for all columns size 10');

In a variation from the previous version of the code I’ve used the “create_extended_stats()” function so that I can return the resulting virtual column name (also known as an “extension” name) into a variable that I can use later in an anonymous PL/SQL block.

Let’s now compare the values stored in the histogram for that column with the values generated by a function call that I first referenced a couple of years ago:


select
        endpoint_value
from 
        user_tab_histograms
where
        table_name = 'T1'
and     column_name = :v1
;

select 
        distinct c2, c3, 
        mod(sys_op_combined_hash(c2,c3),9999999999) endpoint_value
from t1
;

ENDPOINT_VALUE
--------------
    4794513072
    6030031083

2 rows selected.


C C ENDPOINT_VALUE
- - --------------
N Y     4794513072
Y N     6030031083

2 rows selected.

So we have a method of generating the values that Oracle should store in the histogram; now we need to generate 4 values and supply them to a call to dbms_stats.set_column_stats() in the right order with the frequencies we want to see:


declare
        l_distcnt number;
        l_density number;
        l_nullcnt number;
        l_avgclen number;

        l_srec  dbms_stats.statrec;
        n_array dbms_stats.numarray;

begin
        dbms_stats.get_column_stats (
                ownname =>null,
                tabname =>'t1',
                colname =>:v1,
                distcnt =>l_distcnt,
                density =>l_density,
                nullcnt =>l_nullcnt,
                avgclen =>l_avgclen,
                srec    =>l_srec
        );

        l_srec.novals := dbms_stats.numarray();
        l_srec.bkvals := dbms_stats.numarray();

        for r in (
                select
                        mod(sys_op_combined_hash(c2,c3),9999999999) hash_value, bucket_size
                from    (
                        select 'Y' c2, 'Y' c3, 1 bucket_size from dual
                        union all
                        select 'N' c2, 'N' c3, 1 from dual
                        union all
                        select 'Y' c2, 'N' c3, 71482 from dual
                        union all
                        select 'N' c2, 'Y' c3, 1994 from dual
                        )
                order by hash_value
        ) loop
                l_srec.novals.extend;
                l_srec.novals(l_srec.novals.count) := r.hash_value;

                l_srec.bkvals.extend;
                l_srec.bkvals(l_srec.bkvals.count) := r.bucket_size;
        end loop;

        n_array := l_srec.novals;

        l_distcnt  := 4;
        l_srec.epc := 4;

--
--      For 11g rpcnts must not be mentioned
--      For 12c it must be set to null or you
--      will (probably) raise error:
--              ORA-06533: Subscript beyond count
--

        l_srec.rpcnts := null;

        dbms_stats.prepare_column_values(l_srec, n_array);

        dbms_stats.set_column_stats(
                ownname =>null,
                tabname =>'t1',
                colname =>:v1,
                distcnt =>l_distcnt,
                density =>l_density,
                nullcnt =>l_nullcnt,
                avgclen =>l_avgclen,
                srec    =>l_srec
        );

end;
/

The outline of the code is simply: get_column_stats, set up a couple of arrays and simple variables, prepare_column_values, set_column_stats. The special detail that I’ve included here is that I’ve used a “union all” query to generate an ordered list of hash values (with the desired frequencies), then grown the arrays one element at a time to copy them into place. (That’s not the only option at this point, and it’s probably not the most efficient option – but it’s good enough.) In the past I’ve used this type of approach with an analytic query against the table data to produce the equivalent of a 12c Top-Frequency histogram in much older versions of Oracle.

A couple of important points – I’ve set the “end point count” (l_srec.epc) to match the size of the arrays, and I’ve also changed the number of distinct values to match. To tell 12c that this is a frequency histogram (and not a hybrid) I’ve had to null out the “repeat counts” array (l_srec.rpcnts). If you run this on 11g the reference to rpcnts is illegal, so it has to be commented out.

After running this procedure, here’s what I get in user_tab_histograms for the column:


select
        endpoint_value                          column_value,
        endpoint_number                         endpoint_number,
        endpoint_number - nvl(prev_endpoint,0)  frequency
from    (
        select
                endpoint_number,
                lag(endpoint_number,1) over(
                        order by endpoint_number
                )                               prev_endpoint,
                endpoint_value
        from
                user_tab_histograms
        where
                table_name  = 'T1'
        and     column_name = :v1
        )
order by endpoint_number
;

COLUMN_VALUE ENDPOINT_NUMBER  FREQUENCY
------------ --------------- ----------
   167789251               1          1
  4794513072            1995       1994
  6030031083           73477      71482
  8288761534           73478          1

4 rows selected.


It’s left as an exercise to the reader to check that the estimated cardinality for the predicate “c2 = ‘N’ and c3 = ‘N'” is 1 with this histogram in place.
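The FREQUENCY column above is just the difference between consecutive cumulative endpoint numbers, which is what the lag() query computes. As a quick sanity check outside the database, here is a minimal shell sketch with the four endpoint numbers from the listing hard-coded:

```shell
# Recompute per-bucket frequencies from the cumulative endpoint numbers
# (1, 1995, 73477 and 73478 are copied from the histogram listing above).
printf '%s\n' 1 1995 73477 73478 |
awk '{ print $1 - prev; prev = $1 }'
# prints: 1 1994 71482 1 (one value per line)
```

The frequencies sum to 73,478: the 73,476 real rows plus the two single-row buckets we injected for (‘N’,’N’) and (‘Y’,’Y’).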

Hitachi Content Intelligence deployment

Yann Neuhaus - Thu, 2018-08-02 07:19

Hitachi Content Intelligence (HCI) is a search and data processing solution. It allows the extraction, classification, enrichment, and categorization of data, regardless of where the data lives or what format it’s in.

Content Intelligence provides tools at large scale across multiple repositories. These tools are useful for identifying, blending, normalizing, querying, and indexing data for search, discovery, and reporting purposes.

Architecture

HCI has components called data connections that it uses to access the places where your data is stored (these places are called data sources). A data connection contains all the authentication and access information that HCI needs to read the files in the data source.

HCI is extensible with published application programming interfaces (APIs) that support customized data connections, transformation stages, or building new applications.

HCI-1

HCI is composed of many services running on Docker.

[centos@hci ~]$ sudo docker ps -a
[sudo] password for centos:
CONTAINER ID        IMAGE                          COMMAND                  CREATED             STATUS              PORTS               NAMES
0547ec8761cd        com.hds.ensemble:25.0.0.1529   "/home/centos/hci/..."   23 minutes ago      Up 23 minutes                           admin-app
1f22db4aec4b        com.hds.ensemble:25.0.0.1529   "/home/centos/hci/..."   23 minutes ago      Up 23 minutes                           sentinel-service
fa54650ec03a        com.hds.ensemble:25.0.0.1529   "/home/centos/hci/..."   23 minutes ago      Up 23 minutes                           haproxy-service
6b82daf15093        com.hds.ensemble:25.0.0.1529   "/home/centos/hci/..."   23 minutes ago      Up 23 minutes                           marathon-service
a12431829a56        com.hds.ensemble:25.0.0.1529   "/home/centos/hci/..."   23 minutes ago      Up 23 minutes                           mesos_master_service
812eda23e759        com.hds.ensemble:25.0.0.1529   "/home/centos/hci/..."   23 minutes ago      Up 23 minutes                           mesos_agent_service
f444ab8e66ee        com.hds.ensemble:25.0.0.1529   "/home/centos/hci/..."   23 minutes ago      Up 23 minutes                           zookeeper-service
c7422cdf3213        com.hds.ensemble:25.0.0.1529   "/home/centos/hci/..."   23 minutes ago      Up 23 minutes                           watchdog-service

Below is a representation of all the services of the HCI platform.

HCI-2

System Requirements

HCI has been qualified using these Linux distributions:

  • Fedora 24
  • CentOS 7.2
  • Ubuntu 16.04 LTS
Docker requirements

HCI requires Docker to be installed on each server that runs it; Docker version > 1.3.0 must be present on all instances.
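As a quick pre-flight check, you can compare the installed Docker version against a minimum with sort -V. This is a generic sketch, not part of the HCI tooling: version_ge is a helper defined here, and the 1.3.0 minimum simply mirrors the requirement stated above; check the HCI documentation for the exact minimum for your release.

```shell
# Succeeds (exit 0) when version $1 >= version $2, comparing dotted
# version strings with GNU sort's version sort (-V).
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n 1)" = "$2" ]
}

# Extract "X.Y.Z" from "Docker version X.Y.Z, build ...".
installed="$(docker --version 2>/dev/null | awk '{print $3}' | tr -d ',')"

if version_ge "${installed:-0}" "1.3.0"; then
  echo "Docker ${installed} meets the minimum version"
else
  echo "Docker is missing or older than 1.3.0" >&2
fi
```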

Network requirements

Each HCI instance must have a static IP address, and multiple ports must be opened for HCI components such as Zookeeper, Mesos, Cassandra, Kafka, etc.

For the list of ports, refer to the official HCI documentation. For our testing environment, we will stop the firewall service.

System configuration & Installation

HCI can run on physical or virtual servers, or be hosted on public or private clouds. It is instantiated as a set of containers and provided to users as a self-service facility with support for detailed queries and ad hoc natural-language searches. HCI can run as a single instance or in cluster mode. For this blog, we will use a single instance.

Docker version:
[centos@hci ~]$ docker --version
Docker version 1.13.1, build 87f2fab/1.13.

If Docker is not installed, please follow the installation methods from the Docker official website.

Disable SELinux:
  • Backup current SELinux configuration:
[centos@hci ~]$ sudo cp /etc/selinux/config /etc/selinux/config.bak
  • Disable SELinux:
[centos@hci ~]$ sudo sed -i s/SELINUX=enforcing/SELINUX=disabled/g /etc/selinux/config
 Create user and group:
[centos@hci ~]$ sudo groupadd hci -g 10001

[centos@hci ~]$ sudo useradd hci -u 10001 -g 10001
Disable firewall service:
[centos@hci ~]$ sudo service firewalld stop

Redirecting to /bin/systemctl stop firewalld.service
 Run Docker service:
[centos@hci ~]$ sudo systemctl status docker

* docker.service - Docker Application Container Engine

Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)

Active: active (running) since Thu 2018-08-02 10:08:38 CEST; 1s ago
Configure the Docker service to start automatically at boot:
[centos@hci ~]$ sudo systemctl enable docker

Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service
Change the vm.max_map_count value:

Add ‘vm.max_map_count = 262144’ to /etc/sysctl.conf

[centos@hci ~]$ sudo vi /etc/sysctl.conf
[centos@hci ~]$ sudo sysctl -w vm.max_map_count=262144
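The two steps above (persisting the value in /etc/sysctl.conf and applying it to the running kernel) can also be done non-interactively; a small sketch, assuming sudo rights:

```shell
# Persist the setting so it survives reboots...
echo 'vm.max_map_count = 262144' | sudo tee -a /etc/sysctl.conf
# ...apply it to the running kernel immediately...
sudo sysctl -w vm.max_map_count=262144
# ...and verify the live value.
sysctl -n vm.max_map_count
```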
Download HCI

Create first your Hitachi Vantara account, https://sso.hitachivantara.com/en_us/user-registration.html.

Then, from the Hitachi Vantara Community website https://community.hitachivantara.com/hci, click “Downloads”. You will get access to a 90-day trial license with the full feature set.

HCI Installation

Create a directory called hci in the location of your choice. We recommend using the largest disk partition:

[centos@hci ~]$ mkdir hci

Move the installation package to your hci directory:

[centos@hci ~]$ mv HCI-1.3.0.93.tgz hci/

Extract the installation package:

[centos@hci hci]$ sudo tar -xzf HCI-1.3.0.93.tgz

Run the installation script in the version-specific directory:

[centos@hci hci]$ sudo 1.3.0.93/bin/install

Run the hci_setup script:

[centos@hci50 bin]$ sudo ./setup -i <ip-address-instance>

Run the hci_run script, using a method that keeps it running and restarts it automatically if the server reboots. We recommend running it as a service using systemd:

In the installation package, a service file is provided and you can edit this file according to your configuration:

  1. Edit the HCI.service file:
[centos@hci bin]$ vi HCI.service
  2. Ensure the ExecStart parameter is properly set, with the right path:
ExecStart=/home/centos/hci/bin/run

If not, change it to your hci installation path.

  3. Copy the HCI.service file to the appropriate location for your OS:
[centos@hci bin]$ sudo cp /hci/bin/HCI.service /etc/systemd/system
  4. Enable and start the HCI service:
[centos@hci bin]$ sudo systemctl enable HCI.service

Created symlink from /etc/systemd/system/multi-user.target.wants/HCI.service to /etc/systemd/system/HCI.service.

[centos@hci bin]$ sudo systemctl start HCI.service

Check if the service has properly started:

[centos@dhcp-10-32-0-50 bin]$ sudo systemctl status HCI.service

* HCI.service - HCI

   Loaded: loaded (/etc/systemd/system/HCI.service; enabled; vendor preset: disabled)

   Active: active (running) since Thu 2018-08-02 11:15:09 CEST; 45s ago

 Main PID: 5849 (run)

    Tasks: 6

   Memory: 5.3M

   CGroup: /system.slice/HCI.service

           |-5849 /bin/bash /home/centos/hci/bin/run

           |-8578 /bin/bash /home/centos/hci/bin/run

           `-8580 /usr/bin/docker-current start -a watchdog-service

HCI Deployment

Using your favorite web browser, connect to the HCI administrative app:

https://<HCI-instance-ip-address>:8000

On the Welcome page, set a password for the admin user:

HCI-3

Choose what you would like to deploy:

Screen Shot 2018-08-02 at 11.26.40

Click the Hitachi Content Search button, then click Continue.

Click on Deploy Single Instance button:

Screen Shot 2018-08-02 at 11.28.07

Wait for the HCI deployment to finish.

Screen Shot 2018-08-02 at 12.15.50

Cet article Hitachi Content Intelligence deployment est apparu en premier sur Blog dbi services.

Partner Webcast – Building event driven microservices with Oracle Event Hub CS

If we are about to pick one word which would characterize the microservice approach, it would probably be the word freedom. It is all about freedom to change, freedom to deploy at any time, finally...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Oracle Retail Recognized as a Leader in Point of Service in Independent Research Report

Oracle Press Releases - Thu, 2018-08-02 07:00
Press Release
Oracle Retail Recognized as a Leader in Point of Service in Independent Research Report

Redwood Shores, Calif.—Aug 2, 2018

Oracle Retail has been recognized as a leader in point of sale solutions by Forrester Research. “The Forrester Wave™: Point Of Service, Q3 2018” report recognizes Oracle Retail for, “…demonstrated strength in mobile extensions, back office functionality, and the architecture to deliver resilient, responsive, modern, cloud and mobile POS deployments. Oracle clients offered a positive view of Oracle’s global footprint, stability and architecture… Oracle is a best fit for sophisticated international retailers.”

According to Forrester, “The POS market is growing because more digital business professionals see it as a way to address their top challenges. This is in large part because they increasingly trust POS providers to act as strategic partners that help them deliver compelling customer experience (CX) through the path to purchase, return, and repeat purchase.”

“Omnichannel retailers are challenged with creating differentiated customer experiences that convert sales, drive loyalty and personalize service,” said Jeff Warren, vice president, Oracle Retail. “Oracle is uniquely positioned to provide retailers with modern POS infrastructure that arms associates with the tools and insights they need to offer the innovative brand experiences that customers expect.”

Xstore Delivers Resilient, Responsive and Modern Cloud POS Innovation Optimized for Mobile

Oracle Retail Xstore Point-of-Service delivers robust functionality which enables retail associates to deliver on brand promise in store with inventory visibility, customer intelligence, and seamless transactions. Here are some new advances with Oracle Retail Xstore Point-of-Service:

  • Rapid Deployment. With Oracle’s pre-integrated omnichannel suite—which includes Oracle Retail Order Management, Oracle Retail Order Broker, Oracle Retail Customer Engagement, Oracle Commerce Cloud and Oracle Retail Xstore Point-of-Service—we’ve built in the integration, orchestrated the synchronization and engineered the hardware for the ultimate omnichannel experience. Retailers can focus on configuring the system to meet their business needs and rules, instead of building the entire back-end integration. Sophisticated out-of-the-box customer shopping journeys can be implemented in weeks or months.

  • Enabling Flexible Omnichannel Journeys. Associates are empowered to deliver an experience that matches customer expectations. Oracle Retail enables the consumer journeys required to deliver a superior experience. Associates are now able to add multiple order types in a single transaction and during fulfillment split line and split item to improve their ability to sell down to the last item.

  • Single View of Customer. Today’s shoppers share personal interests to the extent that offers should be specific to them. First-hand data, together with Oracle Retail Customer Insights Cloud Service and Oracle Retail Customer Engagement Cloud Service, helps retailers understand what customers want, and when they want it.

  • Personalization. Customer Entitlements are delivered as an out of the box component of this suite of solutions. And with the stronger integration between Oracle Marketing Cloud and Oracle Retail Customer Engagement Cloud Service, retailers can personalize offers.

  • Thin Deployment with Oracle MICROS Hardware. Oracle software and hardware are engineered to work better together. Oracle Retail Xstore Point of Service has been engineered for smaller footprints and portability with the Oracle MICROS hardware. The software is now optimized to support sleek and slim hardware the 610 Series 700 tablet and the Compact 310.

  • POS Integration that Streamlines Investigation.  A heightened degree of integration between Oracle Retail XBRi Cloud Service and Oracle Retail Xstore Point-of-Service further enhances the ability of XBRi’s embedded science to pinpoint new sources of risk and deliver purpose-build reports that streamline and support investigative activities while leveraging the best practices built into the Oracle Retail portfolio.

  • Modern Retailing. Oracle Retail Xstore Point-of-Service delivers associate mobility that allows them to engage with their customers on the sales floor where the purchasing decision is made. Blurring the line between the shopping and the purchasing experiences, Xstore’s IP enabled store delivers shared peripherals (printers, PIN pads, and even the cash drawer) significantly increasing capacity while reducing the overall deployment cost.  

  • Secure Payments. Oracle Retail Xstore Point-of-Service offers retailers a secure abstracted payment solution that enables the rapid uptake of emerging payment technologies while removing the burden of PCI DSS overheads.

  • Supporting the Global Footprint. Through a combination of configuration and prepackaged accelerators all within the single code base, Oracle Retail Xstore Point-of-Service delivers a global solution that addresses the many and varying requirements for retailers’ operations around the world.

Continued Global Customer Momentum

Oracle Retail customers continue to augment their transaction experience, improve employee productivity and drive long-term loyalty with Xstore Point of Service:

  • ABC Fine Wine & Spirits modernizes the customer experience with Oracle Retail Xstore Point-of-Service.

  • Luxury Retailer Chalhoub delivers the first Middle East modern, mobile deployment of Xstore in six months.

  • Helzberg Diamonds empowers associates to create meaningful customer experiences by upgrading Oracle Retail Xstore Point-of-Service solution while implementing Oracle Retail Order Broker and Oracle Retail Customer Engagement.

  • Italian Fashion Company Miroglio offers customers a rich shopping experience with Oracle Retail Xstore and Oracle Retail Customer Engagement.

  • Global luggage retailer Samsonite upgraded Oracle Retail Xstore Point-of-Service and Oracle Retail Customer Engagement while adopting Oracle Retail Order Broker Cloud Service.

  • UK specialty retailer Wyevale deploys the latest version of Oracle Retail Xstore Point-of-Service, Oracle MICROS Hardware and Oracle Retail Customer Engagement with Oracle Retail Consulting.

Contact Info
Matt Torres
Oracle
4155951584
matt.torres@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

About Oracle Retail

Oracle provides retailers with a complete, open, and integrated suite of best-of-breed business applications, cloud services, and hardware that are engineered to work together and empower commerce. Leading fashion, grocery, and specialty retailers use Oracle solutions to anticipate market changes, simplify operations and inspire authentic brand interactions. For more information, visit our website at www.oracle.com/retail

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Matt Torres

  • 4155951584

Consumer Research Reveals Majority of Restaurant and Hotel App Users Engage Once a Week

Oracle Press Releases - Thu, 2018-08-02 07:00
Press Release
Consumer Research Reveals Majority of Restaurant and Hotel App Users Engage Once a Week Oracle Food and Beverage Report Highlights Global Consumer Trends In Hospitality App Usage

Redwood Shores, Calif.—Aug 2, 2018

A new report titled Get Appy: Do Consumers Use Restaurant & Hotel Branded Apps revealed that a majority of global consumers (57 percent) have used or are using mobile applications to engage with hospitality operators. The study of 15,000 consumers across Europe, Latin America, Asia-Pacific and North America revealed that of the 23 percent that have at least one restaurant or hotel branded app on their mobile device, 70 percent are using them at least once a week. Increased app engagement across global consumers creates new opportunities to personalize service, incentives and menu offerings and highlights the need for modern food and beverage technology to deliver more meaningful guest experiences.

“Consumers are willing to engage with brands through mobile applications if operators can deliver differentiated value,” said Chris Adams, vice president strategy, Oracle Food and Beverage. “Operators that lean into the mobile opportunity for the food and beverage industry will have a significant competitive advantage with greater insight into service preferences and emerging menu trends.”

  • Of the 23 percent of consumers who have downloaded a restaurant or hotel app, two-thirds have more than three apps on their devices.

  • Asia Pacific leads with 82 percent of consumers using a hospitality app at least once a week compared to 54 percent of consumers in North America.

  • One in five global consumers has at least one app for a food delivery service and 23 percent have a booking app for hotels or restaurants on their device.

  • Almost a third (28 percent) of consumers have paid for food and drink from an app on their mobile device at least once, with increased adoption among Asia-Pacific consumers (37 percent).

“Historically development of food and beverage applications has been cost and labor prohibitive for a majority of the marketplace,” said Chris Adams, vice president strategy, Oracle Food and Beverage. “With a modern, cloud-based POS system operators can extend their investment and take control of guest relationships while encouraging long-term loyalty with personalized incentives through an integrated mobile experience.”

Oracle Food and Beverage provides operators with API interfaces through Oracle Hospitality Simphony Cloud Service that allow restaurants to extend their POS with mobile integrations. From mobile apps that enable loyalty or ordering flexibility, to back office applications that facilitate smoother operational efficiencies the open nature of the Simphony Cloud platform supports business growth and innovation.

Contact Info
Matt Torres
Oracle
4155951584
matt.torres@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

About Oracle Food and Beverage

Oracle Food and Beverage, formerly MICROS, brings 40 years of experience in providing software and hardware solutions to restaurants, bars, pubs, clubs, coffee shops, cafes, stadiums, and theme parks. Thousands of operators, both large and small, around the world are using Oracle technology to deliver exceptional guest experiences, maximize sales, and reduce running costs.

For more information about Oracle Food and Beverage, please visit www.Oracle.com/Food-Beverage

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Matt Torres

  • 4155951584

Oracle Recognized as a Leader in 2018 Gartner Magic Quadrant for Mobile App Development Platforms

Oracle Press Releases - Thu, 2018-08-02 07:00
Press Release
Oracle Recognized as a Leader in 2018 Gartner Magic Quadrant for Mobile App Development Platforms For second consecutive year, Oracle positioned as leader based on ability to execute and completeness of vision

Redwood Shores Calif—Aug 2, 2018

Oracle today announced that it has been named a Leader in Gartner’s 2018 Magic Quadrant for Mobile App Development Platforms1. This is the second consecutive year Oracle Mobile Cloud has been recognized as a Leader and further underscores the strength of Oracle’s solution, which offers developers the latest emerging technologies to help build unified multi-channel applications.

As mobile user habits continue to evolve, enterprises are expanding efforts to reach audiences on a variety of platforms. According to Gartner, “55 percent of all organizations will have deployed (used in production) at least one additional app experience in addition to a mobile app by 2020”2. Additionally, the analyst firm predicts that “by 2021 at least one-third of enterprises will have deployed a multi-experience development platform to support mobile, web, conversational and augmented reality development”3. Using Oracle Mobile Cloud, businesses are able to create cohesive experiences across all platforms to engage key stakeholders on the most popular devices and platforms.

“Smartphone adoption continues to grow and mobile apps, while key to engaging customers, are beginning to give way to new technologies like conversational interfaces,” said Suhas Uliyar, vice president, product management, Oracle. “The addition of services like Oracle’s intelligent bots, which can readily build chatbots for customers, will be instrumental for digital transformation as businesses engage with audiences on new and emerging platforms.”

According to Gartner, “Leaders have a strong combination of Ability to Execute and Completeness of Vision. In the MADP sector, this means that Leaders are not only good at cross-platform development, deployment and management across the full life cycle, but also demonstrate strong vision for multiexperience development support. A Leader must also have open architectures and support open standards, while showing a solid understanding of IT requirements, and scalable sales channels and partnerships. Leaders must provide platforms that are easy to purchase, program, deploy and upgrade, and which can connect to a range of systems of record and third-party cloud services.”

Access a complimentary copy of Gartner’s 2018 Magic Quadrant for Mobile App Development Platforms here.

Part of Oracle Cloud Platform, Oracle Mobile Cloud is a complete multi-channel platform managed by Oracle to help developers and enterprises engage intelligently and contextually with customers, business partners and employees through the end user’s channel of choice. It offers no-code solutions and enables customers to deliver engaging, personalized digital experiences across multiple channels. Not only can customers engage via mobile and web channels, but they can now also take advantage of the next leap in technology—artificial intelligence—for all platforms and intelligent bot services. In addition to providing a platform to build engaging experiences across mobile, bots and web, users also get actionable insights via analytics that provide a deep understanding of user adoption behavior and app performance across platforms, so businesses can personalize engagement and ensure that everything is running at peak performance. The platform also enables enterprises to connect all backend applications, a key factor in creating a holistic engagement strategy.

1. Source: Gartner, Magic Quadrant for Mobile App Development Platforms, Jason Wong, Van L. Baker, Adrian Leow, Marty Resnick, 17 July 2018

2. Source: Gartner, It's Time for App Leadership to Reframe Mobile App Development Decisions, Marty Resnick, Adrian Leow, Jason Wong, 28 February 2018

3. Source: Gartner, Technology Insight for Multiexperience Development Platforms, Jason Wong, Van L. Baker, William Clark, Adrian Leow, Marty Resnick, Mark Driver, Magnus Revang, Rob Dunie, 21 February 2018

Gartner Disclaimer
Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Contact Info
Jesse Caputo
Oracle
+1.650.506.5967
jesse.caputo@oracle.com
Nicole Maloney
Oracle
+1.650.506.0806
nicole.maloney@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Jesse Caputo

  • +1.650.506.5967

Nicole Maloney

  • +1.650.506.0806

Keep up to Date With Critical Patches

Anthony Shorten - Wed, 2018-08-01 20:39

One of the most important recommendations I give to customers is to keep up to date with the latest patches, especially all the security patches, to improve performance and reduce risk.

For more information refer to the following sites:

Oracle WebLogic, Oracle Linux, Oracle Solaris and Oracle Database patches apply to Oracle Utilities products.

Procedure having OUT parameter vs Function

Tom Kyte - Wed, 2018-08-01 19:06
Thanks for taking up this question. Are there any guidelines regarding when to use a procedure(OUT parameter) vs Function. Both structures can be used to achieve the same objective in specific situation. I have created a function F1 and a procedu...
Categories: DBA Blogs

Oracle Linux containers security

Wim Coekaerts - Wed, 2018-08-01 13:05

I recently did a short webcast that talked about Oracle Linux & Containers and some suggestions around best practices and some security considerations.

The webcast had just a few slides and some of the feedback I received was that there could have been more textual assist to the talking so I promised I would write up a few things that came up during the webcast. Here it is:

We have been providing Oracle Linux along with great support for nearly 12 years. During those years, we have added many features and enhancements. Through upstream contributions, picked up by the various open source projects that are distributed as part of Oracle Linux (in particular UEK) or additional features/services such as Oracle Ksplice or DTrace (released under GPL), etc...

In terms of virtualization, we’ve been contributing to Xen since 2005. Xen is the hypervisor used in Oracle VM. More recently, we have also been heavily focused on KVM and QEMU in Linux. Of course, we have Oracle VM VirtualBox. So a lot of virtualization work has been going on for a very long time, and that will continue to be the case; we have many developers working on this full time (and upstream).

Container work:

We were early adopters of LXC, and one of the first, if not the first, to certify it with enterprise applications such as our database and business applications. This was before Docker existed.

LXC was the initial push toward mainstreaming container support in Linux. It helped drive a lot of kernel projects around resource management, namespace support, and all the cgroups work; much of the isolation support really got its start around this time. Many developers contributed to it, and a number of OpenVZ concepts were proposed for merging into the mainline kernel.

A few years after lxc, Docker came to the forefront and really made containers popular - talk about mainstream… and again, we ended up providing Docker from the very beginning and saw a lot of potential in the concept of lightweight small images on Linux for our product set.

Today - everyone talks about Kubernetes, Docker or Docker-alternatives such as Rkt, and microservices. We provide Oracle Container Services for use with Kubernetes and Oracle Container Runtime for Docker support to our customers as part of Oracle Linux subscriptions. Oracle also has various Oracle Cloud services that provide Kubernetes and Docker orchestration and automation. And, of course, we do a lot of testing and support of many Oracle products running in these isolation environments.

The word isolation is very important.

For many years I have been using the word isolation when it comes to containers, not virtualization. There is a big distinction.

Running containers in a Linux environment is very different from running Solaris Zones, or running VMs with kvm or Xen. Kvm or Xen - that’s "real" virtualization. You create a virtual compute environment and boot an entire operating system inside (it has a virtual BIOS, boots a kernel from a virtual disk, etc). Sure, there are some optimizations and tricks around paravirtualization, but for the most part it’s a virtual machine on a real machine. The way Solaris Zones is implemented is also not virtualization, since you share the same host kernel amongst all zones. But the Solaris Zones implementation is a full-fledged feature: it’s a full-on isolation layer inside Oracle Solaris, top to bottom. You create a zone and the kernel does it all for you right then and there: it creates a completely separate OS container for you, with all the isolation provided across the board. It’s great. It has been around for a very long time, is used widely by almost every Oracle Solaris user and it works great. It provides a very good level of isolation for a complete operating system environment, just like a VM provides a full virtual hardware platform for a complete operating system environment.

Linux containers, on the other hand, are implemented very differently. A container is created using a number of different Linux kernel features, and you can provide isolation at different layers. So you can create a Linux container that acts very, very similar to a Solaris zone, but you can also create a Linux container that shares a tremendous amount with other containers or just with other processes. The Linux resource manager and the various namespace implementations let you pick and choose. You can share what you want, and you can isolate what you want. You have a PID namespace, IPC namespace, user namespace, net namespace,... each of these can be used or combined in different ways. So there’s no CONTAINER config option in Linux, no single container feature, but there are tools, libraries and programs that use these namespaces and cgroups to create something that looks like a complete isolated environment akin to zones.

Tools like Docker and lxc do all the "dirty work" for you, so to speak. They also provide you with options to change that isolation level up and down.

Heck, you can create a container environment using bash! Just echo some values into a bunch of cgroups files and off you go. It’s incredibly flexible.
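To make that concrete, here is a minimal sketch of the "containers from bash" idea. It assumes a writable cgroup v2 hierarchy mounted at /sys/fs/cgroup (so it needs root on a cgroup v2 host); the group name "demo" and the 64 MiB memory cap are arbitrary choices, and it falls back gracefully where the hierarchy isn't writable:

```shell
# Carve out a cgroup and drop the current shell into it, using nothing
# but shell builtins and echo - no container runtime involved.
CG=/sys/fs/cgroup/demo
if mkdir -p "$CG" 2>/dev/null && [ -w "$CG/cgroup.procs" ]; then
    echo 67108864 > "$CG/memory.max" 2>/dev/null   # cap the group at 64 MiB
    echo $$ > "$CG/cgroup.procs" 2>/dev/null       # move this shell into it
    result="moved shell into $CG"
else
    result="skipped: no writable cgroup v2 hierarchy (run as root on a cgroup v2 host)"
fi
echo "$result"
```

Real tooling layers namespaces on top of this, but the resource-management half really is just files.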

Having this flexibility is great, as it allows for things like Docker (just isolate a process, not a whole operating environment). You don’t have to start with /bin/init or /bin/systemd and bring up all the services. You can literally just start httpd and it sees nothing but itself in its process namespace. Or… sure… you can start /bin/init and get a whole environment, like what you get by default with lxc.
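You can see the "a process sees nothing but itself" effect without Docker at all, using the util-linux unshare tool as a stand-in. This is a sketch, not what Docker literally does; unprivileged user+PID namespaces are disabled on some kernels, so it falls back gracefully:

```shell
# Run ps inside fresh user, PID and mount namespaces: from its own point
# of view, ps is PID 1 (well, nearly - it and its shell are all there is).
if unshare --user --map-root-user --pid --fork --mount-proc \
        ps -o pid,comm 2>/dev/null; then
    result="isolated: the ps listing above shows only the namespace's own processes"
else
    result="skipped: unprivileged user namespaces not available on this kernel"
fi
echo "$result"
```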

I think Docker (and things like Docker - Rkt,..) is the best user of all these namespace enhancements in the Linux kernel. I also think that, because the Linux kernel developers implemented resource and namespace management the way they did, it allowed for a project like Docker to take shape. Otherwise, this would have been very difficult to conceive. It allowed us to really enter a new world of… just start an app, just distribute the app with the libraries it needs, isolate an app from everything else, package things as small as possible as a complete standalone unit…

This, in turn, really helped the microservices concept, because it makes micro really... micro... Docker-like images give a lot more flexibility to application developers: now you can have different applications running on the same host that have different library needs, or different versions of the same application, without having to mess with PATH settings and carving out directories and seeing one big mess of things… Sure, you can do that with VMs… but the drawback of a VM is (typically) that you bring in an entire OS (kernel, operating environment) just to start an app. This can cause a lot of overhead. Process isolation along with small portable images gives you an incredible amount of flexibility and... sharing...

With that flexibility also comes responsibility - whereas one would have on the order of 10-20 VMs on a given server, you can run maybe 30-40-50 containerized OS environments (using lxc), but you could run literally 1000s of application containers using Docker. They are, after all, just a bunch of OS processes with some namespaces and isolation. And if all they run is the application itself, without the surrounding OS services, you have much less overhead per app than traditional containers.

If you run very big applications that need 100% performance and power and the best ‘isolation’... you run a single app on a single physical server.

If you have a lot of smaller apps, and you’re not worried about isolation, you can just run those apps together on a single physical server. Best performance, but harder to manage.

If you have a lot of smaller environments that you need to host with different OSs or different OS levels, you typically just run tons of VMs on a physical server. Each VM boots its own kernel, has its own virtual disk, memory etc., and you can scale to maybe 4-16 VMs, typically.

If you want the best performance and you don’t need the high isolation of separate kernels and independent OS releases down to the kernel version (or even something like Windows and Linux, or Oracle Linux and Ubuntu, etc.)... then you can consider containers. Super lightweight, super scalable and portable.

The image can range from a full OS image (all binaries installed, all libraries, like a VM or physical OS install) to just an app binary, or an app binary plus the libraries it needs. If you create a binary that is statically linked, you can have a container that's exactly 1 file. Isn't that awesome?
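A one-file image looks like this in Dockerfile terms. The binary name `hello-static` and the image tag are hypothetical, and actually building it requires a Docker daemon, so this sketch only generates the file:

```shell
# A "one static binary" image definition: FROM scratch starts from an
# empty filesystem, so the image contains exactly the one file you COPY in.
cat > Dockerfile.static <<'EOF'
FROM scratch
COPY hello-static /hello-static
ENTRYPOINT ["/hello-static"]
EOF
# With a daemon available you would build it like so:
#   docker build -f Dockerfile.static -t hello:static .
wc -l Dockerfile.static
```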

Working on Operating Systems at a company that is also a major cloud provider is really great. It gives us direct access to scale. Very, very large scale... and also a direct requirement around security. As a cloud provider we have to work very, very hard to ensure security in a multi-tenant environment and protect customers' data from one another. Deploying systems in isolation in an enterprise can be at a reasonable scale, and of course security is very important (or should be), but the single-tenancy aspect reduces the complexity to a certain extent.

Oracle Linux is used throughout Oracle cloud as the host for running VMs, as the host for running container services or other services, in our PaaS, SaaS stacks, etc. We work very closely with the cloud development teams to provide the fastest, most scalable solutions without compromising security. We want VMs to run as fast as possible, we want to provide container services, but we also make sure that a container running for tenant A doesn’t, in any way, expose any data to a container running for tenant B.

So let’s talk a little bit about security around all this. Security breaches are up: a significant increase in data breaches every month, hacking attempts… Just start a server or a VM with a public IP on the internet and watch your log files - within a few minutes you see login attempts and probes. It’s really frightening.

Enterprises used to have 100s, maybe 1000s, of servers, and you have to keep the OS and applications current with security fixes. While reasonably large, that is still manageable… then add in virtualization and you increase the number of instances by a factor (10000+), so you drastically increase your exposure… and then you go another factor or two up with microservices and containers, deployed across huge numbers of servers… security becomes increasingly more important and more difficult. 100000+... Do you even know where they run, what they run, who owns them?

On top of all that, in the last 8 or so months: Spectre and Meltdown. They removed years of assumptions and optimizations everyone had relied upon. We suddenly couldn't trust VMs on the same host to be isolated well enough, or trust processes not to snoop on other processes, without applying code changes on the OS side or, in some cases, even in the applications to prevent exposure.

Patches get introduced. Performance drops. And it’s not always clear to everyone what the potential exposure is, where you really have to worry and where you might not have to worry too much.

When it comes to container security, there are different layers:

Getting images / content from external (or even internal sites)

There are various places where developers can download 3rd-party container images. Whereas in the past one would download source code for some project or download a specific application, these container images (let’s call them Docker images) are now somewhat magical black boxes: you download a filesystem layer, or a set of layers. There are tons of files inside, but you don’t typically look around; you pull an image and start it… not quite knowing what’s inside… These things get downloaded onto a laptop… executed… and… do you know what’s inside? Do you know what it’s doing? Have these been validated? Scanned?

Never trust what you just download from random sites. Make sure you download things that are signed, or have been checksummed, and come from reputable places. Good companies will run vulnerability scanners such as Clair or Qualys as part of the process and make sure developers have good security coding practices in place. When you download an image published on Oracle Container Registry, it contains code that we built, compiled, tested, scanned and put together. When you download something from a random site, that might not be the case.
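The checksum habit is a one-liner. In this sketch the artifact and its checksum are generated locally purely for illustration; in real life the .sha256 file comes from the publisher over a trusted channel:

```shell
# Verify a downloaded artifact against its published checksum before use.
printf 'pretend-image-layer\n' > layer.tar
sha256sum layer.tar > layer.tar.sha256          # what the publisher ships
if sha256sum -c layer.tar.sha256 >/dev/null 2>&1; then
    verified=yes
else
    verified=no    # mismatch: do not run this image
fi
echo "verified=$verified"
# With Docker, prefer pulling by immutable digest instead of a mutable tag:
#   docker pull oraclelinux@sha256:<digest>
```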

One problem: it is very easy to get things from the outside world. # docker pull, by default, goes to Docker Hub. Companies can’t easily put development environments in place that prevent you from doing that. One thing we are working on with Oracle Container Runtime for Docker is adding support for access control to Docker image repos. You can lock down which repos are accessible and which aren’t - for instance, your Docker repo list can be an internal site only, not Docker Hub.

When building container images you should always run some form of image scanner.

We are experimenting with Notary - use Notary to digitally sign content so that you  can verify images that are pulled down. We are looking at providing a Notary service and the tools for you to build your own.

Building images

Aside from using Clair or Qualys in your own CI/CD environment, you also have to make sure that you update the various layers (OS, library layer, application layer(s)) with the latest patches. Security errata are released on a regular basis. With normal OSes, whether bare metal or VMs, sysadmins run management software that updates packages on a regular basis and keeps things up to date. It’s relatively easy to do, and it is easy to see what is installed on a given server. There might be an availability impact when it comes to kernel updates, but for the most part it is a known problem. Updating containers, while technically easy, you can argue (just rebuild your images), does mean that you have to go to all servers running these containers and bring them down and back up. You can’t just update a running image. The ability to do anything at runtime is much more limited than when you run an OS instance with an application. From a security point of view, you have to consider that. Before you start deploying containers at scale, you have to decide on your patch strategy: how often do you update your images, how do you distribute these images, how do you know all the containers that are running, which versions they run, which layers they are running, etc. Sorting this out after a critical vulnerability hits will introduce delays, have a negative impact and potentially create large exposure.

So - have a strategy in place to update your OS and application layers with security fixes, have a strategy in place on how to distribute these new image updates and refresh your container farm.
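One simple habit that supports such a strategy is tagging rebuilt images by date, so you can tell at a glance which security baseline any running container was built from. This is a sketch under assumptions: the image name "myapp", the registry host in the comment, and the presence of a Docker daemon and Dockerfile are all hypothetical, so it skips gracefully where they are absent:

```shell
# Rebuild with --pull so the base image layers are refreshed with the
# latest errata, and stamp the result with today's date.
TAG="myapp:patched-$(date +%Y%m%d)"
if command -v docker >/dev/null 2>&1 && [ -f Dockerfile ]; then
    docker build --pull -t "$TAG" .    # --pull refreshes base image layers
    # docker push "registry.example.com/$TAG"   # then roll the fleet over
    result="built $TAG"
else
    result="skipped: no docker daemon or Dockerfile here (would build $TAG)"
fi
echo "$result"
```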

Lock down

If you are a sophisticated user/developer, you have the ability to add very fine-grained controls. With Docker you have options like privileged containers, which give extra access to devices and resources. Always verify that anything that is started privileged has been reviewed by a few people. Docker also provides control over Linux capabilities such as mknod, setgid, chroot, nice, etc. Look at the default capabilities that are defined and, where possible, remove any and all that are not absolutely needed.
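In Docker terms, the drop-everything-then-add-back approach looks like this. A sketch, assuming a Docker daemon and using nginx:alpine as a stand-in workload; a web server binding port 80 needs CAP_NET_BIND_SERVICE and usually little else:

```shell
# Drop every capability, then grant back only the one the app needs.
# "nginx -t" just validates the config, which is enough to demo the flags.
if command -v docker >/dev/null 2>&1 \
   && docker run --rm \
        --cap-drop=ALL \
        --cap-add=NET_BIND_SERVICE \
        nginx:alpine nginx -t 2>/dev/null; then
    result="ran with a single capability"
else
    result="skipped: no docker daemon available (or image pull failed)"
fi
echo "$result"
```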

Look into the use of SELinux policies.  While SELinux operates at the host level only, it provides you with an additional security blanket. Create policies to restrict access to files or operations.

There is no SELinux namespace support yet. This is an important project to work on, and we have started investigating it, so that you can use SELinux within a container in its own namespace, with its own local container policies.

Something we use a lot as well inside Oracle: seccomp. Seccomp lets you filter syscalls (white list). Now, when you really lock down your syscalls and have a large list, there can be a bit of a performance penalty… We’re doing development work to help improve seccomp’s filter handling in the kernel. This will show up in future versions of upstream Linux and also in our UEK kernel.

What’s nice with seccomp is that if you have an app and you know exactly which few syscalls are required, you can enforce that it will only ever be allowed to execute those system calls, and nothing else will get through in case a rogue library magically gets loaded and tries to do something.

So if you are really in need of the highest level of lockdown, a combination of these 3 is ideal. Use seccomp to restrict the system calls exposed to your container, use SELinux policies with labels to control what running processes can access and do, and use capabilities alongside / on top of seccomp to prevent privileged commands from running, running everything non-privileged.
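Here is a sketch of what those three layers look like in Docker terms. The syscall list in the profile is illustrative only (a real app needs more - trace it with strace first), the SELinux type and image name in the comment are hypothetical, and the docker invocation is left as a comment since it needs a daemon:

```shell
# Generate a tiny seccomp allow-list profile in Docker's JSON format:
# everything not in "names" fails with an errno instead of executing.
cat > tiny-seccomp.json <<'EOF'
{
    "defaultAction": "SCMP_ACT_ERRNO",
    "syscalls": [
        {
            "names": ["read", "write", "exit", "exit_group", "brk",
                      "mmap", "munmap", "rt_sigreturn"],
            "action": "SCMP_ACT_ALLOW"
        }
    ]
}
EOF
# Combining all three lockdown layers from the text (hypothetical image):
#   docker run --rm \
#     --security-opt seccomp=tiny-seccomp.json \
#     --security-opt label=type:container_t \
#     --cap-drop=ALL \
#     myapp:latest
echo "wrote tiny-seccomp.json"
```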

The third major part is the host OS.

You can lock down your container images and such, but remember that these instances all run (typically) on a Linux server. This server runs an OS kernel and OS libraries (glibc)... and security vulnerability fixes need to be applied there too. Always ensure that you apply errata on the host OS. I would always recommend that customers use Oracle Ksplice with Oracle Linux.

Oracle Ksplice is a service that provides the ability for users to apply critical fixes (whether bugs or vulnerabilities) while the system is up and running with no impact to the applications (or containers).

While not every update can be provided as an online patch, we’ve had a very, very high success rate. Even very complex code changes have been fixed or changed using Ksplice.

We have two areas that we can address: the kernel (the original functionality, since 2009) and, for a number of years now, a handful of userspace libraries. We are particularly focused on those libraries that are in the critical path - glibc being the most obvious one, along with openssl.

While some aspects of security are the ability to lock down systems and reduce the attack surface, implement best practices, protect the source of truth, prevent unauthorized access as much as possible, etc., if applying security fixes is difficult and has a high impact on availability, most companies/admins will take their time to apply them, potentially waiting weeks or months or even longer to schedule downtime. Keep in mind that with Ksplice we provide the ability to ensure your host OS (whether using kvm or just containers) can be patched while all your VMs and/or containers continue to run without any impact whatsoever. We have a unique ability to significantly reduce the service impact of staying current with security fixes.
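On a host with the Ksplice client installed, the day-to-day workflow is a few commands. A sketch that checks for the client first, since these tools only exist on subscribed Oracle Linux hosts:

```shell
# Apply rebootless updates and inspect what is live-patched right now.
if command -v uptrack-upgrade >/dev/null 2>&1; then
    uptrack-upgrade -y      # apply all available rebootless updates
    uptrack-show            # list the Ksplice patches currently applied
    uptrack-uname -r        # effective kernel version, patches included
    result="ksplice update run complete"
else
    result="skipped: Ksplice client not installed on this host"
fi
echo "$result"
```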

Some people will be quick to say that live migration can help with upgrading VM hosts: migrate the VM guests off to another server and reboot the host that was freed up. While that’s definitely a possibility, it’s not always possible to offer live migration capabilities at scale. It’s certainly difficult in a huge cloud infrastructure.

In the world of containers, where we are talking about a 10-100 fold (or even greater) number of instances running per server, this is even more critical. Also, there is no live migration for containers yet. There is some experimental work, but nothing production quality, to migrate a container/Docker instance/Kubernetes pod from one server to another.

As we look more into the future with Ksplice: we are looking at more userspace library patching and at how we can make that scale at the container level - the ability to apply, for instance, glibc fixes within container instances directly, without downtime. This is a very difficult problem to solve because there can be 100s of different versions of glibc running, and we also have to ensure images are updated on the fly so that a new instance will be ‘patched’ at startup. This is a very dynamic environment.

This brings me to a final project we are working on in the container world:

Project Kata is a hybrid model of deploying applications with the flexibility and ease of use (small, low overhead) of containers and with the security level of VMs. The scalability of Kata containers is somewhere in between VMs and native containers - on the order of low 1000s, not high 1000s. Startup time is incredibly fast: starting a VM typically takes 20-30 seconds, starting Docker instances takes on the order of a few milliseconds, and starting a Kata container takes between half a second and 3 seconds, depending on the task you run. A Kata container effectively creates a hardware virtualization context (like kvm uses) and boots a very, very optimized Linux kernel that can start up in a fraction of a second, with a tiny ramdisk image that can execute the binaries in your container image. It provides enough sharing on the host to scale, but it also provides a nice clean virtualization context that helps isolation between processes.

Most, if not all, cloud vendors run container services inside VMs for a given tenant, so the containers are isolated from other tenants through a VM context. But that adds a bit more overhead than is ideal. We would like to be able to provide containers that run as natively and with as low overhead as possible. We are looking into providing a preview for developers and users to play with this, on Oracle Linux with UEK R5. We have a Kata container kernel built that boots in a fraction of a second, and we created a tiny package that executes a Docker instance on an Oracle Linux host. It’s experimental; we are evaluating the advantages and disadvantages (how secure is the kernel memory sharing, how good is performance at scale, how transparent is it to run normal Docker images in these Kata containers, are they totally compatible, etc.).

Lots of exciting technology work happening.

UW Health Selects Oracle Cloud Applications

Oracle Press Releases - Wed, 2018-08-01 07:00
Press Release
UW Health Selects Oracle Cloud Applications Enhanced business visibility and agility help rapidly growing healthcare organization meet escalating demand and improve patient outcomes

Redwood Shores, Calif.—Aug 1, 2018

UW Health, the integrated health system of the University of Wisconsin-Madison, has selected Oracle Cloud for its enterprise resource planning (ERP) system, which includes Financials, Supply Chain and Human Capital Management. With Oracle Cloud Applications, UW Health can increase business agility, optimize the delivery of healthcare services and improve patient outcomes.

UW Health serves more than 600,000 patients annually, with 1,500 physicians and 17,000 staff at seven hospitals and 87 outpatient clinics. To meet growing demand for its nationally recognized regional health system, UW Health needed to replace its 20-year-old legacy systems with one unified cloud-based ERP platform for financials, supply chain management and human resources to support its current and future needs.

“Technology and regulations have accelerated the rate of change in the healthcare industry and our legacy business systems were struggling to keep up,” said Elizabeth Bolt, senior vice president and chief operating officer, UW Health. "To ensure we could continue to deliver the best quality services as we grow, we needed agile systems that can quickly turn data into insight. Going forward, we will manage all of our HR, supply chain and financial data on a single integrated platform.”

“Healthcare organizations operate in a very dynamic environment, with changing regulations, best practices and technologies,” said Rick Jewell, senior vice president of applications development, Oracle. “With Oracle Cloud Applications, the UW Health team will always be on the latest version of our business apps enabling them to benefit from innovative new features that will enhance business agility and productivity, and, most importantly, improve patient care.”

Contact Info
Bill Rundle
Oracle
650.506.1891
bill.rundle@oracle.com
About UW Health

UW Health is the integrated health system associated with the University of Wisconsin-Madison serving more than 600,000 patients each year in the Upper Midwest and beyond with approximately 1,500 physicians and 17,000 staff at six hospitals and more than 87 outpatient sites. UW Health is governed by the UW Hospitals and Clinics Authority and partners with UW School of Medicine and Public Health to fulfill their patient care, research, education and community service missions.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Bill Rundle

  • 650.506.1891
