Feed aggregator

Backup and Restore PostgreSQL with PgBackRest I

Yann Neuhaus - Wed, 2018-02-14 09:58

Many tools can be used to back up PostgreSQL databases. In this blog I will talk about PgBackRest, a simple tool that can be used to back up and restore a PostgreSQL database. Full, differential, and incremental backups are supported.
In this first blog I will present a basic configuration of PgBackRest. Our configuration is composed of only one cluster, and PgBackRest is installed on the server hosting the database. The goal is to explain a first use of PgBackRest.
Below is our configuration:
Server with Oracle Linux 7
PostgreSQL 10.1
PgBackRest 1.28
We assume that the Linux box and PostgreSQL 10.1 are already installed, so let’s install PgBackRest.

[root@pgserver ~]# yum search pgbackrest
Loaded plugins: langpacks, ulninfo
=========================== N/S matched: pgbackrest ============================
pgbackrest.noarch : Reliable PostgreSQL Backup & Restore
pgbackrest.x86_64 : Reliable PostgreSQL Backup & Restore
Name and summary matches only, use "search all" for everything

And then we can install PgBackRest:
[root@pgserver ~]# yum install pgbackrest.x86_64
Afterwards we can check the installation using the pgbackrest command:

[postgres@pgserver ~]$ /usr/bin/pgbackrest
pgBackRest 1.28 - General help
Usage:
pgbackrest [options] [command]

Commands:
archive-get Get a WAL segment from the archive.
archive-push Push a WAL segment to the archive.
backup Backup a database cluster.
check Check the configuration.
expire Expire backups that exceed retention.
help Get help.
info Retrieve information about backups.
restore Restore a database cluster.
stanza-create Create the required stanza data.
stanza-delete Delete a stanza.
stanza-upgrade Upgrade a stanza.
start Allow pgBackRest processes to run.
stop Stop pgBackRest processes from running.
version Get version.
Use 'pgbackrest help [command]' for more information.

The configuration of PgBackRest is very easy: it consists of a pgbackrest.conf configuration file that must be edited. In my case the file is located in /etc. As mentioned above, we will use a very basic configuration file.
Below are the contents of my configuration file:

[root@pgserver etc]# cat pgbackrest.conf
[global]
repo-path=/var/lib/pgbackrest

[clustpgserver]
db-path=/var/lib/pgsql/10/data
retention-full=2
[root@pgserver etc]#

In the file above:
• repo-path is where backups will be stored,
• clustpgserver is the name of my cluster stanza (you are free to choose the name). A stanza is the configuration for a PostgreSQL database cluster that defines where it is located, how it will be backed up, archiving options, etc.
• db-path is the path of my database files,
• retention-full: configures retention to 2 full backups.
A complete list of options can be found in the PgBackRest documentation.
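
Note that PgBackRest relies on PostgreSQL WAL archiving, so archiving must also be enabled on the cluster side before the check and backup commands can succeed. A minimal sketch of the corresponding postgresql.conf settings, assuming the stanza name used above (changing archive_mode requires a restart of the instance):

archive_mode = on
archive_command = 'pgbackrest --stanza=clustpgserver archive-push %p'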
Once the configuration file is ready, we can create the stanza with the stanza-create command. Note that my PostgreSQL cluster is using port 5435.

[postgres@pgserver ~]$ pgbackrest --stanza=clustpgserver --log-level-console=info --db-port=5435 stanza-create
2018-02-08 14:01:49.293 P00 INFO: stanza-create command begin 1.28: --db1-path=/var/lib/pgsql/10/data --db1-port=5435 --log-level-console=info --repo-path=/var/lib/pgbackrest --stanza=clustpgserver
2018-02-08 14:01:50.707 P00 INFO: stanza-create command end: completed successfully
[postgres@pgserver ~]$

After we create the stanza, we can verify that the configuration is fine using the check command

[postgres@pgserver ~]$ pgbackrest --stanza=clustpgserver --log-level-console=info --db-port=5435 check
2018-02-08 14:03:42.095 P00 INFO: check command begin 1.28: --db1-path=/var/lib/pgsql/10/data --db1-port=5435 --log-level-console=info --repo-path=/var/lib/pgbackrest --stanza=clustpgserver
2018-02-08 14:03:48.805 P00 INFO: WAL segment 00000001000000000000000C successfully stored in the archive at '/var/lib/pgbackrest/archive/clustpgserver/10-1/0000000100000000/00000001000000000000000C-c387b901a257bac304f27865478fd9f768de83d6.gz'
2018-02-08 14:03:48.808 P00 INFO: check command end: completed successfully
[postgres@pgserver ~]$

Since we have not yet taken any backup with PgBackRest, the info command returns an error:

[postgres@pgserver ~]$ pgbackrest --stanza=clustpgserver --log-level-console=info info
stanza: clustpgserver
status: error (no valid backups)
db (current)
wal archive min/max (10-1): 00000001000000000000000C / 00000001000000000000000C
[postgres@pgserver ~]$

Now let’s take a backup

[postgres@pgserver ~]$ pgbackrest --stanza=clustpgserver --log-level-console=info --db-port=5435 backup
2018-02-08 14:06:52.706 P00 INFO: backup command begin 1.28: --db1-path=/var/lib/pgsql/10/data --db1-port=5435 --log-level-console=info --repo-path=/var/lib/pgbackrest --retention-full=2 --stanza=clustpgserver
WARN: no prior backup exists, incr backup has been changed to full
2018-02-08 14:06:54.734 P00 INFO: execute non-exclusive pg_start_backup() with label "pgBackRest backup started at 2018-02-08 14:06:53": backup begins after the next regular checkpoint completes
2018-02-08 14:06:55.159 P00 INFO: backup start archive = 00000001000000000000000E, lsn = 0/E000060
2018-02-08 14:07:09.867 P01 INFO: backup file /var/lib/pgsql/10/data/base/13805/1255 (592KB, 2%) checksum 61f284092cabf44a30d1442ef6dd075b2e346b7f


2018-02-08 14:08:34.709 P00 INFO: expire command begin 1.28: --log-level-console=info --repo-path=/var/lib/pgbackrest --retention-archive=2 --retention-full=2 --stanza=clustpgserver
2018-02-08 14:08:34.895 P00 INFO: full backup total < 2 - using oldest full backup for 10-1 archive retention
2018-02-08 14:08:34.932 P00 INFO: expire command end: completed successfully
[postgres@pgserver ~]$

We can see that by default PgBackRest will try to do an incremental backup. But as there is no full backup yet, a full backup is taken instead. Once a full backup exists, all future backups will be incremental unless we specify the type of backup.

[postgres@pgserver ~]$ pgbackrest --stanza=clustpgserver --log-level-console=info --db-port=5435 backup
2018-02-08 14:26:25.590 P00 INFO: backup command begin 1.28: --db1-path=/var/lib/pgsql/10/data --db1-port=5435 --log-level-console=info --repo-path=/var/lib/pgbackrest --retention-full=2 --stanza=clustpgserver
2018-02-08 14:26:29.314 P00 INFO: last backup label = 20180208-140653F, version = 1.28
2018-02-08 14:26:30.135 P00 INFO: execute non-exclusive pg_start_backup() with label "pgBackRest backup started at 2018-02-08 14:26:26": backup begins after the next regular checkpoint completes
...
2018-02-08 14:27:01.408 P00 INFO: expire command begin 1.28: --log-level-console=info --repo-path=/var/lib/pgbackrest --retention-archive=2 --retention-full=2 --stanza=clustpgserver
2018-02-08 14:27:01.558 P00 INFO: full backup total < 2 - using oldest full backup for 10-1 archive retention
2018-02-08 14:27:01.589 P00 INFO: expire command end: completed successfully
[postgres@pgserver ~]$

If we want to perform another full backup, we can specify the option --type=full:

[postgres@pgserver ~]$ pgbackrest --stanza=clustpgserver --log-level-console=info --db-port=5435 --type=full backup
2018-02-08 14:30:05.961 P00 INFO: backup command begin 1.28: --db1-path=/var/lib/pgsql/10/data --db1-port=5435 --log-level-console=info --repo-path=/var/lib/pgbackrest --retention-full=2 --stanza=clustpgserver --type=full
2018-02-08 14:30:08.472 P00 INFO: execute non-exclusive pg_start_backup() with label "pgBackRest backup started at 2018-02-08 14:30:06": backup begins after the next regular checkpoint completes
2018-02-08 14:30:08.993 P00 INFO: backup start archive = 000000010000000000000012, lsn = 0/12000028
….
….
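
Differential backups (changes since the last full backup) are also supported. Assuming the same stanza and port as above, the command would simply specify --type=diff:

[postgres@pgserver ~]$ pgbackrest --stanza=clustpgserver --log-level-console=info --db-port=5435 --type=diff backup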

To get info about our backups:
[postgres@pgserver ~]$ pgbackrest --stanza=clustpgserver info
stanza: clustpgserver
status: ok
db (current)
wal archive min/max (10-1): 00000001000000000000000E / 000000010000000000000012
full backup: 20180208-140653F
timestamp start/stop: 2018-02-08 14:06:53 / 2018-02-08 14:08:19
wal start/stop: 00000001000000000000000E / 00000001000000000000000E
database size: 23.2MB, backup size: 23.2MB
repository size: 2.7MB, repository backup size: 2.7MB
incr backup: 20180208-140653F_20180208-142626I
timestamp start/stop: 2018-02-08 14:26:26 / 2018-02-08 14:26:52
wal start/stop: 000000010000000000000010 / 000000010000000000000010
database size: 23.2MB, backup size: 8.2KB
repository size: 2.7MB, repository backup size: 472B
backup reference list: 20180208-140653F
full backup: 20180208-143006F
timestamp start/stop: 2018-02-08 14:30:06 / 2018-02-08 14:31:30
wal start/stop: 000000010000000000000012 / 000000010000000000000012
database size: 23.2MB, backup size: 23.2MB
repository size: 2.7MB, repository backup size: 2.7MB
[postgres@pgserver ~]$
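
In practice backups are usually scheduled rather than run by hand. A possible crontab sketch for the postgres user, with a weekly full backup on Sunday and daily differential backups the rest of the week (the schedule is only an assumption, adjust it to your needs):

30 1 * * 0 pgbackrest --stanza=clustpgserver --db-port=5435 --type=full backup
30 1 * * 1-6 pgbackrest --stanza=clustpgserver --db-port=5435 --type=diff backup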

Now that we have seen how to perform backups with PgBackRest, let’s see how to restore.
First let’s identify the directory of our database files:

[postgres@pgserver ~]$ psql
psql (10.1)
Type "help" for help.
postgres=# show data_directory ;
data_directory
------------------------
/var/lib/pgsql/10/data
(1 row)
postgres=#

Now let’s remove all the files in the directory, to simulate a complete loss of the data directory:

[postgres@pgserver data]$ pwd
/var/lib/pgsql/10/data
[postgres@pgserver data]$ ls
base pg_dynshmem pg_notify pg_stat_tmp pg_wal postmaster.pid
current_logfiles pg_hba.conf pg_replslot pg_subtrans pg_xact
global pg_ident.conf pg_serial pg_tblspc postgresql.auto.conf
log pg_logical pg_snapshots pg_twophase postgresql.conf
pg_commit_ts pg_multixact pg_stat PG_VERSION postmaster.opts
[postgres@pgserver data]$ rm -rf *
[postgres@pgserver data]$

Now if we try to connect, we will of course get an error:

[postgres@pgserver data]$ psql
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5435"?
[postgres@pgserver data]$

So let’s restore with PgBackRest using the restore command:

[postgres@pgserver ~]$ pgbackrest --stanza=clustpgserver --log-level-console=info restore
2018-02-08 14:52:01.845 P00 INFO: restore command begin 1.28: --db1-path=/var/lib/pgsql/10/data --log-level-console=info --repo-path=/var/lib/pgbackrest --stanza=clustpgserver
2018-02-08 14:52:03.490 P00 INFO: restore backup set 20180208-143006F
2018-02-08 14:52:21.904 P01 INFO: restore file /var/lib/pgsql/10/data/base/13805/1255 (592KB, 2%) checksum 61f284092cabf44a30d1442ef6dd075b2e346b7f
….
….
2018-02-08 14:53:21.186 P00 INFO: write /var/lib/pgsql/10/data/recovery.conf
2018-02-08 14:53:23.948 P00 INFO: restore global/pg_control (performed last to ensure aborted restores cannot be started)
2018-02-08 14:53:28.258 P00 INFO: restore command end: completed successfully
[postgres@pgserver ~]$

At the end of the restore, a recovery.conf file is created in the data directory:

[postgres@pgserver data]$ cat recovery.conf
restore_command = '/usr/bin/pgbackrest --log-level-console=info --stanza=clustpgserver archive-get %f "%p"'

Now we can restart the PostgreSQL cluster

[postgres@pgserver data]$ pg_ctl start
waiting for server to start....2018-02-08 14:57:06.519 CET [4742] LOG: listening on IPv4 address "0.0.0.0", port 5435
2018-02-08 14:57:06.522 CET [4742] LOG: listening on IPv6 address "::", port 5435
2018-02-08 14:57:06.533 CET [4742] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5435"
2018-02-08 14:57:06.551 CET [4742] LOG: listening on Unix socket "/tmp/.s.PGSQL.5435"
2018-02-08 14:57:06.645 CET [4742] LOG: redirecting log output to logging collector process
2018-02-08 14:57:06.645 CET [4742] HINT: Future log output will appear in directory "log".
...... done
server started

And then connect

[postgres@pgserver data]$ psql
psql (10.1)
Type "help" for help.
postgres=#
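
The restore above simply brings back the latest backup into an empty data directory. PgBackRest offers more restore options; for example, a delta restore combined with point-in-time recovery could look like the sketch below, where the target timestamp is of course only an assumption. The --delta option keeps the existing files in place and only restores those that differ from the backup, and recovery then stops at the requested time.

[postgres@pgserver ~]$ pgbackrest --stanza=clustpgserver --delta --type=time "--target=2018-02-08 15:00:00" restore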

Conclusion
In this blog we have shown, with a simple configuration, how to perform backups and restores using PgBackRest. This basic configuration can help with a first use of PgBackRest. In future articles we will go further into more advanced uses of this tool.

 

Cet article Backup and Restore PostgreSQL with PgBackRest I est apparu en premier sur Blog dbi services.

Join Factorization

Jonathan Lewis - Wed, 2018-02-14 09:38

This item is, by a roundabout route, a follow-up to yesterday’s note on a critical difference in cardinality estimates that appeared if you used the coalesce() function in its simplest form as a substitute for the nvl() function. Connor McDonald wrote a followup note about how using the nvl() function in a suitable predicate could lead to Oracle splitting a query into a UNION ALL (in version 12.2), which led me to go back to a note I’d written on the same topic about 10 years earlier where the precursor of this feature already existed but used CONCATENATION instead of OR-EXPANSION. The script I’d used for my earlier article was actually one I’d written in February 2003 and tested fairly regularly since – which brings me to this article, because I finally tested my script against 12.2.0.1 to discover a very cute bit of optimisation.

The business of splitting a query into two parts can be used even when the queries are more complex and include joins – this doesn’t always happen automatically and sometimes has to be hinted, but that can be a costs/statistics thing. For example, from 12.1.0.2 – a query and its execution plan:


select
        *
from
        t1, t2
where
        t1.v1 = nvl(:v1,t1.v1)
and     t2.n1 = t1.n1
;

---------------------------------------------------------------------------------------------------
| Id  | Operation                               | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                        |         |  1001 |   228K|    11   (0)| 00:00:01 |
|   1 |  CONCATENATION                          |         |       |       |            |          |
|*  2 |   FILTER                                |         |       |       |            |          |
|*  3 |    HASH JOIN                            |         |  1000 |   228K|     8   (0)| 00:00:01 |
|   4 |     TABLE ACCESS FULL                   | T2      |  1000 |   106K|     4   (0)| 00:00:01 |
|*  5 |     TABLE ACCESS FULL                   | T1      |  1000 |   122K|     4   (0)| 00:00:01 |
|*  6 |   FILTER                                |         |       |       |            |          |
|   7 |    NESTED LOOPS                         |         |     1 |   234 |     3   (0)| 00:00:01 |
|   8 |     NESTED LOOPS                        |         |     1 |   234 |     3   (0)| 00:00:01 |
|   9 |      TABLE ACCESS BY INDEX ROWID BATCHED| T1      |     1 |   125 |     2   (0)| 00:00:01 |
|* 10 |       INDEX RANGE SCAN                  | T1_IDX1 |     1 |       |     1   (0)| 00:00:01 |
|* 11 |      INDEX UNIQUE SCAN                  | T2_PK   |     1 |       |     0   (0)| 00:00:01 |
|  12 |     TABLE ACCESS BY INDEX ROWID         | T2      |     1 |   109 |     1   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(:V1 IS NULL)
   3 - access("T2"."N1"="T1"."N1")
   5 - filter("T1"."V1" IS NOT NULL)
   6 - filter(:V1 IS NOT NULL)
  10 - access("T1"."V1"=:V1)
  11 - access("T2"."N1"="T1"."N1")

You can see in this plan how Oracle has split the query into two queries combined through concatenation with FILTER operations at lines 2 (:v1 is null) and 6 (:v1 is not null) to allow the runtime engine to execute only the appropriate branch. You’ll also note that each branch can be optimised separately and in this case the two branches get dramatically different paths because of the enormous difference in the estimated volumes of data.

So let’s move up to 12.2.0.1 and see what happens to this query – but first I’m going to execute a naughty “alter session…”:


------------------------------------------------------------------------------------------------------------
| Id  | Operation                                | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                         |                 |  1001 |   180K|    11   (0)| 00:00:01 |
|   1 |  VIEW                                    | VW_ORE_F79C84EE |  1001 |   180K|    11   (0)| 00:00:01 |
|   2 |   UNION-ALL                              |                 |       |       |            |          |
|*  3 |    FILTER                                |                 |       |       |            |          |
|   4 |     NESTED LOOPS                         |                 |     1 |   234 |     3   (0)| 00:00:01 |
|   5 |      NESTED LOOPS                        |                 |     1 |   234 |     3   (0)| 00:00:01 |
|   6 |       TABLE ACCESS BY INDEX ROWID BATCHED| T1              |     1 |   125 |     2   (0)| 00:00:01 |
|*  7 |        INDEX RANGE SCAN                  | T1_IDX1         |     1 |       |     1   (0)| 00:00:01 |
|*  8 |       INDEX UNIQUE SCAN                  | T2_PK           |     1 |       |     0   (0)| 00:00:01 |
|   9 |      TABLE ACCESS BY INDEX ROWID         | T2              |     1 |   109 |     1   (0)| 00:00:01 |
|* 10 |    FILTER                                |                 |       |       |            |          |
|* 11 |     HASH JOIN                            |                 |  1000 |   228K|     8   (0)| 00:00:01 |
|  12 |      TABLE ACCESS FULL                   | T2              |  1000 |   106K|     4   (0)| 00:00:01 |
|* 13 |      TABLE ACCESS FULL                   | T1              |  1000 |   122K|     4   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter(:V1 IS NOT NULL)
   7 - access("T1"."V1"=:V1)
   8 - access("T2"."N1"="T1"."N1")
  10 - filter(:V1 IS NULL)
  11 - access("T2"."N1"="T1"."N1")
  13 - filter("T1"."V1" IS NOT NULL)

There’s nothing terribly exciting about the change – except for the disappearance of the CONCATENATION operator and the appearance of the VIEW and UNION ALL operators to replace it (plus you’ll see that the two branches appear in the opposite order in the plan). But let’s try again, without doing that “alter session…”:


--------------------------------------------------------------------------------------------------------------
| Id  | Operation                               | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                        |                    |  1001 |   229K|    10   (0)| 00:00:01 |
|*  1 |  HASH JOIN                              |                    |  1001 |   229K|    10   (0)| 00:00:01 |
|   2 |   TABLE ACCESS FULL                     | T2                 |  1000 |   106K|     4   (0)| 00:00:01 |
|   3 |   VIEW                                  | VW_JF_SET$A2355C8B |  1001 |   123K|     6   (0)| 00:00:01 |
|   4 |    UNION-ALL                            |                    |       |       |            |          |
|*  5 |     FILTER                              |                    |       |       |            |          |
|*  6 |      TABLE ACCESS FULL                  | T1                 |  1000 |   122K|     4   (0)| 00:00:01 |
|*  7 |     FILTER                              |                    |       |       |            |          |
|   8 |      TABLE ACCESS BY INDEX ROWID BATCHED| T1                 |     1 |   125 |     2   (0)| 00:00:01 |
|*  9 |       INDEX RANGE SCAN                  | T1_IDX1            |     1 |       |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T2"."N1"="ITEM_1")
   5 - filter(:V1 IS NULL)
   6 - filter("T1"."V1" IS NOT NULL)
   7 - filter(:V1 IS NOT NULL)
   9 - access("T1"."V1"=:V1)

The plan now shows a VIEW which is a UNION ALL involving only table t1 in both its branches. The result set from the view is then used as the probe table of a hash join with t2. You’ll note that the name of the view is now VW_JF_SET$A2355C8B – that’s JF for “Join Factorization”, and the alter session I executed to get the first plan was to disable the feature: ‘alter session set “_optimizer_join_factorization”= false;’.

Join factorization can occur when the optimizer sees a union all view with some tables that are common to both (all) branches of the query, and finds that it can move those tables outside the query while getting the same end result at a lower cost. In this case it happens to be a nice example of how the optimizer can transform and transform again to get to the lowest cost plan.

It’s worth noting that Join Factorization has been around since 11.2.x.x, and Or Expansion has been around for even longer – but it’s not until 12.2 that nvl() transforms through Or Expansion, which allows it to transform through Join Factorization.

You’ll note, by the way, that with this plan we always do a full tablescan of t2, whereas with just Or-Expansion it’s a potential threat that may never (or hardly ever) be realised.  That’s a point to check if you find that the transformation starts to appear inappropriately on an upgrade. There is a hint to disable the feature for a query, but it’s not trivial to get it right so if you do need to block the feature the smart hint (or SQL Patch) would be “opt_param(‘_optimizer_join_factorization’ ‘false’)”.

Footnote:

If you want to run the experiments yourself, here’s the script I used to generate the data. It’s more complicated than it needs to be because I use the same tables in several different tests:

rem
rem     Script:         null_plan_122.sql
rem     Author:         Jonathan Lewis
rem     Dated:          February 2018
rem     Purpose:
rem
rem     Last tested
rem             12.2.0.1        Join Factorization
rem             12.1.0.2        Concatenation
rem
rem

drop table t2;
drop table t1;

-- @@setup  -- various set commands etc.

create table t1 (
        n1              number(5),
        n2              number(5),
        v1              varchar2(10),
        v2              varchar2(10),
        v3              varchar2(10),
        v4              varchar2(10),
        v5              varchar2(10),
        padding         varchar2(100),
        constraint t1_pk primary key(n1)
);

insert into t1
select
        rownum,
        rownum,
        rownum,
        trunc(100 * dbms_random.value),
        trunc(100 * dbms_random.value),
        trunc(100 * dbms_random.value),
        trunc(100 * dbms_random.value),
        rpad('x',100)
from all_objects
where
        rownum <= 1000 -- > comment to avoid WordPress format mess
;

create unique index t1_n2 on t1(n2);

create index t1_idx1 on t1(v1);
create index t1_idx2 on t1(v2,v1);
create index t1_idx3 on t1(v3,v2,v1);

create table t2 (
        n1              number(5),
        v1              varchar2(10),
        padding         varchar2(100),
        constraint t2_pk primary key(n1)
);

insert into t2
select
        rownum,
        rownum,
        rpad('x',100)
from all_objects
where
        rownum <= 1000     -- > comment to avoid WordPress format mess
;

create index t2_idx on t2(v1);

begin dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T1',
                method_opt       => 'for all columns size 1'
        );

        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T2',
                method_opt       => 'for all columns size 1'
        );
end;
/

variable n1 number
variable n2 number
variable v1 varchar2(10)
variable v2 varchar2(10)
variable v3 varchar2(10)

exec :n1 := null
exec :n2 := null
exec :v1 := null
exec :v2 := null
exec :v3 := null

spool null_plan_122

set autotrace traceonly explain

prompt  ============================================
prompt  One colx = nvl(:b1,colx) predicate with join
prompt  ============================================

select
        *
from
        t1, t2
where
        t1.v1 = nvl(:v1,t1.v1)
and     t2.n1 = t1.n1
;

alter session set "_optimizer_join_factorization" = false;

select
        *
from
        t1, t2
where
        t1.v1 = nvl(:v1,t1.v1)
and     t2.n1 = t1.n1
;

alter session set "_optimizer_join_factorization" = true;

set autotrace off

spool off

Caesars Entertainment Transforms Its Iconic Business with Oracle

Oracle Press Releases - Wed, 2018-02-14 07:15
Press Release
Caesars Entertainment Transforms Its Iconic Business with Oracle
World-leading gaming and entertainment company selects Oracle Cloud Applications to transform business processes

Redwood Shores, Calif.—Feb 14, 2018

Caesars Entertainment Corporation, the world’s most diversified casino entertainment company, has selected Oracle Cloud Applications to improve the experiences of its guests and employees. With Oracle Cloud Applications, Caesars Entertainment has increased business agility and reduced costs by streamlining financial processes and improving employee productivity and engagement across its entire business operations, which includes Harrah’s, Caesars and Horseshoe.

To modernize business processes while continuing to provide its guests with unsurpassed service across 47 properties in five countries, Caesars Entertainment needed to completely rethink its existing business systems. With Oracle Enterprise Resource Planning (ERP) Cloud  and Oracle Human Capital Management (HCM) Cloud services, Caesars Entertainment has transformed its business by connecting 650 disparate systems to a cloud-based solution that unifies business and employee data on a modern, unified platform.

“We are always looking at how we can provide guests with the best possible services and products,” said Keith Causey, senior vice president and chief accounting officer, Caesars Entertainment. “While we have traditionally focused on the guest experience, a critical part of that process involves the business applications we use to run our organization. Oracle Cloud Applications enabled us to modernize our financial and HR systems so that we could quickly and easily embrace industry best practices, connect disparate applications and data sets and improve productivity.”

Oracle Cloud Applications will enable Caesars Entertainment to benefit from a complete and fully integrated suite of business applications. With Oracle ERP Cloud, Caesars Entertainment will be able to increase productivity, lower costs and improve controls by eliminating spreadsheets and manual processes. Oracle HCM Cloud will enable Caesars Entertainment to find, grow and retain the best talent and achieve complete workforce insights by streamlining access to internal and external talent and delivering advanced reporting and analytics.

“Caesars Entertainment has become part of the American culture, earning its place in the minds of millions worldwide,” said Steve Miranda, executive vice president of applications development at Oracle. “With Oracle Cloud Applications, Caesars Entertainment has embraced modern finance and HR best practices, increased productivity and dedicated more resources to its ongoing mission to provide guests with amazing experiences.”

Contact Info
Evelyn Tam
Oracle PR
+1.650.506.5936
evelyn.tam@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle Corporation.

Talk to a Press Contact

Evelyn Tam

  • +1.650.506.5936

Report: Cloud Services to Increase U.S. GDP by $2 Trillion over the Next Decade

Oracle Press Releases - Wed, 2018-02-14 07:00
Press Release
Report: Cloud Services to Increase U.S. GDP by $2 Trillion over the Next Decade
New research from Dr. Michael Mandel, senior fellow at the Mack Institute of Innovation Management at Wharton, predicts cloud accessibility to technologies such as blockchain, AI, and automation will power the next big wave of U.S. productivity

Oracle Modern Finance Experience, New York, NY—Feb 14, 2018

Oracle (NYSE: ORCL) today published a report examining the potential impact of cloud services on the United States economy. Authored by Dr. Michael Mandel, senior fellow at the Mack Institute of Innovation Management at the Wharton School and commissioned by Oracle, the report estimates that a cumulative US$2 trillion will be added to U.S. Gross Domestic Product over the next ten years as a result of the productivity and innovation gains that cloud services will deliver. These gains will be attained through the widespread diffusion of advanced technologies such as blockchain, artificial intelligence, cognitive computing, machine learning, and intelligent automation; as well as industry best practices.

The report, titled “Intelligent Finance: How CFOs Can Lead the Coming Productivity Boom,” builds upon a 2017 study conducted by Dr. Mandel on behalf of the Technology CEO Council. It provides CFOs and finance leaders with a deeper understanding of the potential productivity and profitability gains that can be realized by embracing cloud services and the emerging technologies and best practices they contain.

Dr. Mandel’s research shows a widening productivity divide between the organizations and industries that have invested in software technologies and those that haven’t. But, the report predicts cloud services will close the productivity gap as organizations and industries that have traditionally lagged in technology adoption begin to take advantage of more cost effective and accessible cloud-based solutions. These gains will benefit workers, shareholders, and the broader economy. 

“The cloud era will give low-productivity organizations and industries access to the same technology and best practices that companies in high-productivity industries benefit from,” said Dr. Michael Mandel. “By standardizing and automating routine tasks, the lower producers will increase efficiency and reduce the cost of many processes, which will help them self-fund further investments in technology, develop new capabilities, and redeploy and hire resources for higher-level and better-paid tasks.”

One low-productivity industry that is well-poised to take advantage of emerging cloud technologies is healthcare—a $3 trillion industry.

“Cloud services now provide us with the basis for integrating our financial data with population health data,” said Michael Murray, senior vice president and chief financial officer at Blue Shield of California, which provides healthcare coverage for more than four million Americans. “Through data analytics and better population health management, the United States could remove hundreds of millions of dollars in costs out of the healthcare system. It’s a huge economic opportunity and will ultimately enhance clinical quality and patient outcomes.”

“The potential impact of emerging technologies on society cannot be understated,” noted Dave Donatelli, executive vice president, Cloud Business Group, Oracle. “In addition to creating new jobs and industries based on technology innovations in areas such as 3D printing, artificial intelligence, and cognitive computing, new technologies can help address some of our country’s basic needs. For example, many Americans are struggling to afford the high cost of healthcare coverage and basic medical expenses. Imagine if technology could predict illness and health risks to keep the population healthier. Similarly, cloud-based technologies have the potential to bring down the cost of education, using artificial intelligence, bots, and IoT to create more efficient institutions and better student outcomes.”

As part of the research, Dr. Mandel conducted in-depth interviews with CEOs, CFOs, and other top executives at companies in key industries identified as essential to U.S. economic growth.

Oracle customers who participated in the report include:

  • Blue Shield of California (healthcare)
  • Carbon (manufacturing)
  • ConnectOne Bank (financial services)
  • FairfieldNodal (oil & gas)
  • Oracle (high tech)
  • Shawnee State University (education)
  • The Wonderful Company (retail/consumer)


The full report can be downloaded at www.oracle.com/intelligent-finance-report.

Contact Info
Evelyn Tam
Oracle PR
1.650.506.5936
evelyn.tam@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor Disclaimer

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation.

Statements in this article relating to Oracle’s future plans, expectations, beliefs, and intentions are “forward-looking statements” and are subject to material risks and uncertainties. Such statements are based on Oracle’s current expectations and assumptions, some of which are beyond Oracle’s control. All information in this article is current as of October 2, 2017 and Oracle undertakes no duty to update any statement in light of new information or future events.

Talk to a Press Contact

Evelyn Tam

  • 1.650.506.5936

SQL Server on Docker and network bridge considerations

Yann Neuhaus - Wed, 2018-02-14 06:46

Let’s continue with this blog post series about SQL Server and Docker. A couple of days ago, I was at a customer site that had already implemented SQL Server 2017 on Linux as Docker containers. It was definitely a very interesting day with a lot of customer experience and feedback, and we discussed a lot of architecture scenarios.

The interesting point here is that I was able to compare with a previous customer who had used Docker containers for a while in a completely different way. Indeed, my new customer implemented a Docker infrastructure exclusively based on SQL Server containers, whereas the older one had already containerized its applications, which were connected to an external, non-containerized SQL Server environment.

Use case 1 – Containerized apps and virtualized SQL Server environments
Use case 2 – SQL Server containers and virtualized applications

In this blog post I want to focus on the first use case in terms of networks.

Connecting to an outside SQL Server (from a Docker perspective) is probably an intermediate solution for many customers who already deal with mission-critical environments with very restrictive high-availability requirements and where very high performance is required as well. Don’t get me wrong: I’m not saying Docker is not designed for mission-critical scenarios, but fear of the unknown, as with virtualization before, is still predominant, at least for this kind of scenario. I always keep in mind the recurring customer question: is Docker ready for production and for databases? Connecting to a non-containerized SQL Server environment may make sense here, at least to speed up container adoption. That’s my guess, but feel free to comment with your thoughts!

So, in this context we may use different Docker network topologies. I spent some time studying and discussing with customers the network topologies implemented in their context. For simple Docker infrastructures (without orchestrators like Swarm or Kubernetes), Docker bridges seem to be predominant, either the Docker0 bridge or user-defined bridges.

 

  • Docker default bridge (Docker0)

For very limited Docker topologies, the default network settings with the Docker0 bridge will probably be sufficient. It is probably the case for my latest customer, with only 5 SQL Server containers on top of one Docker engine. By default, each container created without any network specification (and without any Docker engine setting customization) will have one network interface sitting on the docker0 bridge, with an IP from the 172.17.0.0/16 CIDR or whichever CIDR you have configured Docker to use. But did you ever wonder what exactly a bridge is in the Docker world?

Let’s have a deeper look at it with a very simple example: one Docker engine that hosts two containers, each based on microsoft/mssql-tools, and one outside SQL Server that runs on top of a Hyper-V virtual machine. The picture below shows some network details that I will explain later in this blog post.

(Image: blog 128 - 3 - docker network bridge)

My 2 containers can communicate with each other because they are sitting on the same network bridge, and they are also able to communicate with my database server through NAT: IP masquerading and IP forwarding are enabled on my Docker host.
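
A quick way to verify these two prerequisites on the host (assuming an iptables-based Docker setup) is shown below; we expect net.ipv4.ip_forward to be 1 and a MASQUERADE rule covering the 172.17.0.0/16 bridge subnet.

$ sysctl net.ipv4.ip_forward
$ sudo iptables -t nat -L -n | grep -i "masquerade"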

$ sudo docker run -tid --name docker1 microsoft/mssql-tools
77b501fe29af322dd2d1da2824d339a60ba3080c1e61a2332b3cf563755dd3e3

$ sudo docker run -tid --name docker2 microsoft/mssql-tools
3f2ba669591a1889068240041332f02faf970e3adc85619adbf952d5c135d3f4

$ sudo docker ps
CONTAINER ID        IMAGE                   COMMAND                  CREATED             STATUS              PORTS               NAMES
3f2ba669591a        microsoft/mssql-tools   "/bin/sh -c /bin/bash"   7 seconds ago       Up 6 seconds                            docker2
77b501fe29af        microsoft/mssql-tools   "/bin/sh -c /bin/bash"   11 seconds ago      Up 10 seconds                           docker1

 

Let’s take a look at the network configuration of each container. As a reminder, each network object represents a layer 2 broadcast domain with a layer 3 subnet as shown below. Each container is attached to a network through a specific endpoint.

$ sudo docker inspect docker1
[
"Gateway": "172.17.0.1",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "172.17.0.2",
            "IPPrefixLen": 16,
            "IPv6Gateway": "",
            "MacAddress": "02:42:ac:11:00:02",
            "Networks": {
                "bridge": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": null,
                    "NetworkID": "985f25500e3d0c55d419790f1ac446f92c8d1090dddfd69987a52aab0717e630",
                    "EndpointID": "bd82669031ad87ddcb61eaa2dad823d89ca86cae92c4034d4925009aae634c14",
                    "Gateway": "172.17.0.1",
                    "IPAddress": "172.17.0.2",
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "02:42:ac:11:00:02",
                    "DriverOpts": null
                }
            }
]

$sudo docker inspect docker2
[
"Gateway": "172.17.0.1",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "172.17.0.3",
            "IPPrefixLen": 16,
            "IPv6Gateway": "",
            "MacAddress": "02:42:ac:11:00:03",
            "Networks": {
                "bridge": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": null,
                    "NetworkID": "985f25500e3d0c55d419790f1ac446f92c8d1090dddfd69987a52aab0717e630",
                    "EndpointID": "140cd8764506344958e9a9725d1c2513f67e56b2c4a1fc67f317c3e555764c1e",
                    "Gateway": "172.17.0.1",
                    "IPAddress": "172.17.0.3",
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "02:42:ac:11:00:03",
                    "DriverOpts": null
                }
            }
]

 

To summarize, two IP addresses have been assigned to the docker1 container (172.17.0.2) and the docker2 container (172.17.0.3) from the IP address range defined by the Docker0 bridge, by Docker’s internal IPAM module. Each network interface is created with its own MAC address, and the gateway IP address (172.17.0.1) for both containers corresponds to the Docker0 bridge interface.

$ sudo ip a show docker0
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
    link/ether 02:42:2a:d0:7e:76 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:2aff:fed0:7e76/64 scope link
       valid_lft forever preferred_lft forever

 

Let’s try to connect from both containers to my SQL Server database:

$ sudo docker exec -it docker1
…
$ sudo docker exec -it docker2
...

 

Then on each container let’s run the following sqlcmd command:

sqlcmd -S 192.168.40.30,1450 -Usa -Ptoto

 

Finally let’s switch to the SQL Server instance and get a picture of the existing connections (IP address 192.168.40.30 and port 1450).

SELECT 
	c.client_net_address,
	c.client_tcp_port,
	c.local_net_address,
	c.protocol_type,
	c.auth_scheme,
	s.program_name,
	s.is_user_process
FROM sys.dm_exec_connections AS c
JOIN sys.dm_exec_sessions AS s ON c.session_id = s.session_id
WHERE client_net_address <> '<local machine>'

 

(Image: blog 128 - 4 - docker network bridge sqlcmd)

We may notice that the client IP address is the same for both containers (192.168.40.50, the Docker host’s address), indicating that NAT is used to connect from each container.

Let’s go back to the Docker engine network configuration. After creating my 2 containers, we may notice the creation of 2 additional network interfaces.

$ ip a show | grep veth*
12: veth45297ff@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP
14: veth46a8316@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP

 

What are they? At this point, we are entering the Linux network namespace world. You can read further technical details on the internet, but to keep network namespace concepts simple, I would say they allow running different and separate network stacks (including routing tables) that operate independently of each other. In other words, they are a way to isolate different networks from each other based on the same physical network device. Assuming we are using Docker bridge-type networks, when creating a container we are creating, in the background, a dedicated network namespace that includes a virtual Ethernet interface, which comes in interconnected pairs. In fact, a virtual Ethernet interface acts as a tube that connects a Docker container namespace (in this context) to the outside world via the default/global namespace where the physical interface exists.

Before digging further into details about virtual interfaces, let’s say that by default Docker doesn’t expose network namespace information because it uses its own libcontainer, and the microsoft/mssql-tools Docker image is based on a minimal Linux image that doesn’t include the network tools needed to easily show virtual interface information. So, a workaround is to expose a Docker container’s namespace on the host.

First we have to find out the process id of the container and then link its corresponding proc namespace to /var/run/netns host directory as shown below:

$ sudo docker inspect --format '{{.State.Pid}}' docker1
2094
$ sudo ln -s /proc/2094/ns/net /var/run/netns/ns-2094

 

Then we may use ip netns command to extract the network information

$ sudo ip netns
ns-2094 (id: 0)
$ sudo ip netns exec ns-2094 ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
11: eth0@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0

 

Here we go. The interesting information is the container network interface 11: eth0@if12

So, one end of the pair is the eth0 interface in the Docker container, and the “outside” end corresponds to interface number 12. On the host, interface 12 corresponds to the virtual Ethernet adapter veth45297ff. Note that we may also find the peer corresponding to the container interface (@if11).

$ ip a | grep "^12"
12: veth45297ff@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP

 

Finally, let’s take a look at the bridge used by the virtual ethernet adapter veth45297ff

$ sudo brctl show
bridge name     bridge id               STP enabled     interfaces
docker0         8000.02422ad07e76       no              veth45297ff
                                                        veth46a8316

 

The other veth (46a8316) corresponds to my second docker2 container.

 

  • User-defined network bridges

But as said previously, the Docker0 bridge is only suitable for very limited scenarios. User-defined bridges are more prevalent in more complex scenarios like microservice applications because they offer better isolation between containers and the outside world, as well as better manageability and customization. At this stage we could also introduce macvlan networks, but probably in the next blog post …

For example, let’s say you want to create 2 isolated network bridges for a 3-tier application. Users will access the web server (via the exposed port) through the first network (frontend-server). But at the same time, you also want to prevent containers that sit on this network from making connections to the outside world. The second network (backend-server) will host containers that must have access to both the outside SQL Server database and the web server.

(Image: blog 128 - 5 - docker network bridge segregation)

User-defined networks are a good solution to address these requirements. Let’s create two of them. Note that by default containers may make connections to the outside world, but the outside world is not able to make connections to the containers without exposed listening ports. This is why I disabled IP masquerading (com.docker.network.bridge.enable_ip_masquerade=false) on the frontend-server network, to meet the above requirements.

$sudo docker network create \
    --driver bridge \
    --subnet 172.20.0.0/16 \
  --gateway 172.20.0.1 \
  backend-server  
$sudo docker network create \
    --driver bridge \
    --subnet 172.19.0.0/16 \
    --gateway 172.19.0.1 \
    --opt com.docker.network.bridge.enable_ip_masquerade=false \
  frontend-server
$ sudo docker network ls 
NETWORK ID          NAME                DRIVER              SCOPE
5c6f48269d2b        backend-server      bridge              local
985f25500e3d        bridge              bridge              local
b1fbde4f4674        frontend-server     bridge              local
ad52b859e3f9        host                host                local
1beda56f93d3        none                null                local

 

Let’s now take a look at the corresponding iptables masquerading rules on my host machine:

$ sudo iptables -t nat -L -n | grep -i "masquerade"
MASQUERADE  all  --  172.20.0.0/16        0.0.0.0/0
MASQUERADE  all  --  172.17.0.0/16        0.0.0.0/0

 

You may notice only the Docker0 (172.17.0.0/16) and backend-server (172.20.0.0/16) bridges are allowed for ip masquerading.

Then let’s create 3 containers: the first two (docker1 and docker2) will sit on the frontend-server network, and the third one (docker3) on the backend-server network. For convenience, I set up fixed hostnames for each container. I also used a different Ubuntu image that this time provides all the necessary network tools, including the ping command.

$ sudo docker run -d --rm --name=docker1 --hostname=docker1 --net=frontend-server -it smakam/myubuntu:v6 bash

$ sudo docker run -d --rm --name=docker2 --hostname=docker2 --net=frontend-server -it smakam/myubuntu:v6 bash

$sudo docker run -d --rm --name=docker3 --hostname=docker3 --net=backend-server -it smakam/myubuntu:v6 bash

$ sudo docker ps
CONTAINER ID        IMAGE                COMMAND             CREATED             STATUS              PORTS               NAMES
225ee13c38f7        smakam/myubuntu:v6   "bash"              2 minutes ago       Up 2 minutes                            docker3
d95014602fe2        smakam/myubuntu:v6   "bash"              4 minutes ago       Up 4 minutes                            docker2
1d9645f61245        smakam/myubuntu:v6   "bash"              4 minutes ago       Up 4 minutes                            docker1

 

First, probably one of the biggest advantages of user-defined networks (unlike the Docker0 bridge) is automatic DNS resolution between containers on the same user-defined network on the same host (this is the default behavior, but you can override DNS settings by specifying the --dns parameter at container creation time). On user-defined networks, Docker provides this name resolution through its embedded DNS server, which is kept up to date as containers are added or removed.

As expected, I can ping the docker2 container from the docker1 container and vice versa, but the same doesn’t apply between docker1 and docker3, nor between docker2 and docker3, because they are not sitting on the same network bridge.

$ sudo docker exec -ti docker1 ping -c2 docker2
PING docker2 (172.19.0.3) 56(84) bytes of data.
64 bytes from docker2.frontend-server (172.19.0.3): icmp_seq=1 ttl=64 time=0.088 ms
64 bytes from docker2.frontend-server (172.19.0.3): icmp_seq=2 ttl=64 time=0.058 ms
…
$ sudo docker exec -ti docker2 ping -c2 docker1
PING docker1 (172.19.0.2) 56(84) bytes of data.
64 bytes from docker1.frontend-server (172.19.0.2): icmp_seq=1 ttl=64 time=0.084 ms
64 bytes from docker1.frontend-server (172.19.0.2): icmp_seq=2 ttl=64 time=0.054 ms
...
$ sudo docker exec -ti docker1 ping -c2 docker3
ping: unknown host docker3
...

 

From a network perspective, on the host we may notice the creation of two additional bridge interfaces and 3 virtual Ethernet adapters after the creation of the containers.

$ brctl show
bridge name     bridge id               STP enabled     interfaces
br-5c6f48269d2b         8000.0242ddad1660       no              veth79ae355
br-b1fbde4f4674         8000.02424bebccdd       no              vethb66deb8
                                                        vethbf4ab2d
docker0         8000.02422ad07e76       no
$ ip a | egrep "^[1-9][1-9]"
25: br-5c6f48269d2b: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
28: br-b1fbde4f4674: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
58: vethb66deb8@if57: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-b1fbde4f4674 state UP
64: veth79ae355@if63: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-5c6f48269d2b state UP

 

If I want to make the docker3 container reachable from docker2 container I may simply connect the latter to the corresponding network as shown below:

$ sudo docker network connect backend-server docker2

$ sudo docker inspect docker2
[
"Networks": {
                "backend-server": {
                    "IPAMConfig": {},
                    "Links": null,
                    "Aliases": [
                        "d95014602fe2"
                    ],
                    "NetworkID": "5c6f48269d2b752bf1f43efb94437957359c6a72675380c16e11b2f8c4ecaaa1",
                    "EndpointID": "4daef42782b22832fc98485c27a0f117db5720e11d806ab8d8cf83e844ca6b81",
                    "Gateway": "172.20.0.1",
                    "IPAddress": "172.20.0.3",
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "02:42:ac:14:00:03",
                    "DriverOpts": null
                },
                "frontend-server": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": [
                        "d95014602fe2"
                    ],
                    "NetworkID": "b1fbde4f4674386a0e01b7ccdee64ed8b08bd8505cd7f0021487d32951035570",
                    "EndpointID": "651ad7eaad994a06658941cda7e51068a459722c6d10850a4b546382c44fff86",
                    "Gateway": "172.19.0.1",
                    "IPAddress": "172.19.0.3",
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "02:42:ac:13:00:03",
                    "DriverOpts": null
                }
            }
]

 

You may notice the container is now connected to both the frontend-server and the backend-server networks, thanks to an additional network interface created at the same time.

$ sudo docker exec -it docker2 ip a show | grep eth
59: eth0@if60: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:13:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.19.0.3/16 brd 172.19.255.255 scope global eth0
68: eth2@if69: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:14:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.20.0.3/16 brd 172.20.255.255 scope global eth2

 

Pinging both the docker1 and docker3 containers from the docker2 container now succeeds.

$ sudo docker exec -it docker2 ping -c2 docker1
PING docker1 (172.19.0.2) 56(84) bytes of data.
64 bytes from docker1.frontend-server (172.19.0.2): icmp_seq=1 ttl=64 time=0.053 ms
64 bytes from docker1.frontend-server (172.19.0.2): icmp_seq=2 ttl=64 time=0.052 ms
…
$ sudo docker exec -it docker2 ping -c2 docker3
PING docker3 (172.20.0.2) 56(84) bytes of data.
64 bytes from docker3.backend-server (172.20.0.2): icmp_seq=1 ttl=64 time=0.082 ms
64 bytes from docker3.backend-server (172.20.0.2): icmp_seq=2 ttl=64 time=0.054 ms
…
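
To come back to the SQL Server use case, a container attached to the backend-server network can reach the outside instance exactly as in the Docker0 example, because IP masquerading is enabled on that bridge (a container on the frontend-server network, by contrast, could not reach 192.168.40.30). A quick sketch, reusing the microsoft/mssql-tools image and running sqlcmd with the same test connection as before from the shell inside the container:

$ sudo docker run -it --rm --name=sqlclient --net=backend-server microsoft/mssql-tools
sqlcmd -S 192.168.40.30,1450 -Usa -Ptoto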

 

In this blog post, we took a first look at Docker network bridges and the use cases we may have to deal with for SQL Server instances, depending on the context. As a reminder, user-defined networks allow defining fine-grained policy rules to interconnect containers on different subnets. This is basically what we may want to achieve with microservice applications. Indeed, such applications include some components that need to span multiple networks (backend and frontend), whereas others should be isolated (even from the outside), depending on their role.

Happy containerization!

Cet article SQL Server on Docker and network bridge considerations est apparu en premier sur Blog dbi services.

Could you trust option_packs_usage_statistics.sql ?

Yann Neuhaus - Wed, 2018-02-14 04:17
Introduction

As a former Oracle LMS qualified auditor, my opinion is sometimes requested before/during/after an Oracle LMS audit, or simply to assure a customer that his Oracle database is 100% in conformity with Oracle licensing policy. Even if

“The Oracle License Management Services (LMS) Group is the only Oracle group authorized to review and provide opinions on compliance status and will provide guidance, education and impartial opinions on a customer or partner’s compliance state. For more information please visit the following website: http://www.oracle.com/corporate/lms.”

I do very much hope that you will find interesting tips in this blog.

Most of the time, when a customer would like to check which Oracle options are used by his database infrastructure, he uses the well-known script "option_packs_usage_statistics.sql". dbi services checked the options detected by this script provided by Oracle (My Oracle Support DOC ID 1317265.1). Depending on your database usage this script will detect the usage of different options, but can you really trust its output, and how should you interpret it?

Could we trust the output of option_packs_usage_statistics.sql ?

The answer is quite easy and short: NO, you can't!

Why? Because, as for any software, there are bugs, and these bugs lead to false positive detections. The good news is that some of these false positives are documented on My Oracle Support. Indeed the script option_packs_usage_statistics.sql used and provided by Oracle has 14 documented bugs (My Oracle Support Doc ID 1309070.1) and some other non-documented bugs (e.g. My Oracle Support BUG 17164904). These bugs are related to:

1.    Bug 11902001 – Exclude default users for Feature usage tracking for Securefiles option
2.    Bug 11902142 – Exclude default users for Feature usage tracking for Advanced Compression option
3.    Bug 19618850 – SOLUTION TO PREVENT UNINTENTED ORACLE OPTION USAGE
4.    Query against DBA_FEATURE_USAGE_STATISTICS is not a true test for use of SDO
5.    Bug 16088534 : RMAN default Backup BZIP2 Compression feature is wrongly reported as an Advanced Compression feature
6.    Bug 22122625 – GETTING FALSE POSITIVES ON USAGE OF ADVANCED INDEX COMPRESSION
7.    Bug 24844549 – ADVANCED INDEX COMPRESSION SHOWS USAGE IN DBA_FEATURE_USAGE_STATISTICS WITH HCC
8.    Bug 16859747 – DBA_FEATURE_USAGE_STATISTICS SHOWS INCORRECT USAGE FOR HEAPCOMPRESSION
9.    Bug 16563444 – HEAT MAP FEATURE USAGE TRACKING IS NOT CORRECT
10.    Bug 19317899 – IMC: IN-MEMORY OPTION IS REPORTED AS BEING USED EVEN INMEMORY_SIZE IS 0
11.    Bug 19308780 – DO NOT FEATURE TRACK OBJECTS FOR IM WHEN INMEMORY_SIZE = 0
12.    Bug 21248059 – DBA_FEATURE_USAGE_STATISTICS BUG IN TRACKING “HYBRID COLUMNAR COMPRESSION” FEAT
13.    Bug 25661076 – DBA_FEATURE_USAGE_STATISTICS INCORRECTLY SHOWS SPATIAL USAGE IN 12C
14.    Bug 23734270 – DBA_FEATURE_USAGE_STATISTICS SHOWS PERMANENT USAGE OF REAL-TIME SQL MONITORING

These bugs may lead to the detection of features such as: Automatic Maintenance – SQL Tuning Advisor & Automatic SQL Tuning Advisor, Real-Time SQL Monitoring, Advanced Security – Oracle Utility Datapump (Export) and Oracle Utility Datapump (Import), Advanced Compression – Heat Map, Advanced Compression – Oracle Utility Datapump (Export) and Oracle Utility Datapump (Import), and so on.

Of course these bugs make the real options usage analysis especially difficult, even for an experienced Database Administrator. Additionally, an Oracle database in version 12 can make use of options in its maintenance windows without any manual activation. That is the case, for instance, of options such as Automatic Maintenance – SQL Tuning Advisor and Automatic SQL Tuning Advisor.

14. Bug 23734270 – DBA_FEATURE_USAGE_STATISTICS SHOWS PERMANENT USAGE OF REAL-TIME SQL MONITORING
On a freshly created 12c database, DBA_FEATURE_USAGE_STATISTICS shows usage of Real-Time SQL Monitoring even if no reports have been run from OEM pages or with DBMS_SQL_MONITOR.
Reason: SQL Monitor reports are automatically generated and saved in AWR but should be considered as system usage.
This behavior is the same for all 12 releases and is not present in 11.2. – Extract of My Oracle Support Bug 23734270

Even if the LMS team is not using the option_packs_usage_statistics.sql script, the output of the LMS_Collection_Tool (ReviewLite.sql) is quite the same. The direct consequence, in case of an Oracle LMS audit, is that the auditor could detect options that you simply never used, and you will have to provide proof of non-usage… if not, you will have to pay the invoice following the final LMS report, as stated in your LMS preliminary/final report.

“Based upon the information provided to License Management Services, the following licensing issues need to be resolved within 30 days from the date of the Final Report.”

“In accordance to Oracle compliance policies, backdated support charges are due for the period of unlicensed usage of Oracle Programs.
Please provide your feedback on this preliminary report within 10 days from the presentation of this report.”- extract of an Oracle LMS audit

Even if I do not have hundreds of cases where the LMS department made a wrong detection, I have concrete stories where the LMS team detected some false positives. The last case was related to the detection of more than 700 usages of Advanced Compression due to unpublished BUG 17164904. Thanks to My Oracle Support Doc ID 1993134.1, the bug is explained:

In 12.1.0.1,  the compression counter is incorrectly incremented (COMPRESSCNT=1) for compression=metadata_only (either explicitly or by default) due to unpublished BUG 17164904 – INCORRECT FEATURE USAGE STATISTICS FOR DATA PUMP COMPRESSION, fixed with 12.1.0.2.
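
If you want to cross-check such a false positive yourself, a quick look at DBA_FEATURE_USAGE_STATISTICS already shows the counters the script relies on. The snippet below is only a sketch run as a privileged user; the LIKE filter is just an example and the formatting commands are a matter of taste:

$ sqlplus -S / as sysdba <<'EOF'
set linesize 200
set long 200
col name format a45
col feature_info format a60
select name, version, detected_usages, currently_used, feature_info
  from dba_feature_usage_statistics
 where name like 'Oracle Utility Datapump%';
EOF

The FEATURE_INFO column is where, for example, the compression counter mentioned above shows up, so it is worth reading it before drawing any conclusion.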

How to interpret the output of option_packs_usage_statistics.sql ?

Sometimes this script can report some nonsensical option usage. That is the case, for instance, for features provided only since database version 12c but detected on your old database version 11g. In such a case simply edit the option_packs_usage_statistics.sql script and have a look at the comments. A perfect example of that is the detection of Heat Map usage in database version 11g, whereas this option is only available since version 12c. You can see below another example of wrong option detection related to "Automatic Maintenance – SQL Tuning Advisor" and "Automatic SQL Tuning Advisor":


SELECT 'Tuning Pack'                                         , 'Automatic Maintenance - SQL Tuning Advisor'              , '^12\.'                      , 'INVALID' from dual union all  -- system usage in the maintenance window
SELECT 'Tuning Pack'                                         , 'Automatic SQL Tuning Advisor'                            , '^11\.2|^12\.'               , 'INVALID' from dual union all  -- system usage in the maintenance window
SELECT 'Tuning Pack'                                         , 'Real-Time SQL Monitoring'                                , '^11\.2'                     , ' '       from dual union all

This INVALID clause explains that the detection of this option is due to system usage in the maintenance window in version 12 (Automatic Maintenance – SQL Tuning Advisor) and in versions 11.2 and 12 for Automatic SQL Tuning Advisor. This is also explained a few lines later in the option_packs_usage_statistics.sql script:


where nvl(CONDITION, '-') != 'INVALID'                   -- ignore features for which licensing is not required without further conditions
    and not (CONDITION = 'C003' and CON_ID not in (0, 1))  -- multiple PDBs are visible only in CDB$ROOT; PDB level view is not relevant
)

In such a case the option does not have to be considered, since the normal behavior of an Oracle database in version 12 is to use this option in the maintenance window. This is just an example to illustrate that some detected options do not have to be licensed, as explained in the script.

Conclusion

I do hope that this blog helps you to get a better understanding of how to detect what your database infrastructure really uses in terms of Oracle options. Anyway, if you are convinced that you do not use an Oracle database option despite the output of scripts such as option_packs_usage_statistics.sql or ReviewLite.sql claiming the opposite, have a look at My Oracle Support. Look for bugs related to the wrong detection of this feature and, with a little bit of luck, you will find something interesting. Oracle is definitely engineered for heroes…

Oracle Engineered For Heroes

 

Cet article Could you trust option_packs_usage_statistics.sql ? est apparu en premier sur Blog dbi services.

Three Quick Tips API Platform CS - Gateway Installation (Part 1)

OTN TechBlog - Tue, 2018-02-13 16:00

This blog post assumes some prior knowledge of API Platform Cloud Service and pertains to the on-premises gateway installation steps. Here we list three useful tips (applicable for 18.1.3+), arranged in no particular order:

  • Before installing the gateway, make sure you have the correct values for "listenIpAddress" and "publishAddress". This can be done by going through the following checklist (Linux only); a combined shell sketch of these checks follows at the end of this post:
    • Does the command "hostname -f" return a valid value?
    • Does the command "ifconfig" list the IP addresses properly?
    • Do you have additional firewall/network policies that may prevent communication with the management tier?
    • Do you authoritatively know the internal and public IP addresses to be used for the gateway node?

            If you do not know the answers to any of the questions, please contact your network administrator.

           If you see issues with the gateway server not starting up properly, incorrect values of "listenIpAddress" and "publishAddress" could be the cause.

  • Before running the "creategateway" action (or any other action involving "creategateway", like "create-join" for example), do make sure that the management tier is accessible. You can use something like:
    • wget "<http|https>://<management_portal_host>:<management_portal_port>/apiplatform"
    • curl "<http|https>://<management_portal_host>:<management_portal_port>/apiplatform"

           If the above steps fail, then "creategateway" will also not work, so the questions to ask are:

  1. Do we need a proxy?
  2. If we have already specified a proxy, is it the correct proxy?
  3. In case we need a proxy, have we set the "managementServiceConnectionProxy" property in gateway-props.json?

Moreover, it is better if we also set the http_proxy/https_proxy environment variables to the correct proxy, if proxies are applicable.

  • Know your log locations; please refer to the following list:
    • For troubleshooting "install" or "configure" actions, refer to the <install_dir>/logs directory.
    • For troubleshooting "start" or "stop" actions, refer to <install_dir>/domain/<gateway_name>/(start*.out|stop*.out).
    • For troubleshooting "create-join"/"join" actions, refer to the <install_dir>/logs directory.
    • To troubleshoot issues post installation (i.e. after the physical node has joined the gateway), refer to the <install_dir>/domain/<gateway_name>/apics/logs directory.

We will try to post more tips in the coming weeks, so stay tuned and happy API Management.            
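
To wrap up, here is a minimal shell sketch that combines the pre-installation checks described above. The host name, URL and proxy value are placeholders only and have to be replaced with the values of your environment:

$ hostname -f                 # should return a valid fully qualified host name
$ ip addr show                # or ifconfig; verify the addresses intended for listenIpAddress/publishAddress
$ export https_proxy="http://proxy.example.com:80"     # only if a proxy is required
$ curl -s -o /dev/null -w "%{http_code}\n" "https://managementportal.example.com:443/apiplatform"

If the last command returns an HTTP status code instead of failing with a connection error, the management tier is reachable and "creategateway" should be able to reach it as well.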

Oracle Expands its Global Startup Ecosystem

Oracle Press Releases - Tue, 2018-02-13 13:48
Press Release
Oracle Expands its Global Startup Ecosystem New “Virtual” Global Scaleup Program Launches; Residential Startup Program Expands to North America with Austin, Texas Location

Redwood Shores, Calif.—Feb 13, 2018

Oracle today announced the expansion of its global startup ecosystem in an effort to increase the impact and support for the wider startup community, reach more entrepreneurs worldwide, and drive cloud adoption and innovation. The expansion includes the launch of a new virtual-style, non-residential global program, named Oracle Scaleup Ecosystem, as well as the addition of Austin to the residential Oracle Startup Cloud Accelerator program. The addition of Austin brings the residential program to North America and expands the accelerator’s reach to nine total global locations.

“In 2017, we launched eight residential programs ahead of schedule and attracted almost 4,000 global startups for only 40 program slots—a clear indication of the tremendous demand,” said Reggie Bradford, Oracle senior vice president, Startup Ecosystem and Accelerator. “Oracle Scaleup Ecosystem is a new global program that allows us to reach more innovators and entrepreneurs, regardless of location, including later-stage scaleup companies who need access to Oracle Cloud solutions and resources without the hands-on offerings our residential program provides. We’re building an ecosystem that enables tangible business value, customer growth and revenue—for the startups, our customers and Oracle.”

Oracle’s global startup mission is to provide enriching, collaborative partnerships to enable next-generation growth and drive cloud-based innovation for startups throughout all stages of their journey. To that end, Oracle offers residential and non-residential startup programs that power cloud-based technology innovation and enable co-creation and co-innovation across the startups, customers, partners and Oracle.

Oracle Scaleup Ecosystem is the new non-residential, virtual-style program designed for startups and venture capital and private equity portfolio companies to enable hypergrowth and scale. Oracle’s Scaleup program is collaborating with leading PE and VC firms and will target high-growth entities across EMEA, JAPAC, and the Americas, as well as a select number of investment groups and strategic partners. The program offers mentoring, R&D support, marketing/sales enablement, migration assistance, cloud credits and discounts, and access to Oracle’s customer and product ecosystems.

"We are always exploring new opportunities and resources that can accelerate innovation and growth for our portfolio companies," said Steve Herrod, managing director at General Catalyst. "Oracle has a formidable cloud-based technology stack, product expertise and a thriving customer and partner base. Collaborating with the Oracle Scaleup program has the potential to benefit both our companies and the broader global technology ecosystem."

Industry veteran and Amazon Web Services (AWS) alum Jason Williamson has been tapped by Bradford to lead the Oracle Scaleup Ecosystem program. Williamson helped launch private equity ecosystem initiatives at AWS. Also an author, professor, entrepreneur and former Oracle employee, Williamson brings a wealth of knowledge and unique skillset to lead Oracle’s Scaleup program.

“Lightspeed works closely with global enterprise technology companies to provide resources and access to Lightspeed’s portfolio companies,” said Sunil Rao, Partner, Business Services, Lightspeed India Partners. “Oracle’s Scaleup program is aptly timed for the emerging ecosystem of enterprise software and SaaS vendors, and access to Oracle's cloud solutions, global customers, partner footprint and industry experts who work closely with startups is super useful.”

“Working with Oracle has helped us fast-track growth with business development and technology enhancements,” said Rich Joffe, CEO, Stella.ai, an AI-based recruiting marketplace platform. “Relationships with Oracle Taleo and now Oracle Scaleup will enhance our performance and capabilities as our business rapidly grows globally.”

Austin is the newest addition to Oracle’s residential startup program, Oracle Startup Cloud Accelerator, bringing the program to North America for a total of nine locations worldwide: Austin, Bangalore, Bristol, Mumbai, Delhi, Paris, Sao Paulo, Singapore and Tel Aviv. The program will select five to six startups per cohort, supporting two cohorts a year. Selected companies will be entitled to hands-on technical and business mentoring by Oracle and industry experts, state-of-the-art technology with free Oracle Cloud credits, full access to a dedicated co-working space, as well as access to Oracle’s vast global ecosystem of startup peers, customers, investors and partners. More program details will be announced in the coming month. Interested startups can sign up to receive more information at oracle.com/startup/TX.

Applications for Oracle Scaleup Ecosystem are accepted on a rolling basis at oracle.com/startup/scaleup.

Contact Info
Julia Allyn
Oracle Corporate Communications
+1.650.607.1338
julia.allyn@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe, and Asia. For more information about Oracle (NYSE:ORCL), please visit us at oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Julia Allyn

  • +1.650.607.1338

When is the Best Time to Apply AD & TXK Patches?

Steven Chan - Tue, 2018-02-13 13:36

E-Business Suite 12.2's Online Patching feature allows you to apply patches to your environment while it is still running. For example, you can apply patches to Financials while your end-users are entering bills in Accounts Receivable. These patches are applied using our Applications DBA (AD) and EBS Technology Stack (TXK) tools.

A reader recently asked whether it's advisable to apply our latest EBS 12.2.7 update at the same time as our  AD-TXK Delta 10 updates. This is a great question, since it touches on both our goals for our AD and TXK tools as well as our internal testing practices for new updates and patches.

The short answer: You should apply the latest AD and TXK infrastructure tools in an initial patching cycle, and then apply EBS updates and patches (e.g. EBS 12.2.7) in a later patching cycle.
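
Schematically, and with placeholder patch numbers only (always take the exact patch list and any additional steps from Document 1617461.1 and the patch readmes), the two patching cycles would look like this:

# Patching cycle 1: update the AD/TXK infrastructure tools first
adop phase=prepare
adop phase=apply patches=<latest_AD_patch>,<latest_TXK_patch>
adop phase=finalize
adop phase=cutover
adop phase=cleanup

# Patching cycle 2: apply the EBS update (e.g. 12.2.7) using the new tools
adop phase=prepare
adop phase=apply patches=<EBS_12.2.7_update_pack>
adop phase=finalize
adop phase=cutover
adop phase=cleanup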

Why?

There are two reasons:

  1. Latest AD/TXK tools are better. The latest AD and TXK tool updates always include new patching-related features, as well as critical fixes for stability, security, and performance. For example, the September 2017 AD and TXK Delta 10 updates included new features for parallel index optimization, seed data handling, synchronization of theme files on cutover, and more.
     
    If you have the latest patching tools in place, the latest AD/TXK enhancements, improvements, and fixes reduce risk during the patching process itself. 
     
  2. Updates are always tested with the latest AD/TXK tools.  We do not generally have the resources to test new EBS patches with older patching tools. We always test the installation of our latest EBS updates (e.g. EBS 12.2.7) using the latest AD and TXK patches available at the time. In other words, you may encounter unexpected issues if you apply new patches using older patching tools.

Do new AD and TXK patches trigger functional re-testing?

New AD and TXK patches do not change any aspects of how EBS functional products work.  These infrastructure tools affect how patches are applied, not how patches work.  

In other words, having the latest AD/TXK updates may reduce the risk of applying, say, a new HRMS patch, but it will not affect any Human Resources-related functionality at all.

Likewise, applying the latest AD/TXK update does not affect the functionality of any existing EBS products. Infrastructure tools only affect the patching process, not how installed EBS products work. 

How do I find the latest AD and TXK tools?

Check the following reference:

You should periodically check Document 1617461.1 on My Oracle Support for updates, which are made as required. For example, new bundled fixes or critical one-off patches may become available, or the release update packs may be superseded by newer versions.

Related Articles

Categories: APPS Blogs

How we build our customized PostgreSQL Docker image

Yann Neuhaus - Tue, 2018-02-13 13:21

Docker becomes more and more popular these days and a lot of companies start to really use it. For one project we decided to build our own customized Docker image instead of using the official PostgreSQL one. The main reason for that is that we wanted to compile from source so that we only get what is really required. Why have PostgreSQL compiled with tcl support when nobody will ever use it? Here is how we did it …

To dig in right away, this is the simplified Dockerfile:

FROM debian

# make the "en_US.UTF-8" locale so postgres will be utf-8 enabled by default
ENV LANG en_US.utf8
ENV PG_MAJOR 10
ENV PG_VERSION 10.1
ENV PG_SHA256 3ccb4e25fe7a7ea6308dea103cac202963e6b746697366d72ec2900449a5e713
ENV PGDATA /u02/pgdata
ENV PGDATABASE="" \
    PGUSERNAME="" \
    PGPASSWORD=""

COPY docker-entrypoint.sh /

RUN set -ex \
        \
        && apt-get update && apt-get install -y \
           ca-certificates \
           curl \
           procps \
           sysstat \
           libldap2-dev \
           libpython-dev \
           libreadline-dev \
           libssl-dev \
           bison \
           flex \
           libghc-zlib-dev \
           libcrypto++-dev \
           libxml2-dev \
           libxslt1-dev \
           bzip2 \
           make \
           gcc \
           unzip \
           python \
           locales \
        \
        && rm -rf /var/lib/apt/lists/* \
        && localedef -i en_US -c -f UTF-8 en_US.UTF-8 \
        && mkdir /u01/ \
        \
        && groupadd -r postgres --gid=999 \
        && useradd -m -r -g postgres --uid=999 postgres \
        && chown postgres:postgres /u01/ \
        && mkdir -p "$PGDATA" \
        && chown -R postgres:postgres "$PGDATA" \
        && chmod 700 "$PGDATA" \
        \
        && curl -o /home/postgres/postgresql.tar.bz2 "https://ftp.postgresql.org/pub/source/v$PG_VERSION/postgresql-$PG_VERSION.tar.bz2" \
        && echo "$PG_SHA256 /home/postgres/postgresql.tar.bz2" | sha256sum -c - \
        && mkdir -p /home/postgres/src \
        && chown -R postgres:postgres /home/postgres \
        && su postgres -c "tar \
                --extract \
                --file /home/postgres/postgresql.tar.bz2 \
                --directory /home/postgres/src \
                --strip-components 1" \
        && rm /home/postgres/postgresql.tar.bz2 \
        \
        && cd /home/postgres/src \
        && su postgres -c "./configure \
                --enable-integer-datetimes \
                --enable-thread-safety \
                --with-pgport=5432 \
                --prefix=/u01/app/postgres/product/$PG_VERSION \
                --with-ldap \
                --with-python \
                --with-openssl \
                --with-libxml \
                --with-libxslt" \
        && su postgres -c "make -j 4 all" \
        && su postgres -c "make install" \
        && su postgres -c "make -C contrib install" \
        && rm -rf /home/postgres/src \
        \
        && apt-get update && apt-get purge --auto-remove -y \
           libldap2-dev \
           libpython-dev \
           libreadline-dev \
           libssl-dev \
           libghc-zlib-dev \
           libcrypto++-dev \
           libxml2-dev \
           libxslt1-dev \
           bzip2 \
           gcc \
           make \
           unzip \
        && apt-get install -y libxml2 \
        && rm -rf /var/lib/apt/lists/*

ENV LANG en_US.utf8
USER postgres
EXPOSE 5432
ENTRYPOINT ["/docker-entrypoint.sh"]

We based the image on the latest Debian image, which is line 1. The following lines define the PostgreSQL version we will use and some environment variables we will use later. What follows is basically installing all the packages required for building PostgreSQL from source, adding the operating system user and group, preparing the directories, fetching the PostgreSQL source code, then configure, make and make install. Pretty much straightforward. Finally, to shrink the image, we remove all the packages that are no longer required after PostgreSQL was compiled and installed.
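
Assuming the Dockerfile and the docker-entrypoint.sh script are in the current directory, building the image is then a single command (the tag is of course just an example):

$ sudo docker build -t dbi/postgres:10.1 .
$ sudo docker images | grep dbi/postgres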

The final setup of the PostgreSQL instance happens in the docker-entrypoint.sh script which is referenced at the very end of the Dockerfile:

#!/bin/bash

# these are the environment variables which need to be set
PGDATA=${PGDATA}/${PG_MAJOR}
PGHOME="/u01/app/postgres/product/${PG_VERSION}"
PGAUTOCONF=${PGDATA}/postgresql.auto.conf
PGHBACONF=${PGDATA}/pg_hba.conf
PGDATABASENAME=${PGDATABASE}
PGUSERNAME=${PGUSERNAME}
PGPASSWD=${PGPASSWORD}

# create the database and the user
_pg_create_database_and_user()
{
    ${PGHOME}/bin/psql -c "create user ${PGUSERNAME} with login password '${PGPASSWD}'" postgres
    ${PGHOME}/bin/psql -c "create database ${PGDATABASENAME} with owner = ${PGUSERNAME}" postgres
}

# start the PostgreSQL instance
_pg_prestart()
{
    ${PGHOME}/bin/pg_ctl -D ${PGDATA} -w start
}

# start postgres and do not disconnect
# required for docker
_pg_start()
{
    ${PGHOME}/bin/postgres "-D" "${PGDATA}"
}

# stop the PostgreSQL instance
_pg_stop()
{
    ${PGHOME}/bin/pg_ctl -D ${PGDATA} stop -m fast
}

# initdb a new cluster
_pg_initdb()
{
    ${PGHOME}/bin/initdb -D ${PGDATA} --data-checksums
}


# adjust the postgresql parameters
_pg_adjust_config() {
    # PostgreSQL parameters
    echo "shared_buffers='128MB'" >> ${PGAUTOCONF}
    echo "effective_cache_size='128MB'" >> ${PGAUTOCONF}
    echo "listen_addresses = '*'" >> ${PGAUTOCONF}
    echo "logging_collector = 'on'" >> ${PGAUTOCONF}
    echo "log_truncate_on_rotation = 'on'" >> ${PGAUTOCONF}
    echo "log_filename = 'postgresql-%a.log'" >> ${PGAUTOCONF}
    echo "log_rotation_age = '1440'" >> ${PGAUTOCONF}
    echo "log_line_prefix = '%m - %l - %p - %h - %u@%d '" >> ${PGAUTOCONF}
    echo "log_directory = 'pg_log'" >> ${PGAUTOCONF}
    echo "log_min_messages = 'WARNING'" >> ${PGAUTOCONF}
    echo "log_autovacuum_min_duration = '60s'" >> ${PGAUTOCONF}
    echo "log_min_error_statement = 'NOTICE'" >> ${PGAUTOCONF}
    echo "log_min_duration_statement = '30s'" >> ${PGAUTOCONF}
    echo "log_checkpoints = 'on'" >> ${PGAUTOCONF}
    echo "log_statement = 'none'" >> ${PGAUTOCONF}
    echo "log_lock_waits = 'on'" >> ${PGAUTOCONF}
    echo "log_temp_files = '0'" >> ${PGAUTOCONF}
    echo "log_timezone = 'Europe/Zurich'" >> ${PGAUTOCONF}
    echo "log_connections=on" >> ${PGAUTOCONF}
    echo "log_disconnections=on" >> ${PGAUTOCONF}
    echo "log_duration=off" >> ${PGAUTOCONF}
    echo "client_min_messages = 'WARNING'" >> ${PGAUTOCONF}
    echo "wal_level = 'replica'" >> ${PGAUTOCONF}
    echo "hot_standby_feedback = 'on'" >> ${PGAUTOCONF}
    echo "max_wal_senders = '10'" >> ${PGAUTOCONF}
    echo "cluster_name = '${PGDATABASENAME}'" >> ${PGAUTOCONF}
    echo "max_replication_slots = '10'" >> ${PGAUTOCONF}
    echo "work_mem=8MB" >> ${PGAUTOCONF}
    echo "maintenance_work_mem=64MB" >> ${PGAUTOCONF}
    echo "wal_compression=on" >> ${PGAUTOCONF}
    echo "max_wal_senders=20" >> ${PGAUTOCONF}
    echo "shared_preload_libraries='pg_stat_statements'" >> ${PGAUTOCONF}
    echo "autovacuum_max_workers=6" >> ${PGAUTOCONF}
    echo "autovacuum_vacuum_scale_factor=0.1" >> ${PGAUTOCONF}
    echo "autovacuum_vacuum_threshold=50" >> ${PGAUTOCONF}
    # Authentication settings in pg_hba.conf
    echo "host    all             all             0.0.0.0/0            md5" >> ${PGHBACONF}
}

# initialize and start a new cluster
_pg_init_and_start()
{
    # initialize a new cluster
    _pg_initdb
    # set params and access permissions
    _pg_adjust_config
    # start the new cluster
    _pg_prestart
    # set username and password
    _pg_create_database_and_user
}

# check if $PGDATA exists
if [ -e ${PGDATA} ]; then
    # when $PGDATA exists we need to check if there are files
    # because when there are files we do not want to initdb
    if [ -e "${PGDATA}/base" ]; then
        # when there is the base directory this
        # probably is a valid PostgreSQL cluster
        # so we just start it
        _pg_prestart
    else
        # when there is no base directory then we
        # should be able to initialize a new cluster
        # and then start it
        _pg_init_and_start
    fi
else
    # initialize and start the new cluster
    _pg_init_and_start
    # create PGDATA
    mkdir -p ${PGDATA}
    # create the log directory
    mkdir -p ${PGDATA}/pg_log
fi
# restart and do not disconnect from the postgres daemon
_pg_stop
_pg_start

The important point here is: PGDATA is a persistent volume that is linked into the Docker container. When the container comes up we need to check if something that looks like a PostgreSQL data directory is already there. If yes, then we just start the instance with what is there. If nothing is there we create a new instance. Remember: This is just a template and you might need to do more checks in your case. The same is true for what we add to pg_hba.conf here: This is nothing you should do on real systems but can be handy for testing.
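
To show how this comes together, here is a minimal example of running the image with a named volume for PGDATA and the environment variables the entrypoint script expects. The image tag, volume name and credentials are examples only:

$ sudo docker volume create pgdatavol
$ sudo docker run -d --name pg10 \
      -p 5432:5432 \
      -e PGDATABASE=mydb -e PGUSERNAME=myuser -e PGPASSWORD=secret \
      -v pgdatavol:/u02/pgdata \
      dbi/postgres:10.1
$ psql -h localhost -p 5432 -U myuser mydb   # assuming a psql client is installed on the host

Because the data directory lives in the pgdatavol volume, removing and recreating the container with the same volume will simply start the existing cluster instead of initializing a new one.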

Hope this helps …

 

Cet article How we build our customized PostgreSQL Docker image est apparu en premier sur Blog dbi services.

Oracle IoT Cloud for Industry 4.0 Helps Organizations Make Dramatic Process Improvements for More Intelligent Supply Chains

Oracle Press Releases - Tue, 2018-02-13 10:30
Press Release
Oracle IoT Cloud for Industry 4.0 Helps Organizations Make Dramatic Process Improvements for More Intelligent Supply Chains New Augmented Reality, Machine Vision, Digital Twin and Automated Data Science capabilities enhance production, logistics, warehousing and maintenance

Oracle Modern Finance Experience, New York, NY—Feb 13, 2018

Empowering modern businesses to improve production intelligence and market responsiveness, Oracle today unveiled new Industry 4.0 capabilities for Oracle Internet of Things (IoT) Cloud. The advanced monitoring and analytics capabilities of the new offering enables organizations to improve efficiency, reduce costs, and identify new sources of revenue through advanced tracking of assets, workers, and vehicles; real-time issue detection; and predictive analytics.

According to The Economist Intelligence Unit, 63 percent of manufacturers have either undergone substantial digital transformation or are in the process of transforming parts of their organization, and 19 percent are developing transformation strategies. To remain competitive in the modern economy, businesses need to leverage new technologies and data to modernize their supply chains and improve visibility, predictive insights, and automation through connected workflows.

With new augmented reality, machine vision, digital twin and data science capabilities, Oracle IoT Cloud enables organizations to gain rich insight into the performance of assets, machines, workers, and vehicles so they can optimize their supply chain, manufacturing, and logistics, reduce time to market for new products; and enable new business models. 

“IoT is the great enabler of Industry 4.0’s potential, providing real-time visibility and responsiveness at every step of the production process – from raw materials to customer fulfillment,” said Bhagat Nainani, group vice president, IoT Applications at Oracle.

“Oracle empowers organizations to create smart factories and modern supply chains with seamless interaction models between business applications and physical equipment. By receiving real-time data streams enhanced with predictive insights, our IoT applications provide intelligent business processes that deliver quick ROI.”

Today’s expansion follows the recent announcement of artificial intelligence, digital thread and digital twin for supply chain, as well as industry-specific solutions for Oracle IoT Cloud. Oracle IoT Cloud is offered both as Software-as-a-Service (SaaS) applications and as Platform-as-a-Service (PaaS) offerings, enabling a high degree of adaptability for even the most demanding implementations.

“We plan to leverage Oracle IoT Cloud and its machine learning capabilities to automatically analyze information gathered from the robot and process-monitoring systems. These analytics could help Noble identify ways to reduce cycle time, improve the manufacturing process, enhance product quality, and cut downtime,” said Scott Rogers, technical director at Noble Plastics.

Oracle plans to add the new capabilities across the entire range of IoT Cloud Applications – Asset Monitoring, Production Monitoring, Fleet Monitoring, Connected Worker, and Service Monitoring for Connected Assets:

  • Digital Twin: Enables remote users to monitor the health of assets and prevent failures before they occur, as well as running simulations of “what-if” scenarios in the context of the business processes. With Digital Twin, organizations have a new operational paradigm to interact with the physical world, allowing lower operational and capital expenditures, minimizing downtime, and optimizing asset performance.
  • Augmented Reality: Gives operators and plant managers the ability to view operational metrics and related equipment information in the context of the physical asset for faster troubleshooting and assisted maintenance. In addition, the use of AR in training technicians reduces errors and on-boarding time, and improves user productivity.
  • Machine Vision: Provides detailed non-intrusive visual inspections, which can detect defects invisible to the naked eye, at high speed and scale. Following the rapid inspection, Machine Vision sets in motion appropriate corrective actions when anomalies and errors are spotted.
  • Auto Data Science: Automated business-specific data science and artificial intelligence algorithms continuously analyze asset utilization, production yield and quantity, inventory, fleet performance, as well as worker safety concerns, to predict issues before they arise. Auto Data Science features enable users to see performance metrics of each step in the modern supply chain with the ability to drill down into specific issues at each location without employing an army of data scientists.
 

Oracle IoT Cloud enables companies to monitor capital intensive assets to reduce downtime and servicing costs, and track utilization for accurate lifecycle insights and asset depreciation data, which improves enterprise procurement efficiency. The rich pool of data created by sensors within products enables organizations to offer their products as a service, gain insight into how customers are using their products, and offer improved value-added services that drive new sources of revenue.

Contact Info
Vanessa Johnson
Oracle PR
+1.650.607.1692
vanessa.n.johnson@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle Corporation.

Talk to a Press Contact

Vanessa Johnson

  • +1.650.607.1692

Oracle Powers the Next Generation of Finance with New Artificial Intelligence Applications

Oracle Press Releases - Tue, 2018-02-13 10:15
Press Release
Oracle Powers the Next Generation of Finance with New Artificial Intelligence Applications Intelligent applications drive smarter finance decisions, increase efficiencies, and improve collaboration for higher revenues and reduced costs

Modern Finance Experience 2018, New York, NY—Feb 13, 2018

To empower the office of the CFO with data-driven insights they need to adapt to change, capitalize on new market opportunities, and increase profitability, Oracle today announced new artificial intelligence-based applications for finance. By applying advanced data science and machine learning to data from the Oracle Data Cloud and from partners, the new Oracle Adaptive Intelligent Applications for Enterprise Resource Planning (ERP) help automate transactional work and streamline business processes.

Oracle Adaptive Intelligent Applications for ERP are designed to enhance existing applications, including financials, procurement, enterprise performance management, order management, and manufacturing within the ERP Cloud suite.

CFOs and modern finance professionals are under pressure to increase the agility and effectiveness of their organizations. As such, they need to constantly monitor and assess what is working and what is not and redeploy resources for maximum returns.

“To increase their agility, organizations need to leverage the right tools to help improve process efficiency and uncover insights that can guide a business forward,” said Rondy Ng, senior vice president, Oracle Applications Development. “Oracle helps finance leaders drive business transformation with ready-to-go applications that combine advanced machine learning with the industry’s most comprehensive suite of cloud applications to deliver immediate value and results.”

With Oracle Adaptive Intelligent Applications for ERP, finance leaders can benefit from:

  • Better insight: Applying analytics and artificial intelligence to finance can improve performance and increases agility across payables, receivables, procurement, and fiscal period close processes. Intelligent applications are also able to provide suggested actions to help mitigate vendor risk and fraud activity by detecting exceptions in vendor selection criteria.
  • Greater efficiency: Robotic process automation and artificial intelligence capabilities enable touchless transaction processing, minimizing the chance of human error.
  • Smarter business outcomes: Oracle delivers immediate impact by infusing machine learning across the entire suite of business applications; this is done by leveraging data from the Oracle Data Cloud and from partners to derive insights across multiple channels and platforms, including finance, HR, and project management to support strategic business decision-making.
  • Increased influence: The rich insights available to finance leaders via artificial intelligence empower CFOs to anticipate what comes next for the business and to make wise decisions, increasing the influence of the CFO and finance team in the organization.

For example, using Oracle Adaptive Intelligent Applications for ERP can help a finance team at a large national retail brand collect first-party data on their suppliers, such as supplier purchase history, percentage of revenue, discounts taken with third-party data on supplier revenue, credit score, and other company attributes. The finance organization can then decide which suppliers to double down on and which to cease doing business with for maximum cost savings, while maintaining quality standards. The ability to quickly fine tune the business based on data-driven insights will increase the finance function’s value in the organization and CEOs will increasingly rely on the CFO and finance team for strategic recommendations to improve business performance.

By applying advanced data science and machine learning to Oracle’s web-scale data and an organization’s own data, the new Adaptive Intelligent Apps can react, learn, and adapt in real time based on historical and dynamic data, while continuously delivering better business insights.

According to the Gartner report, “Impacts of Artificial Intelligence on Financial Management Applications,” written by Nigel Rayner and Christopher Iervolino, “The transformational potential of AI in financial management applications will come in the next two to three years as more AI technologies are embedded directly into financial management processes to automate complex, non-routine activities with little or no human intervention. Also, using AI to improve the accuracy and effectiveness of financial forecasting and planning will transform these processes.”1

The Oracle Adaptive Intelligent Apps are built into the existing Oracle Cloud Applications to deliver the industry’s most powerful AI-based modern business applications across finance, human resources, supply chain and manufacturing, commerce, customer service, marketing, and sales. The apps are powered by insights from the Oracle Data Cloud, which is the largest third-party data marketplace in the world with a collection of more than 5 billion global consumer and business IDs and more than 7.5 trillion data points collected monthly.

1 Gartner Report: “Impacts of Artificial Intelligence on Financial Management Applications,” Analysts Nigel Rayner, Christopher Iervolino, Published November 7, 2017.

Contact Info
Evelyn Tam
Oracle PR
1.650.506.5936
evelyn.tam@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor Disclaimer

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation.

Statements in this article relating to Oracle’s future plans, expectations, beliefs, and intentions are “forward-looking statements” and are subject to material risks and uncertainties. Such statements are based on Oracle’s current expectations and assumptions, some of which are beyond Oracle’s control. All information in this article is current as of October 2, 2017 and Oracle undertakes no duty to update any statement in light of new information or future events.

Talk to a Press Contact

Evelyn Tam

  • 1.650.506.5936

Using Let's Encrypt with Oracle Linux in Oracle Cloud Infrastructure

Wim Coekaerts - Tue, 2018-02-13 10:13

I stole Sergio's headline here and I am just going to link to his blog :)...

Sergio wrote up a how-to on using a Let's Encrypt cert and installing it on OL using nginx in an Oracle Cloud instance created and deployed with Terraform.

That's a lot of words right there but it should demonstrate a few things:

  • All the extra packages we have been publishing of late in the Oracle Linux EPEL (Extra Packages for Enterprise Linux) mirror. (Yes, they're the same packages, but they're built on Oracle Linux, signed by us and on the same yum repo, so you don't have to install separate files to get to them.) This includes certbot etc. that you need for this (a quick install sketch follows the list).
  • The convenience of having terraform, terraform-provider-oci RPMs to easily get going without downloading anything elsewhere.
  • Integration of Oracle Linux yum servers inside Oracle Cloud Infrastructure for fast and easy access with no external network traffic charges.
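
As an illustration, on an Oracle Linux 7 instance this boils down to something like the following (the repository names reflect the Oracle Linux yum server layout at the time of writing, so double-check them on yum.oracle.com; yum-config-manager comes with yum-utils):

$ sudo yum-config-manager --enable ol7_developer_EPEL ol7_developer
$ sudo yum install -y certbot terraform terraform-provider-oci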

So you can find his blog here.

Oracle Transforms Enterprise Data Management

Oracle Press Releases - Tue, 2018-02-13 10:00
Press Release
Oracle Transforms Enterprise Data Management New Oracle Enterprise Data Management Cloud extends industry leading enterprise performance management suite

Oracle Modern Finance Experience, New York, NY—Feb 13, 2018

To meet growing demand from customers engaged in transformation efforts and to improve business agility, Oracle today announced Oracle Enterprise Data Management Cloud. Part of Oracle Enterprise Performance Management (EPM) Cloud, the new offering provides a single platform for easy management of critical enterprise data assets (such as the Chart of Accounts), and improved data integrity and alignment.

Today’s rapidly changing business environment presents multiple data alignment challenges. Cloud adoption, mergers and acquisitions, reorganizations and restructuring can create data inconsistencies that require finance teams to reconcile disparate data sets and associated metadata. Changes to application metadata, dimensions, hierarchies, mappings and related attributes are often handled manually through spreadsheets, email, and in-person meetings.

To help finance leaders eliminate manual errors and inconsistencies, create a single view of all enterprise data, and realize their vision for front and back-office business transformation, Oracle Enterprise Data Management Cloud provides centralized, self-service enterprise data maintenance, and data sharing and rationalization.

“As organizations grow and evolve, business and finance leaders face an increasingly complex range of challenges in managing and governing their enterprise data assets that cannot be successfully addressed through traditional approaches,” said Hari Sankar, group vice president, EPM product management at Oracle. “With Oracle Enterprise Data Management Cloud, we are providing a modern platform that streamlines business transformation efforts and enables organizations to maintain data integrity, accuracy and consistency across all their applications – in the cloud and on-premises.”

Key benefits that Oracle Enterprise Data Management Cloud can provide include:

  • Faster cloud adoption: Migrate and map enterprise data elements and on-going changes across public, private and hybrid cloud environments from Oracle or third parties.
  • Enhanced business agility: Drive faster business transformation through modeling M&A scenarios, reorganizations and restructuring, chart of accounts standardization and redesign.
  • Better alignment of enterprise applications: Manage on-going changes across front-office, back-office and performance management applications through self-service enterprise data maintenance, sharing and rationalization.
  • System of reference for all your enterprise data: Support enterprise data across business domains including: master data, reference data, dimensions, hierarchies, business taxonomies, associated relationships, mappings and attributes across diverse business contexts.

The addition of Oracle Enterprise Data Management Cloud rounds out Oracle’s industry-leading EPM Cloud suite, which has been adopted by thousands of organizations around the world. The new offering has already garnered significant attention with customers such as SunTrust Bank, Baha Mar, Diversey, and others selecting the service to support their business transformation efforts.

Additional Information

For additional information on Oracle EPM Cloud, visit Oracle Enterprise Performance Management (EPM) Cloud’s Facebook and Twitter or the Modern Finance Leader blog.

Contact Info
Evelyn Tam
Oracle PR
1.650.506.5936
evelyn.tam@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at oracle.com

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle Corporation.

Talk to a Press Contact

Evelyn Tam

  • 1.650.506.5936

Four New Partners Join NetSuite Partner Program to Drive Growth, Customer Success

Oracle Press Releases - Tue, 2018-02-13 08:45
Press Release
Four New Partners Join NetSuite Partner Program to Drive Growth, Customer Success ERA Consulting Group, Smartbridge, Softengine and Vibrant to Build Cloud ERP Practices with NetSuite

SUITECONNECT—NEW YORK, N.Y.—Feb 13, 2018

Oracle NetSuite, one of the world’s leading providers of cloud-based financials / ERP, HR, Professional Services Automation (PSA) and omnichannel commerce software suites, today announced that four new partners have joined the NetSuite Solution Provider Program. ERA Consulting Group, Smartbridge, Softengine and Vibrant Inc. have teamed with NetSuite to launch new cloud ERP practices to help clients accelerate growth while improving efficiency and insights into business operations. The new partner relationships come amid continued strong demand for cloud-based business management as organizations unshackle themselves from outdated legacy applications and costly in-house infrastructure to improve performance and profitability with the greater business agility of cloud systems. With consulting, implementation and optimization services spanning a range of industries, the new partners are positioned to rapidly expand their client base while enjoying the high margins and recurring revenue available through the NetSuite Solution Provider Program.

“Our four new partners deliver deep industry-specific expertise and a proven track record of excellence in helping their clients meet and exceed business objectives,” said Craig West, Oracle NetSuite Vice President of Alliances and Channels. “We’re delighted to welcome them aboard and look forward to long and prosperous relationships that help customers grow, realizing the transformative advantages that NetSuite offers.”

ERA Consulting Group Targets Growing Cloud Opportunities in Canada

ERA Consulting Group (www.groupe-era.com), an IT services firm based in Montreal, Quebec, is launching its NetSuite practice to address rising demand for cloud-based business management solutions among Canadian organizations. ERA Consulting, created in 2004, is expanding its long-term relationship with Oracle around the JD Edwards ERP system by adding NetSuite to its technology portfolio. The firm, with 100 clients, is focused on the manufacturing, distribution and services industries, making NetSuite an ideal fit for ERA’s target of small to midsized customers. ERA employs 40 ERP specialists with subject matter expertise in areas ranging from finance to HR, inventory, planning, manufacturing and logistics. With NetSuite’s offerings for unified commerce, ERA Consulting also sees opportunities in the retail industry as it looks to build on its continued growth.

“As a cloud solution, we’re able to implement NetSuite faster so that clients can focus on optimizing and growing their business,” said Benoit Gagnon, ERA Consulting CEO. “Canada has been traditionally a bit behind the U.S. in cloud adoption, but we have seen a shift as more organizations are rapidly embracing cloud solutions, and it will only go up from here.”

Smartbridge Expands on Oracle JD Edwards Expertise by Adding NetSuite

Smartbridge (www.smartbridge.com), a Houston-based services firm that simplifies business transformation, is introducing cloud ERP into its broad portfolio through its NetSuite partner relationship. The 15-year-old company views NetSuite as aligning with its focus areas of ERP and supply chain, as well as systems integration, business intelligence, advanced analytics, enterprise mobility and digital transformation. In addition, NetSuite fits well with Smartbridge’s key industries of manufacturing, distribution, food service, oil and gas, and facility services. An Oracle JD Edwards partner, Smartbridge is expanding its Oracle relationship by offering NetSuite cloud ERP along with ongoing support and enhancements for JD Edwards. With NetSuite, Smartbridge will provide food service operators and Houston-based SMBs with the agile and flexible solutions they need to grow and innovate. Smartbridge’s core customer base, including some of the largest global restaurant chains and Fortune 500 facility services businesses, are also seeking the simplification of cloud ERP solutions for acquisitions, divestitures and managing franchises.

“Our intention is to provide cloud-based enterprise solutions to help our customers succeed by growing our ERP practice around NetSuite with BI, integration and enterprise mobility as natural extensions,” said Sanat Nileshwar, Director, Enterprise Systems at Smartbridge. “NetSuite’s single database and real-time information has generated a lot of interest in the market for modern, innovative companies that want to get out of managing infrastructure and application hairballs.”

For more information about Smartbridge’s NetSuite practice, visit http://www.Smartbridge.com/NetSuite.

Softengine Broadens Solutions Portfolio with NetSuite Cloud ERP

Softengine (www.softengine.com), an ERP and business process solutions provider based in Woodland Hills, Calif. has joined the NetSuite Solution Provider Program to broaden its portfolio and meet growing customer demand for cloud-based business management. Founded in 1996, Softengine is a longtime SAP Gold level partner. The company has expertise in a range of industries including wholesale distribution, manufacturing, food production, retail, apparel and nonprofits. Softengine will align that industry expertise with the NetSuite SuiteSuccess methodology and its tailored vertical offerings to deliver impactful industry-specific solutions. With many fast-growing businesses among its hundreds of clients, Softengine views NetSuite as a good fit to help companies accelerate growth while improving business efficiency and visibility. The firm will focus on NetSuite cloud ERP, and offer implementation and optimization services in areas including CRM and ecommerce.

“We’re continually looking at growing our business and we felt we could make even more progress by creating a NetSuite practice. We want to offer the best platforms as we see them on the market, and NetSuite is one of them,” said Gil Lasman, General Manager at Softengine.

Vibrant, Inc. Aims New NetSuite Practice at Fast-Growing Companies

Vibrant, Inc. (www.vibrantinc.com), a full-service IT provider based in Princeton, N.J., is focusing its new NetSuite practice on growing companies that need the scalability, agility and on-demand data access that cloud business management solutions deliver. Founded in 2000, Vibrant has extensive experience implementing such enterprise solutions as Oracle PeopleSoft, Oracle JD Edwards, Oracle eBusiness Suite and Microsoft Dynamics. The company, with operations across the U.S. and in India, chose to join the NetSuite Partner Program after evaluating other solutions including Workday and SAP Business ByDesign. Key targets for Vibrant’s NetSuite practice include fast-growing companies, and organizations in the life sciences, retail and manufacturing industries. Seeing strong market recognition of NetSuite’s leadership in cloud business management, Vibrant’s services will cover financials / ERP as well as ecommerce, CRM, and HR to deliver a complete end-to-end cloud environment.

“Looking at our target segment of growing companies, NetSuite is a perfect fit as a cloud solution with low up-front investment that grows as the business grows,” said Pannala Suresh, Vibrant founder. “NetSuite’s multi-functional scope and tight integration means a company could start with financials and scale into supply chain, sales, marketing and HR as they grow.”

Launched in 2002, the NetSuite Solution Provider Program is the industry’s leading cloud channel partner program. Since its inception, NetSuite has been a leader in partner success, breaking new ground in building and executing on the leading model to make the channel successful with NetSuite. A top choice for partners who are building new cloud ERP practices or for those expanding their existing practice to meet the demand for cloud ERP, NetSuite has enabled partners to transform their business model to fully capitalize on the revenue growth opportunity of the cloud. The NetSuite Solution Provider Program delivers unprecedented benefits that include highly attractive margins and range from business planning, sales, marketing and professional services enablement, to training and education. For more information about the NetSuite Solution Provider Program, visit www.netsuite.com/partners.

Contact Info
Michael Robinson
Oracle NetSuite
781-974-9401
michael.s.robinson@oracle.com
About Oracle NetSuite

Oracle NetSuite pioneered the Cloud Computing revolution in 1998, establishing the world’s first company dedicated to delivering business applications over the internet. Today, it provides a suite of cloud-based financials / Enterprise Resource Planning (ERP), HR and omnichannel commerce software that runs the business of companies in more than 100 countries. For more information, please visit http://www.netsuite.com.

Follow Oracle NetSuite Global Business Unit’s Cloud blog, Facebook page and @NetSuite Twitter handle for real-time updates.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Brightpoint Health remains committed to New York’s neediest after more than 25 years

Oracle Press Releases - Tue, 2018-02-13 08:30
Press Release
Brightpoint Health remains committed to New York’s neediest after more than 25 years
Nonprofit leverages NetSuite software donation, pro bono volunteers to grow, further its mission

SUITECONNECT—NEW YORK, N.Y.—Feb 13, 2018

Oracle NetSuite, one of the world’s leading providers of cloud-based financials / ERP, HR, Professional Services Automation (PSA) and omnichannel commerce software suites, today announced that Brightpoint Health, a New York-based nonprofit, is using NetSuite OneWorld to expand and further its mission to bring compassionate health care and services to New York’s underserved, coordinating behavioral health, primary care and social services.

Founded in 1990 during the AIDS epidemic, Brightpoint Health began as a 66-bed residential facility in the Bronx, serving people living with AIDS who were also struggling with substance abuse. The organization was able to provide comfort and support to thousands of patients, many of whom had been rejected by their families and communities. As AIDS became a treatable condition, Brightpoint expanded its program to provide medical and dental care, behavioral health care, substance abuse treatment and referrals, even as it continues to offer Adult Day Health Care services for its HIV-positive clients. Today, Brightpoint serves more than 30,000 New Yorkers annually across all five boroughs. Among the clients it serves, over 80 percent are recipients of Medicaid and about 70 percent are homeless. For many clients, in addition to instability due to poverty and inadequate housing, their health challenges often include multiple chronic medical and behavioral problems. As a Federally Qualified Health Center, Brightpoint turns no one away, regardless of whether they have insurance.

“We’re serving the population that needs us the most and putting the patient at the center,” said Dr. Barbara Zeller, chief clinical officer, Brightpoint Health.

To achieve that goal, Brightpoint strives to maintain continuous engagement with clients so they can develop the tools to care for themselves, manage their medical and behavioral challenges, and transform their health and their lives for the better.

“Just to be able to come here and see the doctor, it lightens my load a lot,” said Ebony Towns, a Brightpoint patient. “It helps to know that people actually care still. They care.”

In 2013, Brightpoint began a concentrated expansion effort, opening 11 new sites in a five-year period. As part of that expansion, it invested significantly in its Quality Management Department, seeking to become a data-driven organization with an electronic health record for primary care, behavioral health and dental services. That year, Brightpoint served 4,735 patients, including 2,510 new patients, for a total of 23,357 visits.

Amidst that expansion, Brightpoint quickly determined that its existing DOS-based financial system could no longer meet its needs. After surveying other health care practices and evaluating Microsoft Dynamics GP, Brightpoint selected NetSuite OneWorld for its breadth of functionality, reporting, real-time view of operations and multi-subsidiary management to help manage the organization’s four subsidiaries. Month-end close, which once took 90 days, can now be done in 15 days. The comprehensive view of the organization also allows management to make informed decisions, whether that’s merging an outside physician’s practice into the organization or devoting additional resources to specific locations. In the first two years after implementing NetSuite OneWorld, revenue went up 33 percent and visits went up 77 percent, while the organization continued to open new locations. Now with 13 health centers and multiple service locations, Brightpoint served close to 34,000 patients in 2017 with 188,790 visits.

As a nonprofit, Brightpoint qualified for a donation from Oracle NetSuite Social Impact, which makes available free and discounted software to qualified nonprofits and social enterprises. It has also taken advantage of Suite Pro Bono, where NetSuite employees provided their time and expertise to help Brightpoint Health with NetSuite training and customizations. Brightpoint was also the nonprofit that received technical expertise from NetSuite customers, partners and employees during the Hackathon 4Good at SuiteWorld17. That’s allowed Brightpoint to funnel more of its resources toward helping those in need.

“We’re very mission focused,” said Zeller. “NetSuite gives us the support and visibility into operations to continue to serve the neediest New Yorkers.”

Contact Info
Michael Robinson
Oracle NetSuite
781-974-9401
michael.s.robinson@oracle.com
About Oracle NetSuite Social Impact

Founded in 2006, Oracle NetSuite Social Impact is empowering nonprofits to use NetSuite to further their mission, regardless of their ability to pay. More than 1,000 nonprofits and social enterprises around the world are supported by NetSuite Social Impact, which provides software donations to qualified organizations. The program also includes Suite Pro Bono, under which NetSuite employees provide their expertise to help nonprofits with training and customizations to make the most of the platform. To learn more about NetSuite Social Impact, please visit www.netsuite.com/socialimpact and follow on Twitter at @NS_SocialImpact.

About Oracle NetSuite

Oracle NetSuite pioneered the Cloud Computing revolution in 1998, establishing the world’s first company dedicated to delivering business applications over the internet. Today, it provides a suite of cloud-based financials / Enterprise Resource Planning (ERP), HR and omnichannel commerce software that runs the business of companies in more than 100 countries.

For more information, please visit http://www.netsuite.com.

Follow NetSuite’s Cloud blog, Facebook page and @NetSuite Twitter handle for real-time updates.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Food and Beverage Manufacturers Primed for Growth, Positioned for Success with NetSuite

Oracle Press Releases - Tue, 2018-02-13 08:15
Press Release
Food and Beverage Manufacturers Primed for Growth, Positioned for Success with NetSuite
Innovative Manufacturers Aspire Group, Dyla and The PUR Company Ready Themselves for Growth with NetSuite’s Industry Cloud Solution

SUITECONNECT—NEW YORK, N.Y.—Feb 13, 2018

Oracle NetSuite, one of the world’s leading providers of cloud-based financials / ERP, HR, Professional Services Automation (PSA) and omnichannel commerce software suites, today announced that three food and beverage manufacturers have implemented the NetSuite SuiteSuccess Manufacturing edition, enabling their businesses to grow, scale and adapt to change at the pace of modern business. Aspire Food Group, which raises crickets for a wide variety of uses including high protein snacks and food additives, Dyla, maker of Forto all natural ready-to-drink coffee shots and Stur water flavor enhancers, and PUR, a fast-growing manufacturer of aspartame-free gum, were all able to take advantage of NetSuite’s decades of experience with thousands of manufacturers to get their businesses up and running quickly with a leading cloud ERP system.

Food and beverage manufacturers like Aspire, Dyla and PUR confront a wide array of challenges that can hinder growth and customer satisfaction. Consumers increasingly demand more choice and information around the ingredients they’re consuming, from GMO-free to locally sourced and other health-based requirements. Meanwhile, governments are imposing stricter regulations, and manufacturers still confront increased global competition and complex supply chains. As they seek to address these challenges, food and beverage manufacturers are concerned about the time and capital required to fix their back-end systems. However, they realize that the ability to rapidly adapt and scale is critical to their success. Companies looking to overcome these challenges to grow and diversify can be held back by entry-level systems that do not have the functionality to manage business at scale, or older, on-premise systems built before the internet that cannot take full advantage of the cloud. NetSuite SuiteSuccess for Manufacturing delivers a unique set of processes, activities and systems specifically designed to deliver rapid value. It provides customers with a strong foundation to transform their business with a pre-configured solution and methodology.

Aspire Food Group Grows from Idea to Prize to Enterprise

In 2013, five MBA students from McGill University came up with an idea to help solve the problem of food insecurity, which ultimately won the Hult Prize Competition, besting 10,000 other teams from around the world. That prize-winning idea gave birth to Aspire Food Group (www.aspirefg.com), a social enterprise specializing in farming insects for human consumption. Crickets and palm weevils, the two insect species Aspire is currently raising and researching, provide an excellent source of protein without the resource-intensive demands of other proteins like beef or chicken. With a palm weevil facility in Ghana and a cricket-raising facility in Austin, Texas, Aspire is focused not only on raising awareness around the possibilities of insect-based protein, but also on developing a way to farm insects year-round in an efficient and scalable manner. The company currently provides cricket-based additives for protein powders, as well as flavored and roasted whole crickets, which are sold at sports stadiums, a handful of restaurants and other food service customers. As demand for its products emerged more quickly than the company anticipated, it soon discovered that its existing financial management system, based on QuickBooks and Excel spreadsheets, could not keep up with its growth. It selected NetSuite after an extensive evaluation process and in just four months went live using SuiteSuccess for Manufacturing to run financials, CRM and inventory management. As a result, the company is now efficiently processing payments with the click of a button, tracking customers and invoices, and it has substantially improved its budgeting process. Reports that used to take six hours to create are now done in under an hour. In a second phase, Aspire plans to implement additional manufacturing functionality to better track the costs associated with its manufacturing and livestock processes.

“A year from now, I know the volume of orders is only going to increase and our capacity to track the business is going to be critical,” said Abir Syed, Aspire’s Director of Finance. “I wanted to get ahead of it and NetSuite and SuiteSuccess ensured we have a scalable, modern system in place quickly to continue our growth without interruption.”

Specialty Drink Manufacturer Manages Rapid Growth with NetSuite

Dyla LLC saw rapid growth after its Forto ready-to-drink coffee shots and Stur water flavor enhancers struck a chord with a market thirsty for more natural alternatives to energy-boosting and healthy drinks. The West New York, NJ-based company grew 500 percent year-over-year in 2017, becoming one of the fastest-growing businesses in New Jersey. As Forto expanded across airports and convenience stores, and Stur attracted major retailers, running two separate businesses on QuickBooks wasn’t sustainable. Dyla’s operations team spent half its time in spreadsheets synchronizing inventory, order and customer information, even manually entering EDI transactions. After a comprehensive evaluation process, Dyla selected NetSuite over SAP and Microsoft Dynamics, swayed by its ease of use, customization and cloud architecture that would allow real-time information sharing by its employees located throughout the U.S. With SuiteSuccess, Dyla went live on NetSuite in May 2017 in just 90 days to manage contract manufacturers, a distribution hub and 117 SKUs across the two brands and tens of thousands of locations. Just a month after go-live, Dyla had processed a record number of orders, exceeding the number processed in the same period the year before. With NetSuite, Dyla has streamlined its month-end close from five days to three hours and automated communication with 3PL providers.

“We have grown 500 percent year-over-year because we were able to put our resources into selling the product, not the backend,” said Justin Lawrence, Head of Supply Chain/Operations and Finance, Dyla LLC.

Aspartame-free Gum Maker Sweetens its Business Growth

Founded in 2010 with a business plan based on direct contact with retail locations, The PUR Company saw its business explode as demand for its aspartame-free gum grew. The company expanded rapidly, from 33 retail partner locations in its first month to 500 only a few months later. But, as it grew beyond Canada to the US, UK, Australia and Germany, PUR knew it would need a modern system to manage that growth and adapt to future business demands. When new leadership was hired to advance the company to its next stage of growth, they quickly determined that the company needed to replace its existing QuickBooks and Excel-based processes. After evaluating SAP, The PUR Company selected NetSuite and implemented it in just 100 days with SuiteSuccess. The PUR Company now has complete traceability across products in its Toronto warehouse while the unified ERP and CRM gives better insights into customer service and sales strategy. Meanwhile, OneWorld lets PUR transact in Canadian and US dollars and Swiss franc. NetSuite can continue to scale as the company grows while NetSuite’s unified platform combines financial, inventory and customer data – vital for a company heavily focused on customer loyalty.

“NetSuite is helping us transition our dreams into realities by building a better organization that is more structured and organized and advanced, bringing science to our art,” said Jay Klein, Founder and CEO of The PUR Company.

SuiteSuccess is the culmination of a multi-year transformation effort to combine the NetSuite unified suite, 20 years of industry leading practices, a new customer engagement model, and business optimization methods into a unified, industry cloud solution. SuiteSuccess was engineered to solve unique industry challenges that historically have limited a company’s ability to grow, scale and adapt to change. Most ERP vendors have tried to solve the industry solution problem with templates, rapid implementation methodologies, and custom code. NetSuite took a holistic approach to the problem and productized domain knowledge, leading practices, KPIs, and an agile approach to product adoption. The benefits of this are faster time to value, increased business efficiency, flexibility, and greater customer success.

The key components of SuiteSuccess for Manufacturing include:

  • Tailored roles with built-in workflows specific to manufacturing, such as supply chain manager, warehouse operations and production manager.

  • Industry-leading best practices built into the system, spanning inventory utilization and visibility, order orchestration and more.

  • More than 150 pre-built KPIs and reports, giving manufacturers real-time insight into the business from Day 1.

For more information on SuiteSuccess for Manufacturers, visit: http://www.netsuite.com/portal/services/suitesuccess/manufacturing.shtml

Contact Info
Michael Robinson
Oracle NetSuite
781-974-9401
michael.s.robinson@oracle.com
About Oracle NetSuite

Oracle NetSuite pioneered the Cloud Computing revolution in 1998, establishing the world’s first company dedicated to delivering business applications over the internet. Today, it provides a suite of cloud-based financials / Enterprise Resource Planning (ERP), HR and omnichannel commerce software that runs the business of companies in more than 100 countries. For more information, please visit http://www.netsuite.com.

Follow Oracle NetSuite’s Cloud blog, Facebook page and @NetSuite Twitter handle for real-time updates.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Steiner Sports Scores Big Victories in Crowded Sports Marketing

Oracle Press Releases - Tue, 2018-02-13 08:00
Press Release
Steiner Sports Scores Big Victories in Crowded Sports Marketing
Adding Collectibles Business to Unique Sports Experiences Drives Growth for Leader in the Field

NEW YORK, N.Y.—Feb 13, 2018

Oracle NetSuite, one of the world’s leading providers of cloud-based financials / ERP, HR, Professional Services Automation (PSA) and omnichannel commerce software suites, today announced that Steiner Sports, a company specializing in sports memorabilia and sports marketing, has implemented NetSuite to help drive business growth. Steiner Sports is using NetSuite for financials, inventory management, order management, ecommerce and point of sale (POS).

Equipped with only $4,000, a Macintosh II computer and one intern, Brandon Steiner founded Steiner Sports in 1987. The business has since grown to become the largest company of its kind, even as demand for sports-related marketing has exploded. It is now a part of the Omnicom Group. Brandon Steiner’s love of sports, particularly the New York teams, and the persistence and patience to wait outside stadiums, arenas and parking lots for the chance to approach star athletes, eventually turned into a successful business. The core of Steiner Sports’ value proposition allows companies to leverage its expertise, existing relationships and $25 million annual athlete procurement spend to create marketing efficiencies and maximize limited marketing budgets. Sports memorabilia and collectibles went on to become a significant part of the Steiner Sports business as clients began to ask for autographed items.

Steiner Sports now works with more than 2,000 athletes and counts more than 100,000 SKUs of collectible inventory worth more than $20 million. Steiner Sports notably purchased the old Yankee Stadium when it was demolished in 2008, selling it off in pieces: everything from the actual seats to bathroom fixtures to the dirt on the old ballfield.

“We have sold millions of dollars’ worth of dirt from the old Yankee Stadium,” said Kelvin Joseph, Steiner Sports COO and CMO.

Similar to how the meet and greet promotions that Steiner once created sparked demand for memorabilia, current clients are often seeking an experience with athletes, such as a selfie or having them speak at an event. Its long history in the market and focus on personal relationships and helping clients to grow through sports is what helps Steiner Sports stand out in a crowded marketplace, according to Joseph.

As the company grew and expanded beyond professional services and into a B2C business with its collectibles, the previous Microsoft Dynamics AX system it was using no longer met its needs. Steiner Sports had heavily customized the system to help it manage diverse customers that ranged from the Fortune 500 to small businesses to consumers buying collectibles. Steiner Sports selected NetSuite for its cloud-based architecture and regular, painless updates. In addition, NetSuite’s unified platform, real-time visibility and focus on customer success were key differentiators.

“NetSuite’s business model of really caring about its customers and helping them to grow was attractive,” Joseph said. “It’s in NetSuite’s best interest for their customers to grow and to grow with them. That’s what makes it authentic.”

Contact Info
Michael Robinson
Oracle NetSuite Corporate Communications
781-974-9401
Michael.s.robinson@oracle.com
About Oracle NetSuite

Oracle NetSuite pioneered the Cloud Computing revolution in 1998, establishing the world’s first company dedicated to delivering business applications over the internet. Today, it provides a suite of cloud-based financials / Enterprise Resource Planning (ERP), HR and omnichannel commerce software that runs the business of companies in more than 100 countries.

Follow NetSuite’s Cloud blog, Facebook page and @NetSuite Twitter handle for real-time updates.

About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

crs processes

Tom Kyte - Tue, 2018-02-13 06:26
When we start CRS, which process starts first? Please explain its background.
Categories: DBA Blogs

Using defined variable (ampersand) for a part of column name

Tom Kyte - Tue, 2018-02-13 06:26
Good day. I am having a problem defining only part of a column name as a variable (using an ampersand). Script example: Select &Product._ID, &Product._NAME, &Product._SALES, &Product._DEBT From TABL...
Categories: DBA Blogs
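
For context on the technique this question refers to, here is a minimal SQL*Plus sketch showing how the period terminates a substitution-variable name, so that only the prefix of an identifier needs to be variable. The variable value and the table and column names below (SALES_2017, SALES_2017_ID, SALES_2017_NAME) are hypothetical, chosen purely for illustration and not taken from the original question.

-- part_of_name.sql : a sketch to run in SQL*Plus, e.g. @part_of_name.sql
DEFINE prefix = SALES_2017
-- The period after &prefix marks the end of the variable name (it is the
-- default SQL*Plus concatenation character, changeable with SET CONCAT),
-- so the statement below expands to:
--   SELECT SALES_2017_ID, SALES_2017_NAME FROM SALES_2017;
SELECT &prefix._ID, &prefix._NAME FROM &prefix.;
UNDEFINE prefix

Without the terminating period, SQL*Plus would read the whole string (for example &prefix_ID) as a single variable name and prompt for its value separately.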
