Feed aggregator

Integrigy COLLABORATE 17 Sessions - Presentations on Oracle Database, Oracle E-Business Suite, and PeopleSoft Security

Integrigy is presenting nine papers this year at COLLABORATE 17 (https://collaborate.oaug.org/). The COLLABORATE 17 conference is a joint conference for the Oracle Applications User Group (OAUG), Independent Oracle Users Group (IOUG), and Quest International Users Group.

Here is our schedule. If you have questions or would like to meet with us while at COLLABORATE 17, please contact us at info@integrigy.com.

Sunday Apr 02, 2017

1:45 PM - 2:45 PM

Oracle E-Business Suite 12.2 Security Enhancements

https://app.attendcollaborate.com/event/member?item_id=5621519

Banyan E

Speaker: Stephen Kost

1:45 PM - 2:45 PM

How to Control and Secure Your DBAs and Developers in Oracle E-Business Suite

https://app.attendcollaborate.com/event/member?item_id=5740411

South Seas F

Speaker: Michael Miller

Monday Apr 03, 2017

9:45 AM - 10:45 AM

The Thrifty DBA Does Database Security

https://app.attendcollaborate.com/event/member?item_id=5660960

Jasmine D

Speaker: Stephen Kost

1:00 PM - 4:30 PM

Integrigy team available for meetings and discussions. Contact us at info@integrigy.com to arrange.

 

 

Tuesday Apr 04, 2017

9:45 AM - 10:45 AM

Solving Application Security Challenges with Database Vault

https://app.attendcollaborate.com/event/member?item_id=5660961

Jasmine D

Speaker: Stephen Kost

1:00 PM - 4:30 PM

Integrigy team available for meetings and discussions. Contact us at info@integrigy.com to arrange.

 

 

Wednesday Apr 05, 2017

9:45 AM - 10:45 AM

When You Can't Apply Database Security Patches

https://app.attendcollaborate.com/event/member?item_id=5660962

Jasmine D

Speaker: Stephen Kost

11:00 AM - 12:00 PM

Common Mistakes When Deploying Oracle E-Business Suite to the Internet

https://app.attendcollaborate.com/event/member?item_id=5621520

South Seas B

Speaker: Stephen Kost

1:30 PM - 2:30 PM

Securing Oracle 12c Multitenant Pluggable Databases

https://app.attendcollaborate.com/event/member?item_id=5660950

Palm A

 

Speaker: Michael Miller

2:45 PM - 3:45 PM

How to Control and Secure Your DBAs and Developers in PeopleSoft

https://app.attendcollaborate.com/event/member?item_id=5617942

Ballroom J

Speaker: Michael Miller

Thursday Apr 06, 2017

8:30 AM - 9:30 AM

Oracle E-Business Suite Mobile and Web Services Security

https://app.attendcollaborate.com/event/member?item_id=5621407

South Seas B

Speaker: Michael Miller

 

You can download a complete listing of Integrigy's sessions at Integrigy COLLABORATE 17 Sessions.

Oracle Database, Oracle E-Business Suite, Oracle PeopleSoft
Categories: APPS Blogs, Security Blogs

PeopleSoft Security

This is a quick summary of Integrigy’s latest research on PeopleSoft. We were sending this to a client and decided it would make a good posting:

Guide to PeopleSoft Logging and Auditing

How to Control and Secure PeopleSoft DBAs and Developers

PeopleSoft Database Security

PeopleSoft Database Secure Baseline Configuration

PeopleSoft Security Quick Reference

If you have any questions, please contact us at info@integrigy.com

 

 
 
Oracle PeopleSoft, Whitepaper
Categories: APPS Blogs, Security Blogs

Deploying Oracle E-Business Suite 12.2 REST Web Services

This is the fourth posting in a blog series summarizing the new Oracle E-Business Suite 12.2 Mobile and web services functionality and recommendations for securing them.

Physically deploying REST services with 12.2 is straightforward. REST is an architectural style, not a protocol, and is best suited to lightweight and “chatty” interfaces such as mobile applications. With 12.2, REST Web Application Description Language (WADL) interface definition files are generated within the E-Business Suite's WebLogic server and run through the OAFM application. The OAFM application is created with the installation of Oracle E-Business Suite.
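As a hedged example (the host name, port, and service alias "hr_rest_service" below are placeholders, not values from this post; the exact path depends on the alias chosen when the service was deployed), the WADL of a deployed REST service can typically be retrieved directly over HTTP:

$ curl "https://ebs.example.com:4443/webservices/rest/hr_rest_service/?WADL"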

If you have any questions, please contact us at info@integrigy.com

-Michael Miller, CISSP-ISSMP, CCSP, CCSK

    Web Services, DMZ/External, Oracle E-Business Suite
    Categories: APPS Blogs, Security Blogs

    Links for 2017-03-16 [del.icio.us]

    Categories: DBA Blogs

    Two identical queries with same parameter values have different execution plans ..

    Tom Kyte - Fri, 2017-03-17 00:26
    Hello and thanks for your time. We noticed an odd behavior of Oracle 12.1 query plan selection for the same query. The query generated from .Net Entity Framework has an inefficient query plan than if we run the same query in SQL Developer. When we...
    Categories: DBA Blogs

    Can I set the basis for SYSDATE within a session?

    Tom Kyte - Fri, 2017-03-17 00:26
    When writing code that deals with time, it would be very useful to be able to "set" the starting point for SYSDATE within a session. For example, suppose I want to select one set of data if the query runs in March and a different set if it runs in...
    Categories: DBA Blogs

    Update a nested column with a database tool like Oracle sql developer

    Tom Kyte - Fri, 2017-03-17 00:26
    Hello , i have another question about nested tables. When i created the table projects with the column project_name and categories. categories is the nested column. Can i easy update the column categories (viz : i write direct in the column usin...
    Categories: DBA Blogs

    Accessing Nested Tables Elements

    Tom Kyte - Fri, 2017-03-17 00:26
    Hello , i want to have multiple value per cell and i used nested tables. After creating my table projects with a the nested column categories, i insert some rows. but when i do select * from tables , i have the error unsupported data type by the ...
    Categories: DBA Blogs

    Purpose of clauses in SQL*Loader control file

    Tom Kyte - Fri, 2017-03-17 00:26
    Hi, I am Learning Data Loading With Sql Loader.i written some basic control files(with out using Clauses) then data loaded well. but i did not understand the clauses like NULLIF, CHAR, DEFAULTIF etc. So please tell me why we use this Clauses in Co...
    Categories: DBA Blogs

    Converting quarter number to dates of months of this quarter

    Tom Kyte - Fri, 2017-03-17 00:26
    Hi Tom! I have a table containing a column with a QUARTER of the year (1, 2, 3, 4). I'd like to multiply each row into another table while substituting the quarter column with two columns - containing the DATE of the FIRST and the LAST day of each...
    Categories: DBA Blogs

    ASP.NET Core app deployed to Pivotal Cloud Foundry

    Pas Apicella - Thu, 2017-03-16 22:37
    This post will show you how to write your first ASP.NET Core application on macOS or Linux and push it to Pivotal Cloud Foundry without having to PUBLISH it for deployment.

    Before getting started you will need the following

    1. Download and install .NET Core
    2. Visual Studio Code with the C# extension.
    3. CF CLI installed https://github.com/cloudfoundry/cli

    Steps

    Note: This assumes you're already logged in to Pivotal Cloud Foundry and connected to Pivotal Web Services (run.pivotal.io). The command below shows I am connected and targeted:

    pasapicella@pas-macbook:~$ cf target
    API endpoint:   https://api.run.pivotal.io
    API version:    2.75.0
    User:           papicella@pivotal.io
    Org:            apples-pivotal-org
    Space:          development

    1. Create new project

    pasapicella@pas-macbook:~/pivotal/software/dotnet/dotnet-core-mvc$ dotnet new mvc --auth None --framework netcoreapp1.0
    Content generation time: 278.4748 ms
    The template "ASP.NET Core Web App" created successfully.

    2. Restore as follows

    pasapicella@pas-macbook:~/pivotal/software/dotnet/dotnet-core-mvc$ dotnet restore
      Restoring packages for /Users/pasapicella/pivotal/software/dotnet/dotnet-core-mvc/dotnet-core-mvc.csproj...
      Generating MSBuild file /Users/pasapicella/pivotal/software/dotnet/dotnet-core-mvco/obj/dotnet-core-mvc.csproj.nuget.g.props.
      Generating MSBuild file /Users/pasapicella/pivotal/software/dotnet/dotnet-core-mvc/obj/dotnet-core-mvc.csproj.nuget.g.targets.
      Writing lock file to disk. Path: /Users/pasapicella/pivotal/software/dotnet/dotnet-core-mvc/obj/project.assets.json
      Restore completed in 1.09 sec for /Users/pasapicella/pivotal/software/dotnet/dotnet-core-mvc/dotnet-core-mvc.csproj.

      NuGet Config files used:
          /Users/pasapicella/.nuget/NuGet/NuGet.Config

      Feeds used:
          https://api.nuget.org/v3/index.json

    3. At this point we can run the application and see what it looks like in a browser

    pasapicella@pas-macbook:~/pivotal/software/dotnet/dotnet-core-mvc$ dotnet run
    Hosting environment: Production
    Content root path: /Users/pasapicella/pivotal/software/dotnet/dotnet-core-mvc
    Now listening on: http://localhost:5000
    Application started. Press Ctrl+C to shut down.


    Now to prepare this demo for Pivotal Cloud Foundry we need to make some changes to the generated code, as shown in the next few steps.

    4. In Visual Studio Code, under the menu item “File/Open” select the “dotnet-core-mvc” folder and open it. Confirm all messages from Visual Studio Code.



    The .NET Core buildpack configures the app web server automatically so you don’t have to handle this yourself, but you have to prepare your app so that the buildpack can deliver this information to it via the command line.

    5. Open "Program.cs" and modify the Main() method as follows, adding "var config = ..." and ".UseConfiguration(config)" as shown below:
      
    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Linq;
    using System.Threading.Tasks;
    using Microsoft.AspNetCore.Hosting;
    using Microsoft.Extensions.Configuration;

    namespace dotnet_core_mvc
    {
        public class Program
        {
            public static void Main(string[] args)
            {
                var config = new ConfigurationBuilder()
                    .AddCommandLine(args)
                    .Build();

                var host = new WebHostBuilder()
                    .UseKestrel()
                    .UseConfiguration(config)
                    .UseContentRoot(Directory.GetCurrentDirectory())
                    .UseIISIntegration()
                    .UseStartup<Startup>()
                    .Build();

                host.Run();
            }
        }
    }

    6. Open "dotnet-core-mvc.csproj" and add the following dependency "Microsoft.Extensions.Configuration.CommandLine" as shown below
      
    <Project Sdk="Microsoft.NET.Sdk.Web">

      <PropertyGroup>
        <TargetFramework>netcoreapp1.0</TargetFramework>
      </PropertyGroup>

      <ItemGroup>
        <PackageReference Include="Microsoft.AspNetCore" Version="1.0.4" />
        <PackageReference Include="Microsoft.AspNetCore.Mvc" Version="1.0.3" />
        <PackageReference Include="Microsoft.AspNetCore.StaticFiles" Version="1.0.2" />
        <PackageReference Include="Microsoft.Extensions.Logging.Debug" Version="1.0.2" />
        <PackageReference Include="Microsoft.Extensions.Configuration.CommandLine" Version="1.0.0" />
        <PackageReference Include="Microsoft.VisualStudio.Web.BrowserLink" Version="1.0.1" />
      </ItemGroup>

    </Project>


    7. File -> Save All

    8. Jump back out to a terminal window. You can actually restore from the Visual Studio Code IDE, but I still like to do it from the command line:

    pasapicella@pas-macbook:~/pivotal/software/dotnet/dotnet-core-mvc$ dotnet restore
    ...

    9. Deploy to Pivotal Cloud Foundry as follows. You will need to use a unique application name, so replace "pas" with your own name.

    $ cf push pas-dotnetcore-mvc-demo -b https://github.com/cloudfoundry/dotnet-core-buildpack -m 512m

    ** Output **

    pasapicella@pas-macbook:~/pivotal/software/dotnet/dotnet-core-mvc$ cf push pas-dotnetcore-mvc-demo -b https://github.com/cloudfoundry/dotnet-core-buildpack -m 512m
    Creating app pas-dotnetcore-mvc-demo in org apples-pivotal-org / space development as papicella@pivotal.io...
    OK

    Using route pas-dotnetcore-mvc-demo.cfapps.io
    Binding pas-dotnetcore-mvc-demo.cfapps.io to pas-dotnetcore-mvc-demo...
    OK

    Uploading pas-dotnetcore-mvc-demo...
    Uploading app files from: /Users/pasapicella/pivotal/software/dotnet/dotnet-core-mvc
    Uploading 208.7K, 84 files
    Done uploading
    OK

    Starting app pas-dotnetcore-mvc-demo in org apples-pivotal-org / space development as papicella@pivotal.io...
    Creating container
    Successfully created container
    Downloading app package...
    Downloaded app package (675.5K)
    ASP.NET Core buildpack version: 1.0.13
    ASP.NET Core buildpack starting compile
    -----> Restoring files from buildpack cache
           OK
    -----> Restoring NuGet packages cache
           OK
    -----> Extracting libunwind
           libunwind version: 1.2
           https://buildpacks.cloudfoundry.org/dependencies/manual-binaries/dotnet/libunwind-1.2-linux-x64-f56347d4.tgz
           OK
    -----> Installing .NET SDK
           .NET SDK version: 1.0.1
           OK
    -----> Restoring dependencies with Dotnet CLI

           Welcome to .NET Core!
           ---------------------
           Telemetry
           The .NET Core tools collect usage data in order to improve your experience. The data is anonymous and does not include command-line arguments. The data is collected by Microsoft and shared with the community.
           You can opt out of telemetry by setting a DOTNET_CLI_TELEMETRY_OPTOUT environment variable to 1 using your favorite shell.
           You can read more about .NET Core tools telemetry @ https://aka.ms/dotnet-cli-telemetry.
           Configuring...
           -------------------
           A command is running to initially populate your local package cache, to improve restore speed and enable offline access. This command will take up to a minute to complete and will only happen once.
           Decompressing 100% 16050 ms
    -----> Buildpack version 1.0.13
           https://buildpacks.cloudfoundry.org/dependencies/dotnet/dotnet.1.0.1.linux-amd64-99324ccc.tar.gz
           Learn more about .NET Core @ https://aka.ms/dotnet-docs. Use dotnet --help to see available commands or go to https://aka.ms/dotnet-cli-docs.

           --------------

           Expanding 100% 13640 ms
             Restoring packages for /tmp/app/dotnet-core-mvc.csproj...
             Installing Microsoft.Extensions.Configuration 1.0.0.
             Installing Microsoft.Extensions.Configuration.CommandLine 1.0.0.
             Generating MSBuild file /tmp/app/obj/dotnet-core-mvc.csproj.nuget.g.props.
             Writing lock file to disk. Path: /tmp/app/obj/project.assets.json
             Restore completed in 2.7 sec for /tmp/app/dotnet-core-mvc.csproj.

             NuGet Config files used:
                 /tmp/app/.nuget/NuGet/NuGet.Config

             Feeds used:
                 https://api.nuget.org/v3/index.json

             Installed:
                 2 package(s) to /tmp/app/dotnet-core-mvc.csproj
           OK
           Detected .NET Core runtime version(s) 1.0.4, 1.1.1 required according to 'dotnet restore'
    -----> Installing required .NET Core runtime(s)
           .NET Core runtime 1.0.4 already installed
           .NET Core runtime 1.1.1 already installed
           OK
    -----> Publishing application using Dotnet CLI
           Microsoft (R) Build Engine version 15.1.548.43366
           Copyright (C) Microsoft Corporation. All rights reserved.

             dotnet-core-mvc -> /tmp/app/bin/Debug/netcoreapp1.0/dotnet-core-mvc.dll
           Copied 38 files from /tmp/app/libunwind to /tmp/cache
    -----> Saving to buildpack cache
           OK
           Copied 850 files from /tmp/app/.dotnet to /tmp/cache
           Copied 19152 files from /tmp/app/.nuget to /tmp/cache
           OK
    -----> Cleaning staging area
           Removing /tmp/app/.nuget
           OK
    ASP.NET Core buildpack is done creating the droplet
    Exit status 0
    Uploading droplet, build artifacts cache...
    Uploading droplet...
    Uploaded build artifacts cache (359.9M)
    Uploaded droplet (131.7M)
    Uploading complete
    Successfully destroyed container

    0 of 1 instances running, 1 starting
    1 of 1 instances running

    App started


    OK

    App pas-dotnetcore-mvc-demo was started using this command `cd .cloudfoundry/dotnet_publish && dotnet dotnet-core-mvc.dll --server.urls http://0.0.0.0:${PORT}`

    Showing health and status for app pas-dotnetcore-mvc-demo in org apples-pivotal-org / space development as papicella@pivotal.io...
    OK

    requested state: started
    instances: 1/1
    usage: 512M x 1 instances
    urls: pas-dotnetcore-mvc-demo.cfapps.io
    last uploaded: Fri Mar 17 03:19:51 UTC 2017
    stack: cflinuxfs2
    buildpack: https://github.com/cloudfoundry/dotnet-core-buildpack

         state     since                    cpu    memory          disk           details
    #0   running   2017-03-17 02:26:03 PM   0.0%   39.1M of 512M   302.7M of 1G

    10. Finally, invoke the application using its URL, which can be determined from the output at the end of the push above or by using "cf apps".
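    For example, using the route assigned during the push above (your application name and route will differ), a quick check from the command line could look like this:

    $ cf apps
    $ curl -s http://pas-dotnetcore-mvc-demo.cfapps.io/ | head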



    More Information

    https://docs.microsoft.com/en-us/aspnet/core/tutorials/your-first-mac-aspnet
    Categories: Fusion Middleware

    ORA-54002 when trying to create Virtual Column using REGEXP_REPLACE on Oracle 12cR2

    Jeff Moss - Thu, 2017-03-16 17:50

    I encountered an issue today trying to create a table in an Oracle 12cR2 database, the DDL for which I extracted from an Oracle 11gR2 database. The error returned when trying to create the table was:

    ORA-54002: only pure functions can be specified in a virtual column expression

    The definition of the table included a Virtual Column which used a REGEXP_REPLACE call to derive a value from another column on the table.

    Here is a simplified test case illustrating the scenario (Thanks Tim for the REGEXP_REPLACE example code):

    select * from v$version
    /
    create table test_ora54002_12c(
     col1 VARCHAR2(20 CHAR) NOT NULL
     ,virtual_column1 VARCHAR2(4000 CHAR) GENERATED ALWAYS AS(REGEXP_REPLACE(col1, '([A-Z])', ' \1', 2)) VIRTUAL VISIBLE
    )
    /
    drop table test_ora54002_12c purge
    /

    Running this on 11gR2 gives:

    SQL> select * from v$version
     2 /
    
    BANNER
    --------------------------------------------------------------------------------
    Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
    PL/SQL Release 11.2.0.4.0 - Production
    CORE 11.2.0.4.0 Production
    TNS for Linux: Version 11.2.0.4.0 - Production
    NLSRTL Version 11.2.0.4.0 - Production
    
    5 rows selected.
    
    Elapsed: 00:00:00.40
    SQL> create table test_ora54002_12c(
     2 col1 VARCHAR2(20 CHAR) NOT NULL
     3 ,virtual_column1 VARCHAR2(4000 CHAR) GENERATED ALWAYS AS(REGEXP_REPLACE(col1, '([A-Z])', ' \1', 2)) VIRTUAL VISIBLE
     4 )
     5 /
    
    Table created.
    
    Elapsed: 00:00:00.24
    SQL> drop table test_ora54002_12c purge
     2 /
    
    Table dropped.

    Running this on 12cR2 gives:

    SQL> select * from v$version
    /
     2
    BANNER CON_ID
    -------------------------------------------------------------------------------- ----------
    Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production 0
    PL/SQL Release 12.2.0.1.0 - Production 0
    CORE 12.2.0.1.0 Production 0
    TNS for Linux: Version 12.2.0.1.0 - Production 0
    NLSRTL Version 12.2.0.1.0 - Production 0
    
    SQL> create table test_ora54002_12c(
     col1 VARCHAR2(20 CHAR) NOT NULL
     ,virtual_column1 VARCHAR2(4000 CHAR) GENERATED ALWAYS AS(REGEXP_REPLACE(col1, '([A-Z])', ' \1', 2)) VIRTUAL VISIBLE
    )
    /
     2 3 4 5 ,virtual_column1 VARCHAR2(4000 CHAR) GENERATED ALWAYS AS(REGEXP_REPLACE(col1, '([A-Z])', ' \1', 2)) VIRTUAL VISIBLE
     *
    ERROR at line 3:
    ORA-54002: only pure functions can be specified in a virtual column expression
    
    
    SQL> drop table test_ora54002_12c purge
    /
     2 drop table test_ora54002_12c purge
     *
    ERROR at line 1:
    ORA-00942: table or view does not exist

    As you can see, 12cR2 gives the ORA-54002 error.

    Looking on MOS highlights this article, which suggests that you shouldn’t have been able to do this in 11gR2, i.e. it was a bug; 12cR2 has fixed that bug, and thus you can no longer create such a virtual column (the article refers to functional index and check constraint use cases as well).

    In my case, I was able to rewrite the virtual column to use simple string functions such as SUBSTR, TRANSLATE and INSTR to achieve what I wanted, and the virtual column could then be created – problem solved. A shame really, as the REGEXP_REPLACE approach was far neater.
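    As an illustration only (the table and expression below are made up for this example and are not the actual rewrite I used), a virtual column built purely from deterministic string functions such as SUBSTR and INSTR is accepted on 12cR2:

    create table test_ora54002_fix(
     col1 VARCHAR2(20 CHAR) NOT NULL
     ,virtual_column1 VARCHAR2(4000 CHAR) GENERATED ALWAYS AS(SUBSTR(col1, INSTR(col1, '-') + 1)) VIRTUAL VISIBLE
    )
    /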

    Oracle 12cR2 on Windows: Virtual Accounts

    Yann Neuhaus - Thu, 2017-03-16 16:52

    Oracle Database 12.2.0.1 is released for Windows, just 2 weeks after the Linux release, and this is very good news. Let’s look at something new you will encounter in the first screens of the Oracle 12.2 installer. Don’t worry, the default choice is the right one. But it is better to understand it.

    SYSTEM

    On Linux, you don’t install Oracle Database as root. You create a user, usually called oracle, which will be the owner of the database files and the instance processes and shared memory. This looks obvious, but on Windows before 12c the Oracle instance was running as the root equivalent, the built-in SYSTEM user. This was very bad from a security point of view: running software with the most powerful user on the system.

    12.1 Oracle Home User

    This has changed in 12.1 with the possibility to define another user, which already exists, or which you create at installation providing user name and password.
    [Screenshot CaptureWinVA000: Oracle 12.1 installer – Oracle Home User selection]

    This user is called the Oracle Home user. Just to be clear, it is the user which will run the instance; you still install the software as Administrator.
    So, in 12.1 the choice is an existing user, a new user, or SYSTEM, and the recommendation is to create a user. But it is quite annoying to have to provide a user name and password for a user you will never use to log in.

    12.2 Virtual Accounts

    Windows Server 2008 R2 introduced two new types of local service accounts: Managed Service Accounts (MSA) and Virtual Accounts.

    Managed Service Accounts are created by the administrator in the Active Directory (using New-ADServiceAccount). And you can use them in 12c by mentioning the name in ‘Use Existing Windows User’.

    Virtual Accounts are enabled by default in Windows. In 12.2 you can use this feature for the Oracle Home account. It is the first option, the default one, and the one recommended if you have no reason to use another user:

    [Screenshot CaptureWinVA001: Oracle 12.2 installer with the Virtual Account option selected]

    oracle.key

    If you don’t know what has been defined, look at the registry. Find the ORACLE_HOME you run from, read the registry key name from %ORACLE_HOME%\bin\oracle.key and look at the values:

    [Screenshot CaptureOradimDBCA004: registry values under the Oracle home key]

    Here ORACLE_SVCUSER_TYPE is new, with value ‘V’, which means that the ORACLE_SVCUSER is a Virtual Account. ORACLE_SVCUSER_PWDREQ mentions that no password has to be provided for the instance services.

    Note that the old method, the ‘built-in account’ had the following, mentioning the internal SYSTEM, and without a TYPE:

    ORACLE_SVCUSER REG_SZ NT AUTHORITY\SYSTEM
    ORACLE_SVCUSER_PWDREQ REG_SZ 0

    The 12.1 method with a non-privileged user had ORACLE_SVCUSER_PWDREQ=1 and required the password for the services.
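    As a sketch (the home key name KEY_OraDB12Home1 below is an assumption; the actual name depends on your installation and is what oracle.key contains), you can read these values from a command prompt:

    C:\> type %ORACLE_HOME%\bin\oracle.key
    C:\> reg query "HKLM\SOFTWARE\ORACLE\KEY_OraDB12Home1" /v ORACLE_SVCUSER
    C:\> reg query "HKLM\SOFTWARE\ORACLE\KEY_OraDB12Home1" /v ORACLE_SVCUSER_TYPE
    C:\> reg query "HKLM\SOFTWARE\ORACLE\KEY_OraDB12Home1" /v ORACLE_SVCUSER_PWDREQ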

    Back to virtual accounts: I said that they are used for the instance services and database files. Let’s have a look at the services and file security properties:

    [Screenshot CaptureOradimDBCA005: service ‘Log On As’ and database file security properties]

    The database file owner is the user we have seen above, as defined by ORACLE_SVCUSER, but the service ‘Log On As’ has the special ‘NT SERVICE\ServiceName‘, which is the Virtual Account. It is not a real account like built-in, local or domain ones. It is more a service that is displayed as an account here.

    So what?

    Don’t panic when you see this additional choice. Virtual Account is the right choice to run with a minimal-privilege user and no additional complexity.

     

    The article Oracle 12cR2 on Windows: Virtual Accounts first appeared on Blog dbi services.

    April 5: Creighton University—Oracle HCM Cloud Customer Forum

    Linda Fishman Hoyle - Thu, 2017-03-16 15:04

    Join us for an Oracle HCM Cloud Customer Forum call on Wednesday, April 5, 2017, at 9:00 a.m. PDT.

    You'll hear Molly Billings, Senior Director, Human Resources: Compensation, HRMS and Payroll, discuss why Creighton University, located in Omaha, NE, decided to move its on-premises Oracle E-Business Suite HR to Oracle HCM Cloud.

    Register now to attend the live forum and learn more about Creighton University’s experience with Oracle HCM Cloud.


    HelloSign for Content and Experience Cloud Streamlines Crucial Document Processes in Educational Institutions

    WebCenter Team - Thu, 2017-03-16 13:12
    Authored by: Sarah Gabot, Partner Marketing Manager, HelloSign

    It’s no surprise that educational institutions use a lot of paper to keep their systems running smoothly. There are admissions forms, loan documents, grant paperwork, and many other documents that need to be signed regularly. 

    Paper is slower to turn around, and it can be really costly to manage. That’s why educational institutions are turning to online signing solutions like HelloSign to remove the frustrations and challenges caused by paperwork. 

    HelloSign and Content and Experience Cloud bring convenience to the admissions and financial aid process in schools and universities by streamlining the document signing process. Simply upload the document that needs to be signed in Content and Experience Cloud, and use the HelloSign integration to request a signature. 

    We have educational institutions using HelloSign for:  
    • Admissions
    • Permission Slips
    • Student Loan Documents
    • Financial Aid Documents
    • Procurement
    • Research Grants

    The benefits of using eSignatures in education

    Faster document turnaround time. 
    Many documents requiring signatures in educational institutions have hard deadlines. These deadlines are important for students to be able to get their applications in, research proposals approved, or receive financial aid, among other things. This makes document completion not only important, but crucial for students to achieve their educational goals. 

    HelloSign cuts out the manual effort associated with submitting paperwork for students, since they can fly through filling out their documents online. Students will be happier because they can get their paperwork in faster, and administrators will be happy that all their documents are electronically stored in a central location. 

    Increased document accuracy. 
    Dealing with thousands of students’ paperwork every year can also mean that paperwork sometimes comes back incomplete, illegible, or incorrect. It’s a pain and waste of time for administrators to request changes or corrections on such time sensitive documents. 

    HelloSign improves document accuracy with features like data validation. Data validation gives schools the power to proactively protect against signer errors by setting rules for document fields. Offices processing the paperwork will get better, more useful data, and avoid having to sort through inaccurate data.

    Improved student experience. 
    Filling out paperwork isn’t fun for anyone. Documents get mixed up or even lost, making a stressful experience for students or offices managing the documents. 

    Using eSignatures gives students a convenient signing experience when filling out important school-related documents. They can also sign from whatever device is most convenient for them: smartphone, tablet, or computer. When students can sign their documents from anywhere, they’re happier with the improved efficiency. 

    Get started with HelloSign today

    Interested in learning in more detail how HelloSign and the Content and Experience Cloud can help educational systems? Contact our sales team at oracle-sales@hellosign.com or your Oracle Account rep for a custom demo of our eSignature solution. 


    Vertically scale your PostgreSQL infrastructure with pgpool – 1 – Basic setup and watchdog configuration

    Yann Neuhaus - Thu, 2017-03-16 12:21

    I have written some posts in the past on how you can make your PostgreSQL deployment highly available by using PostgreSQL’s streaming replication feature (1, 2). The main issue you’ll have to resolve with such a setup is how the application can be made aware of a new master when a failover has happened. You could use EDB Failover Manager (1, 2, 3, 4) for that because it provides the functionality to move a VIP from one host to another so the application can always connect to the very same IP address no matter where the current master is running (EDB EFM requires a subscription). You could also use Pacemaker and Corosync for that. But, and this is the scope of this post, you can also use pgpool, which is widely known in the PostgreSQL community. When you configure it the right way you can even spread your read operations over all hot standby servers in your configuration while only write operations go to the master. This allows you to vertically scale your PostgreSQL deployment by adding more standby nodes when you need more resources. Let’s go …

    To start with, a picture is always a good idea. This is what we want to set up:

    [Image pgpool-architecture: two pgpool nodes providing a virtual IP in front of a PostgreSQL primary and hot standby]

    We will have two nodes dedicated to pgpool (centos7_pgpool_m1/m2). pgpool will be running in a watchdog configuration so that one node can take over in case the other goes down. pgpool will provide a virtual IP address for the clients to connect to (which fails over to the surviving node in case a node goes down for any reason). In the background there are two nodes which host the PostgreSQL 9.6.2 primary and hot standby instances (centos7_pgpool_1/2). At the very beginning the master is running on centos7_pgpool_1 although that does not really matter once the whole setup is completed.

    I’ll not describe the PostgreSQL master->standby setup itself. If you need assistance there, take a look here, here, or search the web; there are many great howtos.

    Let’s start by installing pgpool on the hosts dedicated to pgpool (centos7_pgpool_m1/m2):

    You can download pgpool here. As pgpool requires libpq we’ll just install the PostgreSQL binaries on the hosts dedicated for pgpool as well before proceeding with the installation of pgpool. Of course these steps need to be done on both hosts (centos7_pgpool_m1/m2):

    [root@centos7_pgpool_m1 ~]$ groupadd postgres
    [root@centos7_pgpool_m1 ~]$ useradd -g postgres postgres
    [root@centos7_pgpool_m1 ~]$ passwd postgres
    [root@centos7_pgpool_m1 ~]$ mkdir -p /u01/app/postgres/software
    [root@centos7_pgpool_m1 ~]$ chown -R postgres:postgres /u01/app/postgres
    [root@centos7_pgpool_m1 ~]$ su - postgres
    [postgres@centos7_pgpool_m1 ~]$ cd /u01/app/postgres/software/
    [postgres@centos7_pgpool_m1 software]$ wget https://ftp.postgresql.org/pub/source/v9.6.2/postgresql-9.6.2.tar.bz2
    [postgres@centos7_pgpool_m1 software]$ tar -axf postgresql-9.6.2.tar.bz2
    [postgres@centos7_pgpool_m1 software]$ cd postgresql-9.6.2
    [postgres@centos7_pgpool_m1 postgresql-9.6.2]$ PGHOME=/u01/app/postgres/product/96/db_2
    [postgres@centos7_pgpool_m1 postgresql-9.6.2]$ SEGSIZE=2
    [postgres@centos7_pgpool_m1 postgresql-9.6.2]$ BLOCKSIZE=8
    [postgres@centos7_pgpool_m1 postgresql-9.6.2]$ WALSEGSIZE=16
    [postgres@centos7_pgpool_m1 postgresql-9.6.2]$ ./configure --prefix=${PGHOME} \
    >             --exec-prefix=${PGHOME} \
    >             --bindir=${PGHOME}/bin \
    >             --libdir=${PGHOME}/lib \
    >             --sysconfdir=${PGHOME}/etc \
    >             --includedir=${PGHOME}/include \
    >             --datarootdir=${PGHOME}/share \
    >             --datadir=${PGHOME}/share \
    >             --with-pgport=5432 \
    >             --with-perl \
    >             --with-python \
    >             --with-tcl \
    >             --with-openssl \
    >             --with-pam \
    >             --with-ldap \
    >             --with-libxml \
    >             --with-libxslt \
    >             --with-segsize=${SEGSIZE} \
    >             --with-blocksize=${BLOCKSIZE} \
    >             --with-wal-segsize=${WALSEGSIZE}  \
    >             --with-extra-version=" dbi services build"
    [postgres@centos7_pgpool_m1 postgresql-9.6.2]$ make world
    [postgres@centos7_pgpool_m1 postgresql-9.6.2]$ make install-world
    [postgres@centos7_pgpool_m1 postgresql-9.6.2]$ cd ..
    [postgres@centos7_pgpool_m1 software]$ rm -rf postgresql-9.6.2*
    ### download pgpool
    [postgres@centos7_pgpool_m1 software]$ ls
    pgpool-II-3.6.1.tar.gz
    [postgres@centos7_pgpool_m1 software]$ tar -axf pgpool-II-3.6.1.tar.gz 
    [postgres@centos7_pgpool_m1 software]$ cd pgpool-II-3.6.1
    [postgres@centos7_pgpool_m1 pgpool-II-3.6.1]$ export PATH=/u01/app/postgres/product/96/db_2/bin/:$PATH
    [postgres@centos7_pgpool_m1 pgpool-II-3.6.1]$ ./configure --prefix=/u01/app/postgres/product/pgpool-II
    [postgres@centos7_pgpool_m1 pgpool-II-3.6.1]$ make
    [postgres@centos7_pgpool_m1 pgpool-II-3.6.1]$ make install
    [postgres@centos7_pgpool_m1 pgpool-II-3.6.1]$ cd src/sql/pgpool-recovery/
    [postgres@centos7_pgpool_m1 pgpool-II-3.6.1]$ make
    [postgres@centos7_pgpool_m1 pgpool-recovery]$ make install
    [postgres@centos7_pgpool_m1 pgpool-recovery]$ cd ../pgpool-regclass/
    [postgres@centos7_pgpool_m1 pgpool-regclass]$ make
    [postgres@centos7_pgpool_m1 pgpool-regclass]$ make install
    

    Copy the generated extensions to the PostgreSQL master and standby servers:

    [postgres@centos7_pgpool_m1 ~]$ cd /u01/app/postgres/software/pgpool-II-3.6.1
    # master node
    [postgres@centos7_pgpool_m1 pgpool-II-3.6.1]$ scp /u01/app/postgres/product/96/db_2/share/extension/pgpool_recovery.control 192.168.22.34:/u01/app/postgres/product/96/db_2/share/extension/pgpool_recovery.control
    [postgres@centos7_pgpool_m1 pgpool-II-3.6.1]$ scp /u01/app/postgres/product/96/db_2/share/extension/pgpool_recovery--1.1.sql 192.168.22.34:/u01/app/postgres/product/96/db_2/share/extension/pgpool_recovery--1.1.sql
    [postgres@centos7_pgpool_m1 pgpool-II-3.6.1]$ scp /u01/app/postgres/product/96/db_2/share/extension/pgpool-recovery.sql 192.168.22.34:/u01/app/postgres/product/96/db_2/share/extension/pgpool-recovery.sql
    [postgres@centos7_pgpool_m1 pgpool-II-3.6.1]$ scp /u01/app/postgres/product/96/db_2/lib/pgpool-recovery.so 192.168.22.34:/u01/app/postgres/product/96/db_2/lib/pgpool-recovery.so
    [postgres@centos7_pgpool_m1 pgpool-II-3.6.1]$ scp /u01/app/postgres/product/96/db_2/share/extension/pgpool_regclass--1.0.sql 192.168.22.34:/u01/app/postgres/product/96/db_2/share/extension/pgpool_regclass--1.0.sql
    [postgres@centos7_pgpool_m1 pgpool-II-3.6.1]$ scp /u01/app/postgres/product/96/db_2/share/extension/pgpool_regclass.control 192.168.22.34:/u01/app/postgres/product/96/db_2/share/extension/pgpool_regclass.control
    [postgres@centos7_pgpool_m1 pgpool-II-3.6.1]$ scp /u01/app/postgres/product/96/db_2/lib/pgpool-regclass.so 192.168.22.34:/u01/app/postgres/product/96/db_2/lib/pgpool-regclass.so
    # standby node
    [postgres@centos7_pgpool_m1 pgpool-II-3.6.1]$ scp /u01/app/postgres/product/96/db_2/share/extension/pgpool_recovery.control 192.168.22.35:/u01/app/postgres/product/96/db_2/share/extension/pgpool_recovery.control
    [postgres@centos7_pgpool_m1 pgpool-II-3.6.1]$ scp /u01/app/postgres/product/96/db_2/share/extension/pgpool_recovery--1.1.sql 192.168.22.35:/u01/app/postgres/product/96/db_2/share/extension/pgpool_recovery--1.1.sql
    [postgres@centos7_pgpool_m1 pgpool-II-3.6.1]$ scp /u01/app/postgres/product/96/db_2/share/extension/pgpool-recovery.sql 192.168.22.35:/u01/app/postgres/product/96/db_2/share/extension/pgpool-recovery.sql
    [postgres@centos7_pgpool_m1 pgpool-II-3.6.1]$ scp /u01/app/postgres/product/96/db_2/lib/pgpool-recovery.so 192.168.22.35:/u01/app/postgres/product/96/db_2/lib/pgpool-recovery.so
    [postgres@centos7_pgpool_m1 pgpool-II-3.6.1]$ scp /u01/app/postgres/product/96/db_2/share/extension/pgpool_regclass--1.0.sql 192.168.22.35:/u01/app/postgres/product/96/db_2/share/extension/pgpool_regclass--1.0.sql
    [postgres@centos7_pgpool_m1 pgpool-II-3.6.1]$ scp /u01/app/postgres/product/96/db_2/share/extension/pgpool_regclass.control 192.168.22.35:/u01/app/postgres/product/96/db_2/share/extension/pgpool_regclass.control
    [postgres@centos7_pgpool_m1 pgpool-II-3.6.1]$ scp /u01/app/postgres/product/96/db_2/lib/pgpool-regclass.so 192.168.22.35:/u01/app/postgres/product/96/db_2/lib/pgpool-regclass.so
    

    Install the extensions on the master node only (this will be replicated to the standby node automatically as the PostgreSQL instances already operate in hot_standby mode):

    postgres@pgpool1:/u01/app/postgres/product/96/db_2/ [PG1] psql template1
    psql (9.6.2 dbi services build)
    Type "help" for help.
    
    (postgres@[local]:5432) [template1] > create extension pgpool_recovery;
    CREATE EXTENSION
    (postgres@[local]:5432) [template1] > create extension pgpool_regclass;
    CREATE EXTENSION
    (postgres@[local]:5432) [template1] > \dx
                                    List of installed extensions
          Name       | Version |   Schema   |                    Description                     
    -----------------+---------+------------+----------------------------------------------------
     pgpool_recovery | 1.1     | public     | recovery functions for pgpool-II for V3.4 or later
     pgpool_regclass | 1.0     | public     | replacement for regclass
     plpgsql         | 1.0     | pg_catalog | PL/pgSQL procedural language
    (3 rows)
    

    Create the pgpool.conf configuration file on both nodes. For node 1 (centos7_pgpool_m1):

    echo "listen_addresses = '*'
    port = 5432
    socket_dir = '/tmp'
    pcp_port = 9898
    pcp_socket_dir = '/tmp'
    backend_hostname0 = '192.168.22.34'
    backend_port0 = 5432
    backend_weight0 = 1
    backend_data_directory0 = '/u02/pgdata/PG1'
    backend_flag0 = 'ALLOW_TO_FAILOVER'
    backend_hostname1 = '192.168.22.35'
    backend_port1 = 5432
    backend_weight1 = 1
    backend_data_directory1 = '/u02/pgdata/PG1'
    backend_flag1 = 'ALLOW_TO_FAILOVER'
    enable_pool_hba = off
    pool_passwd = 'pool_passwd'
    authentication_timeout = 60
    ssl = off
    num_init_children = 32
    max_pool = 4
    child_life_time = 300
    child_max_connections = 0
    connection_life_time = 0
    client_idle_limit = 0
    log_destination = 'stderr'
    print_timestamp = on
    log_connections = off
    log_hostname = off
    log_statement = off
    log_per_node_statement = off
    log_standby_delay = 'none'
    syslog_facility = 'LOCAL0'
    syslog_ident = 'pgpool'
    debug_level = 0
    pid_file_name = '/tmp/pgpool.pid'
    logdir = '/tmp/pgpool'
    connection_cache = on
    reset_query_list = 'ABORT; DISCARD ALL'
    replication_mode = off
    replicate_select = off
    insert_lock = on
    lobj_lock_table = ''
    replication_stop_on_mismatch = off
    failover_if_affected_tuples_mismatch = off
    load_balance_mode = off
    ignore_leading_white_space = on
    white_function_list = ''
    black_function_list = 'nextval,setval'
    master_slave_mode = on
    master_slave_sub_mode = 'stream'
    sr_check_period = 0
    sr_check_user = 'postgres'
    sr_check_password = ''
    delay_threshold = 0
    follow_master_command = ''
    parallel_mode = off
    pgpool2_hostname = 'centos7_pgpool_m2'
    system_db_hostname  = 'localhost'
    system_db_port = 5432
    system_db_dbname = 'pgpool'
    system_db_schema = 'pgpool_catalog'
    system_db_user = 'pgpool'
    system_db_password = ''
    health_check_period = 20
    health_check_timeout = 20
    health_check_user = 'postgres'
    health_check_password = ''
    health_check_max_retries = 0
    health_check_retry_delay = 1
    failover_command = '/home/postgres/failover.sh %d "%h" %p %D %m %M "%H" %P'
    failback_command = ''
    fail_over_on_backend_error = on
    search_primary_node_timeout = 10
    recovery_user = 'postgres'
    recovery_password = ''
    recovery_1st_stage_command = 'resync_master.sh'
    recovery_2nd_stage_command = ''
    recovery_timeout = 90
    client_idle_limit_in_recovery = 0
    use_watchdog = on
    trusted_servers = ''
    ping_path = '/usr/bin'
    wd_hostname = 'centos7_pgpool_m1'
    wd_port = 9000
    wd_authkey = ''
    other_pgpool_hostname0 = 'centos7_pgpool_m2'
    other_pgpool_port0 = 5432
    other_wd_port0 = 9000
    delegate_IP = '192.168.22.38'
    ifconfig_path = '/usr/bin'
    if_up_cmd = 'ifconfig enp0s8:0 inet \$_IP_\$ netmask 255.255.255.0'
    if_down_cmd = 'ifconfig enp0s8:0 down'
    arping_path = '/usr/sbin'
    arping_cmd = 'arping -U \$_IP_\$ -w 1'
    clear_memqcache_on_escalation = on
    wd_escalation_command = ''
    wd_lifecheck_method = 'heartbeat'
    wd_interval = 10
    wd_heartbeat_port = 9694
    wd_heartbeat_keepalive = 2
    wd_heartbeat_deadtime = 30
    heartbeat_destination0 = 'host0_ip1'
    heartbeat_destination_port0 = 9694
    heartbeat_device0 = ''
    wd_life_point = 3
    wd_lifecheck_query = 'SELECT 1'
    wd_lifecheck_dbname = 'template1'
    wd_lifecheck_user = 'nobody'
    wd_lifecheck_password = ''
    relcache_expire = 0
    relcache_size = 256
    check_temp_table = on
    memory_cache_enabled = off
    memqcache_method = 'shmem'
    memqcache_memcached_host = 'localhost'
    memqcache_memcached_port = 11211
    memqcache_total_size = 67108864
    memqcache_max_num_cache = 1000000
    memqcache_expire = 0
    memqcache_auto_cache_invalidation = on
    memqcache_maxcache = 409600
    memqcache_cache_block_size = 1048576
    memqcache_oiddir = '/var/log/pgpool/oiddir'
    white_memqcache_table_list = ''
    black_memqcache_table_list = ''
    " > /u01/app/postgres/product/pgpool-II/etc/pgpool.conf
    

    For node 2 (centos7_pgpool_m2):

    echo "listen_addresses = '*'
    port = 5432
    socket_dir = '/tmp'
    pcp_port = 9898
    pcp_socket_dir = '/tmp'
    backend_hostname0 = '192.168.22.34'
    backend_port0 = 5432
    backend_weight0 = 1
    backend_data_directory0 = '/u02/pgdata/PG1'
    backend_flag0 = 'ALLOW_TO_FAILOVER'
    backend_hostname1 = '192.168.22.35'
    backend_port1 = 5432
    backend_weight1 = 1
    backend_data_directory1 = '/u02/pgdata/PG1'
    backend_flag1 = 'ALLOW_TO_FAILOVER'
    enable_pool_hba = off
    pool_passwd = 'pool_passwd'
    authentication_timeout = 60
    ssl = off
    num_init_children = 32
    max_pool = 4
    child_life_time = 300
    child_max_connections = 0
    connection_life_time = 0
    client_idle_limit = 0
    log_destination = 'stderr'
    print_timestamp = on
    log_connections = off
    log_hostname = off
    log_statement = off
    log_per_node_statement = off
    log_standby_delay = 'none'
    syslog_facility = 'LOCAL0'
    syslog_ident = 'pgpool'
    debug_level = 0
    pid_file_name = '/tmp/pgpool.pid'
    logdir = '/tmp/pgpool'
    connection_cache = on
    reset_query_list = 'ABORT; DISCARD ALL'
    replication_mode = off
    replicate_select = off
    insert_lock = on
    lobj_lock_table = ''
    replication_stop_on_mismatch = off
    failover_if_affected_tuples_mismatch = off
    load_balance_mode = off
    ignore_leading_white_space = on
    white_function_list = ''
    black_function_list = 'nextval,setval'
    master_slave_mode = on
    master_slave_sub_mode = 'stream'
    sr_check_period = 0
    sr_check_user = 'postgres'
    sr_check_password = ''
    delay_threshold = 0
    follow_master_command = ''
    parallel_mode = off
    pgpool2_hostname = 'centos7_pgpool_m2'
    system_db_hostname  = 'localhost'
    system_db_port = 5432
    system_db_dbname = 'pgpool'
    system_db_schema = 'pgpool_catalog'
    system_db_user = 'pgpool'
    system_db_password = ''
    health_check_period = 20
    health_check_timeout = 20
    health_check_user = 'postgres'
    health_check_password = ''
    health_check_max_retries = 0
    health_check_retry_delay = 1
    failover_command = '/home/postgres/failover.sh %d "%h" %p %D %m %M "%H" %P'
    failback_command = ''
    fail_over_on_backend_error = on
    search_primary_node_timeout = 10
    recovery_user = 'postgres'
    recovery_password = ''
    recovery_1st_stage_command = 'resync_master.sh'
    recovery_2nd_stage_command = ''
    recovery_timeout = 90
    client_idle_limit_in_recovery = 0
    use_watchdog = on
    trusted_servers = ''
    ping_path = '/usr/bin'
    wd_hostname = 'centos7_pgpool_m2'
    wd_port = 9000
    wd_authkey = ''
    other_pgpool_hostname0 = 'centos7_pgpool_m1'
    other_pgpool_port0 = 5432
    other_wd_port0 = 9000
    delegate_IP = '192.168.22.38'
    ifconfig_path = '/usr/sbin'
    if_up_cmd = 'ifconfig enp0s8:0 inet \$_IP_\$ netmask 255.255.255.0'
    if_down_cmd = 'ifconfig enp0s8:0 down'
    arping_path = '/usr/sbin'
    arping_cmd = 'arping -U \$_IP_\$ -w 1'
    clear_memqcache_on_escalation = on
    wd_escalation_command = ''
    wd_lifecheck_method = 'heartbeat'
    wd_interval = 10
    wd_heartbeat_port = 9694
    wd_heartbeat_keepalive = 2
    wd_heartbeat_deadtime = 30
    heartbeat_destination0 = 'host0_ip1'
    heartbeat_destination_port0 = 9694
    heartbeat_device0 = ''
    wd_life_point = 3
    wd_lifecheck_query = 'SELECT 1'
    wd_lifecheck_dbname = 'template1'
    wd_lifecheck_user = 'nobody'
    wd_lifecheck_password = ''
    relcache_expire = 0
    relcache_size = 256
    check_temp_table = on
    memory_cache_enabled = off
    memqcache_method = 'shmem'
    memqcache_memcached_host = 'localhost'
    memqcache_memcached_port = 11211
    memqcache_total_size = 67108864
    memqcache_max_num_cache = 1000000
    memqcache_expire = 0
    memqcache_auto_cache_invalidation = on
    memqcache_maxcache = 409600
    memqcache_cache_block_size = 1048576
    memqcache_oiddir = '/var/log/pgpool/oiddir'
    white_memqcache_table_list = ''
    black_memqcache_table_list = ''
    " > /u01/app/postgres/product/pgpool-II/etc/pgpool.conf
    
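    Note that failover_command in both configuration files points to /home/postgres/failover.sh, which is not shown in this post. The following is only a minimal sketch of what such a script could look like (an assumption, not the author's script): if the failed node was the primary, it promotes the standby via ssh and pg_ctl promote.

    #!/bin/bash
    # minimal failover.sh sketch - arguments as passed by failover_command:
    # %d %h %p %D %m %M %H %P
    FAILED_NODE_ID=$1
    FAILED_HOST=$2
    FAILED_PORT=$3
    FAILED_PGDATA=$4
    NEW_MASTER_ID=$5
    OLD_MASTER_ID=$6
    NEW_MASTER_HOST=$7
    OLD_PRIMARY_ID=$8

    # only act when the node that went down was the primary
    if [ "$FAILED_NODE_ID" = "$OLD_PRIMARY_ID" ]; then
        ssh -T postgres@"$NEW_MASTER_HOST" "/u01/app/postgres/product/96/db_2/bin/pg_ctl promote -D /u02/pgdata/PG1"
    fi
    exit 0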

    For switching the VIP from one host to another, pgpool must be able to bring up and shut down the virtual interface. You could use sudo for that or change the suid bit on the ifconfig and arping binaries:

    [postgres@centos7_pgpool_m1 pgpool-II]$ sudo chmod u+s /usr/sbin/arping
    [postgres@centos7_pgpool_m1 pgpool-II]$ sudo chmod u+s /sbin/ifconfig
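
    If you prefer sudo over the suid bit, a sketch could look like the following (the sudoers entry below is an assumption; adapt the binary paths to your system and prefix the commands in pgpool.conf with sudo):

    # /etc/sudoers.d/pgpool (edit with visudo)
    postgres ALL=(root) NOPASSWD: /sbin/ifconfig, /usr/sbin/arping

    # pgpool.conf
    if_up_cmd = 'sudo ifconfig enp0s8:0 inet $_IP_$ netmask 255.255.255.0'
    if_down_cmd = 'sudo ifconfig enp0s8:0 down'
    arping_cmd = 'sudo arping -U $_IP_$ -w 1'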
    

    The other important configuration file for pgpool is the pcp.conf file. This file holds the authentication for pgpool itself and requires a user name and an md5-hashed password. To generate the password you can use the pg_md5 utility which comes with the installation of pgpool:

    [postgres@centos7_pgpool_m1 ~]$ /u01/app/postgres/product/pgpool-II/bin/pg_md5 --prompt --username postgres
    password: 
    e8a48653851e28c69d0506508fb27fc5
    

    Once you have the hashed password we can create the pcp.conf file (on both pgpool nodes of course):

    [postgres@centos7_pgpool_m1 ~]$ echo "postgres:e8a48653851e28c69d0506508fb27fc5" > /u01/app/postgres/product/pgpool-II/etc/pcp.conf
    

    Before doing anything else we need to allow connections from the pgpool nodes to the database nodes by adjusting the pg_hba.conf file for both PostgreSQL instances. On both nodes:

    postgres@pgpool1:/home/postgres/ [PG1] echo "host    all             postgres        192.168.22.36/32         trust" >> /u02/pgdata/PG1/pg_hba.conf
    postgres@pgpool1:/home/postgres/ [PG1] echo "host    all             postgres        192.168.22.37/32         trust" >> /u02/pgdata/PG1/pg_hba.conf
    postgres@pgpool1:/home/postgres/ [PG1] pg_ctl -D /u02/pgdata/PG1/ reload
    

    Before we start pgpool on both pgpool nodes, let’s take a look at the important watchdog parameters (shown here for node 2):

    ping_path = '/usr/bin'
    wd_hostname = 'centos7_pgpool_m2'
    wd_port = 9000
    wd_authkey = ''
    other_pgpool_hostname0 = 'centos7_pgpool_m1'
    other_pgpool_port0 = 5432
    other_wd_port0 = 9000
    delegate_IP = '192.168.22.38'
    ifconfig_path = '/usr/sbin'
    if_up_cmd = 'ifconfig enp0s8:0 inet \$_IP_\$ netmask 255.255.255.0'
    if_down_cmd = 'ifconfig enp0s8:0 down'
    arping_path = '/usr/sbin'
    arping_cmd = 'arping -U \$_IP_\$ -w 1'
    

    The various *path* variables are obvious: they tell pgpool where to find the binaries for ping, arping and ifconfig (you can also use the ip command instead). The other0* variables specify which other host runs a pgpool instance and on which pgpool and watchdog ports. This is essential for the communication between the two pgpool hosts. Then we have the commands to bring up the virtual interface and to bring it down (if_up_cmd, if_down_cmd). In addition we need an address for the virtual interface, which is specified by the “delegate_IP” variable. Let’s see if it works and start pgpool on both nodes:

    # node 1
    [postgres@centos7_pgpool_m1 ~]$ /u01/app/postgres/product/pgpool-II/bin/pgpool
    [postgres@centos7_pgpool_m1 ~]$ 
    # node 2
    [postgres@centos7_pgpool_m2 ~]$ /u01/app/postgres/product/pgpool-II/bin/pgpool
    [postgres@centos7_pgpool_m2 ~]$ 
    

    Looks not so bad, as no issues are printed to the screen. When everything went fine we should see a new virtual IP (192.168.22.38) on one of the nodes (node 2 in my case):

    [postgres@centos7_pgpool_m2 ~]$ ip a
    1: lo:  mtu 65536 qdisc noqueue state UNKNOWN qlen 1
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host 
           valid_lft forever preferred_lft forever
    2: enp0s3:  mtu 1500 qdisc pfifo_fast state UP qlen 1000
        link/ether 08:00:27:d6:95:ab brd ff:ff:ff:ff:ff:ff
        inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic enp0s3
           valid_lft 85216sec preferred_lft 85216sec
        inet6 fe80::a00:27ff:fed6:95ab/64 scope link 
           valid_lft forever preferred_lft forever
    3: enp0s8:  mtu 1500 qdisc pfifo_fast state UP qlen 1000
        link/ether 08:00:27:5c:b0:e5 brd ff:ff:ff:ff:ff:ff
        inet 192.168.22.37/24 brd 192.168.22.255 scope global enp0s8
           valid_lft forever preferred_lft forever
        inet 192.168.22.38/24 brd 192.168.22.255 scope global secondary enp0s8:0
           valid_lft forever preferred_lft forever
        inet6 fe80::a00:27ff:fe5c:b0e5/64 scope link tentative dadfailed 
           valid_lft forever preferred_lft forever
    

    When we shut down pgpool on the node where the VIP is currently running it should be switched to the other node automatically, so shut down pgpool on the node where it is currently running:

    [postgres@centos7_pgpool_m2 ~]$ /u01/app/postgres/product/pgpool-II/bin/pgpool -m fast stop
    2017-03-16 17:54:02: pid 2371: LOG:  stop request sent to pgpool. waiting for termination...
    .done.
    

    Check the other host for the VIP:

    [postgres@centos7_pgpool_m1 ~]$ ip a
    1: lo:  mtu 65536 qdisc noqueue state UNKNOWN qlen 1
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host 
           valid_lft forever preferred_lft forever
    2: enp0s3:  mtu 1500 qdisc pfifo_fast state UP qlen 1000
        link/ether 08:00:27:d6:95:ab brd ff:ff:ff:ff:ff:ff
        inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic enp0s3
           valid_lft 85067sec preferred_lft 85067sec
        inet6 fe80::a00:27ff:fed6:95ab/64 scope link 
           valid_lft forever preferred_lft forever
    3: enp0s8:  mtu 1500 qdisc pfifo_fast state UP qlen 1000
        link/ether 08:00:27:5c:b0:e5 brd ff:ff:ff:ff:ff:ff
        inet 192.168.22.36/24 brd 192.168.22.255 scope global enp0s8
           valid_lft forever preferred_lft forever
        inet 192.168.22.38/24 brd 192.168.22.255 scope global secondary enp0s8:0
           valid_lft forever preferred_lft forever
        inet6 fe80::a00:27ff:fe5c:b0e5/64 scope link tentative dadfailed 
           valid_lft forever preferred_lft forever
    

    Cool, now we have a VIP the application can connect to, and it switches between the pgpool hosts automatically in case the host it currently runs on experiences an issue or is shut down intentionally. There is a pcp command which shows you more details regarding the watchdog:

    [postgres@centos7_pgpool_m1 ~]$ /u01/app/postgres/product/pgpool-II/bin/pcp_watchdog_info 
    Password: 
    2 YES centos7_pgpool_m1:5432 Linux centos7_pgpool_m1 centos7_pgpool_m1
    
    centos7_pgpool_m1:5432 Linux centos7_pgpool_m1 centos7_pgpool_m1 5432 9000 4 MASTER
    centos7_pgpool_m2:5432 Linux centos7_pgpool_m2 centos7_pgpool_m2 5432 9000 7 STANDBY
    

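    pcp_watchdog_info prompts for the password of the pcp user defined in pcp.conf. If you do not want to type it every time, it can be stored in ~/.pcppass, which works like ~/.pgpass (a quick sketch, assuming the default pcp port 9898 and a pcp user named postgres with the password secret):

    [postgres@centos7_pgpool_m1 ~]$ echo "localhost:9898:postgres:secret" > ~/.pcppass
    [postgres@centos7_pgpool_m1 ~]$ chmod 600 ~/.pcppass
    [postgres@centos7_pgpool_m1 ~]$ /u01/app/postgres/product/pgpool-II/bin/pcp_watchdog_info -w
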
    As we now have a VIP, we should be able to connect to the PostgreSQL backends through it:

    [postgres@centos7_pgpool_m1 ~]$ psql -h 192.168.22.38
    psql (9.6.2 dbi services build)
    Type "help" for help.
    
    postgres=# \l
                                      List of databases
       Name    |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges   
    -----------+----------+----------+-------------+-------------+-----------------------
     postgres  | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | 
     template0 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
               |          |          |             |             | postgres=CTc/postgres
     template1 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
               |          |          |             |             | postgres=CTc/postgres
    (3 rows)
    

    Ok, that works as well. What do we see on the PostgreSQL instances? On the master:

    (postgres@[local]:5432) [postgres] > select datname,client_addr,client_hostname from pg_stat_activity where client_addr is not null;
     datname  |  client_addr  | client_hostname 
    ----------+---------------+-----------------
     postgres | 192.168.22.36 | NULL
    (1 row)
    

    We see one connection from the first pgpool node. What do we see on the standby?

    (postgres@[local]:5432) [postgres] > select datname,client_addr,client_hostname from pg_stat_activity where client_addr is not null;
     datname  |  client_addr  | client_hostname 
    ----------+---------------+-----------------
     postgres | 192.168.22.36 | NULL
    (1 row)
    

    One connection as well. Looks good.

    When you connect to the PostgreSQL instances through pgpool, there is also a SQL-like syntax for displaying pgpool's own status information:

    postgres=# show pool_nodes;
     node_id |   hostname    | port | status | lb_weight |  role   | select_cnt | load_balance_node | replication_delay 
    ---------+---------------+------+--------+-----------+---------+------------+-------------------+-------------------
     0       | 192.168.22.34 | 5432 | up     | 0.500000  | primary | 1          | true              | 0
     1       | 192.168.22.35 | 5432 | up     | 0.500000  | standby | 0          | false             | 0
    (2 rows)
    

    To summarize: We now have a pgpool instance running on two nodes. Only one of these nodes hosts the VIP, and the VIP switches to the other host in case there is an issue. Client connections from now on can go to the VIP, and pgpool will redirect each connection to one of the PostgreSQL nodes (depending on whether it is a write or a pure read operation).
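
    If you want to verify the read/write split yourself, a quick check (just a sketch, the table name is made up) is to ask the backend that actually executed a statement for its address: pure reads may land on either node, while writes always go to the primary (192.168.22.34 in this setup), and the select_cnt column of show pool_nodes increases on whichever node served the read.

    # a pure read - pgpool may send this to the primary or to the standby
    [postgres@centos7_pgpool_m1 ~]$ psql -h 192.168.22.38 -c "select inet_server_addr()"
    # a write - pgpool always routes this to the primary
    [postgres@centos7_pgpool_m1 ~]$ psql -h 192.168.22.38 -c "create table dummy_t ( a int )"
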

    In the next post we'll dig deeper into the pgpool configuration, how you can tell on which instance you actually landed, and how we can instruct pgpool to automatically promote a new master, disconnect the old master and rebuild it as a new standby that follows the new master.

     

    The article Vertically scale your PostgreSQL infrastructure with pgpool – 1 – Basic setup and watchdog configuration appeared first on Blog dbi services.

    Theo & Philo Sweetens Business Operations with NetSuite Cloud ERP

    Oracle Press Releases - Thu, 2017-03-16 09:54
    Press Release
    Theo & Philo Sweetens Business Operations with NetSuite Cloud ERP Social Enterprise Expands Internationally and Aids Filipino Nonprofit with NetSuite

    San Mateo, Calif. and Makati City, Philippines—Mar 16, 2017

    Oracle NetSuite Global Business Unit, one of the industry's leading providers of cloud financials/ERP and omnichannel commerce software suites and a wholly-owned subsidiary of Oracle, today announced that Theo & Philo, a bean-to-bar maker of single-origin Philippine artisanal chocolate, replaced QuickBooks and numerous Excel spreadsheets with NetSuite to gain scalability and flexibility to power its growth.

    Since going live on NetSuite in November 2015, Theo & Philo has been using NetSuite for accounting, inventory management, order management, invoicing and purchasing – all within one cloud ERP system. With NetSuite, Theo & Philo is able to better plan and manage sourcing, production and distribution and increase efficiency while saving both time and money compared to the error-prone manual work required by their previous software and paper-based processes.

    Founded in 2010, Theo & Philo (www.theoandphilo.com) grew steadily, offering more than a dozen varieties of premium, locally-sourced chocolate products. As production volume soared 700 percent to about 14,000 bars a month, the organization faced challenges keeping pace with its previous system, in which inventory management was handled manually and tracked on paper.

    “NetSuite is an integral part of our day-to-day operations and a scalable platform for our continued growth,” Theo and Philo Founder Philo Chua said. “We’re a lot more efficient and, as we grow, the automated process flows and checks and balances that we need are already in place within NetSuite.”

    Through its continued global growth and success, Theo & Philo, a social enterprise and grantee of the NetSuite Citizenship software donation program, has been able to help improve social welfare in the Philippines in partnership with Gawad Kalinga, a nonprofit organization committed to ending poverty for 5 million Filipino families by 2024.

    The features and benefits that Theo & Philo has been able to achieve with NetSuite include:

    • International growth. Theo & Philo has expanded sales to foreign distributors in recent months, using NetSuite’s multi-currency capabilities for transactions in the Euro for Germany and the U.S. dollar for Japan.
    • Real-time access. Compared to the limitations of its previous QuickBooks desktop application, Theo & Philo now enjoys anywhere, anytime cloud-based access to NetSuite with no need for on-premise software and servers.
    • Streamlined inventory and distribution. Real-time data has helped Theo & Philo control inventory and accelerate production and distribution with visibility that helps spotlight issues and areas for improvement.
    • Flexible customizations. Leveraging the NetSuite SuiteCloud Platform, Theo & Philo can adapt NetSuite to unique business needs, for instance, a customization by Tech for Good, a PGE Solutions sister company, enables the company to seamlessly manage consignment inventory and associated sales orders.
    • Future-proofed for growth. As a unified suite, NetSuite offers Theo & Philo functionality for CRM, ecommerce, product assembly, lot tracking and more as the social enterprise continues to grow.
    Contact Info
    Christine Allen
    Public Relations, Oracle NetSuite Global Business Unit
    603-743-4534
    PR@netsuite.com
    About Oracle NetSuite Global Business Unit

    Oracle NetSuite Global Business Unit, a wholly-owned subsidiary of Oracle, pioneered the Cloud Computing revolution in 1998, establishing the world’s first company dedicated to delivering business applications over the Internet. Today, Oracle NetSuite Global Business Unit provides a suite of cloud-based financials / Enterprise Resource Planning (ERP) and omnichannel commerce software that runs the business of companies in more than 100 countries. For more information, please visit www.netsuite.com.

    Follow Oracle NetSuite Global Business Unit’s Cloud blog, Facebook page and @NetSuite Twitter handle for real-time updates.

    About Oracle

    Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

    Trademarks

    Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

    Safe Harbor

    The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

    Talk to a Press Contact

    Christine Allen

    • 603-743-4534

    Apache JMeter and Cross-Site Request Forgery (CSRF) token management

    Yann Neuhaus - Thu, 2017-03-16 08:45

    Introduction

    In today's web applications, a common defensive mechanism against Cross-Site Request Forgery (CSRF) attacks is a synchronizer token. This token may be unique for each request, which prevents us from replaying a recorded JMeter test session off the shelf.

    This blog describes how this CSRF protection can be handled in JMeter.

    How to implement this feature

    The solution is to identify and extract the CSRF token from the response data or headers, depending on how it has been set.
    The site I was load testing with JMeter uses a cookie to set the CSRF token and adds an X-CSRFToken header to the subsequent HTTP requests.

    The HTTP Response header contains something like:

    Set-Cookie: csrftoken=sTrKh7qgnuKtuNTkbwlyCv45W2sqOaiY; expires=Sun, 21-Jan-2017 11:34:43 GMT; Max-Age=31449600; Path=/

    To extract the CSRF token value from the HTTP response header, add a Regular Expression Extractor post processor globally.
    This way, if the token value is reset by the server at some point, the variable will be updated dynamically from the following responses.

    Now configure it as follows:

    Apply to: Main sample only
    Field to check: Response Headers
    Reference Name: CSRF_TOKEN
    Regular Expression: Set-Cookie: csrftoken=(.+?);
    Template: $1$

    Get the Response Cookie via the Regular Expression Extractor

    [Screenshot: Regular Expression Extractor configuration (DynCSRF_Regular_Expression)]
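
    If you want to sanity check the expression outside of JMeter before wiring it into the test plan, you can run it against the sample header shown above. This is just an illustrative standalone sketch (the class name is made up); group 1 corresponds to the $1$ template:

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class CsrfRegexCheck {
        public static void main(String[] args) {
            // the sample Set-Cookie header from above
            String header = "Set-Cookie: csrftoken=sTrKh7qgnuKtuNTkbwlyCv45W2sqOaiY; "
                    + "expires=Sun, 21-Jan-2017 11:34:43 GMT; Max-Age=31449600; Path=/";
            // same expression as configured in the Regular Expression Extractor
            Matcher m = Pattern.compile("Set-Cookie: csrftoken=(.+?);").matcher(header);
            if (m.find()) {
                // prints sTrKh7qgnuKtuNTkbwlyCv45W2sqOaiY
                System.out.println(m.group(1));
            }
        }
    }
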

    It is always better to attach a user-defined variable to the extracted value so that it is kept during the complete load test run.
    Select User Defined Variables and add a new variable with the same name as the Reference Name declared above in the Regular Expression Extractor.

    [Screenshot: User Defined Variables (DynCSRF_variable)]

    The next step is to analyse each HTTP request recorded in the scenario and replace the hard-coded value of the X-CSRFToken header with the variable set by the post processor, as shown below:

    [Screenshot: HTTP Header Manager (DynCSRF_HTTP_Header)]

    Checking every request's HTTP Header Manager as displayed above can take some time and might introduce errors. To avoid this, a pre-processor can be used that inspects the headers
    and automatically replaces the hard-coded X-CSRFToken value with the variable set by the post processor. Because this kind of pre-processor runs before every request, it should be kept as simple as possible. I therefore decided not to check whether the X-CSRFToken header exists in the request, but simply to remove it and add it back with the current token value for all requests. This worked fine for the site I was working on.

    The pre-processor code used was the following:

    import org.apache.jmeter.protocol.http.control.Header;
    
    // drop the hard-coded X-CSRFToken header recorded with the scenario
    sampler.getHeaderManager().removeHeaderNamed("X-CSRFToken");
    // read the token extracted by the post processor; the name must match the
    // Reference Name of the Regular Expression Extractor (CSRF_TOKEN here)
    String newValue = vars.get("CSRF_TOKEN");
    // add the header back with the current token value
    sampler.getHeaderManager().add(new Header("X-CSRFToken", newValue));

    [Screenshot: BeanShell PreProcessor (DynCSRF_BeasnShell)]
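    Side note: on recent JMeter versions the same few lines can also be placed in a JSR223 PreProcessor using Groovy, which is generally recommended over BeanShell for performance; the sampler and vars bindings used above are available there as well.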

     

    The article Apache JMeter and Cross-Site Request Forgery (CSRF) token management appeared first on Blog dbi services.
