Feed aggregator

Partner Webcast – Single Sign-on to Applications in Oracle Identity Cloud Service

Cloud adoption promises the benefits of increased flexibility, agility, and significant cost savings, so migrating more and more applications, including business-critical applications, to the cloud is...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Oracle DBA to Cloud DBA Live Interactive Training: Day 1

Online Apps DBA - Tue, 2019-04-02 05:02

Searching for How to go from DBA to Oracle Cloud DBA in 6 weeks? And Do Not Know Where To Start From? Begin your learning with Our New Video below on Oracle DBA to Cloud DBA Live Interactive Training: Day 1 by Oracle ACE & Cloud Expert, Atul Kumar Start learning Now: https://k21academy.com/clouddba43 Searching for […]

The post Oracle DBA to Cloud DBA Live Interactive Training: Day 1 appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Indexing The Oracle Autonomous Data Warehouse (Autobahn)

Richard Foote - Tue, 2019-04-02 02:39
When the Autonomous Data Warehouse was initially released in March 2018, one of the notable restrictions was that it did not support typical Data Warehouse performance related structures such as Indexes, Partitioning or Materialized Views. The platform was deemed efficient enough to not require these features, which would unnecessarily complicate the autonomous environment. In September […]
Categories: DBA Blogs

The EDB filter log extension

Yann Neuhaus - Mon, 2019-04-01 13:20

This is another post dedicated to EnterpriseDB Postgres. Sometimes you may want to prevent specific messages from being logged to the server’s logfile or audit records. That might be specific error codes or, even more important, passwords you specify when you create users. EDB comes with a solution for that by providing an extension called EDB Filter Log. Let's see how you can install it and, even more important, how to use that extension.

The first thing I usually do when I want to check what extensions are available is to look at pg_available_extensions. I was quite surprised that this extension is not listed there:

edb=# select * from pg_available_extensions where name like '%filter%';
 name | default_version | installed_version | comment 
------+-----------------+-------------------+---------
(0 rows)

Anyway, you can load it by adjusting the shared_preload_libraries parameter:

edb=# show shared_preload_libraries;
             shared_preload_libraries              
---------------------------------------------------
 $libdir/dbms_pipe,$libdir/edb_gen,$libdir/dbms_aq
(1 row)
edb=# alter system set shared_preload_libraries='$libdir/dbms_pipe,$libdir/edb_gen,$libdir/dbms_aq,$libdir/edb_filter_log';
ALTER SYSTEM
edb=# \q
enterprisedb@edb1:/var/lib/edb/ [pg1] pg_ctl -D $PGDATA restart -m fast
enterprisedb@edb1:/var/lib/edb/ [pg1] psql edb
psql.bin (11.2.9)
Type "help" for help.


edb=# show shared_preload_libraries ;
                         shared_preload_libraries                         
--------------------------------------------------------------------------
 $libdir/dbms_pipe,$libdir/edb_gen,$libdir/dbms_aq,$libdir/edb_filter_log
(1 row)

But even then the extension does not show up in pg_available_extensions:

edb=# select * from pg_available_extensions where name like '%filter%';
 name | default_version | installed_version | comment 
------+-----------------+-------------------+---------
(0 rows)

Let's assume you do not want unique constraint violations to be logged in the server’s logfile. Usually you get this in the log file once a constraint is violated:

edb=# create table t1 ( a int );
CREATE TABLE
edb=# create unique index i1 on t1(a);
CREATE INDEX
edb=# insert into t1 values(1);
INSERT 0 1
edb=# insert into t1 values(1);
ERROR:  duplicate key value violates unique constraint "i1"
DETAIL:  Key (a)=(1) already exists.
edb=# select pg_current_logfile();
      pg_current_logfile       
-------------------------------
 log/edb-2019-03-24_162021.log
(1 row)
edb=# \! tail -20 $PGDATA/log/edb-2019-03-24_162021.log
...
2019-03-24 16:35:32 CET ERROR:  duplicate key value violates unique constraint "i1"
2019-03-24 16:35:32 CET DETAIL:  Key (a)=(1) already exists.
...

Using the extension you can do it like this (23505 is the SQLSTATE for unique constraint violations):

edb=# show edb_filter_log.errcode;
 edb_filter_log.errcode 
------------------------
 
(1 row)

edb=# alter system set edb_filter_log.errcode='23505';
ALTER SYSTEM
edb=# select context from pg_settings where name = 'edb_filter_log.errcode';
 context 
---------
 user
(1 row)
edb=# select pg_reload_conf();
 pg_reload_conf 
----------------
 t
(1 row)

edb=# show edb_filter_log.errcode;
 edb_filter_log.errcode 
------------------------
 23505
(1 row)

edb=# insert into t1 values(1);
ERROR:  duplicate key value violates unique constraint "i1"
DETAIL:  Key (a)=(1) already exists.
edb=# \! tail -20 $PGDATA/log/edb-2019-03-24_162021.log
...
2019-03-24 16:39:05 CET LOG:  received SIGHUP, reloading configuration files
2019-03-24 16:39:05 CET LOG:  parameter "edb_filter_log.errcode" changed to "23505"
edb=# 

This specific error is no longer reported in the logfile. Of course you can use multiple codes for edb_filter_log.errcode by separating them with a comma. The complete list of codes is documented here.
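For example, to also suppress not-null violations (SQLSTATE 23502), something along these lines should work (a sketch, not taken from the original session):

edb=# alter system set edb_filter_log.errcode='23505,23502';
edb=# select pg_reload_conf();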

This is for suppressing messages in the log file. What about passwords? Imagine you are logging all statements:

edb=# alter system set log_statement='all';
ALTER SYSTEM
edb=# select pg_reload_conf();
 pg_reload_conf 
----------------
 t
(1 row)

edb=# show log_statement;
 log_statement 
---------------
 all
(1 row)

In this configuration the following create user statement will be captured as well and you will find the password in the logfile:

edb=# create user u1 with login password 'password';
CREATE ROLE
edb=# select pg_current_logfile();
      pg_current_logfile       
-------------------------------
 log/edb-2019-03-24_162021.log
edb=# \! tail -20 $PGDATA/log/edb-2019-03-24_162021.log | grep password

2019-03-24 16:46:59 CET LOG:  statement: create user u1 with login password 'password';

This is something you usually do not want to see there, and exactly this is what “edb_filter_log.redact_password_commands” is for:

edb=# alter system set edb_filter_log.redact_password_commands = true;
ALTER SYSTEM
edb=# select pg_reload_conf();
 pg_reload_conf 
----------------
 t
(1 row)

edb=# show edb_filter_log.redact_password_commands;
 edb_filter_log.redact_password_commands 
-----------------------------------------
 on
(1 row)

When this is set to on, the plain text password should no longer be written to the log file when you create or alter users:

edb=# \! tail -20 $PGDATA/log/edb-2019-03-24_162021.log | grep secret
2019-03-24 16:51:19 CET STATEMENT:  create user u2 login with password 'secret';
2019-03-24 16:51:28 CET LOG:  statement: create user u2 with login password 'secret';

… and it is still there. Maybe a restart is required for the change to become active?

enterprisedb@edb1:/var/lib/edb/as11/data/ [pg1] pg_ctl -D $PGDATA restart -m fast
enterprisedb@edb1:/var/lib/edb/as11/data/ [pg1] psql -X edb
psql.bin (11.2.9)
Type "help" for help.

edb=# create user u3 with login password 'topsecret';
CREATE ROLE
edb=# select pg_current_logfile();
      pg_current_logfile       
-------------------------------
 log/edb-2019-03-24_165229.log
(1 row)

edb=# \! tail -20 $PGDATA/log/edb-2019-03-24_165229.log | grep topsecret
2019-03-24 16:54:22 CET LOG:  statement: create user u3 with login password 'topsecret';

And we still see it in the log file. Why is that? The issue is with the syntax. Consider this:

edb=# create user u4 with login password 'text';
CREATE ROLE
edb=# create user u5 login password 'text2';
CREATE ROLE
edb=# create user u6 password 'text3';
CREATE ROLE
edb=# 

Only the last command will replace the password in the log file:

2019-03-24 17:03:31 CET LOG:  statement: create user u4 with login password 'text';
2019-03-24 17:03:45 CET LOG:  statement: create user u5 login password 'text2';
2019-03-24 17:04:12 CET LOG:  statement: create user u6 password 'x';

You have to follow exactly this syntax:

{CREATE|ALTER} {USER|ROLE|GROUP} identifier { [WITH] [ENCRYPTED]
PASSWORD 'nonempty_string_literal' | IDENTIFIED BY {
'nonempty_string_literal' | bareword } } [ REPLACE {
'nonempty_string_literal' | bareword } ]

…otherwise it will not work.

The post The EDB filter log extension appeared first on Blog dbi services.

Oracle Analytics Cloud - Current Week SQL Filter

Tom Kyte - Mon, 2019-04-01 12:06
I'm trying to automate some reports in the Analytics Cloud by using SQL syntax within SQL filter. I was able to make a report refresh on a daily basis; but I'm stuck on getting it to work for the week. Is there a syntax that will recognize Monday to ...
Categories: DBA Blogs

Subtotal, Grand Total, ordering and breaking on different fields

Tom Kyte - Mon, 2019-04-01 12:06
Hi Tom I was looking through the already asked questions about Subtotals and Grand totals, but couldn't really find anything that suits my problem. I have a table with the following fields: DATE REGION REG_NUM AMOUNT 0...
Categories: DBA Blogs

Why oracle stored procedure execution time is greatly increased depending on how it is executed?

Tom Kyte - Mon, 2019-04-01 12:06
I have three different procedures written in PL/SQL (Oracle database). They do various calculation and analysis, end product of all three being “INSERT INTO TABLE_A” (so all functions are inserting into same table). It’s a pretty straightforward PL/...
Categories: DBA Blogs

DBMS_SCHEDULER JOB (Need to make repeat_interval parameterized)

Tom Kyte - Mon, 2019-04-01 12:06
How can i dynamically retrieve the value for repeat_interval in the following job:- BEGIN DBMS_SCHEDULER.create_job ( job_name => 'create_Subpartition_Ongoing_LOCDB_M', job_type => 'PLSQL_BLOCK', job_action => ...
Categories: DBA Blogs

Easy Listening with Datapump in the SQLDeveloper DBA Module

The Anti-Kyte - Mon, 2019-04-01 12:05

There are a number of ways to transfer data between Oracle Databases, one of which is to use the PL/SQL Datapump API – DBMS_DATAPUMP.
If you wish to avail yourself of this utility but find the syntax a bit fiddly, you always have the option of getting SQLDeveloper to do (most of) it for you.
What we’re talking about here is how to persuade the SQLDeveloper DBA module to :

  • Create and execute a custom Datapump export job
  • Do most of the work of creating an import of a subset of the exported data

The Application

I’ve got a simple application which consists of a single table and some data :

select title, artist, original_artist, is_mellow
from mike.tracks
/

TITLE                          ARTIST                ORIGINAL_ARTIST                                    IS_MELLOW
------------------------------ --------------------- -------------------------------------------------- ---------
MOTORHEAD                      COURDUROY             HAWKWIND                                           Y        
MOTORHEAD                      MOTORHEAD             HAWKWIND                                           N        
THUNDERSTRUCK                  STEVE N SEAGULLS      AC/DC                                              Y        
WHISKY IN THE JAR              METALLICA             THIN LIZZY                                         N        
HIGHWAY TO HELL                CARLA BRUNI           AC/DC                                              Y        
SHIPPING UP TO BOSTON          GODDESSES OF BAGPIPES DROPKICK MURPHYS                                   Y        
ANOTHER GIRL ANOTHER PLANET    BLINK 182             THE ONLY ONES                                      N        
SMOKE ON THE WATER             PAT BOONE             DEEP PURPLE                                        Y        
EMERGENCY                      MOTORHEAD             GIRL SCHOOL                                        N        
SMELLS LIKE TEEN SPIRIT        2 CELLOS              NIRVANA                                            Y        
WHOLE LOTTA LOVE               CHICO ARNEZ           LED ZEPPLIN                                        Y        
KING OF TWILIGHT               IRON MAIDEN           NEKTAR                                             N        
PARANOID                       GUNTER NORIS          BLACK SABBATH                                      Y        
C'MON FEEL THE NOISE           OASIS                 SLADE                                              N        

14 rows selected. 

The Datapump Export

Note that I’ve deliberately kept things simple in this example. If you were looking at ways of transferring this kind of data-volume then Datapump is probably overkill.

Anyhow, I want to export this table and all of its data. To do this in SQLDeveloper, connected as a DBA user, I can go to the View menu and select DBA.
In the DBA tree, I need to right-click the Data Pump node and select the Data Pump Export Wizard.

This brings up the first screen of the Export Wizard. Slightly confusingly, it’s labelled as Step 2.
The table I want to export is in a different Schema to the one I’m connected as, so I select Schemas from the radio group :

After clicking the Next button we get the chance to choose one or more schemas to include in the export. In my case, I just want the one schema :

The next screen affords us the opportunity to include or exclude types of object. The Filter Types are the same as those you’d see in the export log file :

On this occasion however, we’re not going to use these filters so the screen for this step looks like this :

The next screen allows us to select which specific tables we may want to include in the export.

If I enter a search string in the Name field and click the Lookup button, I will eventually get a list of matching tables :

I can then highlight the required table(s) and add them to the list for export by clicking the down arrow…

In this case I want to export all of the data in all of the columns in the table so I have not specified a column list or a predicate.

Next, I can specify the name and location of a logfile for the export.
I’ve left the other options on this screen at their default setting…

I now need to specify the name and location of the dump file. Once again, all other settings are left at the default :

I’d like to run the job immediately so I don’t need to change any of the scheduling information :

Finally, I can see a summary of the job :

Interestingly, if I click on the PL/SQL tab, I can see the code that SQLDeveloper has generated for this export :

That’s quite interesting. Maybe this feature will come in handy in a bit.
For now though, I want to run the export immediately, so I just click the Finish button and SQLDeveloper starts the job whilst providing me a nice interactive view of it :

If I go to the Data Pump/Export Jobs node in the DBA tree, I can see a record of the job.
It will also display the contents of the log file :

If we now check the directory on the OS – the path for which can be found with…

select directory_path
from dba_directories
where directory_name = 'DATA_PUMP_DIR'
/

DIRECTORY_PATH                                              
------------------------------------------------------------
/opt/oracle/admin/XE/dpdump/7DE2A3259D9C3747E0530F84F25CE87C

… we can see that the export file has been created, with a timestamp appended in the filename :

cd /opt/oracle/admin/XE/dpdump/7DE2A3259D9C3747E0530F84F25CE87C

ls -l
-rw-r-----. 1 oracle oinstall 331776 Mar 28 12:30 mike_tracks-12_30_11.dmp

We need to make a note of the file name because we’ll need it when we come to do…

The Datapump Import

I want to import this table into the HR schema. However, I don’t want all of the data, just the mellow tracks because I really need something to chill out to. Oh, and drown out the sound of the country going down the toilet.

Fortunately, SQLDeveloper can get us most of the way.

Once again, we need to right-click the Data Pump node of the DBA tree and select Data Pump Import Wizard.
We need to start by telling the Wizard the type of Import we want to do ( Tables) and the name of the dump file ( the export file we’ve just created) :

Next, we select the table using the shuttle buttons :

We want to import into the HR schema rather than the original schema :

I only want to create the table if it does not already exist :

Once again, I don’t want to schedule the job :

Finally, we have the summary :

Unlike the export, you’ll notice that the import Wizard has not given us the option to restrict the actual table rows we will be importing.
However, we do still get the code that has been generated in the PL/SQL tab of the summary window :

This time, rather than executing the job, I’m going to copy the code into an editor and add a manual “tweak” – namely :

    --
    -- Make sure we only get the mellow tracks because I really need to chill out...
    --
    dbms_datapump.data_filter(handle => h1, name => 'SUBQUERY', value => q'[where is_mellow = 'Y']', table_name => 'TRACKS', schema_name => 'MIKE');

The full script now looks like this :

set scan off
set serveroutput on
set escape off
whenever sqlerror exit 
DECLARE
    s varchar2(1000); 
    h1 number;
    errorvarchar varchar2(100):= 'ERROR';
    tryGetStatus number := 0;
begin
    h1 := dbms_datapump.open (operation => 'IMPORT', job_mode => 'TABLE', job_name => 'mike_playlist', version => 'COMPATIBLE'); 
    tryGetStatus := 1;
    dbms_datapump.set_parallel(handle => h1, degree => 1); 
    dbms_datapump.add_file(handle => h1, filename => 'mike_tracks_imp.log', directory => 'DATA_PUMP_DIR', filetype=>DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE); 
    dbms_datapump.set_parameter(handle => h1, name => 'KEEP_MASTER', value => 1); 
    dbms_datapump.metadata_filter(handle => h1, name => 'SCHEMA_EXPR', value => 'IN(''MIKE'')'); 
    dbms_datapump.metadata_filter(handle => h1, name => 'NAME_EXPR', value => 'IN(''TRACKS'')'); 
    dbms_datapump.add_file(handle => h1, filename => 'mike_tracks-12_30_11.dmp', directory => 'DATA_PUMP_DIR', filetype => 1); 
    dbms_datapump.metadata_remap(handle => h1, name => 'REMAP_SCHEMA', old_value => 'MIKE', value => UPPER('HR') ); 
    
    --
    -- Make sure we only get the mellow tracks because I really need to chill out...
    --
    dbms_datapump.data_filter(handle => h1, name => 'SUBQUERY', value => q'[where is_mellow = 'Y']', table_name => 'TRACKS', schema_name => 'MIKE');
    dbms_datapump.set_parameter(handle => h1, name => 'INCLUDE_METADATA', value => 1); 
    dbms_datapump.set_parameter(handle => h1, name => 'DATA_ACCESS_METHOD', value => 'AUTOMATIC'); 
    dbms_datapump.set_parameter(handle => h1, name => 'TABLE_EXISTS_ACTION', value => 'SKIP'); 
    dbms_datapump.set_parameter(handle => h1, name => 'SKIP_UNUSABLE_INDEXES', value => 0);
    dbms_datapump.start_job(handle => h1, skip_current => 0, abort_step => 0); 
    dbms_datapump.detach(handle => h1); 
    errorvarchar := 'NO_ERROR'; 
EXCEPTION
    WHEN OTHERS THEN
    BEGIN 
        IF ((errorvarchar = 'ERROR')AND(tryGetStatus=1)) THEN 
            DBMS_DATAPUMP.DETACH(h1);
        END IF;
    EXCEPTION 
    WHEN OTHERS THEN 
        NULL;
    END;
    RAISE;
END;
/

I can now run this code in a SQLDeveloper Worksheet…

Whilst the feedback is not quite as good in this environment, you can still find the job itself in the DBA module, where you can see all of the details…

If I now check the new HR.TRACKS table :
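A query along these lines (a sketch of the kind of check I ran rather than the original screenshot) should return only the eight mellow tracks :

select count(*)
from hr.tracks
where is_mellow = 'Y'
/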

Appropriately enough given current events, eight tracks is the precise number needed to form a Desert Island Discs playlist.

New NetSuite Partner Initiative Meets Global Demand for Cloud ERP

Oracle Press Releases - Mon, 2019-04-01 11:00
Press Release
New NetSuite Partner Initiative Meets Global Demand for Cloud ERP NetSuite Increases Investment in Partner Ecosystem to Help New and Existing Partners Maximize Customer Success

SUITEWORLD 2019, LAS VEGAS, Nev.—Apr 1, 2019

To meet growing customer demand for cloud ERP systems that has seen strong growth for Oracle NetSuite in the last year, NetSuite today announced SuiteLife, a new partner engagement initiative. SuiteLife represents the largest investment NetSuite has ever made in its partner ecosystem and encompasses a number of the NetSuite Partner Programs, which include more than 550 partners in 85 countries. With SuiteLife, new partners will be able to start selling and delivering NetSuite to customers in only 90 days, while existing partners will benefit from innovative new offerings to expand their NetSuite practice and amplify their growth.

“The rate at which customers are moving ERP to the cloud is at an all-time high and our partner ecosystem plays a critical role in helping us meet this demand,” said Craig West, vice president, Alliances and Channels, Oracle NetSuite. “By making it easier than ever for partners to join, succeed and grow in the NetSuite ecosystem, we are putting the power of the NetSuite platform into the hands of more organizations across the world. With NetSuite, organizations gain the visibility and control needed to navigate complexities and accelerate growth while NetSuite partners capitalize on the generational transition to the cloud.”

SuiteLife delivers a comprehensive set of resources, certified training and tools that enable NetSuite partners to develop expertise around specific business functions, product areas and industries. With SuiteLife, NetSuite partners can quickly and easily gain the knowledge and expertise required to help customers succeed, differentiate their practices and expand their business. SuiteLife includes:

  • SuiteSuccess Alignment: Enables partners to leverage SuiteSuccess, a pre-configured industry cloud solution that is built on industry leading practices, within their practice and existing methodologies to deliver even more value to customers.
  • Accelerated New Product Introduction: A structured program that helps partners expedite the roll out and delivery of new products.
  • Partner Support Programs: New services that help partners save time and lower risk on customer engagements by providing anytime access to NetSuite’s library of training and network of industry experts. 
  • Simplified and Holistic Program Structure: One subscription for all the enablement, tools, assets and support a partner will need to grow without having to compromise.

“We have worked closely with the NetSuite team for many years to help organizations across industries drive growth,” said Mark Wenig, Principal, Eide Bailly LLP. “With this new initiative, NetSuite is making it easier than ever for us to get the resources we need to ensure our customers are unlocking the true value of the suite.”

“The NetSuite team has always been a true partner and the work we have done together has helped ensure our customers are always getting the very best value,” said Steve Ems, Principal National Business Applications Leader, RSM US LLP. “With the launch of SuiteLife, NetSuite has raised the bar again by giving us even more help and support so we can always deliver an exceptional experience to our customers.”

The NetSuite Partner Program grew exponentially as a result of significant expansion in Europe, Asia Pacific and Latin America. The program includes global and regional systems integrators, finance and business advisory firms, and technology consulting providers including Deloitte Digital, Capgemini, RSM and Eide Bailly. Partners provide a combination of services to NetSuite customers, ranging from program management, integration and implementation support to regulatory compliance, tax and audit advice, and system selection guidance in areas such as ERP, HCM, CRM and omnichannel commerce. These services are delivered on the core NetSuite platform to meet increasing demand for integrated cloud applications that drive business agility and deliver exceptional time to value.

Contact Info
Danielle Tarp
Oracle NetSuite
+1 650 506 2905
danielle.tarp@oracle.com
About Oracle NetSuite

For more than 20 years, Oracle NetSuite has helped organizations grow, scale and adapt to change. NetSuite provides a suite of cloud-based applications, which includes financials / Enterprise Resource Planning (ERP), HR, professional services automation and omnichannel commerce, used by more than 16,000 customers in 203 countries and dependent territories.

For more information, please visit http://www.netsuite.com.

Follow NetSuite’s Cloud blog, Facebook page and @NetSuite Twitter handle for real-time updates.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly-Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Danielle Tarp

  • +1 650 506 2905

Publishing Machine Learning API with Python Flask

Andrejus Baranovski - Mon, 2019-04-01 02:07
Flask is fun and easy to set up, as it says on the Flask website. And that's true. This microframework for Python offers a powerful way of annotating a Python function with a REST endpoint. I’m using Flask to publish an ML model API that is accessible to 3rd-party business applications.

This example is based on XGBoost.

For better code maintenance, I would recommend using a separate Jupyter notebook where the ML model API will be published. Import the Flask module along with Flask CORS:
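A minimal sketch of these imports could look like this (assuming the flask and flask_cors packages are installed):

from flask import Flask, request, jsonify
from flask_cors import CORS

app = Flask(__name__)
CORS(app)  # enable cross-origin requests so the API can be called from another host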


The model is trained on the Pima Indians Diabetes Database. The CSV data can be downloaded from here. To construct the Pandas data frame variable used as input for the model predict function, we need to define an array of dataset columns:
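A sketch of the column list, assuming the standard Pima Indians Diabetes column names (the label column Outcome is excluded because it is not part of the prediction payload):

headers = ['Pregnancies', 'Glucose', 'BloodPressure', 'SkinThickness',
           'Insulin', 'BMI', 'DiabetesPedigreeFunction', 'Age']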


The previously trained and saved model is loaded using Pickle:
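Loading the saved model could look like this (the file name is an assumption for illustration):

import pickle

# previously trained XGBoost classifier, serialized with pickle
with open('diabetes_xgb_model.pkl', 'rb') as f:
    model = pickle.load(f)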


It is always good practice to do a test run and check that the model performs well. Construct a data frame with an array of column names and an array of data (using new data, not present in the train or test datasets). We call two functions: model.predict and model.predict_proba. I often prefer model.predict_proba, as it returns a probability that describes how likely the outcome is to be 0 or 1, which helps to interpret the result based on a certain range (0.25 to 0.75, for example). A Pandas data frame is constructed with the sample payload and then the model prediction is executed:
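A sketch of such a test run, with a made-up sample row that is not part of the training or test data:

import pandas as pd

sample = pd.DataFrame([[1, 120, 70, 20, 80, 28.5, 0.4, 33]], columns=headers)
print(model.predict(sample))        # hard 0/1 classification
print(model.predict_proba(sample))  # class probabilities, easier to interpret against a range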


Now the Flask API. Make sure you enable CORS, otherwise the API call will not work from another host. Write the annotation before the function you want to expose through the REST API and provide an endpoint name and the supported REST methods (POST in this example). The payload data is retrieved from the request, a Pandas data frame is constructed and the model predict_proba function is executed:


The response JSON string is constructed and returned as the function result. I’m running Flask in a Docker container, which is why 0.0.0.0 is used as the host it runs on. Port 5000 is mapped as an external port, which allows calls from the outside.
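Putting the endpoint together, a minimal sketch could look like this (the endpoint name /api/predict is an assumption):

@app.route('/api/predict', methods=['POST'])
def predict():
    payload = request.get_json()                      # JSON payload sent by the caller
    df = pd.DataFrame([payload], columns=headers)     # one-row data frame in column order
    proba = model.predict_proba(df)[0]                # probabilities for class 0 and 1
    return jsonify({'prediction_proba': proba.tolist()})

if __name__ == '__main__':
    # 0.0.0.0 so the container is reachable from outside; port 5000 is mapped externally
    app.run(host='0.0.0.0', port=5000)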

While it works to start the Flask interface directly in a Jupyter notebook, I would recommend converting it to a Python script and running it from the command line as a service. Use the Jupyter nbconvert command to convert the notebook to a Python script:

jupyter nbconvert --to python diabetes_redsamurai_endpoint_db.ipynb

The Python script with the Flask endpoint can be started as a background process with the PM2 process manager. This allows running the endpoint as a service and starting other processes on different ports. PM2 start command:

pm2 start diabetes_redsamurai_endpoint_db.py


pm2 monit helps to display info about running processes:


ML model classification REST API call from Postman through endpoint served by Flask:


More info:

- GitHub repo with source code
- Previous post about XGBoost model training

Step by Step Troubleshooting and Validation of AWS CloudFormation Yaml template

Pakistan's First Oracle Blog - Mon, 2019-04-01 02:00
CloudFormation (CFN) is the infrastructure-as-code service of AWS. You just tell CFN the desired state of your resources and it creates them in order while resolving dependencies. You describe these resources in a file called a template, which can be written in YAML or JSON. YAML, being a bit more readable, is widely used now. YAML is great, but you have to be aware of its indentation.


Following is a step-by-step troubleshooting and validation of a simple CFN template using the AWS CLI from a Windows 10 command prompt.

Step 1: Create your template file. I am using ec2.yaml which just creates an EC2 instance and a security group with some parameters and outputs.


#####################################################################################################
#  Template : Use to deploy simple T2.MICRO EC2 Instance with a security group
#
#  Author   : Fahd Mirza
#
#  Created on :  23rd March, 2019
#
#####################################################################################################
---
AWSTemplateFormatVersion: "2010-09-09"
Description: Deploy EC2 Instance with a security group
######################################################################################################
Parameters:
######################################################################################################
  InstanceType:
    Type: String
    Default: t2.micro
    AllowedValues:
      - t2.micro
      - t2.2xlarge
  MyRSAKey:
    Type: String
    Description: Supply your RSA Key Name
Mappings:
  RegionMap:
    us-east-1:
      AMI: ami-1853ac65
    us-west-1:
      AMI: ami-bf5540df
    eu-west-1:
      AMI: ami-3bfab942
    ap-southeast-1:
      AMI: ami-e2adf99e
    ap-southeast-2:
      AMI: ami-43874721   
######################################################################################################
Resources:
######################################################################################################
  MySecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow SSH to EC2 Bastion Host
      SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: '22'
        ToPort: '22'
        CidrIp: 0.0.0.0/0
      Tags:
      - Key: Name
        Value: MyBastionSG112
  MyEc2Instance:
    Type: AWS::EC2::Instance
    Metadata:
      AWS::CloudFormation::Init:
        config:
          files:
            /etc/cfn/cfn-hup.conf:
              content: !Sub |
                    [main]
                    stack=${AWS::StackId}
                    region=${AWS::Region}
                    interval=1
              mode: '000744'
              owner: root
              group: root
            /etc/cfn/hooks.d/cfn-auto-reloader.conf:
              content: !Sub |
                     [cfn-auto-reloader-hook]
                     triggers=post.update
                     path=Resources.MyEc2Instance.Metadata.AWS::CloudFormation::Init
                     action=/opt/aws/bin/cfn-init -v --stack ${AWS::StackId} --region ${AWS::Region} --resource MyEc2Instance
                     runas=root
            ~/hello1.txt:
               content: !Sub |
                     hello world, I am from files section of CFN Init metadata.         
          commands:
            RunA:
              command: echo "hello world, I am from command section of CFN Init metadata." > ~/hello.txt
              ignoreErrors: "true"
          services:
              sysvinit:
                  cfn-hup:
                    enabled: true
                    ensureRunning: true
                    files: [/etc/cfn/cfn-hup.conf, /etc/cfn/hooks.d/cfn-auto-reloader.conf]
    Properties:
      Tags:
      - Key: Name
        Value: !Ref "AWS::StackName"
      ImageId:
        Fn::FindInMap:
        - RegionMap
        - !Ref AWS::Region
        - AMI       
      InstanceType:
        Ref: InstanceType
      KeyName:
        Ref: MyRSAKey
      UserData:
        Fn::Base64: !Sub |
              #!/bin/bash -xe
              yum install -y aws-cfn-bootstrap
              chkconfig --add cfn-hup
              /opt/aws/bin/cfn-init -v --stack ${AWS::StackId} --resource MyEc2Instance --region ${AWS::Region}
              /opt/aws/bin/cfn-signal -e 0 --stack ${AWS::StackName} --resource MyEc2Instance --region ${AWS::Region}
    CreationPolicy:
      ResourceSignal:
        Timeout: "PT15M"
######################################################################################################
Outputs:
######################################################################################################
  MyEC2InstancePublicIP:
    Description: My EC2 Instance Public IP
    Value: !GetAtt MyEc2Instance.PublicIp
  MyEC2InstanceID:
    Description: My EC2 Instance ID
    Value: !Ref MyEc2Instance
  MyEC2SecGroup:
    Description: My EC2 Security Group
    Value: !Ref MySecurityGroup

Step 2: When I first ran the validation on this template, it gave me an error about its indentation, like the following. I fixed the indentation by using a simple editor. You can also use any online editor or Visual Studio Code.

C:\Users\mirza\Desktop\Cloud\AWS\Cloud\CloudFormation\templates>dir
 Volume in drive C is Windows
 Volume Serial Number is 5285-4635

 Directory of C:\Users\mirza\Desktop\Cloud\AWS\Cloud\CloudFormation\templates

23/03/2019  12:54 PM              .
23/03/2019  12:54 PM              ..
23/03/2019  12:49 PM             4,360 ec2.yaml
23/03/2019  12:54 PM             2,461 my.yaml
               2 File(s)          6,821 bytes
               2 Dir(s)  807,032,090,624 bytes free

C:\Users\mirza\Desktop\Cloud\AWS\Cloud\CloudFormation\templates>aws cloudformation validate-template --template-body file://my.yaml

An error occurred (ValidationError) when calling the ValidateTemplate operation: [/Parameters/Mappings] 'null' values are not allowed in templates

C:\Users\mirza\Desktop\Cloud\AWS\Cloud\CloudFormation\templates>
C:\Users\mirza\Desktop\Cloud\AWS\Cloud\CloudFormation\templates>
C:\Users\mirza\Desktop\Cloud\AWS\Cloud\CloudFormation\templates>aws cloudformation validate-template --template-body file://ec2.yaml

An error occurred (ValidationError) when calling the ValidateTemplate operation: Invalid template parameter property 'RegionMap'

C:\Users\mirza\Desktop\Cloud\AWS\Cloud\CloudFormation\templates>aws cloudformation validate-template --template-body file://ec2.yaml

Step 3: Next, I had referenced a parameter (MyVPCId) in the Resources section that was not declared in the Parameters section, so I removed that reference from the template.

An error occurred (ValidationError) when calling the ValidateTemplate operation: Template format error: Unresolved resource dependencies [MyVPCId] in the Resources block of the template

Step 4: Now it works like a charm.

C:\Users\mirza\Desktop\Cloud\AWS\Cloud\CloudFormation\templates>
C:\Users\mirza\Desktop\Cloud\AWS\Cloud\CloudFormation\templates>
C:\Users\mirza\Desktop\Cloud\AWS\Cloud\CloudFormation\templates>
C:\Users\mirza\Desktop\Cloud\AWS\Cloud\CloudFormation\templates>aws cloudformation validate-template --template-body file://ec2.yaml
Deploy EC2 Instance with a security group
PARAMETERS              Supply your RSA Key Name        False   MyRSAKey
PARAMETERS      t2.micro                False   InstanceType

C:\Users\mirza\Desktop\Cloud\AWS\Cloud\CloudFormation\templates>

Categories: DBA Blogs

Utilities Testing Accelerator - A new way to test quickly lowering your cost and risk

Anthony Shorten - Sun, 2019-03-31 21:11

One of the most common pieces of feedback I got from attending the Oracle Utilities User Group and the Oracle Utilities Customer Edge Conference in Austin recently was that the approach that Oracle Utilities Testing Accelerator is taking is different and logical.

Traditionally, test automation is really coding in the language provided by the tool. The process is to record a screen, with the data for the test, and have that become a script in whatever language the tool supports. Using the script for other tests means either recording it again or getting a programmer, fluent in the scripting language, to modify the script. The issue arises when the user interface changes for any reason: this will most likely invalidate the script, so the whole process is repeated. You end up spending more time building scripts than actually testing.

Oracle Utilities Testing Accelerator takes a different and more cost effective approach:

  • Component Based Testing. Oracle Utilities Testing Accelerator uses the tried and tested Oracle Testing approach by exposing test assets as reusable components in a pre-built library. These components do not use the online channel as the API but use the same underlying API used by the online and other channels into the product. This isolates the test from changes to any of the channels, as it is purely focused on functionality rather than on user experience testing.
  • More Than Online Testing. Oracle Utilities Testing Accelerator can test all channels (online, web service, mobile and batch). This allows maximum flexibility in testing across diverse business processes.
  • Orchestrate Not Build. Oracle Utilities Testing Accelerator allows for your business processes to be orchestrated as a sequence of components which can be within a single supported Oracle Utilities product, across supported Oracle Utilities products, within the same channel or across multiple channels. The orchestration reduces the need for relying on traditional recording and maximizes flexibility. 
  • No Touch Scripting. Oracle Utilities Testing Accelerator generates Selenium based code that requires no adjustment to operate which can be executed from an Eclipse Plugin or directly from the tool (the latter is available in 6.0.0.1.0).
  • Data is Independent of Process. During the orchestration data is not required to build out the business process. Data can be added at any time during the process including after the business process is completed and/or at runtime. This allows business processes to be reused with multiple test data sets. Test data can be entered using the Workbench directly, via an external file and in the latest release (6.0.0.1.0) it can be populated via test data extractors.
  • Content from Product QA. The Oracle Utilities Testing Accelerator comes with pre-built component libraries, provided by Product QA, for over 30 version/product combinations for Oracle Utilities products. The license, which is user based, gives you access to any of the libraries appropriate for your site, regardless of the number of non-production environments or number of Oracle Utilities product it is used against.

These differences reduce your costs and risk when adopting automated testing. For more information about Oracle Utilities Testing Accelerator refer to Oracle Utilities Testing Accelerator (Doc Id: 2014163.1) available from My Oracle Support.

Auditing with EDB Postgres Enterprise

Yann Neuhaus - Sun, 2019-03-31 14:07

It might be that there is a requirement to audit operations in the database. Maybe because of legal requirements, maybe because of security requirements or whatever. I’ve already written a post in the past describing what you can do in community PostgreSQL; this post is specific to EDB Postgres. The auditing features come by default in EDB Postgres and you do not need to install any extension such as pgaudit.

I am using EDB Postgres Enterprise version 11.2 for this post but it should work the same in previous versions:

enterprisedb@edb1:/var/lib/edb/ [pg1] psql -X postgres
psql.bin (11.2.9)
Type "help" for help.

postgres=# select version();
                                                                    version                                                                    
-----------------------------------------------------------------------------------------------------------------------------------------------
 PostgreSQL 11.2 (EnterpriseDB Advanced Server 11.2.9) on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-36), 64-bit
(1 row)

The parameter which controls if auditing is enabled or not is “edb_audit”:

postgres=# show edb_audit;
 edb_audit 
-----------
 
(1 row)

postgres=# 

When it is not set (the default), auditing is disabled. You have two options to enable it:

  • csv: Enables auditing and writes the audit records to a CSV file
  • xml: Enables auditing and writes the audit records to an XML file

Before enabling auditing you should think about where you want to store the audit files. It should be a location only the operating system user which runs EDB Postgres has access to. You might think of $PGDATA, but do you really want to have all the audit files included in every base backup you will be doing in the future? A better location is outside $PGDATA so you can keep the audit files separate. Let's go with “/var/lib/edb/audit” for the scope of this post:

postgres=# \! mkdir /var/lib/edb/audit
postgres=# \! chmod 700 /var/lib/edb/audit
postgres=# alter system set edb_audit_directory = '/var/lib/edb/audit';
ALTER SYSTEM
postgres=# alter system set edb_audit='csv';
ALTER SYSTEM
postgres=# select name,context from pg_settings where name in ('edb_audit_directory','edb_audit');
        name         | context 
---------------------+---------
 edb_audit           | sighup
 edb_audit_directory | sighup
(2 rows)

Both parameter changes can be made active by reloading the server, a restart is not required:

postgres=# select pg_reload_conf();
 pg_reload_conf 
----------------
 t
(1 row)

The default file name that will be used for the audit logs is:

postgres=# show edb_audit_filename;
 edb_audit_filename  
---------------------
 audit-%Y%m%d_%H%M%S
(1 row)

Let's keep that as it is, which is sufficient for the scope of this post. Now you need to think about what you want to audit. There are several options available:

  • edb_audit_connect: Logs all connections to the instance, either successful, failed or all
  • edb_audit_disconnect: The opposite of edb_audit_connect, logs all disconnections
  • edb_audit_statement: Here you have several options to log SQL statements such as insert, truncate and so on; more on that later
  • edb_audit_tag: When set, adds a string value to all audit log files

We start with logging connections and disconnections. When we set edb_audit_connect to all, we should see all connections to a database, no matter if successful or failed:

postgres=# alter system set edb_audit_connect = 'all';
ALTER SYSTEM
postgres=# select context from pg_settings where name = 'edb_audit_connect';
 context 
---------
 sighup
(1 row)
postgres=# select pg_reload_conf();
 pg_reload_conf 
----------------
 t
(1 row)

From now on we should have the audit information in the log file for every successful connection and every connection attempt that failed. Is it true?

postgres=# \! psql edb
psql.bin (11.2.9)
Type "help" for help.

[local]:5444 enterprisedb@edb=# \q
postgres=# \! psql -U dummy edb
psql.bin: FATAL:  role "dummy" does not exist
postgres=# 

That should have produced two lines in the latest audit file:

enterprisedb@edb1:/var/lib/edb/audit/ [pg1] pwd
/var/lib/edb/audit
enterprisedb@edb1:/var/lib/edb/audit/ [pg1] ls -latr
total 8
drwx------. 5 enterprisedb enterprisedb 183 Mar 24 14:24 ..
-rw-------. 1 enterprisedb enterprisedb 611 Mar 24 14:38 audit-20190324_143640.csv
drwx------. 2 enterprisedb enterprisedb  72 Mar 24 14:38 .
-rw-------. 1 enterprisedb enterprisedb 412 Mar 24 14:41 audit-20190324_143805.csv
enterprisedb@edb1:/var/lib/edb/audit/ [pg1] cat audit-20190324_143805.csv
2019-03-24 14:40:54.683 CET,"enterprisedb","edb",1534,"[local]",5c9788e6.5fe,1,"authentication",2019-03-24 14:40:54 CET,5/133,0,AUDIT,00000,"connection authorized: user=enterprisedb database=edb",,,,,,,,,"","",""
2019-03-24 14:41:16.617 CET,"dummy","edb",1563,"[local]",5c9788fc.61b,1,"authentication",2019-03-24 14:41:16 CET,5/136,0,AUDIT,00000,"connection authorized: user=dummy database=edb",,,,,,,,,"","",""

As expected, we can see the successful connection request and in addition the one that failed. When we want to log disconnections as well, we can do so:

postgres=# alter system set edb_audit_disconnect = 'all';
ALTER SYSTEM
postgres=# select pg_reload_conf();
 pg_reload_conf 
----------------
 t
(1 row)
postgres=# \! psql edb
psql.bin (11.2.9)
Type "help" for help.

[local]:5444 enterprisedb@edb=# \q
postgres=# 

In the same audit file as before:

2019-03-24 14:47:42.447 CET,"enterprisedb","edb",1929,"[local]",5c978a7a.789,2,"idle",2019-03-24 14:47:38 CET,,0,AUDIT,00000,"disconnection: session time: 0:00:03.708 user=enterprisedb database=edb host=[local]",,,,,,,,,"psql.bin","",""

The duration of the session is logged as well. So far for the basic auditing features. Logging connections and disconnections is a good start but probably not enough. You might soon come to a point where you want more information, such as what exactly the user was doing in the database. This is where “edb_audit_statement” comes into play. You can set it to something simple like “all inserts” or “all updates”:

postgres=# alter system set edb_audit_statement = 'insert';
ALTER SYSTEM
postgres=# select pg_reload_conf();
 pg_reload_conf 
----------------
 t
(1 row)

postgres=# create table t1 ( a int );
CREATE TABLE
postgres=# insert into t1 values(1);
INSERT 0 1

Looking at the audit file:

2019-03-24 14:55:36.744 CET,,,9004,,5c977540.232c,3,,2019-03-24 13:17:04 CET,,0,LOG,00000,"received SIGHUP, reloading configuration files",,,,,,,,,"","",""
2019-03-24 14:55:53.460 CET,"enterprisedb","postgres",31899,"[local]",5c97827a.7c9b,6,"idle",2019-03-24 14:13:30 CET,4/477,0,AUDIT,00000,"statement: insert into t1 values(1);",,,,,,,,,"psql.bin","INSERT",""

The insert is logged. You may also spot a potential issue here: depending on how the statement is written, the actual value (1 in this case) is written to the log. This might open another security hole if the audit files are not handled with care. You cannot prevent that by using prepared statements; in fact the “prepare” part is logged as well:

postgres=# prepare stmt as insert into t1 values($1);
PREPARE
postgres=# execute stmt(2);
INSERT 0 1
postgres=# select * from t1;
 a 
---
 1
 2
(2 rows)

The entries in the audit log:

2019-03-24 14:58:50.395 CET,"enterprisedb","postgres",31899,"[local]",5c97827a.7c9b,7,"idle",2019-03-24 14:13:30 CET,4/478,0,AUDIT,00000,"statement: prepare stmt as insert into t1 values($1);",,,,,,,,,"psql.bin","PREPARE",""
2019-03-24 14:59:02.952 CET,"enterprisedb","postgres",31899,"[local]",5c97827a.7c9b,8,"idle",2019-03-24 14:13:30 CET,4/479,0,AUDIT,00000,"statement: execute stmt(2);","prepare: prepare stmt as insert into t1 values($1);",,,,,,,,"psql.bin","EXECUTE",""

Although we only asked to log “inserts”, the prepare and execute statements are logged as well. If we prepare an update it is not logged (which is correct):

postgres=# prepare stmt2 as update t1 set a = $1;
PREPARE
postgres=# execute stmt2(2);
UPDATE 5

The last line in the audit file is still this one:

2019-03-24 15:02:33.502 CET,"enterprisedb","postgres",31899,"[local]",5c97827a.7c9b,9,"idle",2019-03-24 14:13:30 CET,4/487,0,AUDIT,00000,"statement: execute stmt(5);","prepare: prepare stmt as insert into t1 values($1);",,,,,,,,"psql.bin","EXECUTE",""

The power of edb_audit_statement comes into play when you want to audit multiple kinds of statements but do not want to set it to “all” (which would log all statements):

postgres=# alter system set edb_audit_statement='insert,update,delete,create table,drop view';
ALTER SYSTEM
postgres=# select pg_reload_conf();
 pg_reload_conf 
----------------
 t
(1 row)

This should log all inserts, updates and deletes and in addition every create table or drop view:

postgres=# create table t2 ( a int );
CREATE TABLE
postgres=# insert into t2 values(1);
INSERT 0 1
postgres=# update t2 set a = 2;
UPDATE 1
postgres=# delete from t2 where a = 2;
DELETE 1
postgres=# truncate t2;
TRUNCATE TABLE
postgres=# create view v1 as select * from t2;
CREATE VIEW
postgres=# drop view v1;
DROP VIEW

We should see entries for the insert, the update and the delete, but not for the truncate. The drop view should be logged as well:

2019-03-24 15:08:46.245 CET,"enterprisedb","postgres",31899,"[local]",5c97827a.7c9b,10,"idle",2019-03-24 14:13:30 CET,4/496,0,AUDIT,00000,"statement: create table t2 ( a int );",,,,,,,,,"psql.bin","CREATE TABLE",""
2019-03-24 15:08:59.713 CET,"enterprisedb","postgres",31899,"[local]",5c97827a.7c9b,12,"idle",2019-03-24 14:13:30 CET,4/498,0,AUDIT,00000,"statement: insert into t2 values(1);",,,,,,,,,"psql.bin","INSERT",""
2019-03-24 15:09:21.299 CET,"enterprisedb","postgres",31899,"[local]",5c97827a.7c9b,13,"idle",2019-03-24 14:13:30 CET,4/499,0,AUDIT,00000,"statement: update t2 set a = 2;",,,,,,,,,"psql.bin","UPDATE",""
2019-03-24 15:09:29.614 CET,"enterprisedb","postgres",31899,"[local]",5c97827a.7c9b,14,"idle",2019-03-24 14:13:30 CET,4/500,0,AUDIT,00000,"statement: delete from t2 where a = 2;",,,,,,,,,"psql.bin","DELETE",""
2019-03-24 15:12:51.652 CET,"enterprisedb","postgres",31899,"[local]",5c97827a.7c9b,15,"idle",2019-03-24 14:13:30 CET,4/503,0,AUDIT,00000,"statement: drop view v1;",,,,,,,,,"psql.bin","DROP VIEW",""

Fine. Using edb_audit_statement we have control over what exactly we want to log. What we did so far was valid for the whole instance; can we restrict auditing to a specific role? Yes, this is possible:

edb=# alter user enterprisedb set edb_audit_statement = 'truncate';
ALTER ROLE
edb=# create role test;
CREATE ROLE
edb=# alter role test set edb_audit_statement = 'truncate';
ALTER ROLE

The same is true on the database level:

edb=# alter database edb set edb_audit_statement = 'truncate';
ALTER DATABASE

Let's do a small test: create a user, then set edb_audit_statement at the user level and reset it at the instance level:

edb=# create user u1 with login password 'u1';
CREATE ROLE
edb=# alter user u1 set edb_audit_statement = 'create table';
ALTER ROLE
edb=# alter system set edb_audit_statement = 'none';
ALTER SYSTEM
edb=# select pg_reload_conf();
 pg_reload_conf 
----------------
 t
(1 row)

Create table statements from that user should now be logged:

edb=# \c edb u1
You are now connected to database "edb" as user "u1".
edb=> create table t1 ( a int );
CREATE TABLE

The statement is indeed logged:

2019-03-24 15:44:19.793 CET,"u1","edb",6243,"[local]",5c9797b7.1863,1,"idle",2019-03-24 15:44:07 CET,5/30177,0,AUDIT,00000,"statement: create table t1 ( a int );",,,,,,,,,"psql.bin","CREATE TABLE",""

Does the same work for a role?

edb=> \c edb enterprisedb
You are now connected to database "edb" as user "enterprisedb".
edb=# create role r1;
CREATE ROLE
edb=# alter role r1 set edb_audit_statement = 'drop table';
ALTER ROLE
edb=# grant r1 to u1;
GRANT ROLE
edb=# \c edb u1
You are now connected to database "edb" as user "u1".
edb=> drop table t1;
DROP TABLE
edb=> 

No, in this case the drop statement is not logged. You can set the parameter for a role, but it does not have any effect.

The last test for today: What happens when the directory we configured for the audit files is removed?

enterprisedb@edb1:/var/lib/edb/ [pg1] pwd
/var/lib/edb
enterprisedb@edb1:/var/lib/edb/ [pg1] ls -l
total 0
drwx------. 4 enterprisedb enterprisedb 51 Mar 24 13:16 as11
drwx------. 2 enterprisedb enterprisedb 72 Mar 24 14:38 audit
drwxrwxr-x. 3 enterprisedb enterprisedb 17 Mar 24 13:09 local
enterprisedb@edb1:/var/lib/edb/ [pg1] mv audit/ audit_org
enterprisedb@edb1:/var/lib/edb/ [pg1] ls -l
total 0
drwx------. 4 enterprisedb enterprisedb 51 Mar 24 13:16 as11
drwx------. 2 enterprisedb enterprisedb 72 Mar 24 14:38 audit_org
drwxrwxr-x. 3 enterprisedb enterprisedb 17 Mar 24 13:09 local

These two create table statements should generate audit records:

edb=> create table t2 ( a int );
CREATE TABLE
edb=> create table t3 ( a int );
CREATE TABLE
edb=> 

Nothing happens, not even a log entry in the server log file. I would have at least expected to get a warning that the directory does not exist. Let's restart the instance:

enterprisedb@edb1:/var/lib/edb/as11/data/log/ [pg1] pg_ctl -D /var/lib/edb/as11/data/ restart -m fast
waiting for server to shut down.... done
server stopped
waiting for server to start....2019-03-24 15:51:59 CET LOG:  listening on IPv4 address "0.0.0.0", port 5444
2019-03-24 15:51:59 CET LOG:  listening on IPv6 address "::", port 5444
2019-03-24 15:51:59 CET LOG:  listening on Unix socket "/tmp/.s.PGSQL.5444"
2019-03-24 15:51:59 CET LOG:  redirecting log output to logging collector process
2019-03-24 15:51:59 CET HINT:  Future log output will appear in directory "log".
 done
server started

And again: Nothing. But the audit directory is being recreated once the server starts:

enterprisedb@edb1:/var/lib/edb/ [pg1] ls -l
total 0
drwx------. 4 enterprisedb enterprisedb 51 Mar 24 13:16 as11
drwx------. 2 enterprisedb enterprisedb 39 Mar 24 15:51 audit
drwx------. 2 enterprisedb enterprisedb 72 Mar 24 14:38 audit_org
drwxrwxr-x. 3 enterprisedb enterprisedb 17 Mar 24 13:09 local

Changing the permissions so that the enterprisedb user can no longer write to that directory will prevent the server from restarting:

enterprisedb@edb1:/var/lib/edb/ [pg1] sudo chown root:root audit
enterprisedb@edb1:/var/lib/edb/ [pg1] ls -l
total 0
drwx------. 4 enterprisedb enterprisedb 51 Mar 24 13:16 as11
drwx------. 2 root         root         39 Mar 24 15:51 audit
drwx------. 2 enterprisedb enterprisedb 72 Mar 24 14:38 audit_org
drwxrwxr-x. 3 enterprisedb enterprisedb 17 Mar 24 13:09 local
enterprisedb@edb1:/var/lib/edb/ [pg1] pg_ctl -D /var/lib/edb/as11/data/ restart -m fast
waiting for server to shut down.... done
server stopped
waiting for server to start....2019-03-24 15:55:44 CET LOG:  listening on IPv4 address "0.0.0.0", port 5444
2019-03-24 15:55:44 CET LOG:  listening on IPv6 address "::", port 5444
2019-03-24 15:55:44 CET LOG:  listening on Unix socket "/tmp/.s.PGSQL.5444"
2019-03-24 15:55:44 CET FATAL:  could not open log file "/var/lib/edb/audit/audit-20190324_155544.csv": Permission denied
2019-03-24 15:55:44 CET LOG:  database system is shut down
 stopped waiting
pg_ctl: could not start server
Examine the log output.

Hope that helps…

The post Auditing with EDB Postgres Enterprise appeared first on Blog dbi services.

Windocks, SQL Server and Azure in our internal CI pipeline

Yann Neuhaus - Sun, 2019-03-31 12:50

During the last DevOps Day in Geneva, I presented a sample of our CI implementation related to our MSSQL DMK maintenance product. It was definitely a very good experience for me and a good opportunity to get feedback from the DevOps community as well.

During the session I explained that our CI pipeline includes SQL Server containers both on AKS (K8s as a managed service in Azure) and on Windocks. Some of the attendees asked me why we are using Windocks as the container solution for SQL Server on the Windows side in our specific context. As promised, here are some explanations in this blog post, but let's start with the quick answer: we are using Windocks to address challenges that exist with SQL Server containers on the Windows side. The long answer will follow, but let's first set the context with a high-level overview of our continuous integration pipeline architecture:

We are using a hybrid scenario where the development tools (SSDT and GitLab) are located in an on-premises dbi internal infrastructure whereas the CI pipeline runs entirely on Azure. The pipeline breaks down into two main areas: CI testing performed on different SQL Server containers that run on Windows through Windocks, and Microsoft SQL Server containers that run on Linux and AKS. The AKS cluster (the managed K8s service in Azure) hosts SQL Server availability groups (in a beta release) and Windocks (surrounded in green in the above picture) is also part of this Azure architecture in IaaS mode, within an Azure virtual machine Standard D4s v3 (4 vCPUs, 16 GB memory and 512GB of disk space). As an aside, we chose this machine size because nested virtualization is required by Windocks and by the cloned database feature that uses Hyper-V differencing disk capabilities in the background.

 

  • Maintaining docker SQL Server images on Windows may be cumbersome

The DMK maintenance tool performs database maintenance tasks, basically the tasks you usually find on SQL Server environments, including database backups, database integrity checks and maintenance of indexes and statistics. We obviously brought our added value and best practices into the tool and we provide it to customers who want to use it. The main challenge here consists in supporting a wide range of versions, from 2008R2 to 2017 at the moment of this write-up (both on Windows and Linux obviously), and most of the issues encountered with Docker images came from SQL Server docker images on Windows. First, if you refer to the Docker Hub (and the new Microsoft Container Registry), there are no real official images for SQL Server versions prior to 2016. Thus, maintaining such images is at your own responsibility and risk, and we were not confident going this way. However, I kept motivated and decided to perform further tests to check the feasibility with Docker images. I quickly figured out that going through a Docker-native based solution would lead to some boring challenges. Indeed, having no official images from Microsoft for older versions of SQL Server, I had to build mine, but I was disappointed by the image size, which was too large compared to those we may find officially for Linux: more than 10GB for a SQL Server docker image on Windows versus ~ 1.4GB on Linux.

SQL Server Docker image size on Windows after building the custom image

The total size includes the SQL Server binaries, but even if we exclude them from the calculation the final size leads to the same conclusion.

SQL Server image size on Linux

In addition, building a basic image of SQL Server on Windows remains tedious and, to be honest, time consuming, because you need to write some code to install optional prerequisites and SQL Server itself, meaning you first have to copy the binaries (CUs and/or SPs according to the version) and then run the command file to install it. A lot of work and no real added value (and no warranty) at the end. That is definitely the opposite of what I expect from a DevOps process, where I want to be fast and simply use a SQL Server Docker-based image. In that case, I would like to just pick the right Docker image version and corresponding tag and then focus on my work.

Windocks fills the gap that exists with older versions (and probably newer ones) of SQL Server on Windows by providing a different way to create base images compared to the Docker-native solution. The first step consists in installing SQL Server instances as we would in a traditional approach. The interesting point is that these instances then serve as base images when spinning up containers. This approach provides several advantages, but here I would like to point out the ability to apply configuration settings directly at the SQL Server instance level, which are then propagated automatically to newly created containers. From my point of view, it is an interesting way to apply segregation of duties without compromising the architecture's agility: DBAs (Ops) may still work on providing a well-configured template from an architecture point of view, whereas developers focus on their work, but both interact with the same tool.

 

  • Storage concerns may exist even in DEV environments

Usually, in DEV environments, storage is not a big concern for DBAs. In my experience, they usually provide developers with a shared environment containing different SQL Server instances and application databases. Most of the time developers get high privileges on those environments – db_owner or sysadmin according to the context – because it is a DEV environment after all, and DBAs often apply this dirty fix to make these environments more "agile". But this approach implies installing a static environment that is in fact not as flexible for developers as we may think. For instance, how do you reinitialize an environment for a specific developer without impacting the work of the others? The ideal situation would be that each developer is able to quickly create an isolated and ephemeral environment on demand. But in turn this approach comes with its own challenges: how do you deal with the total disk space consumption in this case? Say each developer wants to spin up a new SQL Server container environment; the total storage footprint would then include the SQL Server Docker image and the space consumed by the user databases as well, right? Let's take a real customer example: the customer wants to provide fresh data from the production databases to the developers every week (with or without applying data sanitization scripts). This is a common scenario, by the way, and let's say the final storage size of the databases is roughly 500GB for this customer. Add ~20 developers to the game, and I'm pretty sure you have already guessed the potential storage concern if all developers want to spin up their own environment at the same time. Let's do some quick math: 20 [developers] x (10GB [Docker image size] + 500GB [user databases]) ~= 10 TB.

Going back to my specific context (our DMK maintenance tool), the storage footprint is not as extreme, because there could be up to 7 developers working at the same time, with a total storage footprint of 770GB (10GB for the Docker image + 100GB of user databases per developer). It is still too much for us, even though we have provisioned 512GB of premium SSD and can increase it easily … storage also has a cost on Azure, right? Furthermore, we know that for each developer the ratio between the provisioned disk space and the disk space actually consumed is low for most developed features. We need to find a way to improve this ratio, and Windocks provides a simple way to address it through Hyper-V differencing disk capabilities directly integrated with containerization.

  • Security

How to secure our environment was a question that came up at the end of our CI implementation. As in many DevOps projects, security is usually not the first concern, but moving to the cloud helped us consider security as an important topic in our architecture.

First, we need to ensure the images used by our team are secure. Insecure images are part of the new issues that come with container environments, and an image-checking process requires a more complex infrastructure, often with EE capabilities and extra components on the container side (at least if you don't want to put your images on a public repository). Using a private registry on Azure is another option, but after some investigation we were in favor of the Windocks capabilities in our context. Windocks takes a different approach to creating SQL Server images by using a native SQL Server instance installation as the base template, rather than relying on Docker-native images and a Docker registry. This built-in way of preventing the container infrastructure from being compromised with malicious code, without further complicating the architecture, was a good argument for us because it helps DBAs keep security concerns under control.

Windocks also provides other features that help us secure the container environment easily, for instance basic authentication to prevent an unauthorized user from spinning up a Windocks container. The native support of Windows authentication was another good argument because it simplified the security management of admin users. We are using a mix of Windows sysadmin accounts and SQL Server logins for applications.

The bottom line: as a small DEV team we are really satisfied with Windocks, which was able to address the challenges we faced on the operational side. It is worth noting that our needs and challenges are close to what we see with some of our customers, although in a different order of magnitude, when SQL Server is introduced into their CI/CD pipeline. In our context we are running the Standard edition of Windocks, but EE capabilities are also available that are more suitable for enterprise-class environments.

See you

The article Windocks, SQL Server and Azure in our internal CI pipeline first appeared on Blog dbi services.

review at amazon: Azure for Architects

Dietrich Schroff - Sun, 2019-03-31 10:31
This weekend I read "Azure for Architects - Implementing cloud design, DevOps, containers, IoT and serverless solutions on your public cloud".


After I had some trouble publishing my review on amazon.de, I put it onto my blog (take a look here) and wrote a smaller review covering only 2 chapters. And this worked...

One statement I liked was:
This is not only true for security, but sometimes it is important to state the obvious.

If you are interested, take a look at my review at amazon.de (like all my reviews: written in German ;-).

Monitoring your Oracle APEX environment; when is a good time to do an upgrade?

Dimitri Gielis - Sat, 2019-03-30 06:12
Yesterday Oracle released Oracle APEX 19.1. We typically upgrade our environment within a week.

The question is always, when is a good time to upgrade?

As we are hosting multiple applications and sites in our environment, we want to find a spot with minimal impact on our customers. For some applications which are only accessed by a few people in Belgium, it's very easy to schedule an upgrade at night, as the chances are low that anybody is accessing the app. But in the case of APEX Office Print, we have customers all over the world in different timezones, so they might want to look at their dashboard or download the latest version at any moment in time.

For our AOP service itself, we have an internal policy that we don't allow downtime. We load balance, so even when performing upgrades of AOP, there will always be a server available to handle the requests. For our Oracle database and APEX environment, we are fine with a few minutes of downtime while upgrading the server.

In Oracle APEX itself, when you go to Monitor Activity - Page Views by User by Hour, you get a graphical overview of when your applications are being used:

This screen works well for a given workspace. If you want to look across workspaces, for your entire APEX instance, you could build something similar yourself, as sketched below. An overview like that would give you an idea, based on historical usage, of which hours have the least impact.
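A minimal sketch of such an instance-wide overview, assuming you query the APEX activity log as an instance administrator (the view and column names used here, such as APEX_WORKSPACE_ACTIVITY_LOG and VIEW_DATE, may differ slightly between APEX versions, so treat this as a starting point rather than the author's own script):

-- page views per hour of day across all workspaces, within the log retention window
SELECT
TO_CHAR(view_date, 'HH24') AS hour_of_day,
COUNT(*) AS page_views
FROM apex_workspace_activity_log
GROUP BY TO_CHAR(view_date, 'HH24')
ORDER BY hour_of_day;

The hours with the lowest counts are the historical candidates for an upgrade window.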

Now that you have found the window, you still want to check if somebody is actually using your APEX apps.

So before I start an upgrade, I always check the last activity in our APEX apps.
The script I use to monitor the activity in our entire APEX environment (run as the SYS user at the SQL> prompt):

SELECT
workspace_name,
apex_session_id,
user_name,
remote_addr,
TO_CHAR(session_created, 'DD-MON-YYYY HH24:MI') AS session_created,
TO_CHAR(session_idle_timeout_on, 'DD-MON-YYYY HH24:MI') AS session_idle_timeout_on,
TO_CHAR(session_idle_timeout_on-(session_max_idle_sec/24/60/60), 'DD-MON-YYYY HH24:MI') AS last_activity,
TO_CHAR(sysdate, 'DD-MON-YYYY HH24:MI') AS time_now,
round((sysdate-(session_idle_timeout_on-(session_max_idle_sec/24/60/60)))*24*60) as minutes_ago
FROM apex_workspace_sessions
WHERE user_name NOT IN ('APEX_PUBLIC_USER','nobody')
ORDER BY minutes_ago, workspace_name, session_idle_timeout_on DESC;

This gives me the following overview:


So I can see how many minutes ago (last column) each APEX workspace was used and by which user. I could even follow the APEX session to find out more about the app etc., but that is not really relevant for me. In the script, I only check for logged-in users.

In case I also want to check for direct connections to the database, or I want more real-time info from the APEX sessions, I run the following script to check the sessions in the Oracle database:

SELECT
sid,
serial#,
username,
osuser,
machine,
program,
--sql_id, sql_exec_start, event,
logon_time,
client_info AS workspace_id_auth_user,
module AS db_schema_apex_app_id_page_id,
action AS action,
client_identifier AS auth_user_apex_session
FROM gv$session
WHERE type = 'USER'
AND service_name = 'apex_pdb';

This gives me the following overview:


In the above screen you see that I'm connected to the database with SQL Developer. The other sessions are coming from our connection pool. What is interesting is that APEX is instrumented to populate session info in the database, like module, client_info, and client_identifier. This is important as APEX/ORDS works with a connection pool, so it would be hard to see which APEX session corresponds to which database session if they didn't.
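As an illustration of that mechanism (this is a generic sketch, not APEX internals, and the literal values below are made up), any application can tag its pooled sessions so they show up in those gv$session columns:

BEGIN
  -- what shows up in gv$session.module / gv$session.action
  DBMS_APPLICATION_INFO.set_module(module_name => 'MYSCHEMA/APEX:APP 100:1',
                                   action_name => 'PAGE VIEW');
  -- what shows up in gv$session.client_info
  DBMS_APPLICATION_INFO.set_client_info('1234567890123 ADMIN');
  -- what shows up in gv$session.client_identifier
  DBMS_SESSION.set_identifier('ADMIN:9876543210987');
END;
/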

The above helps me to find a good time to perform upgrades, without impacting too many people.
So I thought I'd share my scripts as they might help you too.
Categories: Development

Granting select on all tables and views of one user to another

Tom Kyte - Sat, 2019-03-30 05:06
Tom, I have a user with x tables and y views. I wish to grant select on all the tables and views owned by this user to another user. Is there anything like "select any table" for this purpose? I am thinking to implement by granting the select...
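The original answer is truncated in this teaser, but one common approach, shown here only as a hedged sketch (the SCOTT owner and APP_READER grantee are placeholders), is to generate per-object grants in a PL/SQL loop:

BEGIN
  FOR o IN (SELECT owner, object_name
            FROM all_objects
            WHERE owner = 'SCOTT'
            AND object_type IN ('TABLE', 'VIEW'))
  LOOP
    -- one explicit grant per table/view keeps the access limited to this schema
    EXECUTE IMMEDIATE 'grant select on "' || o.owner || '"."'
                      || o.object_name || '" to APP_READER';
  END LOOP;
END;
/

Note that SELECT ANY TABLE does exist, but it is a system privilege covering every schema in the database, which is usually why per-object grants like the above are preferred.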
Categories: DBA Blogs

best way to get high value of partitions from data dictionary

Tom Kyte - Sat, 2019-03-30 05:06
Hi, Chris and Connor, I'm trying to find the best way to get the "high value" of partitions from the data dictionary, in order to write a customized statistics gathering program. As you know, the data type of the "high_value" column of DBA_TAB_PARTITIONS i...
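The question is cut off in this teaser, but the usual difficulty is that HIGH_VALUE is a LONG column, which cannot be manipulated directly in SQL. A hedged sketch of one common workaround (the function name and parameters are illustrative only) relies on PL/SQL fetching the LONG into a VARCHAR2:

CREATE OR REPLACE FUNCTION partition_high_value (
  p_table_owner    IN VARCHAR2,
  p_table_name     IN VARCHAR2,
  p_partition_name IN VARCHAR2
) RETURN VARCHAR2
IS
  l_high_value VARCHAR2(4000);
BEGIN
  -- the implicit LONG to VARCHAR2 conversion happens on the fetch in PL/SQL
  SELECT high_value
  INTO l_high_value
  FROM dba_tab_partitions
  WHERE table_owner = p_table_owner
  AND table_name = p_table_name
  AND partition_name = p_partition_name;
  RETURN l_high_value;
END;
/

Such a function can then be called from plain SQL, for example to compare the returned boundary value against the data actually loaded before gathering statistics.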
Categories: DBA Blogs

Working with Multiple Row Selection Tables in Visual Builder

Shay Shmeltzer - Fri, 2019-03-29 18:35

The Oracle JET table component allows you to select multiple records in one go using the regular ctrl & shift key combinations. But once the user has selected rows, how do you know which rows were selected? How do you track this?

The video below shows you the basics. As the JET tag documentation will show you, the table has a selection property, which is an array of the selected records. This selection array is passed to the selection event on the table, which you can hook into with an action chain in VBCS. The array has an entry for each range of records you selected, listing its keys and indexes in the table.

It's up to you to parse this information if you want to operate on these rows.

The code in the JavaScript method is:

 

  PageModule.prototype.listSelection = function (selection) {
    console.log("we got " + selection.length + " selections");
    // each entry in the selection array describes one contiguous range of selected rows
    for (var i = 0; i < selection.length; i++) {
      console.log("start key " + selection[i].startKey.row +
        ", start index " + selection[i].startIndex.row);
      console.log("end key " + selection[i].endKey.row +
        ", end index " + selection[i].endIndex.row);
    }
  };

 

 

Categories: Development
