Feed aggregator

TensorFlow Linear Regression Model Access with Custom REST API using Flask

Andrejus Baranovski - Mon, 2017-12-11 13:28
In my previous post - TensorFlow - Getting Started with Docker Container and Jupyter Notebook - I described the basics of installing and running TensorFlow using Docker. Today I will describe how to give access to a machine learning model from outside of TensorFlow with REST. This is particularly useful when building JS UIs on top of TensorFlow (for example with Oracle JET).

TensorFlow supports multiple languages, but the most common one is Python. I have implemented a linear regression model in Python and now would like to give access to this model from the outside. For this I'm going to use Flask, a micro-framework for Python that allows simple REST annotations directly in Python code.
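The model code itself is not included in this post. As a rough, framework-free stand-in for the idea, here is a minimal linear regression fitted by gradient descent in plain Python (the function name, learning rate and sample data are illustrative, not from the original):

```python
# Minimal linear-regression sketch: a plain-Python stand-in for the
# TensorFlow model described in the post (which is not shown here).
def fit_linear(xs, ys, lr=0.01, epochs=2000):
    W, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # gradients of mean squared error with respect to W and b
        grad_w = sum(2 * (W * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (W * x + b - y) for x, y in zip(xs, ys)) / n
        W -= lr * grad_w
        b -= lr * grad_b
    return W, b

# data drawn from the line y = 2x, so W should approach 2 and b approach 0
W, b = fit_linear([1, 2, 3, 4], [2, 4, 6, 8])
```

This is exactly the kind of fitted `W`/`b` state a REST layer would expose.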

To install Flask, first enter the TensorFlow container:

docker exec -it RedSamuraiTensorFlowUI bash

Run Flask install:

pip install flask

Run Flask CORS install:

pip install -U flask-cors

I'm going to call the REST service from JS, which means it must support CORS, otherwise requests will be blocked by the browser. No worries, we can import Flask CORS support.

REST is enabled for the TensorFlow model with Flask in these two lines of code:
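The two lines themselves are in the post's screenshot, which doesn't survive in this feed; a minimal sketch of what enabling Flask with CORS typically looks like (the try/except is my addition so the snippet still loads where flask-cors is not installed):

```python
# Sketch of the Flask/CORS setup; treat this as an assumption of the
# shape of the two lines shown in the original post's screenshot.
from flask import Flask

app = Flask(__name__)

try:
    # flask-cors was installed above with: pip install -U flask-cors
    from flask_cors import CORS
    CORS(app)  # adds CORS headers so a browser JS UI can call the API
except ImportError:
    pass  # without flask-cors, cross-origin browser calls will be blocked
```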

As soon as Flask is imported and enabled, we can annotate a method with REST operations and an endpoint URL:

There is an option to check which kind of REST operation is being executed and to read input parameters from a POST request. This is useful to control model learning steps, for example:
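The annotated method is shown in the post only as a screenshot. A hedged sketch of what such a method can look like; the endpoint path and the `steps` parameter name are made up for illustration:

```python
# Hypothetical endpoint path and parameter name; the post's actual
# values are in screenshots that are not preserved in this feed.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/api/v1.0/train', methods=['GET', 'POST'])
def train():
    steps = 1000  # default number of learning steps
    if request.method == 'POST':
        # read input parameters from the POST payload
        steps = int(request.get_json().get('steps', steps))
    return jsonify({'training_steps': steps})
```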

If we want to collect all x/y value pairs and return them in the REST response, we can do that by collecting the values into arrays:

And construct a JSON response directly out of the array structure using Flask jsonify:
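A sketch of both steps together, with illustrative array names and endpoint path (the post's originals are in screenshots):

```python
# Illustrative names throughout; only the jsonify pattern is the point.
from flask import Flask, jsonify

app = Flask(__name__)

# x/y value pairs collected while the model trains
x_values = [1.0, 2.0, 3.0]
y_values = [2.1, 3.9, 6.2]

@app.route('/api/v1.0/values')
def values():
    # jsonify constructs the JSON response directly from the arrays
    return jsonify({'x': x_values, 'y': y_values})
```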

After we run the TensorFlow model in Jupyter, it will print the URL endpoint for the REST service. For the URL to be accessible outside the TensorFlow Docker container, make sure to run the model as in the screenshot below:
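The screenshot is not preserved in this feed; presumably the key detail is binding Flask to all interfaces so the Docker port mapping works. A sketch, with the port number assumed (the function is defined but not invoked here, since `app.run()` blocks):

```python
from flask import Flask

app = Flask(__name__)

def serve():
    # Bind to all interfaces so the endpoint is reachable through the
    # container's published port, not only from inside the container.
    app.run(host='0.0.0.0', port=5000)  # port assumed (Flask's default)
```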

Here is an example of a TensorFlow model REST call from Postman. A POST operation is executed, with its payload and response:

All REST calls are logged in TensorFlow:

Download TensorFlow model enabled with REST from my GitHub.

Top 12 Rank Tracking Software and Services

Nilesh Jethwa - Mon, 2017-12-11 13:23

Search Engine Optimization and keyword tracking tools play an important role in this technological age. This is especially true for people involved in business. One sure way to track the performance of a business is to use software specifically designed to measure rank and popularity.

Rank tracking software gauges the performance of a business. Using a rank tracker tool allows one to evaluate and track rankings in search engines as well as gauge visibility to the prospective market. It can also monitor the progress of competitors in the market. This knowledge can serve as an edge and a chance for improvement.

Monitoring a business using a keyword rank checker can offer numerous benefits to an enterprise, but before that can happen, you must first find the rank tracker tool that suits you.

Here is a list of the top rank tracking software and services.


Advanced Web Ranking

Founded in 2002, this company offers an extensive variety of keyword tracking tools. It prides itself on providing worry-free reporting that allows a user to focus on website optimization rather than tracking. It also offers both a cloud service and desktop software.

Features and benefits of this rank tracker:

  • Allows white label reporting
  • It offers unlimited monitoring of campaigns and website competition
  • Allows easy assignment of read/write permissions
  • This keyword rank checker has an international customer base that supports special characters
  • Simple project setup and easy sharing of data online
  • User-intuitive interface and can be accessed by mobile devices
  • Because of its custom Google location settings, it can support international markets and gets results from over 50 countries.

What makes it better?

Compared to its competitors, Advanced Web Ranking can deliver localized rankings with pinpoint accuracy. Geolocation proxies in the company's system are the reason for this better accuracy. It also offers a free 30-day trial.

But what’s the price?

The least expensive option starts at $49 per month and the highest tier is $499. The price is a bit costly, but understandable given the features.


Read more at https://www.infocaptor.com/dashboard/top-12-rank-tracking-software-and-services-for-your-business-needs

Secure Configuration Guidelines for Oracle E-Business Suite 12.2 and 12.1

Steven Chan - Mon, 2017-12-11 11:32

We've been providing Oracle E-Business Suite secure configuration guidelines or best practices in our published MOS Notes and guides for some time now. Our secure configuration deployment guidelines include the following recommendations:


Categories: APPS Blogs

DBMS_COMPRESSION can be run in parallel, at least from

Jeff Moss - Mon, 2017-12-11 10:12

I was trying to use DBMS_COMPRESSION on an 11gR2 ( on RHEL 6.3) database the other day and was helped by this article from Neil Johnson. I was having some trouble with permissions until I read that article which nicely summarises everything you need to know – thanks Neil!

One thing I did notice is that Neil stated that you can’t parallelise the calls to the advisor since it uses the same named objects each time and this would then cause conflicts (and problems). Neil illustrated the example calls that the advisor is making based on him tracing the sessions…

create table "ACME".DBMS_TABCOMP_TEMP_UNCMP tablespace "SCRATCH" nologging
 as select /*+ DYNAMIC_SAMPLING(0) FULL("ACME"."ACCS") */ *
 from "ACME"."ACCS" sample block( 99) mytab
create table "ACME".DBMS_TABCOMP_TEMP_CMP organization heap 
 tablespace "SCRATCH" compress for all operations nologging
 as select /*+ DYNAMIC_SAMPLING(0) */ *

Because I kept having permissions issues I was repeatedly running the advisor and I ended up with a situation where one of the transient objects (above, or so I thought) had been left in place and when I tried the next rerun it complained that the object existed. I can’t reproduce this as I can’t remember all the steps that I took and I wasn’t recording my session at the time – it’s not really the point of this blog in any case, rather the knowledge it led to. Because the error was that the object existed, I figured I just needed to find the object and drop it and I’d be good to carry on – obviously I looked at the above code fragments and started to search for the two objects in question (DBMS_TABCOMP_TEMP_UNCMP and DBMS_TABCOMP_TEMP_CMP) but found nothing. I started looking for DBMS_TABCOMP% and again found nothing.

Somewhat confused, I then looked for the latest object created and found that the objects were actually called something completely different and of the form CMPx$yyyyyyyy. I think this must have changed since Neil wrote his article (it is from 2013 after all).

I can’t work out what “x” is – at first I thought it was the RAC instance but that was just a coincidence that I saw a 3 and I was on instance 3 of a RAC cluster. In fact on a single instance database (test below) I saw numbers higher than 1 so it’s not the RAC instance number and I can’t work out what it is. “yyyyyyyy” is definitely the OBJECT_ID of the data object, confirmed by cross referencing the data dictionary.

Given this naming standard is therefore object specific, it suggests that you could execute these advisor calls in parallel.

Just to be clear, I’m not sure what version of 11g Neil was using but I am using

SQL> select * from v$version;

Oracle Database 11g Enterprise Edition Release - 64bit Production
PL/SQL Release - Production
CORE Production
TNS for Linux: Version - Production
NLSRTL Version - Production

A little test using Neil’s code (and a bit more as I don’t have acme user on my database):

First create a script called dbms_comp.sql with the following content:

set serveroutput on
set feedback on
set verify off
declare
 blkcnt_cmp BINARY_integer;
 blkcnt_uncmp BINARY_integer;
 row_cmp BINARY_integer;
 row_uncmp BINARY_integer;
 cmp_ratio number;
 comptype_str varchar2(60);
begin
 dbms_compression.get_compression_ratio(
 scratchtbsname => upper('&3.')
 , ownname => upper('&1.')
 , tabname => upper('&2.')
 , partname => null
 , comptype => dbms_compression.comp_for_oltp
 , blkcnt_cmp => blkcnt_cmp
 , blkcnt_uncmp => blkcnt_uncmp
 , row_cmp => row_cmp
 , row_uncmp => row_uncmp
 , cmp_ratio => cmp_ratio
 , comptype_str => comptype_str
 , subset_numrows => &4.
 );
 DBMS_OUTPUT.PUT_LINE('Block count compressed = ' || blkcnt_cmp);
 DBMS_OUTPUT.PUT_LINE('Block count uncompressed = ' || blkcnt_uncmp);
 DBMS_OUTPUT.PUT_LINE('Row count per block compressed = ' || row_cmp);
 DBMS_OUTPUT.PUT_LINE('Row count per block uncompressed = ' || row_uncmp);
 --DBMS_OUTPUT.PUT_LINE('Compression type = ' ||comptype_str);
 DBMS_OUTPUT.PUT_LINE('Compression ratio = '||round(blkcnt_uncmp/blkcnt_cmp,1)||' to 1');
 DBMS_OUTPUT.PUT_LINE('Compression % benefit = '||round((blkcnt_uncmp-blkcnt_cmp)/blkcnt_uncmp*100,1));
 --DBMS_OUTPUT.PUT_LINE('Compression ratio org= '||cmp_ratio);
end;
/
set verify on

Then create another script called setup.sql with the following content – I’m using auditing (thanks Tim!) to see the statements rather than tracing in this instance:

conn sys as sysdba
drop user acme cascade;
drop user nj cascade;
drop tablespace acme including contents and datafiles;
drop tablespace scratch including contents and datafiles;
drop role nj_dba;
create user acme identified by acme;
grant create session,create table to acme;
create tablespace acme datafile '/u01/app/oracle/oradata/db11g/acme01.dbf' size 2G;
alter user acme quota unlimited on acme;
create tablespace scratch datafile '/u01/app/oracle/oradata/db11g/scratch01.dbf' size 2G;
create role nj_dba;
create user nj identified by nj;
REM Use auditing instead of tracing to identify the statements run:
audit all by nj by access;
audit create table by nj by access;
grant create session, create any table, drop any table, select any table to nj_dba;
grant execute on sys.dbms_monitor to nj_dba;
grant nj_dba to nj;
alter user acme quota unlimited on scratch;
alter user nj quota unlimited on scratch;

Now login to sqlplus /nolog and run setup.sql which should show this:

SQL> @setup
Enter password:

User dropped.

User dropped.

Tablespace dropped.

Tablespace dropped.

Role dropped.

User created.

Grant succeeded.

Tablespace created.

User altered.

Tablespace created.

Role created.

User created.

Audit succeeded.

Audit succeeded.

Grant succeeded.

Grant succeeded.

Grant succeeded.

User altered.

User altered.

Grant succeeded.


Now login to acme and create the subject table for the compression advisor:

conn acme/acme
create table test tablespace acme as select lpad(TO_CHAR(ROWNUM),2000,'x') char_col from dual connect by level < 300000;

Table created.

Now check the compression using the advisor (ignore the actual compression results as we’re not interested in those at this time):

conn nj/nj
@dbms_comp acme test scratch 200000
Block count compressed = 2048
Block count uncompressed = 2048
Row count per block compressed = 3
Row count per block uncompressed = 3
Compression ratio = 1 to 1
Compression % benefit = 0

PL/SQL procedure successfully completed.

Now check the audit trail to find the statements run:

conn sys as sysdba
column username format a8 
column obj_name format a30
column timestamp format a30
column action_name format a20
set linesize 200
set pagesize 1000
SELECT USERNAME, action_name,obj_name, to_char(extended_TIMESTAMP,'DD-MON-YYYY HH24:MI:SSxFF') timestamp FROM DBA_AUDIT_TRAIL WHERE USERNAME='NJ' and action_name='CREATE TABLE';

USERNAME ACTION_NAME          OBJ_NAME                       TIMESTAMP
-------- -------------------- ------------------------------ --------------------
NJ CREATE TABLE CMP3$87401 10-DEC-2017 18:01:15.119890
NJ CREATE TABLE CMP4$87401 10-DEC-2017 18:01:15.177518

(Abridged to remove non relevant tests)

Now check the dictionary to see the OBJECT_ID:

select object_name from dba_objects where object_id=87401;


1 row selected.

OK, how about the parallelism? Let’s create a second table called TEST2 in ACME:

conn acme/acme
create table test2 tablespace acme as select lpad(TO_CHAR(ROWNUM),2000,'x') char_col from dual connect by level < 300000;

Table created.

Now run two parallel sessions – I did it by firing off the calls manually in separate SQL*Plus sessions rather than being clever:

In session 1:

conn nj/nj
@dbms_comp acme test scratch 200000

In session 2:

conn nj/nj
@dbms_comp acme test2 scratch 200000


First one gives:

Block count compressed = 1920
Block count uncompressed = 1920
Row count per block compressed = 3
Row count per block uncompressed = 3
Compression ratio = 1 to 1
Compression % benefit = 0

PL/SQL procedure successfully completed.

Second one gives:

Block count compressed = 2432
Block count uncompressed = 2432
Row count per block compressed = 3
Row count per block uncompressed = 3
Compression ratio = 1 to 1
Compression % benefit = 0

PL/SQL procedure successfully completed.

Both ran at the same time and didn’t fail.

Now check the audit trail:

conn sys as sysdba
column username format a8 
column obj_name format a30
column timestamp format a30
column action_name format a20
set linesize 200
set pagesize 1000
SELECT USERNAME, action_name,obj_name, to_char(extended_TIMESTAMP,'DD-MON-YYYY HH24:MI:SSxFF') timestamp FROM DBA_AUDIT_TRAIL WHERE USERNAME='NJ' and action_name='CREATE TABLE';
USERNAME ACTION_NAME          OBJ_NAME                       TIMESTAMP
-------- -------------------- ------------------------------ --------------------
NJ CREATE TABLE CMP3$87401 10-DEC-2017 18:01:15.119890
NJ CREATE TABLE CMP4$87401 10-DEC-2017 18:01:15.177518 
NJ CREATE TABLE CMP3$87408 10-DEC-2017 18:12:18.114321
NJ CREATE TABLE CMP3$87409 10-DEC-2017 18:12:18.114353
NJ CREATE TABLE CMP4$87408 10-DEC-2017 18:12:22.730715
NJ CREATE TABLE CMP4$87409 10-DEC-2017 18:12:22.735908
(Abridged to remove non relevant tests)

And from the dictionary:

select object_name from dba_objects where object_id IN(87408,87409);


2 rows selected.

So, it appears to allow parallel running without issue.

If anyone works out what the “x” part of the object names is (3 and 4 in the above example), please shout out in the comments…

Oracle Expands Security Portfolio with New Capabilities and Partner Program Designed to Help Organizations Detect and Protect Against Threats

Oracle Press Releases - Mon, 2017-12-11 07:00
Press Release
Oracle Expands Security Portfolio with New Capabilities and Partner Program Designed to Help Organizations Detect and Protect Against Threats

New Identity Governance, A.I.-based Configuration Management, and Consumer Identity Management are features designed to help organizations protect their clouds and digital services

Redwood Shores, Calif.—Dec 11, 2017

Oracle today announced that it is expanding its security portfolio and unveiling a new partner program. The Oracle Identity SOC portfolio now includes new capabilities to help enterprises manage and certify user identities, applications, and confidential data more securely and through a richer, consumerized user experience. Additionally, the new partner program will help improve collaboration with security vendors and simplify customer adoption.

First unveiled at Oracle OpenWorld 2017, Oracle’s integrated suites—Oracle Identity Security Operations Center (SOC) portfolio of services and Oracle Management Cloud—are designed to help enterprises forecast, reduce, detect, and resolve security threats and assist in efforts to remediate application and infrastructure performance issues.  Leveraging artificial intelligence to analyze a unified data set consisting of the full breadth of security and operational telemetry, as well as provide automated remediation, Oracle’s integrated suite is designed to enable customers to quickly adapt their security and operational posture as their risk landscape changes. This application of machine learning can potentially help thwart attacks, reduce the detection window from months to minutes, and more quickly address security breaches and performance outages.

“Built on our advanced artificial intelligence and automation technologies, Oracle Identity SOC portfolio now offers cloud-native identity governance combined with adaptive security designed to protect our customer’s cloud and digital services,” said Rohit Gupta, group vice president, Cloud Security, Oracle. “In addition to strengthening internal security monitoring, Oracle’s comprehensive cross-stack threat detection capabilities provide broad threat detection range, less noise from false positives and a high level of accuracy in detecting and remediating threats against the enterprise’s most important IT asset–its data.”

The Oracle Identity SOC portfolio now includes new automated identity governance for hybrid clouds, expanded consumer identity management, machine learning-driven configuration management and enhanced artificial intelligence capabilities.

First Cloud-Native Identity Governance Offering

Oracle announced the first cloud-native identity governance service for hybrid cloud environments, which will be fully integrated and native to Oracle’s SaaS applications, Oracle Identity SOC portfolio (including Oracle Identity Cloud Service and Oracle CASB Cloud Service), as well as Oracle Management Cloud. This combination helps provide a better user experience for identity governance and automation of traditional processes and assessments through intelligent machine learning and cloud application risk feeds from Oracle CASB Cloud Service, all of which can be unified with security and operational telemetry from on-premises and hybrid environments. The new identity governance service features out-of-the-box connectors for integration with external systems and a consumer-grade interface for enterprise campaigns including application access requests, approvals and certifications.

Oracle CASB Cloud Service now also includes risk-based cloud access control to help enterprises mitigate risks to cloud applications. The new controls help manage configuration changes or application access based on rich context including device type, geo-location, and dynamic risk scores. Combined, these factors are designed to help prevent user access based on stolen credentials and mitigate the risk of cloud application administrator privileges misuse.

Industry-First Machine Learning-Driven Configuration Management

Additionally, Oracle introduced a game-changing feature in Oracle Configuration and Compliance Cloud Service, built on Oracle Management Cloud, which automatically discovers the configuration settings across a customer’s entire organization in real time and uses machine learning to find outlier configurations, which can then be automatically remediated.  In addition, Oracle Management Cloud now supports rulesets to enforce DISA’s Security Technical Implementation Guide (STIG) against the Oracle Database.  Oracle has also now extended its user and entity behavior analytics (UEBA) across the entire stack.  Specifically, Oracle Security Monitoring and Analytics (SMA) Cloud Service continuously baselines data access patterns at the SQL level and then detects anomalies by user, database or application.

Expanded Consumer Identity Management

Recognizing the need for enterprises to correlate consumer data with marketing information, Oracle expanded its consumer identity management capabilities in Oracle Identity Cloud Service with integrations with Oracle Marketing Cloud and Oracle Data Cloud. By leveraging built-in consent management, social profiles, preference management and activity attributes from Oracle Identity Cloud Service with these solutions, enterprises can build more targeted marketing campaigns with better consumer insights and analytics. In addition, easy integration with third-party or custom applications via Oracle Self-Service Integration Cloud Service will allow marketers to incorporate additional services.

New Partner Programs Help Deliver Complete Security Solutions

Continuing its commitment to supporting the development of a heterogeneous security ecosystem, Oracle today launched the Identity SOC Security Network in Oracle Cloud Marketplace. This program offers technology integrations with leading security vendors across critical areas including Enterprise Mobility Management, Next Generation Firewalls, Endpoint Security and Threat Intelligence, enabling enterprises to enrich the dynamic context, intelligence and automation capabilities of Oracle Identity SOC. In addition, Oracle continues to enable hundreds of partners worldwide to achieve Oracle PartnerNetwork Specializations in Security and Systems Management Cloud Services.

"We're incredibly excited to work with the Oracle Identity SOC portfolio," said Prakash Linga, CTO and co-founder of Vera. "Identity is the single most critical element of all modern applications and systems. Enhancing Oracle’s portfolio with Vera’s data-centric security and encryption solution makes Oracle’s services incredibly compelling for businesses.”

“We are excited to work with Oracle to introduce Compromised Credentials as a new category in Oracle CASB Cloud Service for refining organization and identity-centric risk assessment,” said Steve Tout, CEO of Seattle based VeriClouds. “With VeriClouds-powered credential monitoring and detection, more than 7 billion compromised credentials are surfaced to enable the automation of risk insight from the dark web into actionable intelligence for the Oracle Identity SOC.”

“Our digital workspace platform, VMware Workspace ONE powered by AirWatch unified endpoint management technology, helps IT secure apps based on conditional access policies and provides a seamless end user experience for any app from any device,” said Ashish Jain, vice president of product management, End-User Computing, VMware. “Workspace ONE and Oracle’s security solutions enables us to strengthen application security and provide data-driven insights to help mutual customers make better informed decisions related to device and app management.”

Contact Info
Jesse Caputo
Kristin Reeves
Blanc & Otus
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.


Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Jesse Caputo

  • +1.650.506.5967

Kristin Reeves

  • +1.415.856.5145

OBIEE 12c ( post upgrade challenges - unable to save HTML based dashboard reports

Syed Jaffar - Mon, 2017-12-11 03:18
Very recently, an OBIEE v12.2.1.1 instance was upgraded to v12.2.1.3 on an Exalytics machine. Though it wasn't a major upgrade, we had to do it to fix some bugs encountered in v12.2.1.1. For sure, it wasn't an easy walk in the park, either during the upgrade or afterwards.

I am here to discuss a particular issue we faced post upgrade. This should be interesting.

Post upgrade to v12.2.1.3, we heard from the application team that all reports were working perfectly fine, except that dashboard reports containing HTML markup could not be saved. The below error was thrown when trying to save the existing HTML reports:

"You do not currently have sufficient privileges to save a report or dashboard page that contains HTML markup"

It appeared to be a very generic error, and we tried all the workarounds we found on various websites and in Oracle docs. Unfortunately, none of the solutions fixed the issue. One of the popular pieces of advice is the following:

you need to go and check the HTML markup privileges in Administration.

1. Login with OBIEE Admin user,
2. Go to Administration link - Manage Privileges,
3. Search for "Save Content with HTML Markup" and see if it has the necessary privileges; otherwise assign the necessary roles and re-check the issue. 

After granting the additional privilege we could save the existing HTML reports. However, the issue still existed when creating a new dashboard report with an HTML tag. This time the below error message appeared:

While doing the research we also opened an SR with Oracle Support. Luckily (I mean it), the engineer who was assigned this SR was good enough to catch the issue and provide the solution.

The engineer tried to simulate the issue in his environment and, surprisingly, he faced a similar problem. He then contacted the OBIEE development team and raised the above concerns.

A few hours later, he provided us with the below workaround, which seems to be a generic solution to both the first and second problems we faced. I believe this is a common issue on

Please follow below steps ::

1. Take a backup of the instanceconfig.xml file, which is located in the following location: 


2. Add the following entry under security. 
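The exact entry appears only as a screenshot in the original post. Based on the privilege name involved ("Save Content with HTML Markup"), the setting is typically of the following form; treat the element name as an assumption to verify against the relevant MOS note before applying it:

```xml
<Security>
   <!-- Assumed element name; confirm against Oracle Support guidance -->
   <EnableSavingContentWithHTML>true</EnableSavingContentWithHTML>
</Security>
```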




3. Restart the presentation services with command below :: 

cd /refresh/home/oracle/12c/Oracle_Home/Oracle_Config/bi/bitools/bin

./stop.sh -i obips1 

./start.sh -i obips1 

4. Now you will see the "Contains HTML Markup" option in Answers; check it and try saving the report again. 

The solution worked perfectly and all HTML dashboard reports could be saved.

If you are on OBIEE, I strongly recommend testing the HTML reports, and if you encounter an issue similar to ours, apply the workaround.

Updated Whitepapers for

Anthony Shorten - Sun, 2017-12-10 18:49

With the anticipated releases of the first products based upon Oracle Utilities Application Framework V4. starting to appear soon, the first set of whitepapers has been updated to reflect new functionality, updated functionality, and experiences from the field and our Oracle Utilities cloud implementations.

The following whitepapers have been updated and are now available from My Oracle Support:

  • ConfigTools Best Practices (Doc Id: 1929040.1) - This has been updated with the latest advice from our implementations and cloud teams. There are a few new sections around Groovy and a new section which highlights the ability to write batch programs using the Plug-In Batch architecture. In Oracle Utilities Application Framework, we add the capability to implement File Import functionality using Groovy in our Plug-In Batch. We provide a mechanism to support Delimited, Fixed or XML based files within the algorithms. Samples of each are supplied in the product.
  • Identity Management Suite Integration (Doc Id: 1375600.1) - This whitepaper has been greatly simplified to reflect the latest Oracle Identity Management Suite changes and the newer interface that has been migrated from XAI to IWS. The new interface has two new algorithms which are used in our cloud implementations and are now part of the F1-IDMUser object supplied with the product.
    • Generation of Authorization Identifier - The F1-IDMUser object now supports the ability to generate the unique authorization identifier (the 8 character one) if the identifier is not provisioned from Oracle Identity Manager itself. This provides some flexibility of where this identifier can be provisioned as part of the Oracle Identity Manager solution. In the past the only place this was available was within Oracle Identity Manager itself. This enhancement means that the user can be provisioned from Oracle Identity Manager or part of the Identity Management interface to Oracle Utilities Application Framework.
    • Duplication of User now supported within interface - In past releases the use of template users was a common way of quickly provisioning users. This release also allows the duplication function within the User Object to be used in isolation or in conjunction with template users for more flexible options in provisioning. If this method is used, a characteristic is added to the duplicated user to indicate it was duplicated from another user (for auditing purposes).

As we get closer to release of products using Oracle Utilities Application Framework you will see more and more updated whitepapers to reflect the new and improved changes in the releases.

Attracting Visitors To Your Website

Nilesh Jethwa - Sun, 2017-12-10 15:28

It’s not easy to go from zero visitors to thousands of potential customers in an instant. But if you implement the right traffic-generating strategy, you can increase the number of visitors coming into your website. If you can get enough traffic, you can generate sales. And you can check the main components in your sales process to get more traffic and keep it coming.

Getting Instant and Cheap Traffic

The key components in the sales process that you need to test are the opt-in offer, navigation, order form, and sales copy. You need to test them before implementing your large-scale traffic-generating campaign. Why is it important? Because this will help you avoid losing a lot of money if the campaign fails. Here are some of the best ways to attract visitors to your site:

  • Test your site. There are actually plenty of components to test in your website, including your web design, content, and many others. However, stick to what’s most important when testing your website. The basics are your sales copy, order process, opt-in offer, and site navigation. Later on, you can test other components once your sales generation is stable. 
  • Use Yahoo! Search Marketing. Pay-per-click search engines offer cheap and instant qualified traffic. But you need to come up with some targeted keywords in order to get results. Bidding to appear on top listings gives visibility on search results like Yahoo, MSN, and others. This will help you reach 80% of internet users across the globe. 
  • Offer free content such as eBooks. Providing irresistible free resources in exchange for valuable publicity is a common practice among digital marketers today. For instance, a valuable eBook or well-written blog posts can generate loads of web traffic without spending a dime. Just don’t write them as a sales pitch, so you won’t drive potential customers away.

Read more at https://www.infocaptor.com/dashboard/how-to-attract-visitors-to-your-site

Docker-Swarm: Running a minimal webserver in a swarm

Dietrich Schroff - Sun, 2017-12-10 12:32
In my last posting on Docker Swarm I created a swarm on VirtualBox with Alpine Linux, with an HDD footprint of 290MB per node.
There are some tutorials out there running an nginx or a Java webserver in a container, but >100MB for each node seems far too much for my tests.

So I decided to create an application which listens on port 8080 with netcat. I created a directory ncweb containing ncweb.sh:
ncweb# cat ncweb.sh
sed -i  's/Hostname:.*/Hostname: '$HOSTNAME'/g' index.html
while true; do { echo -e 'HTTP/1.1 200 OK\r\n'; cat index.html;}  | nc  -l -p 8080; done > logfile 2>&1

and an index.html:
ncweb# cat index.html
    <title>"Hello, World"</title>
  <body bgcolor=white>
    <table border="0" cellpadding="10">
          <h1>"Hello, World"</h1>
</html>

The Dockerfile looks like this:
# cat Dockerfile
FROM alpine
WORKDIR /tmp
RUN mkdir ncweb
ADD .  /tmp
ENTRYPOINT [ "/tmp/ncweb.sh" ]

After that I created the container:
ncweb# docker build -t ncweb:0.2 .
Sending build context to Docker daemon  4.096kB
Step 1/5 : FROM alpine
 ---> 053cde6e8953
Step 2/5 : WORKDIR /tmp
 ---> Using cache
 ---> c3e11ac3773b
Step 3/5 : RUN mkdir ncweb
 ---> Using cache
 ---> d9e634c03cd1
Step 4/5 : ADD .  /tmp
 ---> 95f022aacc1c
Step 5/5 : ENTRYPOINT [ "/tmp/ncweb.sh" ]
 ---> Running in c1a9e8cee248
 ---> 6880521f68e4
Removing intermediate container c1a9e8cee248
Successfully built 6880521f68e4
Successfully tagged ncweb:0.2

And let's do a test without Docker Swarm:
ncweb# docker run -p 8080:8080  ncweb:0.2
But running this as a service fails:
# docker service create --replicas=1 --name myweb ncweb:0.2
image ncweb:0.2 could not be accessed on a registry to record
its digest. Each node will access ncweb:0.2 independently,
possibly leading to different nodes running different
versions of the image.
overall progress: 0 out of 1 tasks
1/1: No such image: ncweb:0.2
^COperation continuing in background.
Use `docker service ps n0himwum38bqzd8ob1vf8zhip` to check progress.

and:
# docker service ps n0himwum38bqzd8ob1vf8zhip
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE                 ERROR                        PORTS
8tjsuae9jv8o        myweb.1             ncweb:0.2           node01              Ready               Rejected 3 seconds ago        "No such image: ncweb:0.2"  
qp24ssxb5bl5         \_ myweb.1         ncweb:0.2           alpine              Shutdown            Failed 36 seconds ago         "task: non-zero exit (2)"   
zwfgcatk7zyi         \_ myweb.1         ncweb:0.2           node01              Shutdown            Rejected about a minute ago   "No such image: ncweb:0.2"  
v4a7zkb85yd4         \_ myweb.1         ncweb:0.2           node01              Shutdown            Rejected about a minute ago   "No such image: ncweb:0.2"  
ycjftjusv484         \_ myweb.1         ncweb:0.2           node01              Shutdown            Rejected about a minute ago   "No such image: ncweb:0.2"  
# docker service rm n0himwum38bqzd8ob1vf8zhip
The error "No such image..." occurs because the image ncweb exists only in the local repository of my manager node.
The easiest way in my test environment is to distribute the local image to all nodes:
# docker save ncweb:0.3 | ssh docker load
The authenticity of host ' (' can't be established.
ECDSA key fingerprint is SHA256:2/8O/SE1fGJ4f5bAQls5txrKMbqZfMmiZ+Tha/WFKxA.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '' (ECDSA) to the list of known hosts.
root@'s password:
Loaded image: ncweb:0.3

(I had to distribute an SSH key to all nodes first.)
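With more than one worker, that manual save/load round trip gets tedious. A minimal sketch that loops over all nodes follows; the node names and the root user are assumptions, and the commands are only printed (a dry run), so remove the echo to actually transfer the image:

```shell
#!/bin/sh
# Dry run: print the image-transfer command for every worker node.
# NODES and root@ are assumptions; adjust them to your own swarm.
NODES="node01 node02"
IMAGE="ncweb:0.3"

for node in $NODES; do
  echo "docker save $IMAGE | ssh root@$node docker load"
done
```

For anything beyond a small test environment, a local Docker registry is the cleaner answer to the "could not be accessed on a registry" warning.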

and then:
alpine:~/ncweb# docker service create --replicas=1 --name myweb ncweb:0.3
image ncweb:0.3 could not be accessed on a registry to record
its digest. Each node will access ncweb:0.3 independently,
possibly leading to different nodes running different
versions of the image.
# docker service ps myweb
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE           ERROR               PORTS
in97xlc7azcw        myweb.1             ncweb:0.3           node01              Running             Running 8 seconds ago      
So my nc webserver runs on node01, but I cannot access it there because I did not define any port mappings ;-(

But finally this command did the job:
# docker service create --replicas=1 --name myweb --publish 8080:8080  ncweb:0.3
image ncweb:0.3 could not be accessed on a registry to record
its digest. Each node will access ncweb:0.3 independently,
possibly leading to different nodes running different
versions of the image.
overall progress: 1 out of 1 tasks
1/1: running  
verify: Service converged
The hostname corresponds to the docker container id on node01:
node01:~# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
6c9434b08082        ncweb:0.3           "/tmp/ncweb.sh"     37 minutes ago      Up 37 minutes                           myweb.1.lqiyb34cuxxme2141ahsg8neu

Remaining open points:
  • Is it possible to do a failback, or to limit the number of instances of a service per node?
  • How do I get a load-balancing mechanism for a server application?
    (is a separate load balancer needed?)
  • What happens if the manager fails or is shut down?
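On the first two open points: Swarm's ingress routing mesh already balances requests for a published port across all replicas of a service, so scaling out gives basic load balancing without an extra load balancer. A dry-run sketch of the relevant commands (printed, not executed; the per-node limit flag only exists in newer Docker releases than the one used here):

```shell
#!/bin/sh
# Dry run: print the commands instead of executing them.
SERVICE="myweb"

# More replicas; the routing mesh spreads port 8080 across all of them.
echo "docker service scale $SERVICE=3"

# Cap the number of instances per node (newer Docker releases only).
echo "docker service update --replicas-max-per-node 1 $SERVICE"
```

Manager failure is a separate topic: a single-manager swarm loses control-plane operations when the manager is down, which is why production setups run an odd number of managers.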

Domain Indexes -- 1 : CONTEXT Indexes

Hemant K Chitale - Sun, 2017-12-10 03:17
A CONTEXT Index is one of a class of Domain Indexes.  It is used to build text-retrieval applications.
Instead of a regular B-Tree index that captures the entire text of a VARCHAR2 column as a key, you can build an index that consists of "Tokens", words extracted from a long-ish text in the database column.  Searching the index is based on the CONTAINS operator.

Here is a quick demo:

First I set up a new user for this demo.  I could have used an existing user, but my "HEMANT" user has DBA privileges and I want to demonstrate CONTEXT without such privileges.
The key grant here is the "CTXAPP" role granted to the user.

SQL> create user ctxuser identified by ctxuser default tablespace users;

User created.

SQL> grant create session, create table to ctxuser;

Grant succeeded.

SQL> grant ctxapp to ctxuser;

Grant succeeded.

SQL> alter user ctxuser quota unlimited on users;

User altered.


Next, this user creates the demonstration table and index :

SQL> connect ctxuser/ctxuser
SQL> create table my_text_table
2 (id_column number primary key,
3 my_text varchar2(2000));

Table created.

SQL> create index my_text_index
2 on my_text_table(my_text)
3 indextype is ctxsys.context;

Index created.

SQL> insert into my_text_table
2 values (1,'This is a long piece of text written by Hemant');

1 row created.

SQL> insert into my_text_table
2 values (2,'Another long text to be captured by the index');

1 row created.

SQL> commit;

Commit complete.

SQL> -- this update to the index would be a regular background job
SQL> exec ctx_ddl.sync_index('MY_TEXT_INDEX');

PL/SQL procedure successfully completed.


Note the call to CTX_DDL.SYNC_INDEX.  Unlike a regular B-Tree index, the CONTEXT (Domain) Index is, by default, *not* updated in real-time when DML occurs against the base table.  (Hopefully, in a future post, I may demonstrate real-time updates to a CTXCAT index, which is different from a CONTEXT Index.)
Just remember that this CONTEXT index is not automatically updated.  If new rows are added to the table, they are not visible through the index until a SYNC_INDEX operation is performed.  The SYNC_INDEX can be scheduled as a job via DBMS_JOB or DBMS_SCHEDULER, or specified as part of the CREATE INDEX statement itself.

Now, let me demonstrate usage of the Index with the CONTAINS operator.

SQL> select id_column as id,
2 my_text
3 from my_text_table
4 where contains (my_text, 'written by Hemant') > 0
5 /

This is a long piece of text written by Hemant

SQL> select my_text
2 from my_text_table
3 where contains (my_text, 'Another long') > 0
4 /

Another long text to be captured by the index


Yes, the CONTAINS operator is a bit awkward.  It will be some time before you (or your developers) get used to the syntax !

Besides CTX_DDL, there are a number of other packages in the prebuilt CTXSYS schema that are available :

Since I have created a separate database user, I can also demonstrate the additional objects that are created when the INDEXTYPE IS CTXSYS.CONTEXT.

SQL> select object_type, object_name, to_char(created,'DD-MON-RR HH24:MI') Crtd
2 from user_objects
3 order by object_type, object_name
4 /

OBJECT_TYPE         OBJECT_NAME                    CRTD
------------------- ------------------------------ ------------------------
INDEX SYS_C0017472 10-DEC-17 16:48
INDEX SYS_IL0000045133C00006$$ 10-DEC-17 16:48
INDEX SYS_IL0000045138C00002$$ 10-DEC-17 16:48
INDEX SYS_IOT_TOP_45136 10-DEC-17 16:48
INDEX SYS_IOT_TOP_45142 10-DEC-17 16:48
LOB SYS_LOB0000045133C00006$$ 10-DEC-17 16:48
LOB SYS_LOB0000045138C00002$$ 10-DEC-17 16:48

15 rows selected.

SQL> select table_name, constraint_name, index_name
2 from user_constraints
3 where constraint_type = 'P'
4 order by table_name, constraint_name
5 /

------------------------------ ------------------------------





Yes, that is a large number of database objects besides MY_TEXT_TABLE and MY_TEXT_INDEX.  SYS_C0017472 is, of course, the Primary Key Index on MY_TEXT_TABLE (on the ID_COLUMN column).  The others are interesting.

The "Tokens" I mentioned in the first paragraph are, for this table MY_TEXT_TABLE, stored in DR$MY_TEXT_INDEX$I.

Name Null? Type
----------------------------------------- -------- ----------------------------

SQL> select token_text, token_count
2 from dr$my_text_index$i
3 /

---------------------------------------------------------------- -----------

8 rows selected.


I have been busy in the last few months and have not published much this quarter. Hopefully, I will get more time in the coming weeks to explore CONTEXT, CTXCAT and Domain Indexes.

Categories: DBA Blogs

Steps For Optimizing Your Website

Nilesh Jethwa - Sat, 2017-12-09 16:57

People use search engines like Google when looking for products or brands these days. In fact, 60 percent of consumers use Google search just to find exactly what they want, and more than 80 percent of online searches lead to in-store visits as well as sales. So if you want to receive massive traffic and increase your sales, optimizing your website is the perfect solution.

Optimized Website, A Business Owner’s Priority

Search engine optimization or SEO is not that easy to implement. That is because there are constant changes that you need to keep up with, such as the algorithms of search engines. For instance, the search algorithm of Google constantly evolves due to their goal of providing searchers with the best results they deserve.

It's not enough to hire an SEO professional to do the job today and stop optimizing in the following months or years. Experts understand that this is a non-stop process. Your website needs to deal with the changing algorithms, and as long as they keep evolving, your website also needs to keep up. This is why an optimized website should be a priority for any business owner across the globe.

Read more at  https://www.infocaptor.com/dashboard/what-are-the-steps-to-optimizing-your-website

how to count the number of records in a file

Tom Kyte - Fri, 2017-12-08 16:06
How do I count the number of records in a flat file (.csv/.dat), including the header and trailer records? The code was written in PL/SQL using UTL_FILE. Can you suggest the best method to implement the logic in PL/SQL? And the trailer record sho...
Categories: DBA Blogs

advise given by segment advisor 12c

Tom Kyte - Fri, 2017-12-08 16:06
Hi Tom, My question is, Does the <b>segment advisor in 12cR1</b> advise the use of <b>online table redefinition for tables</b> in both <b>dictionary managed</b> and <b>locally managed tablespace</b>s? In addition, What about the <b>use of segm...
Categories: DBA Blogs

Dynamic fields in External File

Tom Kyte - Fri, 2017-12-08 16:06
Sorry if its a weird requirement. I have an input feed coming in like the following(only two rows per file).. col1, col2, col3,Attribute_1,Attribute_2,Attribute_3,......,Attribute_n col1, col2, col3,Value_1,Value_2,Value_3,......,Value_n ...
Categories: DBA Blogs

Reconstruct sale from audit commands

Tom Kyte - Fri, 2017-12-08 16:06
We are running on, 12.1 and 12.2. SQL> show parameter audit_trail NAME TYPE VALUE ------------------------------------ ----------- ------------------------------ audit_trail ...
Categories: DBA Blogs

Reg: Elapsed time

Tom Kyte - Fri, 2017-12-08 16:06
Hello Team, Quite confused with elapsed time definition. elapsed time is equal to db time. but DBtime= cpu time + wait time. so its correct or not. somewhere i found elapsed time is nothing but wall time. plz comment on above statem...
Categories: DBA Blogs


#UKOUG_TECH17

Yann Neuhaus - Fri, 2017-12-08 15:40

I've received an award for an article I wrote last year, the Most Read Oracle Scene Article in 2016 – Technology. I like to write for Oracle Scene magazine. It is rare today to have a magazine available both in a paper version and publicly on the web. And I must say that, as an author, all the people behind it are doing great work. Thanks to them and thanks for the award. Seeing that what I write helps others is the motivation to keep doing it.

The article is: Is your AWR/Statspack Report Relevant.

This is the occasion to link to other articles I’ve written for the UKOUG Magazine. Statspack, because not everybody has Enterprise Edition with optional Diagnostic Pack: Improving Statspack Experience. One on the parameter I hate the most: CBO Choice between Index & Full Scan: the Good, the Bad & the Ugly parameters. Another about the statistics that should replace this awful parameter: Demystifying WORKLOAD System Statistics Gathering.

Two interesting features in 12c: 12c Online Statistics Gathering & Session Private Statistics. A big change concerning availability: I/O Error on Datafile: Instance Crash or Datafile Offline?. My SLOB performance tests when the ODA Lite came with NVMe SSDs: Performance for All Editions on ODA X6-2S/M. And finally, the article on the great Oracle features for physical transport/clone/migration: From Transportable Tablespaces to Pluggable Databases.

I’ve stolen a few pictures to illustrate this blog post, from UKOUG photo albums, and from friends.


The article on Transportable Tablespaces and Pluggable Databases is actually based on one presentation I did. I was in that big Hall 1 [I realize that some may do a joke on this, but I'm talking about the room] where I look like Ant-Man from the back of the room. But doing live demos is great on this huge screen.

The idea for this presentation came 1 year ago when preparing a 5 minutes talk for Oracle Open World 2016 ACED talks (blog post about this) and this subject is so exciting that I wanted to share more about it. Online PDB clone and relocate will be the features you will like the most when going to Multitenant.

My second presentation was more developer focused, exposing all the join methods that can magically construct your query result in record time or, when a bad join method is chosen, leave your 2-second query still running after one hour. I explained the join methods in 3 ways: theory with .ppt animations, execution plans with dbms_xplan, and live queries with SQL Monitor, showing the different operations and A-Rows increasing in real time.

I was also co-presenting in a roundtable on Oracle Cloud PaaS, sharing my little experience on DBaaS. Everybody talks about Cloud and it is good to talk about problems encountered and how to deal with it.

Round tables

Despite the hard concurrency of good sessions, I also attended a few round tables. These conferences are a good opportunity to meet and share with other users and product managers. Oracle is a big company, and we sometimes think that they care only about their biggest customers, but that is not entirely accurate. There are several product managers who really listen to customers. A great discussion was about something that has slowly changed over a few years: the documentation, which was really precise in the past – explaining the When, Why, and How – is now more vague.


Geeking at the Oracle Developer Community lounge, eating something decent near the conference center, drinking while discussing technical stuff, the ACE dinner (and the first ACED briefing outside the US), posing with award winners, and the amazing chocolate tasting…



Cet article #UKOUG_TECH17 est apparu en premier sur Blog dbi services.

How do I create a responsive Rich Text Editor in Oracle APEX?

Joel Kallman - Fri, 2017-12-08 13:40
I was in a video call this morning with a great customer from England (and by the way, this customer is in the process of transforming the healthcare industry across the UK).  They asked me a very simple question:

How do I create a responsive Rich Text Editor item on a page?

Simple question and answer, right?  Well, you'd be wrong.  While we pride ourselves on the responsive user interfaces that you can easily create with Oracle Application Express (APEX), unfortunately, the item type of Rich Text Editor is not responsive, out of the box.

So - I did what all smart people do, and I reached out to the Oracle APEX Development team, in this case, the ever-intelligent Carsten Czarski.  And in a few minutes, he showed me exactly what I needed to do.

  1. Open up Application Builder, and in Page Designer, edit the page with the Rich Text Editor item.  In my example, my Rich Text Editor page item name is P3_RESUME.
  2. Navigate to the attributes of the Rich Text Editor item, and in the Advanced section, enter the following code in the "JavaScript Initialization Code" attribute:
    function (o) {
      o.width = $("#P3_RESUME").closest(".t-Form-inputContainer").width() - 5;
      o.height = 300; // Specify your desired item height, in pixels
      return o;
    }
    This code determines the width of the region container of the item, subtracts 5, and returns the object initialized to this size.  This will take care of the Rich Text Editor when the page is initially displayed. But it won't handle the case when the browser is resized. To handle that case, we'll need to add a dynamic action.
  3. Click the Dynamic Actions sub-tab in Page designer (the lightning bolt)
  4. Select Events in the upper-left, right-click your mouse and choose "Create Dynamic Action".
  5. In the attributes, enter "Resize" for Name, and select "Resize" for the Event.
  6. Select the True action of the dynamic action (it should be "Show").  Change the Action to "Execute JavaScript Code".
  7. In the Code attribute, enter the code:
    CKEDITOR.instances.P3_RESUME.resize( $("#P3_RESUME").closest(".t-Form-inputContainer").width() - 5, 300);
    This is an absolute reference to the Rich Text Editor item on the page, named P3_RESUME. And like the code before, this will determine what the width is of the container of the item, subtract 5 from it, and invoke the resize() method of the Rich Text Editor (CK Editor) element.
That's all there is to it!

Obviously, this item type (like all others) should be responsive, out of the box.  And Carsten is looking at this for the next version of APEX.  In the meantime, if you're using Universal Theme with Oracle APEX 5.1, all it takes is a tiny amount of JavaScript to get a responsive Rich Text Editor.

Keyword Rank Tracking Tools

Nilesh Jethwa - Fri, 2017-12-08 13:34

An important element of search engine optimization (SEO) is choosing the right keywords. With the right keywords, you can make your content rank on search engines. But the work doesn't stop after ranking; you still need to track your keyword's position in search results. That way, you can obtain helpful information that will guide you in keeping your SEO efforts successful.

Why Check Keyword Ranking Regularly

One of the main reasons why you need to check your keyword ranking is to identify target keywords. Any SEO professional or blogger should understand how important it is for their content marketing strategies. In fact, a common mistake committed by website administrators and bloggers is writing and publishing articles that don’t target any keywords. It’s like aiming your arrow at something that you are not sure of.

Here are some of the best tools you can take advantage of when tracking your keyword rank:

SEMRUSH. When using this keyword rank tracking tool, it takes 10 to 15 minutes to determine which keywords or key phrases to use. Whether you are a webmaster or an SEO specialist, this tool will help you analyze data for your clients and website. It also offers useful features such as in-depth reports, keyword grouping, and competitor tracking.

Google Rank Checker. This is a premium online tool that you can use for free. It will help you track keyword positioning while making sure that you appear in search results. To use Google Rank Checker, all you need to do is enter the keywords that you want to check as well as your domain name. After putting in the details, you can view the keyword rank.


Read more at https://www.infocaptor.com/dashboard/best-tools-for-keyword-rank-tracking

Reverse engineer existing Oracle tables to Quick SQL

Dimitri Gielis - Fri, 2017-12-08 13:21
If you haven't heard about Oracle Quick SQL, it's time to read about it, as it's something you already have without knowing it (it's a packaged app in Oracle APEX) and I believe you should start using it :)

Quick SQL enables you to rapidly design and prototype data models using a markdown-like shorthand syntax that expands to standards-based Oracle SQL. You can easily create master detail relationships, check constraints, and even generate sample data.
In my blog post Create the Oracle database objects I go over the history how I created database objects and why I think Quick SQL is great and why I use it.

I guess most people typically use Quick SQL at the start of a new project, as it's the quickest way to create your data model and Oracle database objects. That is my primary use case too, but I started to use Quick SQL even on projects where database objects already exist.

In the project I'm currently involved in, the data model was generated by another tool, but as we iterate through the project, tables change, columns get renamed and added, a row version was requested, triggers need to be made Oracle APEX aware...

Now we could make those changes manually, but I thought it made much more sense to create the data model in Quick SQL and use the features that come with it. By clicking a checkbox we can include a row version, and Quick SQL generates the triggers automatically in an APEX-aware form; we can generate as much sample data as we want by adding /insert, and we can use all the other features that come with Quick SQL. For example, when you want to include a history table in the future, it's just another checkbox to click.

It's also easy to check-in the Quick SQL script into source control, together with the generated DDL.
If changes need to be made, we can adapt the model in Quick SQL, generate the DDL again, and see the changes immediately. It would be nice if Quick SQL could generate the ALTER statements too, but that's not the case yet. However, it's easy enough to see what changed by comparing the scripts in source control.

If you also want to reverse engineer an existing model into Quick SQL, here's a script that gives you a head start generating the markdown style format.

I tried the script on the Quick SQL data model itself - the result you see below:

Hopefully you see the benefit of using Quick SQL in existing projects too, and the script helps you get there. Quick SQL also gets frequent updates - in the upcoming release (17.3.4), which is already online, you can add a Security Group ID to every table (to make your app multi-tenant) and you can rename the audit columns to your own naming conventions.
Categories: Development

