Feed aggregator

Oracle Empowers Retailers to Detect Omnichannel Theft with Embedded Science

Oracle Press Releases - Tue, 2018-06-12 08:00
Press Release
Oracle Empowers Retailers to Detect Omnichannel Theft with Embedded Science New Innovations in Oracle Retail XBRi Cloud Service Reveal New Sources of Omnichannel Risk and Provide Investigators with Automated Alerts and Actionable Insights

Redwood Shores, Calif.—Jun 12, 2018

Today Oracle announced enhancements to Oracle Retail XBRi Loss Prevention Cloud Service that help retailers quickly identify and investigate fraudulent activities across store and online transactions. With embedded science, Oracle Retail XBRi can detect new data anomalies, alert loss prevention analysts to potential risks, and drive quick resolution by providing the transactional insight needed to investigate. The Oracle Retail XBRi enhancements also include embedded science developed over 20 years of research and development in retail, including more than 4,000 reporting metrics and attributes representing the experiences and best practices of global retailers.

“The opportunity for retail theft grows as retailers reach to meet customer expectations and struggle to manage individual channels of engagement,” said Jeff Warren, vice president, Oracle Retail. “The updates to Oracle Retail XBRi provide customers with the tools to protect their margins and are another example of the continued value we aim to deliver through cloud services.”

“This new generation of cloud services accelerates the path to ROI within months of deployment,” said Chris Sarne, senior director of omnichannel strategy, Oracle Retail.  “The result is a smart, holistic approach to preventing fraud and saving money that scales to grow with your business.”

Oracle Retail XBRi Loss Prevention Cloud Service provides the retail community with:
  • Agility, Security and a Path to Lower Total Cost of Ownership. One of the goals of using embedded science is to clarify and lower the total cost of installation, set up, and maintenance of solutions by taking advantage of iterative cloud updates. The cloud-based solution offers even greater precision, building on existing knowledge and adding new features and analytics automatically.
  • Prompt Investigation. Upon deployment, the solution immediately identifies exceptions and escalates them to analysts and investigators without requiring manual intervention to initiate the process.
  • Comprehensive Discovery. The cloud service employs embedded science to quickly surface anomalous behavior and potential cases. With new insight to emerging fraudulent activities comes opportunities to help store managers train their associates to detect and avoid potential loss before it happens.
  • Intuitive Configuration. The Oracle Retail XBRi interface allows analysts to easily configure reports that reflect the unique needs of omnichannel brands without incurring additional costs or relying on IT specialists.
  • POS Integration that Streamlines Investigation. A heightened degree of integration between Oracle Retail XBRi Cloud Service and Oracle Retail Xstore Point-of-Service further enhances the ability of XBRi’s embedded science to pinpoint new sources of risk and deliver purpose-built reports that streamline and support investigative activities while leveraging the best practices built into the Oracle Retail portfolio.

Oracle will continue to add new knowledge from the Oracle Retail customer community to combat new patterns of fraud that emerge in the complex omnichannel retail environment.

Contact Info
Matt Torres
Oracle
4155951584
matt.torres@oracle.com
About Oracle Retail

Oracle provides retailers with a complete, open, and integrated suite of best-of-breed business applications, cloud services, and hardware that are engineered to work together and empower commerce. Leading fashion, grocery, and specialty retailers use Oracle solutions to anticipate market changes, simplify operations and inspire authentic brand interactions. For more information, visit our website at www.oracle.com/retail.

About Oracle

The Oracle Cloud delivers hundreds of SaaS applications and enterprise-class PaaS and IaaS services to customers in more than 195 countries and territories while processing 55 billion transactions a day. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

The Medrec 12.2.1.3 Datamodel DDL

Darwin IT - Tue, 2018-06-12 06:57
Next week I deliver the training 'Weblogic 12c Tuning and Troubleshooting'. One of the labs is to have the sample application MedRec generate Stuck Threads, so that the students can investigate and try to solve that, or actually configure the server so that it will cause an automatic restart.

To do so I have to deliberately break the application, so I need the source. I have an earlier version of the application, but not the sources, so I have to go to the latest MedRec. I actually like that, because it looks more modern.

The MedRec application is available if you install WebLogic with samples.

You can run the script demo_oracle.ddl against the database; it can be found in:
$WL_HOME/samples/server/examples/src/examples/common/ddl

The medrec.ear can be found at:
$WL_HOME/samples/server/medrec/dist/standalone

I ran into quite some confusion and frustration, but found that this combination, although from the same samples folder, does not work. Not only does this medrec.ear expect the tables in plural (PRESCRIPTIONS) where the script creates them in singular (PRESCRIPTION), it also expects a separate DRUGS table with a foreign key column DRUG_ID in PRESCRIPTIONS, and a few other changes.

I had a version of the scripts from earlier versions of WebLogic's MedRec. Based on the exceptions in the server log, I refactored/reverse engineered the scripts.

Using those I could successfully log in and view the Patient records of fred@golf.com:


First we need to create a schema using createDBUserMedrec.sql:
prompt Create user medrec with connect, resource roles;
grant connect, resource to medrec identified by welcome1;
alter user medrec
default tablespace users
temporary tablespace temp;
alter user medrec quota unlimited on users;

Drop tables (if needed) using medrec_dropall.sql:
DROP TABLE "MEDREC"."ADMINISTRATORS";

DROP TABLE "MEDREC"."OPENJPA_SEQUENCE_TABLE";

DROP TABLE "MEDREC"."PATIENTS";

DROP TABLE "MEDREC"."PATIENTS_RECORDS";

DROP TABLE "MEDREC"."PHYSICIANS";

DROP TABLE "MEDREC"."PRESCRIPTIONS";

DROP TABLE "MEDREC"."DRUGS";

DROP TABLE "MEDREC"."RECORDS";

DROP TABLE "MEDREC"."RECORDS_PRESCRIPTIONS";

Create the tables using medrec_tables.sql:
CREATE TABLE "MEDREC"."ADMINISTRATORS" (
"ID" INTEGER NOT NULL,
"EMAIL" VARCHAR(255),
"PASSWORD" VARCHAR(255),
"USERNAME" VARCHAR(255),
"VERSION" INTEGER,
PRIMARY KEY ( "ID" ) );


CREATE TABLE "MEDREC"."OPENJPA_SEQUENCE_TABLE" (
"ID" SMALLINT NOT NULL,
"SEQUENCE_VALUE" INTEGER,
PRIMARY KEY ( "ID" ) );


CREATE TABLE "MEDREC"."PATIENTS" (
"ID" INTEGER NOT NULL,
"EMAIL" VARCHAR(255),
"PASSWORD" VARCHAR(255),
"USERNAME" VARCHAR(255),
"PHONE" VARCHAR(255),
"DOB" TIMESTAMP,
"GENDER" VARCHAR(20),
"SSN" VARCHAR(255),
"STATUS" VARCHAR(20),
"VERSION" INTEGER,
"FIRSTNAME" VARCHAR(255),
"LASTNAME" VARCHAR(255),
"MIDDLENAME" VARCHAR(255),
"CITY" VARCHAR(255),
"COUNTRY" VARCHAR(255),
"STATE" VARCHAR(255),
"STREET1" VARCHAR(255),
"STREET2" VARCHAR(255),
"ZIP" VARCHAR(255),
PRIMARY KEY ( "ID" ) );


CREATE TABLE "MEDREC"."PATIENTS_RECORDS" (
"PATIENT_ID" INTEGER,
"RECORDS_ID" INTEGER );


CREATE TABLE "MEDREC"."PHYSICIANS" (
"ID" INTEGER NOT NULL,
"EMAIL" VARCHAR(255),
"PASSWORD" VARCHAR(255),
"USERNAME" VARCHAR(255),
"PHONE" VARCHAR(255),
"VERSION" INTEGER,
"FIRSTNAME" VARCHAR(255),
"LASTNAME" VARCHAR(255),
"MIDDLENAME" VARCHAR(255),
PRIMARY KEY ( "ID" ) );


CREATE TABLE "MEDREC"."DRUGS"
( "ID" NUMBER(*,0) NOT NULL ENABLE,
"NAME" VARCHAR2(255 BYTE),
"FREQUENCY" VARCHAR2(255 BYTE),
"PRICE" NUMBER(10,2),
"VERSION" NUMBER(*,0),
PRIMARY KEY ( "ID" ) );


CREATE TABLE "MEDREC"."PRESCRIPTIONS" (
"ID" INTEGER NOT NULL,
"DATE_PRESCRIBED" TIMESTAMP,
"FREQUENCY" VARCHAR(255),
"INSTRUCTIONS" VARCHAR(255),
"REFILLS_REMAINING" INTEGER,
"VERSION" INTEGER,
"DOSAGE" NUMBER,
"DRUG_ID" NUMBER,
PRIMARY KEY ( "ID" ) );

CREATE TABLE "MEDREC"."RECORDS" (
"ID" INTEGER NOT NULL,
"CREATE_DATE" TIMESTAMP,
"RECORDDATE" TIMESTAMP,
"DIAGNOSIS" VARCHAR(255),
"NOTES" VARCHAR(255),
"SYMPTOMS" VARCHAR(255),
"VERSION" INTEGER,
"PATIENT_ID" INTEGER NOT NULL,
"PHYSICIAN_ID" INTEGER NOT NULL,
"DIASTOLIC_BLOOD_PRESSURE" INTEGER,
"HEIGHT" INTEGER,
"PULSE" INTEGER,
"SYSTOLIC_BLOOD_PRESSURE" INTEGER,
"TEMPERATURE" INTEGER,
"WEIGHT" INTEGER,
PRIMARY KEY ( "ID" ) );

CREATE TABLE "MEDREC"."RECORDS_PRESCRIPTIONS" (
"RECORD_ID" INTEGER,
"PRESCRIPTIONS_ID" INTEGER );


Insert data using medrec_data.sql:
INSERT INTO "MEDREC"."ADMINISTRATORS" (
"ID", "EMAIL", "PASSWORD", "USERNAME", "VERSION" )
VALUES (
201,'admin@avitek.com','weblogic','admin@avitek.com',1
);

COMMIT;

INSERT INTO "MEDREC"."OPENJPA_SEQUENCE_TABLE" (
"ID", "SEQUENCE_VALUE" )
VALUES (
0,251
);

COMMIT;

INSERT INTO "MEDREC"."PATIENTS"
VALUES (
51,'page@fish.com','weblogic','page@fish.com','4151234564',
TIMESTAMP '1972-03-18 00:00:00','MALE','888888888','APPROVED',3,
'Page','Trout','A','Ponte Verde','United States','FL',
'235 Montgomery St','Suite 15','32301'
);
INSERT INTO "MEDREC"."PATIENTS"
VALUES (
52,'fred@golf.com','weblogic','fred@golf.com','4151234564',
TIMESTAMP '1965-04-26 00:00:00','MALE','123456789','APPROVED',3,
'Fred','Winner','I','San Francisco','United States','CA',
'1224 Post St','Suite 100','94115'
);
INSERT INTO "MEDREC"."PATIENTS"
VALUES (
53,'volley@ball.com','weblogic','volley@ball.com','4151234564',
TIMESTAMP '1971-09-17 00:00:00','MALE','333333333','APPROVED',3,
'Gabrielle','Spiker','H','San Francisco','United States','CA',
'1224 Post St','Suite 100','94115'
);
INSERT INTO "MEDREC"."PATIENTS"
VALUES (
54,'charlie@star.com','weblogic','charlie@star.com','4151234564',
TIMESTAMP '1973-11-29 00:00:00','MALE','444444444','REGISTERED',3,
'Charlie','Florida','E','Ponte Verde','United States','FL',
'235 Montgomery St','Suite 15','32301'
);
INSERT INTO "MEDREC"."PATIENTS"
VALUES (
55,'larry@bball.com','weblogic','larry@bball.com','4151234564',
TIMESTAMP '1959-03-13 00:00:00','MALE','777777777','APPROVED',3,
'Larry','Parrot','J','San Francisco','United States','CA',
'1224 Post St','Suite 100','94115'
);

COMMIT;

INSERT INTO "MEDREC"."PHYSICIANS" (
"ID", "EMAIL", "PASSWORD", "USERNAME", "PHONE", "VERSION", "FIRSTNAME", "LASTNAME", "MIDDLENAME" )
VALUES (
1,'mary@md.com','weblogic','mary@md.com','1234567812',4,'Mary','Oblige','J'
);

COMMIT;

Insert into "MEDREC"."DRUGS" (ID,NAME,FREQUENCY,PRICE,VERSION) values (101,'Advil','1/4hrs',1.0, 2);
Insert into "MEDREC"."DRUGS" (ID,NAME,FREQUENCY,PRICE,VERSION) values (102,'Codeine','1/6hrs',2.5,2);
Insert into "MEDREC"."DRUGS" (ID,NAME,FREQUENCY,PRICE,VERSION) values (103,'Drixoral','1tspn/4hrs',3.75,2);

COMMIT;

Insert into "MEDREC"."PRESCRIPTIONS" (ID,DATE_PRESCRIBED,FREQUENCY,INSTRUCTIONS,REFILLS_REMAINING,VERSION,DOSAGE,DRUG_ID) values (101,to_timestamp('18-JUL-99 12.00.00.000000000 AM','DD-MON-RR HH.MI.SSXFF AM'),'1/4hrs',null,0,2,1,101);
Insert into "MEDREC"."PRESCRIPTIONS" (ID,DATE_PRESCRIBED,FREQUENCY,INSTRUCTIONS,REFILLS_REMAINING,VERSION,DOSAGE,DRUG_ID) values (102,to_timestamp('30-JUN-93 12.00.00.000000000 AM','DD-MON-RR HH.MI.SSXFF AM'),'1/6hrs',null,1,2,1,102);
Insert into "MEDREC"."PRESCRIPTIONS" (ID,DATE_PRESCRIBED,FREQUENCY,INSTRUCTIONS,REFILLS_REMAINING,VERSION,DOSAGE,DRUG_ID) values (103,to_timestamp('18-JUL-99 12.00.00.000000000 AM','DD-MON-RR HH.MI.SSXFF AM'),'1tspn/4hrs',null,0,2,1,103);


COMMIT;

INSERT INTO "MEDREC"."RECORDS"
VALUES (
151,TIMESTAMP '1991-05-01 00:00:00',TIMESTAMP '1991-05-01 00:00:00','Allergic to coffee. Drink tea.',
'','Drowsy all day.',2,51,1,85,70,75,125,98,180
);
INSERT INTO "MEDREC"."RECORDS"
VALUES (
152,TIMESTAMP '1991-05-01 00:00:00',TIMESTAMP '1991-05-01 00:00:00','Light cast needed.',
'At least 20 sprained ankles since 15.','Sprained ankle.',
2,53,1,85,70,75,125,98,180
);
INSERT INTO "MEDREC"."RECORDS"
VALUES (
153,TIMESTAMP '1989-08-05 00:00:00',TIMESTAMP '1989-08-05 00:00:00','Severely sprained interior ligament. Surgery required.','Cast will be necessary before and after.','Twisted knee while playing soccer.',2,52,1,85,70,75,125,98,180
);
INSERT INTO "MEDREC"."RECORDS"
VALUES (
154,TIMESTAMP '1993-06-30 00:00:00',TIMESTAMP '1993-06-30 00:00:00','Common cold. Prescribed codiene cough syrup.','Call back if not better in 10 days.','Sneezing, coughing, stuffy head.',2,52,1,85,70,75,125,98,180
);
INSERT INTO "MEDREC"."RECORDS"
VALUES (
155,TIMESTAMP '1999-07-18 00:00:00',TIMESTAMP '1999-07-18 00:00:00','Mild stroke. Aspirin advised.','Patient needs to stop smoking.','Complains about chest pain.',2,52,1,85,70,75,125,98,180
);
INSERT INTO "MEDREC"."RECORDS"
VALUES (
156,TIMESTAMP '1991-05-01 00:00:00',TIMESTAMP '1991-05-01 00:00:00','Patient is crazy. Recommend politics.','','Overjoyed with everything.',2,55,1,85,70,75,125,98,180
);

COMMIT;

INSERT INTO "MEDREC"."RECORDS_PRESCRIPTIONS"
VALUES (
154,102
);
INSERT INTO "MEDREC"."RECORDS_PRESCRIPTIONS"
VALUES (
155,101
);
INSERT INTO "MEDREC"."RECORDS_PRESCRIPTIONS"
VALUES (
155,103
);

COMMIT;
To install the datasource you can use this WLST script, createDataSource.py:
#############################################################################
# Create DataSource for WLS 12c Tuning & Troubleshooting workshop
#
# @author Martien van den Akker, Darwin-IT Professionals
# @version 1.1, 2018-01-22
#
#############################################################################
# Modify these values as necessary
import os,sys, traceback
scriptName = sys.argv[0]
adminHost=os.environ["ADM_HOST"]
adminPort=os.environ["ADM_PORT"]
admServerUrl = 't3://'+adminHost+':'+adminPort
ttServerName=os.environ["TTSVR_NAME"]
adminUser='weblogic'
adminPwd='welcome1'
#
dsName = 'MedRecGlobalDataSourceXA'
dsJNDIName = 'jdbc/MedRecGlobalDataSourceXA'
initialCapacity = 5
maxCapacity = 10
capacityIncrement = 1
driverName = 'oracle.jdbc.xa.client.OracleXADataSource'
dbUrl = 'jdbc:oracle:thin:@darlin-vce.darwin-it.local:1521:orcl'
dbUser = 'medrec'
dbPassword = 'welcome1'
#
def createDataSource(dsName, dsJNDIName, initialCapacity, maxCapacity, capacityIncrement, dbUser, dbPassword, dbUrl, targetSvrName):
  # Check if data source already exists
  try:
    cd('/JDBCSystemResources/' + dsName)
    print 'The JDBC Data Source ' + dsName + ' already exists.'
    jdbcSystemResource=cmo
  except WLSTException:
    print 'Creating new JDBC Data Source named ' + dsName + '.'
    edit()
    startEdit()
    cd('/')
    # Create data source
    jdbcSystemResource = create(dsName, 'JDBCSystemResource')
    jdbcResource = jdbcSystemResource.getJDBCResource()
    jdbcResource.setName(dsName)
    # Set JNDI name
    jdbcResourceParameters = jdbcResource.getJDBCDataSourceParams()
    jdbcResourceParameters.setJNDINames([dsJNDIName])
    jdbcResourceParameters.setGlobalTransactionsProtocol('TwoPhaseCommit')
    # Create connection pool
    connectionPool = jdbcResource.getJDBCConnectionPoolParams()
    connectionPool.setInitialCapacity(initialCapacity)
    connectionPool.setMaxCapacity(maxCapacity)
    connectionPool.setCapacityIncrement(capacityIncrement)
    # Create driver settings
    driver = jdbcResource.getJDBCDriverParams()
    driver.setDriverName(driverName)
    driver.setUrl(dbUrl)
    driver.setPassword(dbPassword)
    driverProperties = driver.getProperties()
    userProperty = driverProperties.createProperty('user')
    userProperty.setValue(dbUser)
    # Set data source target
    targetServer = getMBean('/Servers/' + targetSvrName)
    jdbcSystemResource.addTarget(targetServer)
    # Activate changes
    save()
    activate(block='true')
    print 'Data Source created successfully.'
  return jdbcSystemResource

def main():
  # Connect to administration server
  try:
    connect(adminUser, adminPwd, admServerUrl)
    #
    createDataSource(dsName, dsJNDIName, initialCapacity, maxCapacity, capacityIncrement, dbUser, dbPassword, dbUrl, ttServerName)
    #
    print("\nExiting...")
    exit()
  except:
    apply(traceback.print_exception, sys.exc_info())
    exit(exitcode=1)
#call main()
main()

MedRec also needs an administrative user; create it using createUser.py:
import os
#
print 'starting the script ....'
#
adminHost=os.environ["ADM_HOST"]
adminPort=os.environ["ADM_PORT"]
admServerUrl = 't3://'+adminHost+':'+adminPort
#
adminUser='weblogic'
adminPwd='welcome1'
#
realmName = 'myrealm'
#
def addUser(realm,username,password,description):
  print 'Prepare User',username,'...'
  if realm is not None:
    authenticator = realm.lookupAuthenticationProvider("DefaultAuthenticator")
    if authenticator.userExists(username)==1:
      print '[Warning]User',username,'already exists.'
    else:
      authenticator.createUser(username,password,description)
      print '[INFO]User',username,'has been created successfully'


connect(adminUser,adminPwd,admServerUrl)

security=getMBean('/').getSecurityConfiguration()
realm=security.lookupRealm(realmName)
addUser(realm,'administrator','administrator123','MedRec Administrator')

disconnect()


Deploy the MedRec application using deployMedRec.py:
#############################################################################
# Deploy MedRec for WLS 12c Tuning & Troubleshooting workshop
#
# @author Martien van den Akker, Darwin-IT Professionals
# @version 1.0, 2018-01-22
#
#############################################################################
# Modify these values as necessary
import os,sys, traceback
scriptName = sys.argv[0]
adminHost=os.environ["ADM_HOST"]
adminPort=os.environ["ADM_PORT"]
admServerUrl = 't3://'+adminHost+':'+adminPort
ttServerName=os.environ["TTSVR_NAME"]
adminUser='weblogic'
adminPwd='welcome1'
#
appName = 'medrec'
appSource = '../ear/medrec.ear'
#
# Deploy the application
def deployApplication(appName, appSource, targetServerName):
  print 'Deploying application ' + appName + '.'
  progress = deploy(appName=appName,path=appSource,targets=targetServerName)
  # Wait for deploy to complete
  while progress.isRunning():
    pass
  print 'Application ' + appName + ' deployed.'
#
#
def main():
  # Connect to administration server
  try:
    connect(adminUser, adminPwd, admServerUrl)
    #
    deployApplication(appName, appSource, ttServerName)
    #
    print("\nExiting...")
    exit()
  except:
    apply(traceback.print_exception, sys.exc_info())
    exit(exitcode=1)
#call main()
main()


Quickly setup a persistent React application

Amis Blog - Tue, 2018-06-12 03:20

After having recently picked up the React framework, I figured I'd share how I quickly set up my projects to implement persistence. This way you spend minimal time setting up the skeleton of your application, letting you focus on adding functionality. This is done by combining a few tools. Let's take a closer look.

Should you be interested in the code itself, go ahead and get it here.

Tools

NPM/Yarn

Be sure to install a package manager like npm or Yarn. We will use it to install and run some of the other tools.

Create-react-app
If you ever worked with React you undoubtedly came across this CLI tool. It is useful for providing the boilerplate code you need for the front-end of your application. Take a look at the documentation to get a sense of how to use it.

FeathersJS
This is a framework built on top of Express. It provides another CLI tool we can use to quickly set up the boilerplate code for building a REST API. What makes it powerful is the concept of hooks (see this link for more) as well as the possibility to model our entities in a JavaScript file. After starting this service we can manipulate data in our database using REST calls.

MongoDB
This is the database we will use – be sure to download, install and start it. It’s a NoSQL database and uses a JSON document structure.

Steps
Create a folder for your application with <application name>. I like to create two separate folders in here: one 'frontend', the other 'backend'. Usually I will add the source folder to a version control tool like Git. If you're dealing with a larger application, however, you might want to initiate two separate repositories for your front- and backend. Using Visual Studio Code, my file structure looks like this:

[Screenshot: project folder structure in Visual Studio Code]

Fire up your terminal. Install create-react-app globally.

npm install -g create-react-app

Navigate to your frontend folder. Initialize your first application.

cd frontend
create-react-app <application name>

Note how the CLI will add another folder with the application name into your frontend folder. Since we already created a root folder, copy the content of this folder and move it one level higher. Now the structure looks like this:

[Screenshot: frontend folder structure after moving the generated files up one level]

Install FeathersJS CLI globally.

npm install -g @feathersjs/cli

Navigate to the backend folder of your project and generate a Feathers application:

cd ../backend
feathers generate app

The CLI will prompt for a project name. Follow along with these settings:

[Screenshot: feathers generate app prompts and chosen settings]

Alright, now it’s time to generate a service so we can start making REST calls. In true Oracle fashion, let’s make a service for the entity employees. While in the backend folder, run:

feathers generate services

Follow along:

[Screenshot: feathers generate service prompts and chosen settings]

Navigate to the employees.model.js file. The file structure of the backend folder should look like this:

[Screenshot: backend folder structure showing employees.model.js]

In this file we can specify what employees look like. For now, let’s just give them the property name of type String and let’s make it required. Also, let’s delete the timestamps section to keep things clean. It should look like this:

//employees.model.js

module.exports = function (app) {
  const mongooseClient = app.get('mongooseClient');
  const { Schema } = mongooseClient;
  const employees = new Schema({
    name: { type: String, required: true }
  });

  return mongooseClient.model('employees', employees);
};

Great. We are almost good to go. First fire up MongoDB. Windows users see here. Unix users:

sudo service mongod start

Next up navigate to the backend folder. Using the terminal, give the following command:

npm start

You should see a message saying: Feathers application started on http://localhost:3030

The service we created is available on http://localhost:3030/employees. Navigate there. You should see JSON data – though right now all you’ll probably see is metadata. By using this address we can make REST calls to see and manipulate the data. You could use curl commands in the terminal, simply use the browser, or install a tool like Postman.

Next up we need a way to access this service in our frontend section of the application. For this purpose, let’s create a folder in the location: frontend/src/api. In here, create the file Client.js.

This file should contain the following code:

//Client.js

import io from 'socket.io-client';
import feathers from '@feathersjs/client';

const socket = io('http://localhost:3030');
const client = feathers();

client.configure(feathers.socketio(socket));
client.configure(feathers.authentication({
  storage: window.localStorage
}));

export default client;

Make sure the libraries we refer to are included in the package.json of our frontend. The package.json should look like this:

{
  "name": "react-persistent-start",
  "version": "0.1.0",
  "private": true,
  "dependencies": {
    "@feathersjs/client": "^3.4.4",
    "react": "^16.4.0",
    "react-dom": "^16.4.0",
    "react-scripts": "1.1.4",
    "socket.io-client": "^1.7.3"
  },
  "scripts": {
    "start": "react-scripts start",
    "build": "react-scripts build",
    "test": "react-scripts test --env=jsdom",
    "eject": "react-scripts eject"
  }
}

Be sure to run npm install after updating this file.

That’s basically all we need to perform CRUD operations from our front-end section of the application. To make the example a little more vivid, let’s implement a button into the App.js page which was automatically generated by the create-react-app CLI.

First, I’ll get rid of the boilerplate code in there and import the client we just created. Next, create a button that calls the method handleClick() when the React onClick event fires. In here we will call the client, let it call the service and perform a create operation for us. The code will look like this:

//App.js

import React, { Component } from 'react';

import client from './api/Client';
import './App.css';

class App extends Component {
  handleClick() {
    client.service('employees').create({
      name: "Nathan"
    });
  }

  render() {
    return (
      <div className="App">
        <button onClick={this.handleClick}>Add employee</button>
      </div>
    );
  }
}

export default App;

Navigate to the frontend section in the terminal and make sure the node modules are installed correctly by running npm install. Now use the command npm start. Navigate to http://localhost:3030/employees and verify there is no employee here. On http://localhost:3000 our page should be available and show a single button. Click the button and refresh the http://localhost:3030/employees page. You can see we have added an employee with the same name as yours truly.

[Screenshot: JSON listing of the employees service showing the new record]

(I use a Chrome plugin called JSON viewer to get the layout as shown in the picture)

Using the code provided in the handleClick() method you can expand upon this. See this link for all the calls you can make in order to provide CRUD functionality using FeathersJS.

That’s basically all you need to start adding functionality to your persistent React application.

Happy developing! The code is available here.

The post Quickly setup a persistent React application appeared first on AMIS Oracle and Java Blog.

Machine Learning Applied - TensorFlow Chatbot UI with Oracle JET Custom Component

Andrejus Baranovski - Mon, 2018-06-11 16:22
This post is based on Oracle Code 2018 Shenzhen, Warsaw and Berlin talks. View presentation on SlideShare:


In my previous post I outlined how to build the chatbot backend with TensorFlow - Classification - Machine Learning Chatbot with TensorFlow. Today's post is the next step - I will explain how to build a custom UI on top of the TensorFlow chatbot with Oracle JET.

You can download complete source code (which includes TensorFlow part, backend for chatbot context processing and JET custom component chatbot UI) from my GitHub repository.

Here is solution architecture snapshot:


TensorFlow is used for the machine learning and text classification task. Flask allows external clients to communicate with TensorFlow through REST. Contextual chatbot conversation processing is implemented in the Node.js backend; communication with the Oracle JET client is handled by Socket.io.

The key point in the chatbot implementation is constructing the right data structure for the machine training process: the more accurate the learning, the better the classification results afterwards. Chatbot training data can come in the form of JSON. Training data quality can be measured by the overlap between intents and sample sentences: the more overlap you have, the weaker the machine learning output and the less accurate the classification. Training data can also contain information which is not used directly by TensorFlow - we can include intent context processing in the same structure, and it will be used by the context processing algorithm. Sample JSON structure for training data:

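Purely as an illustration (the field names here are an assumption for this sketch, not taken from the author's project), intent-based training data of this kind is commonly organised as a list of intents, each with a tag, sample patterns, responses and an optional context marker that the context-processing step can use later. Expressed as a Python dict that serialises to the equivalent JSON:

import json

# Hypothetical training-data layout - field names are assumptions for
# illustration only.
training_data = {
    "intents": [
        {
            "tag": "greeting",
            "patterns": ["Hi", "Hello", "Good morning"],
            "responses": ["Hello, how can I help you?"],
            "context_set": ""
        },
        {
            "tag": "order_status",
            "patterns": ["Where is my order?", "Track my order"],
            "responses": ["Please give me your order number."],
            "context_set": "awaiting_order_number"
        }
    ]
}

print(json.dumps(training_data, indent=2))  # the same structure as JSON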

Accurate classification by TensorFlow is only one piece of the chatbot functionality. We also need to maintain conversation context. This is achieved in the Node.js backend with a custom algorithm. In my example, when context is not set, TensorFlow is called to classify the statement and produce intent probabilities. There might be multiple intents classified for the same sentence - TensorFlow will return multiple probabilities. It is up to you either to always choose the intent with the top probability or to ask the user to choose. Communication back to the client is handled through Socket.io by calling the socket.emit function:

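As a schematic sketch only (not the author's Node.js implementation), picking the classified intents above a probability threshold and sorting them could look like this:

# Schematic sketch: 'probabilities' stands for the classifier output for one
# sentence, aligned with the list of known intent tags. The threshold value
# is an assumption for illustration.
ERROR_THRESHOLD = 0.25

def rank_intents(probabilities, intent_tags, threshold=ERROR_THRESHOLD):
    # Keep intents above the threshold, sorted by descending probability.
    ranked = [(tag, p) for tag, p in zip(intent_tags, probabilities) if p > threshold]
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)

print(rank_intents([0.05, 0.7, 0.3], ["greeting", "order_status", "goodbye"]))
# -> [('order_status', 0.7), ('goodbye', 0.3)]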

If context was already set, we don't call the classification function - we don't need it in this step. Rather, we check the intent context mapping to determine what the next step should be. Based on that information, we send a question or action back to the client, again through Socket.io by calling the socket.emit function:


The chatbot UI is implemented as a JET custom component (check how it works in the JET cookbook). This makes it easy to reuse the same component in various applications:


Here is an example where the chatbot UI is included in a consuming application. It comes with a custom listener, where any custom actions are executed. The custom listener allows you to move any custom logic outside of the chatbot component, making it truly reusable:


An example of custom logic - based on the chatbot reply, we can load an application module, assign parameter values, etc.:


The chatbot UI implementation is based on a list, which renders bot and client messages using a template. The template detects whether a message belongs to the client or the bot and applies the required style - this helps to render a readable list. There is also an input area and control buttons:


The JS module executes the logic which displays a bot message: by adding it to the list of messages it generates an event to be handled by the custom logic listener. A message is sent from the client to the bot server by calling the Socket.io socket.emit function:


Here is the final result - chatbot box implemented with Oracle JET:

what is read consistency

Tom Kyte - Mon, 2018-06-11 15:46
Could you explain in your words what is Read consistence in Oracle 4.0
Categories: DBA Blogs

SESSION parameter shows different value

Tom Kyte - Mon, 2018-06-11 15:46
hi there As you guys suggested, last day i was trying to change process and session parameter values as follows, Everything gone perfectly. But after starting up the database, for the SESSION PARAMETER it shows me a value that i wasn't ex...
Categories: DBA Blogs

Listagg returning multiple values

Tom Kyte - Mon, 2018-06-11 15:46
hello, I am new to writing this kind of SQL and I am almost there with this statement but not quite. I'm trying to write a query using listagg and I am getting repeating values in the requirement column when I have 2 passengers. This is because...
Categories: DBA Blogs

Difference between Procedure and function(at least 5, if there are)

Tom Kyte - Mon, 2018-06-11 15:46
Difference between Procedure and function(at least 5, if there are) Seems like a basic question but its a very tricky question.. Some of the differences which I encountered on the internet seems incorrect later, I will list some of them below...
Categories: DBA Blogs

How to use conditional case with aggregate function

Tom Kyte - Mon, 2018-06-11 15:46
I have a query like the one below: SELECT CH.BLNG_NATIONAL_PRVDR_IDNTFR CASE WHEN CH.TCN_DATE BETWEEN TO_DATE('01-JAN-2016','DD/MM/YYYY') AND TO_DATE('30-JUN-2016','DD/MM/YYYY') THEN ROUND(SUM(CH.PAID_AMOUNT)/COUNT(DISTINCT CH.MBR_IDENT...
Categories: DBA Blogs

Conversion of Date format

Tom Kyte - Mon, 2018-06-11 15:46
I am having Column value as 'YYYYDDMM' and need to convert into 'DD/MM/YYYY' and store the data.
Categories: DBA Blogs

extract only two digit after fraction

Tom Kyte - Mon, 2018-06-11 15:46
For example, My query is following input data is in number format :- 123.456 and I want to see 123.45. Not need round off.
Categories: DBA Blogs

alter database noarchivelog versus flashback

Tom Kyte - Mon, 2018-06-11 15:46
SQL> alter database noarchivelog; alter database noarchivelog * ERROR at line 1: ORA-38774: cannot disable media recovery - flashback database is enabled
Categories: DBA Blogs

Can you Override USER built-in function ?

Tom Kyte - Mon, 2018-06-11 15:46
Hi Tom, Good Morning. My question may be naive, I'm not an Oracle expert. Can the USER built-in function be overridden ? Context: My current team has used Oracle for decades ( along with power-builder). We are now transitioning to a n...
Categories: DBA Blogs

Add Weblogic12c Vagrant project

Darwin IT - Mon, 2018-06-11 07:53
Last week I proudly presented a talk on how to create, provision and maintain VMs with Vagrant, including installing Oracle software (Database, Fusion Middleware, etc.). It was during the nlOUG Tech Experience 18. I've written about it in the last few posts.

Today I added my WLS12c Vagrant project to GitHub. Based on further insights I refactored my database 12c installation a bit and pushed an updated Database12c Vagrant project to GitHub as well.

The main updates: I disliked the 'Zipped' subfolder in the stage folder paths for the installations, and I wanted all the installations extracted into the same main folder, so I can clean that up more easily.

You should check the references to the stage folder. Download the appropriate Oracle installer zip files and place them in the appropriate folder:



Coming up: a SOA 12c project (including SOA, BPM and OSB), complete with creation of the domain (based on my scripts from 2016 and my TechExperience talk of last year) and configuration of the node manager. Ready to start up after 'up'.

I also uploaded the slides of my talk to SlideShare:
 

Change Data Capture from Oracle with StreamSet Data Collector

Yann Neuhaus - Mon, 2018-06-11 07:21

With the trend of CQRS architectures, where transactions are streamed to a bunch of heterogeneous, eventually consistent, polyglot-persistence microservices, logical replication and Change Data Capture become an important component, already at the architecture design phase. This is good for vendors of existing products such as Oracle GoldenGate (which must be licensed even to use only the CDC part in the Oracle Database, as Streams is going to be desupported) or Dbvisit replicate to Kafka. But it is also good for Open Source projects. There are some ideas in progress (Debezium, VOODOO) but not yet released.

Today I tested the Oracle CDC Data Collector for StreamSets. StreamSets Data Collector is an open-source project started by former people from Cloudera and Informatica, to define pipelines streaming data from data collectors. It is easy, simple and has a bunch of possible destinations. The Oracle CDC is based on LogMiner, which means that it is easy but may have some limitations (mainly datatypes, DDL replication and performance).

Install

The installation guide is at streamsets.com. I chose the easiest way for testing, as they provide a Docker container (https://github.com/streamsets)

# docker run --restart on-failure -p 18630:18630 -d --name streamsets-dc streamsets/datacollector
Unable to find image 'streamsets/datacollector:latest' locally
latest: Pulling from streamsets/datacollector
605ce1bd3f31: Pull complete
529a36eb4b88: Pull complete
09efac34ac22: Pull complete
4d037ef9b54a: Pull complete
c166580a58b2: Pull complete
1c9f78fe3d6c: Pull complete
f5e0c86a8697: Pull complete
a336aef44a65: Pull complete
e8d1e07d3eed: Pull complete
Digest: sha256:0428704019a97f6197dfb492af3c955a0441d9b3eb34dcc72bda6bbcfc7ad932
Status: Downloaded newer image for streamsets/datacollector:latest
ef707344c8bd393f8e9d838dfdb475ec9d5f6f587a2a253ebaaa43845b1b516d

[Screenshot: StreamSets Data Collector web console]
And that’s all. I am ready to connect with http on port 18630.

The default user/password is admin/admin

The GUI looks simple and efficient. There's a home page where you define the 'pipelines' and monitor them running. In the pipelines, we define sources and destinations. Some connectors are already installed, others can be installed automatically. For Oracle, as usual, you need to download the JDBC driver yourself because Oracle doesn't allow it to be embedded for legal reasons. I'll do something simple here just to check the mining from Oracle.

[Screenshots: Package Manager and External Libraries configuration]
In ‘Package Manager’ (the little gift icon on the top) go to JDBC and check ‘install’ for the streamsets-datacollector-jdbc-lib library
Then in ‘External Libraries’, install (with the ‘upload’ icon at the top) the Oracle jdbc driver (ojdbc8.jar).
I’ve also installed the MySQL one for future tests:

File Name                        Library ID
ojdbc8.jar                       streamsets-datacollector-jdbc-lib
mysql-connector-java-8.0.11.jar  streamsets-datacollector-jdbc-lib

Oracle CDC pipeline

I’ll use the Oracle Change Data Capture here, based on Oracle LogMiner. The GUI is very easy: just select ‘Oracle CDC’ as source in a new pipeline. Click on it and configure it. I’ve set the minimum here.
In JDBC tab I’ve set only the JDBC Connection String to: jdbc:oracle:thin:scott/tiger@//192.168.56.188:1521/pdb1 which is my PDB (I’m on Oracle 18c here and multitenant is fully supported by StreamSets). In the Credentials tab I’ve set ‘sys as sysdba’ as username and its password. The configuration can also be displayed as JSON and here is the corresponding entry:

"configuration": [
{
"name": "hikariConf.connectionString",
"value": "jdbc:oracle:thin:scott/tiger@//192.168.56.188:1521/pdb1"
},
{
"name": "hikariConf.useCredentials",
"value": true
},
{
"name": "hikariConf.username",
"value": "sys as sysdba"
},
{
"name": "hikariConf.password",
"value": "oracle"
},
...

I’ve provided SYSDBA credentials and only the PDB service, but it seems that StreamSets figured out automatically how to connect to the CDB (as LogMiner can be started only from CDB$ROOT). The advantage of using LogMiner here is that you need only a JDBC connection to the source – but of course, it will use CPU and memory resources from the source database host in this case.

Then I’ve defined the replication in the Oracle CDC tab: Schema to 'SCOTT' and Table Name Pattern to '%'. Initial Change is 'From Latest Change' as I just want to see the changes and not actually replicate for this first test. But of course, we can define an SCN here, which is what must be used to ensure consistency between the initial load and the replication. Dictionary Source is set to 'Online Catalog' – this is what will be used by LogMiner to map the object and column IDs to table names and column names. But be careful, as table structure changes may not be managed correctly with this option.

{
"name": "oracleCDCConfigBean.baseConfigBean.schemaTableConfigs",
"value": [
{
"schema": "SCOTT",
"table": "%"
}
] },
{
"name": "oracleCDCConfigBean.baseConfigBean.changeTypes",
"value": [
"INSERT",
"UPDATE",
"DELETE",
"SELECT_FOR_UPDATE"
] },
{
"name": "oracleCDCConfigBean.dictionary",
"value": "DICT_FROM_ONLINE_CATALOG"
},

I’ve left the defaults. I can’t think yet about a reason for capturing the ‘select for update’, but it is there.

Named Pipe destination

I know that the destination part is easy. I just want to see the captured changes here, so I took the easiest destination: Named Pipe, where I configured only the Named Pipe (/tmp/scott) and Data Format (JSON)

{
"instanceName": "NamedPipe_01",
"library": "streamsets-datacollector-basic-lib",
"stageName": "com_streamsets_pipeline_stage_destination_fifo_FifoDTarget",
"stageVersion": "1",
"configuration": [
{
"name": "namedPipe",
"value": "/tmp/scott"
},
{
"name": "dataFormat",
"value": "JSON"
},
...

Supplemental logging

The Oracle redo log stream is by default focused only on recovery (replay of transactions in the same database) and contains only the minimal physical information required for it. In order to get enough information to replay the transactions in a different database we need supplemental logging for the database, and for the objects involved:

SQL> alter database add supplemental log data;
Database altered.
SQL> exec for i in (select owner,table_name from dba_tables where owner='SCOTT' and table_name like '%') loop execute immediate 'alter table "'||i.owner||'"."'||i.table_name||'" add supplemental log data (primary key) columns'; end loop;
PL/SQL procedure successfully completed.

Run

And that’s all. Just run the pipeline and look at the logs:

[Screenshot: running pipeline log output]

StreamSets Oracle CDC pulls continuously from LogMiner to get the changes. Here are the queries that it uses for that:

BEGIN DBMS_LOGMNR.START_LOGMNR( STARTTIME => :1 , ENDTIME => :2 , OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.CONTINUOUS_MINE + DBMS_LOGMNR.NO_SQL_DELIMITER); END;

This starts mining between two timestamps. I suppose that it reads the SCNs to get a finer grain and avoid overlapping information.

And here is the main one:

SELECT SCN, USERNAME, OPERATION_CODE, TIMESTAMP, SQL_REDO, TABLE_NAME, COMMIT_SCN, SEQUENCE#, CSF, XIDUSN, XIDSLT, XIDSQN, RS_ID, SSN, SEG_OWNER FROM V$LOGMNR_CONTENTS WHERE ((( (SEG_OWNER='SCOTT' AND TABLE_NAME IN ('BONUS','DEPT','EMP','SALGRADE')) ) AND (OPERATION_CODE IN (1,3,2,25))) OR (OPERATION_CODE = 7 OR OPERATION_CODE = 36))

This reads the redo records. The operation codes 7 and 36 are for commits and rollbacks. The operations 1,3,2,25 are those that we want to capture (insert, update, delete, select for update) and were defined in the configuration. Here the pattern '%' for the SCOTT schema has been expanded to the table names. As far as I know, there's no DDL mining here to automatically capture new tables.

Capture

Then I’ve run this simple insert (I’ve added a primary key on this table as it is not there in utlsampl.sql):

SQL> insert into scott.dept values(50,'IT','Cloud');

And I committed (as it seems that StreamSets buffers the changes until the end of the transaction)

SQL> commit;

and here I got the message from the pipe:

/ $ cat /tmp/scott
 
{"LOC":"Cloud","DEPTNO":50,"DNAME":"IT"}

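Since the Named Pipe destination writes one JSON document per line, a downstream consumer can be very simple. A minimal sketch (assuming the /tmp/scott pipe and JSON data format configured above):

import json

# Read the named pipe written by the Named Pipe destination and print each
# captured change. The read blocks until the pipeline writes something.
with open("/tmp/scott", "r") as pipe:
    for line in pipe:
        line = line.strip()
        if not line:
            continue
        change = json.loads(line)
        print(change)
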
The graphical interface shows how the pipeline is going:
[Screenshot: pipeline monitoring view in the Data Collector console]

I’ve tested some bulk loads (direct-path inserts) and it seems to be managed correctly. Actually, this Oracle CDC is based on LogMiner so it is fully supported (no mining of proprietary redo stream format) and limitations are clearly documented.

Monitoring

Remember that the main work is done by LogMiner, so don't forget to look at the alert.log on the source database. With big transactions, you may need a large PGA (but you can also choose to buffer to disk). If you have the Oracle Tuning Pack, you can also monitor the main query which retrieves the redo information from LogMiner:
[Screenshot: SQL Monitoring of the LogMiner query]
You will see a different SQL_ID because the filter predicates use literals instead of bind variables (which is not a problem here).

Conclusion

This product is very easy to test, so you can do a Proof of Concept within a few hours and test for your context: supported datatypes, operations and performance. By easy to test, I mean: very good documentation, very friendly and responsive graphical interface, very clear error messages,…

 

Cet article Change Data Capture from Oracle with StreamSet Data Collector est apparu en premier sur Blog dbi services.

Real-time Sailing Yacht Performance - stepping back a bit (Part 1.1)

Rittman Mead Consulting - Mon, 2018-06-11 07:20

A slight change to the planned article: at the end of my analysis in Part 1 I discovered I was missing a number of key messages. It turns out that not all the SeaTalk messages from the integrated instruments were being translated to an NMEA format, and therefore were not being sent wirelessly from the AIS hub. I didn't really want to introduce another source of data directly from the instruments, as it would involve hard wiring the instruments to the laptop and then translating a different message format (SeaTalk). So I decided to spend some money on hardware (any excuse for new toys) and purchased a SeaTalk to NMEA converter from DigitalYachts (discounted at the London boat show, I'm glad to say).

This article is about the installation of that hardware and the result (hence Part 1.1), not our usual type of blog. You never know, it may be of interest to somebody out there, and this is a real-life data issue! Don't worry, it will be short and more of an insight into yacht wiring than anything.

The next blog will be very much back on track. Looking at Kafka in the architecture.

The existing wiring

The following image shows the existing setup, what's behind the panels and how it links to the instrument architecture documented in Part 1. No laughing at the wiring spaghetti - I stripped out half a tonne of cable last year so this is an improvement. Most of the technology lives near the chart table and we have access to the navigation lights, cabin lighting, battery sensors and DSC VHF. The top left image also shows a spare GPS (Garmin) and far left an EPIRB.

Approach

I wanted to make sure I wasn't breaking anything by adding the new hardware, so I followed the approach we use as software engineers: check before, during and after any changes, enabling us to narrow down the point at which errors are introduced. To help with this I created a little bit of Python that reads the messages and tells me the unique message types, the total number of messages and the number of messages in error.

 
import json
import sys

#DEF Function to test message
def is_message_valid (orig_line):

........ [Function same code described in Part 1]

#main body
f = open("/Development/step_1.log", "r")

valid_messages = 0
invalid_messages = 0
total_messages = 0
my_list = [""]
#process file main body
for line in f:

  orig_line = line

  if is_message_valid(orig_line):
    valid_messages = valid_messages + 1
    #look for wind message
    #print "message valid"

    if orig_line[0:1] == "$":
      if len(my_list) == 0:
        #print "my_list is empty"
        my_list.insert(0,orig_line[0:6]) 
      else:
        #print orig_line[0:5]
        my_list.append(orig_line[0:6])

      #print orig_line[20:26]

  else:
    invalid_messages = invalid_messages + 1

  total_messages = total_messages + 1

new_list = list(set(my_list))

i = 0

while i < len(new_list):
    print(new_list[i])
    i += 1

#High tech report
print "Summary"
print "#######"
print "valid messages -> ", valid_messages
print "invalid messages -> ", invalid_messages
print "total mesages -> ", total_messages

f.close()

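The is_message_valid function is the one described in Part 1, so it is not repeated here. For readers who don't have Part 1 to hand, standard NMEA 0183 validation boils down to XOR-ing every character between the leading '$' (or '!') and the '*', and comparing the result with the two hex digits that follow the '*'. A rough approximation of such a check (an assumption of what Part 1 does, not a copy of it):

def is_message_valid (orig_line):
  #XOR all characters between the start delimiter and '*', then compare
  #with the two-digit hex checksum that follows the '*'
  line = orig_line.strip()
  if not line.startswith(("$", "!")) or "*" not in line:
    return False
  body, sep, checksum = line[1:].partition("*")
  calculated = 0
  for ch in body:
    calculated ^= ord(ch)
  try:
    return calculated == int(checksum[:2], 16)
  except ValueError:
    return False
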
For each of the steps, I used nc to write the output to a log file and then used the Python to analyse the log. I logged about ten minutes of messages for each step, although I have to confess to shortening the last test as I was getting very cold.

nc -l 192.168.1.1 2000 > step_x.log

While spooling the messages I artificially generated some speed data by spinning the wheel of the speedo. The image below shows the speed sensor and where it normally lives (far right image). The water comes in when you take out the sensor as it temporarily leaves a rather large hole in the bottom of the boat; don't be alarmed by the little puddle you can see.

Step 1;

I spool and analyse about ten minutes of data without making any changes to the existing setup.

The existing setup takes data directly from the back of a Raymarine instrument seen below and gets linked into the AIS hub.

Results;
 
$AITXT -> AIS (from AIS hub)

$GPRMC -> GPS (form AIS hub)
$GPGGA
$GPGLL
$GPGBS

$IIDBT -> Depth sensor
$IIMTW -> Sea temperature sensor
$IIMWV -> Wind speed 

Summary
#######
valid messages ->  2129
invalid messages ->  298
total mesages ->  2427
12% error

Step 2;

I disconnect the NMEA interface between the AIS hub and the integrated instruments. So in the diagram above I disconnect all four NMEA wires from the back of the instrument.

I observe the Navigation display of the integrated instruments no longer displays any GPS information (this is expected as the only GPS messages I have are coming from the AIS hub).

Results;

$AITXT -> AIS (from AIS hub)

$GPRMC -> GPS (form AIS hub)
$GPGGA
$GPGLL
$GPGBS

No $II messages as expected 

Summary
#######
valid messages ->  3639
invalid messages ->  232
total mesages ->  3871
6% error
Step 3;

I wire in the new hardware both NMEA in and out then directly into the course computer.

Results;

$AITXT -> AIS (from AIS hub)

$GPGBS -> GPS messages
$GPGGA
$GPGLL
$GPRMC

$IIMTW -> Sea temperature sensor
$IIMWV -> Wind speed 
$IIVHW -> Heading & Speed
$IIRSA -> Rudder Angle
$IIHDG -> Heading
$IIVLW -> Distance travelled

Summary
#######
valid messages ->  1661
invalid messages ->  121
total mesages ->  1782
6.7% error
Conclusion;

I get all the messages I am after (for now); the hardware seems to be working.

Now to put all the panels back in place!

In the next article, I will get back to technology and the use of Kafka in the architecture.

Categories: BI & Warehousing

dbms_random

Jonathan Lewis - Mon, 2018-06-11 02:31

In a recent ODC thread someone had a piece of SQL that was calling dbms_random.string(‘U’,20) to generate random values for a table of 100,000,000 rows. The thread was about how to handle the ORA-30009 error (not enough memory for operation) that is almost inevitable when you use the “select from dual connect by level <= n” strategy for generating very large numbers of rows, but this example of calling dbms_random.string() so frequently prompted me to point out an important CPU saving, and then publicise through this blog a little known fact (or deduction) about the dbms_random.string() function.

If you generate a random string of length 6 using only upper-case letters there are 308,915,776 different combinations (26^6); so if you’re after “nearly unique” values for 100 million rows then a six character string is probably good enough – it might give you a small percentage of values which appear in a handful of rows but most of the values are likely to be unique or have two rows. If you want to get closer to uniqueness then 7 characters will do it, and 8 will make it almost certain that you will get a unique value in every row.

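As a quick back-of-the-envelope check of those numbers (a rough sketch using the standard expected-distinct-values approximation, nothing Oracle-specific):

import math

# Drawing k strings uniformly from n possible strings gives roughly
# n * (1 - exp(-k/n)) distinct values; the shortfall is the expected
# number of rows that repeat an earlier value.
k = 100000000                    # rows to generate
for length in (6, 7, 8):
    n = 26 ** length             # upper-case-only strings of this length
    distinct = n * (1 - math.exp(-float(k) / n))
    print("length %d: %d combinations, about %d repeated rows expected"
          % (length, n, int(round(k - distinct))))
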
So if you want “nearly unique” and “random 20 character strings” it’s probably sufficient to generate random strings of 6 to 8 characters and then rpad() them up to 20 characters with spaces – the saving in CPU will be significant; roughly a factor of 3 (which is going to matter when you’re trying to generate 100 million rows). As a little demo I supplied the OP with a script to create a table of just one million random strings – first of 20 random characters, then of 6 random characters with 14 spaces appended. The run time (mostly CPU) dropped from 1 minute 55 seconds to 41 seconds.

Why is there such a difference? Because to generate a random string of 6 characters Oracle generates a random string of one character six times in a row and concatenates them. The difference between 6 calls and 20 calls per row gives you that factor of around 3. For a quick demo, try running the following anonymous PL/SQL block:

rem
rem     Script:         random_speed.sql
rem     Author:         Jonathan Lewis
rem     Dated:          May 2010
rem

begin
        dbms_random.seed(0);
        dbms_output.put_line(dbms_random.string('U',6));
        dbms_output.new_line;

        dbms_random.seed(0);
        dbms_output.put_line(dbms_random.string('U',1));
        dbms_output.put_line(dbms_random.string('U',1));
        dbms_output.put_line(dbms_random.string('U',1));
        dbms_output.put_line(dbms_random.string('U',1));
        dbms_output.put_line(dbms_random.string('U',1));
        dbms_output.put_line(dbms_random.string('U',1));
end;
/

Here are the results I got from instances of 12.1.0.2, 12.2.0.1, and 18.1.0.0 (from LiveSQL):


BVGFJB
B
V
G
F
J
B

I haven’t shown the tests for all the possible dbms_random.string() options but, unsurprisingly, changing the test to use the ‘L’ (lower case alpha) option produces the same effect (and the same 6 letters changed to lower case). The same effect, with different characters, also appeared using the ‘A’ (mixed case alpha), ‘X’ (uppercase alphanumeric) and ‘P’ (all printable characters) options.

I haven’t considered the effect of using a multi-byte character set – maybe Oracle calls its random number generator once per byte rather than once per character. The investigation is left as an exercise to the interested reader.

tl;dr

When generating a very large number of random strings – keep the “operational” part of the string as short as you can and leave the rest to be rpad()‘ed.

Live Virtual Fluid Training in July

Jim Marion - Sun, 2018-06-10 21:46

In the Northern hemisphere, with days getting longer, and temperatures rising, many of us seek wet forms of recreation to keep cool. With the heat of summer upon us, I can't think of a better topic to study than the cool topic of PeopleSoft Fluid. That is why we are offering a remote live virtual Fluid training class during the hottest week of July. Additional details and registration links are available on our live virtual training schedule page. I look forward to having you in our virtual class!

New FY19 Cloud Platform Specializations Available

Showcase your expertise and distinguish yourself in the market by achieving one or more of our 120+ product and solution specializations. Specialized partners are recognized by Oracle and preferred...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Need Help with Oracle Security GDPR Training and Services

Pete Finnigan - Sun, 2018-06-10 02:46
I talked here a few days ago about GDPR in general and I also published my slides from my talk GDPR for the Oracle DBA. We have been helping clients secure data in their Oracle databases and training people....[Read More]

Posted by Pete On 09/06/18 At 04:33 PM

Categories: Security Blogs
