Feed aggregator

Quickly setup a persistent React application

Amis Blog - Tue, 2018-06-12 03:20

Having recently picked up the React framework, I figured I’d share how I quickly set up my projects to implement persistence. This way you spend minimal time setting up the skeleton of your application, letting you focus on adding functionality. This is done by combining a few tools. Let’s take a closer look.

Should you be interested in the code itself, go ahead and get it here.

Tools

NPM/Yarn

Be sure to install a package manager like npm or Yarn. We will use it to install and run some of the other tools.

Create-react-app
If you have ever worked with React, you have undoubtedly come across this CLI tool. It provides the boilerplate code you need for the front end of your application. Take a look at the documentation to get a sense of how to use it.

FeathersJS
This is a framework built on top of Express. It provides another CLI tool we can use to quickly set up the boilerplate code for building a REST API. What makes it powerful is the concept of hooks (see this link for more) as well as the possibility to model our entities in a JavaScript file. After starting this service we can manipulate data in our database using REST calls.
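
To give a flavour of what a hook looks like, here is a minimal sketch (in an app generated by the Feathers CLI the hooks for a service typically live in a <service>.hooks.js file; the createdAt field below is purely illustrative):

//employees.hooks.js (sketch)

module.exports = {
  before: {
    create: [
      async context => {
        // enrich every new record before it is stored
        context.data.createdAt = new Date();
        return context;
      }
    ]
  }
};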

MongoDB
This is the database we will use – be sure to download, install and start it. It’s a NoSQL database and uses a JSON document structure.

Steps
Create a folder for your application named <application name>. I like to create two separate folders in here: one ‘frontend’, the other ‘backend’. Usually I will add the source folder to a version control tool like Git. If you’re dealing with a larger application, however, you might want to initialize two separate repositories for your front end and backend. Using Visual Studio Code, my file structure looks like this:

[screenshot: project folder structure in Visual Studio Code]

Fire up your terminal. Install create-react-app globally.

npm install -g create-react-app

Navigate to your frontend folder. Initialize your first application.

cd frontend
create-react-app <application name>

Note how the CLI will add another folder with the application name into your frontend folder. Since we already created a root folder, copy the content of this folder and move it one level higher. Now the structure looks like this:

[screenshot: frontend folder structure after moving the create-react-app output up one level]

Install FeathersJS CLI globally.

npm install -g @feathersjs/cli

Navigate to the backend folder of your project and generate a Feathers application.

cd ../backend
feathers generate app

The CLI will prompt for a project name. Follow along with these settings:

[screenshot: feathers generate app prompts with the selected settings]

Alright, now it’s time to generate a service so we can start making REST calls. In true Oracle fashion, let’s make a service for the entity employees. While in the backend folder, run:

feathers generate service

Follow along:

[screenshot: feathers generate service prompts with the selected settings]

Navigate to the employees.model.js file. The file structure of the backend folder should look like this:

[screenshot: backend folder structure]

In this file we can specify what employees look like. For now, let’s just give them a name property of type String and make it required. Also, let’s delete the timestamps section to keep things clean. It should look like this:

//employees.model.js

module.exports = function (app) {
  const mongooseClient = app.get('mongooseClient');
  const { Schema } = mongooseClient;
  const employees = new Schema({
    name: { type: String, required: true }
  });

  return mongooseClient.model('employees', employees);
};

Great. We are almost good to go. First fire up MongoDB. Windows users see here. Unix users:

sudo service mongod start

Next up navigate to the backend folder. Using the terminal, give the following command:

npm start

You should see a message saying: Feathers application started on http://localhost:3030

The service we created is available on http://localhost:3030/employees. Navigate there. You should see JSON data – though right now all you’ll probably see is metadata. By using this address we can make REST calls to see and manipulate the data. You could use curl commands in the terminal, simply use the browser, or install a tool like Postman.
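
For example, a quick way to query the service is a plain fetch call from the browser’s developer console (just a sketch; by default the generated Feathers app wraps the results in a paginated JSON object):

//quick check from the browser console (sketch)

fetch('http://localhost:3030/employees')
  .then(response => response.json())
  .then(data => console.log(data));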

Next up we need a way to access this service in our frontend section of the application. For this purpose, let’s create a folder in the location: frontend/src/api. In here, create the file Client.js.

This file should contain the following code:

//Client.js

import io from 'socket.io-client';
import feathers from '@feathersjs/client';

const socket = io('http://localhost:3030');
const client = feathers();

client.configure(feathers.socketio(socket));
client.configure(feathers.authentication({
  storage: window.localStorage
}));

export default client;

Make sure the libraries we refer to are included in the package.json of our frontend. The package.json should look like this:

{
  "name": "react-persistent-start",
  "version": "0.1.0",
  "private": true,
  "dependencies": {
    "@feathersjs/client": "^3.4.4",
    "react": "^16.4.0",
    "react-dom": "^16.4.0",
    "react-scripts": "1.1.4",
    "socket.io-client": "^1.7.3"
  },
  "scripts": {
    "start": "react-scripts start",
    "build": "react-scripts build",
    "test": "react-scripts test --env=jsdom",
    "eject": "react-scripts eject"
  }
}

Be sure to run npm install after updating this file.

That’s basically all we need to perform CRUD operations from our front-end section of the application. To make the example a little more vivid, let’s add a button to the App.js component that was automatically generated by the create-react-app CLI.

First, I’ll get rid of the boilerplate code in there and import the client we just created. Next, create a button that calls the method handleClick() when the React onClick event fires. In there we will call the client, let it call the service and perform a create operation for us. The code will look like this:

//App.js

import React, { Component } from 'react';

import client from './api/Client';
import './App.css';

class App extends Component {
  handleClick() {
    client.service('employees').create({
      name: "Nathan"
    });
  }

  render() {
    return (
      <div className="App">
        <button onClick={this.handleClick}>Add employee</button>
      </div>
    );
  }
}

export default App;

In the terminal, navigate to the frontend folder and make sure the node modules are installed correctly by running npm install. Now run npm start. Navigate to http://localhost:3030/employees and verify there is no employee there yet. On http://localhost:3000 our page should be available and show a single button. Click the button and refresh the http://localhost:3030/employees page. You can see we have added an employee with the same name as yours truly.

[screenshot: JSON result showing the newly created employee]

(I use a Chrome plugin called JSON viewer to get the layout as shown in the picture)

Using the code provided in the handleClick() method you can expand upon this. See this link for all the calls you can make in order to provide CRUD functionality using FeathersJS.
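
As a rough sketch of what those calls look like with the client we created (someId stands in for an _id value returned by MongoDB):

//CRUD sketch using the same client

const employees = client.service('employees');

employees.find().then(results => console.log(results));        // read all employees
employees.get(someId).then(employee => console.log(employee)); // read one employee by _id
employees.patch(someId, { name: 'Jane' });                     // update a single field
employees.remove(someId);                                      // delete the employee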

That’s basically all you need to start adding functionality to your persistent React application.

Happy developing! The code is available here.

The post Quickly setup a persistent React application appeared first on AMIS Oracle and Java Blog.

Machine Learning Applied - TensorFlow Chatbot UI with Oracle JET Custom Component

Andrejus Baranovski - Mon, 2018-06-11 16:22
This post is based on Oracle Code 2018 Shenzhen, Warsaw and Berlin talks. View presentation on SlideShare:


In my previous post I outlined how to build a chatbot backend with TensorFlow - Classification - Machine Learning Chatbot with TensorFlow. Today's post is the next step - I will explain how to build a custom UI on top of the TensorFlow chatbot with Oracle JET.

You can download the complete source code (which includes the TensorFlow part, the backend for chatbot context processing and the JET custom component chatbot UI) from my GitHub repository.

Here is a snapshot of the solution architecture:


TensorFlow is used for the machine learning and text classification task. Flask allows outside clients to communicate with TensorFlow through REST. Contextual chatbot conversation processing is implemented in the Node.js backend; communication with the Oracle JET client is handled by Socket.io.

A key point in the chatbot implementation is constructing the right data structure for the machine training process. The more accurate the learning, the better the classification results achieved afterwards. Chatbot training data can come in the form of JSON. Training data quality can be measured by the overlap between intents and sample sentences: the more overlap you have, the weaker the machine learning output and the less accurate the classification. The training data can also contain information which is not used directly by TensorFlow - we can include intent context processing in the same structure, where it will be used by the context processing algorithm. Sample JSON structure for training data:
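
The exact structure is shown in the original post; based on the description above, a minimal sketch could look like this (all field names and values here are illustrative, not taken from the talk):

{
  "intents": [
    {
      "intent": "order-status",
      "patterns": ["where is my order", "track my order"],
      "responses": ["Let me check the status of your order"],
      "context": "order-tracking"
    }
  ]
}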


Accurate classification by TensorFlow is only one piece of the chatbot functionality. We also need to maintain conversation context. This can be achieved in the Node.js backend with a custom algorithm. In my example, when context is not set, TensorFlow is called to classify the statement and produce intent probabilities. There might be multiple intents classified for the same sentence - TensorFlow will return multiple probabilities. It is up to you either to always choose the intent with the top probability or to ask the user to choose. Communication back to the client is handled through Socket.io by calling the socket.emit function:


If context was already set, we don't call the classification function - we don't need it in this step. Instead we check the intent-to-context mapping to determine what the next step should be. Based on that information, we send a question or action back to the client, again through Socket.io by calling the socket.emit function:
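
The actual handlers are shown in the screenshots and in the GitHub repository; as a rough, purely illustrative sketch of the two branches described above (the port, the event names, the classify() helper and the contextMapping lookup are all hypothetical):

//server-side sketch only - not the code from the repository

const io = require('socket.io')(3000);
const context = {};        // conversation context per connected client
const contextMapping = {}; // intent context -> next question or action

io.on('connection', (socket) => {
  socket.on('message', async (text) => {
    if (!context[socket.id]) {
      // no context yet: ask TensorFlow (via the Flask REST service) to classify the statement
      const intents = await classify(text);
      const best = intents[0]; // or ask the user when several intents score close together
      context[socket.id] = best.context;
      socket.emit('response', { reply: best.response });
    } else {
      // context already set: skip classification and look up the next step
      const step = contextMapping[context[socket.id]];
      socket.emit('response', { reply: step.question || step.action });
    }
  });
});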


The chatbot UI is implemented as a JET custom component (check how it works in the JET cookbook). This makes it easy to reuse the same component in various applications:


Here is an example where the chatbot UI is included in a consuming application. It comes with a custom listener, where any custom actions are executed. The custom listener allows you to move any custom logic outside of the chatbot component, making it truly reusable:


An example of such custom logic: based on the chatbot reply, we can load an application module, assign parameter values, etc.:


The chatbot UI implementation is based on a list, which renders bot and client messages using a template. The template detects whether a message belongs to the client or the bot and applies the required style - this helps to render a readable list. There is also an input area and control buttons:


A JS module executes the logic that displays a bot message by adding it to the list of messages, and generates an event to be handled by the custom logic listener. A message is sent from the client to the bot server by calling the Socket.io socket.emit function:
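
On the client side, the call might look roughly like this (again a sketch with hypothetical event names - the real component code is in the GitHub repository):

//client-side sketch only

import io from 'socket.io-client';

const socket = io('http://localhost:3000');   // bot server address is illustrative
const messages = [];                          // in the JET component this would be an observable array

function sendMessage(text) {
  messages.push({ author: 'client', text }); // show the client message in the list
  socket.emit('message', text);              // send it to the bot server
}

socket.on('response', (data) => {
  messages.push({ author: 'bot', text: data.reply }); // display the bot reply and fire the listener event
});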


Here is the final result - chatbot box implemented with Oracle JET:

what is read consistency

Tom Kyte - Mon, 2018-06-11 15:46
Could you explain in your own words what read consistency is in Oracle 4.0?
Categories: DBA Blogs

SESSION parameter shows different value

Tom Kyte - Mon, 2018-06-11 15:46
Hi there. As you guys suggested, the other day I was trying to change the process and session parameter values as follows, and everything went perfectly. But after starting up the database, the SESSION parameter shows me a value that I wasn't ex...
Categories: DBA Blogs

Listagg returning multiple values

Tom Kyte - Mon, 2018-06-11 15:46
hello, I am new to writing this kind of SQL and I am almost there with this statement but not quite. I'm trying to write a query using listagg and I am getting repeating values in the requirement column when I have 2 passengers. This is because...
Categories: DBA Blogs

Difference between Procedure and function(at least 5, if there are)

Tom Kyte - Mon, 2018-06-11 15:46
Difference between procedure and function (at least 5, if there are). Seems like a basic question but it's a very tricky one. Some of the differences which I encountered on the internet seemed incorrect later on; I will list some of them below...
Categories: DBA Blogs

How to use conditional case with aggregate function

Tom Kyte - Mon, 2018-06-11 15:46
I have a query like the one below: SELECT CH.BLNG_NATIONAL_PRVDR_IDNTFR CASE WHEN CH.TCN_DATE BETWEEN TO_DATE('01-JAN-2016','DD/MM/YYYY') AND TO_DATE('30-JUN-2016','DD/MM/YYYY') THEN ROUND(SUM(CH.PAID_AMOUNT)/COUNT(DISTINCT CH.MBR_IDENT...
Categories: DBA Blogs

Conversion of Date format

Tom Kyte - Mon, 2018-06-11 15:46
I have a column value in 'YYYYDDMM' format and need to convert it into 'DD/MM/YYYY' and store the data.
Categories: DBA Blogs

extract only two digit after fraction

Tom Kyte - Mon, 2018-06-11 15:46
For example, my input data is in number format: 123.456, and I want to see 123.45. No rounding off is needed.
Categories: DBA Blogs

alter database noarchivelog versus flashback

Tom Kyte - Mon, 2018-06-11 15:46
SQL> alter database noarchivelog; alter database noarchivelog * ERROR at line 1: ORA-38774: cannot disable media recovery - flashback database is enabled
Categories: DBA Blogs

Can you Override USER built-in function ?

Tom Kyte - Mon, 2018-06-11 15:46
Hi Tom, good morning. My question may be naive; I'm not an Oracle expert. Can the USER built-in function be overridden? Context: my current team has used Oracle for decades (along with PowerBuilder). We are now transitioning to a n...
Categories: DBA Blogs

Add Weblogic12c Vagrant project

Darwin IT - Mon, 2018-06-11 07:53
Last week I proudly presented a talk on how to create, provision and maintain VMs with Vagrant, including installing Oracle software (Database, Fusion Middleware, etc.). It was during the nlOUG Tech Experience 18. I've written about it in the last few posts.

Today I added my WLS12c Vagrant project to GitHub. Based on further insights I refactored my Database 12c installation a bit and pushed an updated Database12c Vagrant project to GitHub as well.

The main updates: I disliked the 'Zipped' subfolder in the stage folder paths for the installations, and I wanted all the installations extracted into the same main folder, so that I can clean it up more easily.

You should check the references to the stage folder. Download the appropriate Oracle installer zip files and place them in the appropriate folder:



Coming up: a SOA 12c project (including SOA, BPM and OSB), complete with creation of the domain (based on my scripts from 2016 and my Tech Experience talk of last year) and configuration of the node manager. Ready to start up after a 'vagrant up'.

I also uploaded the slides of my talk to SlideShare:
 

Change Data Capture from Oracle with StreamSet Data Collector

Yann Neuhaus - Mon, 2018-06-11 07:21

With the trend of CQRS architectures, where transactions are streamed to a bunch of heterogeneous, eventually consistent, polyglot-persistence microservices, logical replication and Change Data Capture become important components, already at the architecture design phase. This is good for existing product vendors such as Oracle GoldenGate (which must be licensed even to use only the CDC part in the Oracle Database, as Streams is going to be desupported) or Dbvisit replicate to Kafka. But it is also good for Open Source projects; there are some ideas in progress (Debezium, VOODOO) but not yet released.

Today I tested the Oracle CDC Data Collector for StreamSets. StreamSets Data Collector is an open-source project started by former people from Cloudera and Informatica, to define pipelines streaming data from data collectors. It is easy, simple and has a bunch of possible destinations. The Oracle CDC is based on LogMiner, which means that it is easy to use but may have some limitations (mainly datatypes, DDL replication and performance).

Install

The installation guide is at streamsets.com. I chose the easiest way for testing, as they provide a Docker container (https://github.com/streamsets)

# docker run --restart on-failure -p 18630:18630 -d --name streamsets-dc streamsets/datacollector
Unable to find image 'streamsets/datacollector:latest' locally
latest: Pulling from streamsets/datacollector
605ce1bd3f31: Pull complete
529a36eb4b88: Pull complete
09efac34ac22: Pull complete
4d037ef9b54a: Pull complete
c166580a58b2: Pull complete
1c9f78fe3d6c: Pull complete
f5e0c86a8697: Pull complete
a336aef44a65: Pull complete
e8d1e07d3eed: Pull complete
Digest: sha256:0428704019a97f6197dfb492af3c955a0441d9b3eb34dcc72bda6bbcfc7ad932
Status: Downloaded newer image for streamsets/datacollector:latest
ef707344c8bd393f8e9d838dfdb475ec9d5f6f587a2a253ebaaa43845b1b516d

And that’s all. I am ready to connect over HTTP on port 18630.

The default user/password is admin/admin

The GUI looks simple and efficient. There’s a home page where you define the ‘pipelines’ and monitor them running. In the pipelines, we define sources and destinations. Some connectors are already installed, others can be installed automatically. For Oracle, as usual, you need to download the JDBC driver yourself because Oracle doesn’t allow it to be embedded for legal reasons. I’ll do something simple here just to check the mining from Oracle.

In ‘Package Manager’ (the little gift icon on the top) go to JDBC and check ‘install’ for the streamsets-datacollector-jdbc-lib library
Then in ‘External Libraries’, install (with the ‘upload’ icon at the top) the Oracle jdbc driver (ojdbc8.jar).
I’ve also installed the MySQL one for future tests:

File Name Library ID
ojdbc8.jar streamsets-datacollector-jdbc-lib
mysql-connector-java-8.0.11.jar streamsets-datacollector-jdbc-lib

Oracle CDC pipeline

I’ll use the Oracle Change Data Capture here, based on Oracle LogMiner. The GUI is very easy: just select ‘Oracle CDC’ as source in a new pipeline. Click on it and configure it. I’ve set the minimum here.
In the JDBC tab I’ve set only the JDBC Connection String, to jdbc:oracle:thin:scott/tiger@//192.168.56.188:1521/pdb1, which is my PDB (I’m on Oracle 18c here and multitenant is fully supported by StreamSets). In the Credentials tab I’ve set ‘sys as sysdba’ as the username, and its password. The configuration can also be displayed as JSON and here is the corresponding entry:

"configuration": [
{
"name": "hikariConf.connectionString",
"value": "jdbc:oracle:thin:scott/tiger@//192.168.56.188:1521/pdb1"
},
{
"name": "hikariConf.useCredentials",
"value": true
},
{
"name": "hikariConf.username",
"value": "sys as sysdba"
},
{
"name": "hikariConf.password",
"value": "oracle"
},
...

I’ve provided SYSDBA credentials and only the PDB service, but it seems that StreamSets figured out automatically how to connect to the CDB (as LogMiner can be started only from CDB$ROOT). The advantage of using LogMiner here is that you need only a JDBC connection to the source – but of course, it will use CPU and memory resource from the source database host in this case.

Then I’ve defined the replication in the Oracle CDC tab: Schema to ‘SCOTT’ and Table Name Pattern to ‘%’. Initial Change is ‘From Latest Change’, as I just want to see the changes and not actually replicate for this first test. But of course, we can define an SCN here, which is what must be used to ensure consistency between the initial load and the replication. Dictionary Source is ‘Online Catalog’ – this is what will be used by LogMiner to map the object and column IDs to table names and column names. But be careful, as table structure changes may not be managed correctly with this option.

{
"name": "oracleCDCConfigBean.baseConfigBean.schemaTableConfigs",
"value": [
{
"schema": "SCOTT",
"table": "%"
}
] },
{
"name": "oracleCDCConfigBean.baseConfigBean.changeTypes",
"value": [
"INSERT",
"UPDATE",
"DELETE",
"SELECT_FOR_UPDATE"
] },
{
"name": "oracleCDCConfigBean.dictionary",
"value": "DICT_FROM_ONLINE_CATALOG"
},

I’ve left the defaults. I can’t yet think of a reason for capturing the ‘select for update’, but it is there.

Named Pipe destination

I know that the destination part is easy. I just want to see the captured changes here, so I took the easiest destination: Named Pipe, where I configured only the named pipe (/tmp/scott) and the Data Format (JSON).

{
"instanceName": "NamedPipe_01",
"library": "streamsets-datacollector-basic-lib",
"stageName": "com_streamsets_pipeline_stage_destination_fifo_FifoDTarget",
"stageVersion": "1",
"configuration": [
{
"name": "namedPipe",
"value": "/tmp/scott"
},
{
"name": "dataFormat",
"value": "JSON"
},
...

Supplemental logging

The Oracle redo log stream is by default focused only on recovery (replay of transactions in the same database) and contains only the minimal physical information required for it. In order to get enough information to replay transactions in a different database we need supplemental logging, for the database and for the objects involved:

SQL> alter database add supplemental log data;
Database altered.
SQL> exec for i in (select owner,table_name from dba_tables where owner='SCOTT' and table_name like '%') loop execute immediate 'alter table "'||i.owner||'"."'||i.table_name||'" add supplemental log data (primary key) columns'; end loop;
PL/SQL procedure successfully completed.

Run

And that’s all. Just run the pipeline and look at the logs:

[screenshot: pipeline logs]

StreamSets Oracle CDC pulls continuously from LogMiner to get the changes. Here are the queries that it uses for that:

BEGIN DBMS_LOGMNR.START_LOGMNR( STARTTIME => :1 , ENDTIME => :2 , OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.CONTINUOUS_MINE + DBMS_LOGMNR.NO_SQL_DELIMITER); END;

This starts mining between two timestamps. I suppose that it will read the SCNs to get a finer grain and avoid overlapping information.

And here is the main one:

SELECT SCN, USERNAME, OPERATION_CODE, TIMESTAMP, SQL_REDO, TABLE_NAME, COMMIT_SCN, SEQUENCE#, CSF, XIDUSN, XIDSLT, XIDSQN, RS_ID, SSN, SEG_OWNER FROM V$LOGMNR_CONTENTS WHERE ((( (SEG_OWNER='SCOTT' AND TABLE_NAME IN ('BONUS','DEPT','EMP','SALGRADE')) ) AND (OPERATION_CODE IN (1,3,2,25))) OR (OPERATION_CODE = 7 OR OPERATION_CODE = 36))

This reads the redo records. The operation codes 7 and 36 are for commits and rollbacks. The operation codes 1, 3, 2 and 25 are those that we want to capture (insert, update, delete, select for update) and were defined in the configuration. Here the pattern ‘%’ for the SCOTT schema has been expanded to the table names. As far as I know, there’s no DDL mining here to automatically capture new tables.

Capture

Then I’ve run this simple insert (I’ve added a primary key on this table as it is not there from utlsampl.sql):

SQL> insert into scott.dept values(50,'IT','Cloud');

And I committed (as it seems that StreamSets buffers the changes until the end of the transaction)

SQL> commit;

and here I got the message from the pipe:

/ $ cat /tmp/scott
 
{"LOC":"Cloud","DEPTNO":50,"DNAME":"IT"}

The graphical interface shows how the pipeline is going:
[screenshot: pipeline monitoring in the Data Collector GUI]

I’ve tested some bulk loads (direct-path inserts) and they seem to be handled correctly. Actually, this Oracle CDC is based on LogMiner, so it is fully supported (no mining of the proprietary redo stream format) and the limitations are clearly documented.

Monitoring

Remember that the main work is done by LogMiner, so don’t forget to look at the alert.log on the source database. With big transactions, you may need a large PGA (but you can also choose to buffer to disk). If you have the Oracle Tuning Pack, you can also monitor the main query which retrieves the redo information from LogMiner:
[screenshot: monitoring of the LogMiner query]
You will see a different SQL_ID because the filter predicates use literals instead of bind variables (which is not a problem here).

Conclusion

This product is very easy to test, so you can do a Proof of Concept within a few hours and test it for your context: supported datatypes, operations and performance. By easy to test, I mean: very good documentation, a very friendly and responsive graphical interface, very clear error messages,…

 

The post Change Data Capture from Oracle with StreamSet Data Collector appeared first on Blog dbi services.

Real-time Sailing Yacht Performance - stepping back a bit (Part 1.1)

Rittman Mead Consulting - Mon, 2018-06-11 07:20

Slight change to the planned article. At the end of my analysis in Part 1 I discovered I was missing a number of key messages. It turns out that not all the SeaTalk messages from the integrated instruments were being translated to NMEA format and therefore were not being sent wirelessly from the AIS hub. I didn't really want to introduce another source of data directly from the instruments, as it would involve hard-wiring the instruments to the laptop and then translating a different message format (SeaTalk). I decided to spend some money on hardware instead (any excuse for new toys). I purchased a SeaTalk to NMEA converter from DigitalYachts (discounted at the London boat show, I'm glad to say).

This article is about the installation of that hardware and the result (hence Part 1.1), not our usual type of blog. You never know, it may be of interest to somebody out there, and this is a real-life data issue! Don't worry, it will be short and more of an insight into yacht wiring than anything else.

The next blog will be very much back on track. Looking at Kafka in the architecture.

The existing wiring

The following image shows the existing setup, what's behind the panels and how it links to the instrument architecture documented in Part 1. No laughing at the wiring spaghetti - I stripped out half a tonne of cable last year so this is an improvement. Most of the technology lives near the chart table and we have access to the navigation lights, cabin lighting, battery sensors and DSC VHF. The top left image also shows a spare GPS (Garmin) and far left an EPIRB.

Approach

I wanted to make sure I wasn't breaking anything by adding the new hardware, so I followed the approach we use as software engineers: check before, during and after any changes, which lets us narrow down the point at which errors are introduced. To help with this I created a little bit of Python that reads the messages and tells me the unique message types, the total number of messages and the number of messages in error.

 
import json
import sys

#DEF Function to test message
def is_message_valid (orig_line):

........ [Function same code described in Part 1]

#main body
f = open("/Development/step_1.log", "r")

valid_messages = 0
invalid_messages = 0
total_messages = 0
my_list = [""]
#process file main body
for line in f:

  orig_line = line

  if is_message_valid(orig_line):
    valid_messages = valid_messages + 1
    #look for wind message
    #print "message valid"

    if orig_line[0:1] == "$":
      if len(my_list) == 0:
        #print "ny list is empty"
        my_list.insert(0,orig_line[0:6]) 
      else:
        #print orig_line[0:5]
        my_list.append(orig_line[0:6])

      #print orig_line[20:26]

  else:
    invalid_messages = invalid_messages + 1

  total_messages = total_messages + 1

new_list = list(set(my_list))

i = 0

while i < len(new_list):
    print(new_list[i])
    i += 1

#High tech report
print "Summary"
print "#######"
print "valid messages -> ", valid_messages
print "invalid messages -> ", invalid_messages
print "total mesages -> ", total_messages

f.close()

For each of the steps, I used nc to write the output to a log file and then used the Python to analyse the log. I logged about ten minutes of messages for each step, although I have to confess to shortening the last test as I was getting very cold.

nc -l 192.168.1.1 2000 > step_x.log

While spooling the messages I artificially generated some speed data by spinning the wheel of the speedo. The image below shows the speed sensor and where it normally lives (far right image). The water comes in when you take out the sensor, as it temporarily leaves a rather large hole in the bottom of the boat, so don't be alarmed by the little puddle you can see.

Step 1;

I spool and analyse about ten minutes of data without making any changes to the existing setup.

The existing setup takes data directly from the back of a Raymarine instrument seen below and gets linked into the AIS hub.

Results;
 
$AITXT -> AIS (from AIS hub)

$GPRMC -> GPS (from AIS hub)
$GPGGA
$GPGLL
$GPGBS

$IIDBT -> Depth sensor
$IIMTW -> Sea temperature sensor
$IIMWV -> Wind speed 

Summary
#######
valid messages ->  2129
invalid messages ->  298
total messages ->  2427
12% error

Step 2;

I disconnect the NMEA interface between the AIS hub and the integrated instruments. So in the diagram above I disconnect all four NMEA wires from the back of the instrument.

I observe the Navigation display of the integrated instruments no longer displays any GPS information (this is expected as the only GPS messages I have are coming from the AIS hub).

Results;

$AITXT -> AIS (from AIS hub)

$GPRMC -> GPS (from AIS hub)
$GPGGA
$GPGLL
$GPGBS

No $II messages as expected 

Summary
#######
valid messages ->  3639
invalid messages ->  232
total messages ->  3871
6% error

Step 3;

I wire in the new hardware, both NMEA in and out, then directly into the course computer.

Results;

$AITXT -> AIS (from AIS hub)

$GPGBS -> GPS messages
$GPGGA
$GPGLL
$GPRMC

$IIMTW -> Sea temperature sensor
$IIMWV -> Wind speed 
$IIVHW -> Heading & Speed
$IIRSA -> Rudder Angle
$IIHDG -> Heading
$IIVLW -> Distance travelled

Summary
#######
valid messages ->  1661
invalid messages ->  121
total messages ->  1782
6.7% error

Conclusion;

I get all the messages I am after (for now); the hardware seems to be working.

Now to put all the panels back in place!

In the next article, I will get back to technology and the use of Kafka in the architecture.

Categories: BI & Warehousing

dbms_random

Jonathan Lewis - Mon, 2018-06-11 02:31

In a recent ODC thread someone had a piece of SQL that was calling dbms_random.string(‘U’,20) to generate random values for a table of 100,000,000 rows. The thread was about how to handle the ORA-30009 error (not enough memory for operation) that is almost inevitable when you use the “select from dual connect by level <= n” strategy for generating very large numbers of rows, but this example of calling dbms_random.string() so frequently prompted me to point out an important CPU saving, and then publicise through this blog a little-known fact (or deduction) about the dbms_random.string() function.

If you generate a random string of length 6 using only upper-case letters there are 308,915,766 different combinations (26^6); so if you’re after “nearly unique” values for 100 million rows then a six character string is probably good enough – it might give you a small percentage of values which appear in a handful of rows but most of the values are likely to be unique or have two rows. If you want to get closer to uniqueness then 7 characters will do it, and 8 will make it almost certain that you will get a unique value in every row.

So if you want “nearly unique” and “random 20 character strings” it’s probably sufficient to generate random strings of 6 to 8 characters and then rpad() them up to 20 characters with spaces – the saving in CPU will be significant; roughly a factor of 3 (which is going to matter when you’re trying to generate 100 million rows). As a little demo I supplied the OP with a script to create a table of just one million random strings – first of 20 random characters, then of 6 random characters with 14 spaces appended. The run time (mostly CPU) dropped from 1 minute 55 seconds to 41 seconds.

Why is there such a difference? Because to generate a random string of 6 characters Oracle generates a random string of one character six times in a row and concatenates them. The difference between 6 calls and 20 calls per row gives you that factor of around 3. For a quick demo, try running the following anonymous PL/SQL block:

rem
rem     Script:         random_speed.sql
rem     Author:         Jonathan Lewis
rem     Dated:          May 2010
rem

begin
        dbms_random.seed(0);
        dbms_output.put_line(dbms_random.string('U',6));
        dbms_output.new_line;

        dbms_random.seed(0);
        dbms_output.put_line(dbms_random.string('U',1));
        dbms_output.put_line(dbms_random.string('U',1));
        dbms_output.put_line(dbms_random.string('U',1));
        dbms_output.put_line(dbms_random.string('U',1));
        dbms_output.put_line(dbms_random.string('U',1));
        dbms_output.put_line(dbms_random.string('U',1));
end;
/

Here are the results I got from instances of 12.1.0.2, 12.2.0.1, and 18.1.0.0 (from LiveSQL):


BVGFJB
B
V
G
F
J
B

I haven’t shown the tests for all the possible dbms_random.string() options but, unsurprisingly, changing the test to use the ‘L’ (lower case alpha) option produces the same effect (and the same 6 letters changed to lower case). The same effect, with different characters, also appeared using the ‘A’ (mixed case alpha), ‘X’ (uppercase alphanumeric) and ‘P’ (all printable characters) options.

I haven’t considered the effect of using a multi-byte character set – maybe Oracle calls its random number generator once per byte rather than once per character. The investigation is left as an exercise to the interested reader.

tl;dr

When generating a very large number of random strings – keep the “operational” part of the string as short as you can and leave the rest to be rpad()‘ed.

Live Virtual Fluid Training in July

Jim Marion - Sun, 2018-06-10 21:46

In the Northern hemisphere, with days getting longer, and temperatures rising, many of us seek wet forms of recreation to keep cool. With the heat of summer upon us, I can't think of a better topic to study than the cool topic of PeopleSoft Fluid. That is why we are offering a remote live virtual Fluid training class during the hottest week of July. Additional details and registration links are available on our live virtual training schedule page. I look forward to having you in our virtual class!

New FY19 Cloud Platform Specializations Available

Showcase your expertise and distinguish yourself in the market by achieving one or more of our 120+ product and solution specializations. Specialized partners are recognized by Oracle and preferred...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Need Help with Oracle Security GDPR Training and Services

Pete Finnigan - Sun, 2018-06-10 02:46
I talked here a few days ago about GDPR in general and I also published my slides from my talk GDPR for the Oracle DBA. We have been helping clients secure data in their Oracle databases and training people....[Read More]

Posted by Pete On 09/06/18 At 04:33 PM

Categories: Security Blogs

Oracle Cloud Orchestration

Michael Dinh - Sat, 2018-06-09 08:57

Oracle Cloud has a pretty cool concept (Orchestration) for recreating an instance; however, it's all hard-coded.
The orchestration cannot be shared with someone else to create the configuration with a different name or from a different account.
Also, what version is the Orchestration, and shouldn't that be in the metadata?


{
"account" : "/Compute-601138841/default",
"description" : "",
"tags" : [ ],
"name" : "/Compute-601138841/me@yahoo.com/qs-classic",
"objects" : [ {
"account" : "/Compute-601138841/default",
"desired_state" : "inherit",
"description" : "",
"persistent" : true,
"template" : {
"managed" : true,
"description" : "qs-classic Storage Volume",
"bootable" : true,
"shared" : false,
"imagelist" : "/oracle/public/OL_7.2_UEKR4_x86_64",
"size" : "128G",
"properties" : [ "/oracle/public/storage/default" ],
"name" : "/Compute-601138841/me@yahoo.com/qs-classic_storage"
},
"label" : "qs-classic_storage_1",
"orchestration" : "/Compute-601138841/me@yahoo.com/qs-classic",
"type" : "StorageVolume",
"name" : "/Compute-601138841/me@yahoo.com/qs-classic/storage_1"
}, {
"account" : "/Compute-601138841/default",
"desired_state" : "inherit",
"description" : "",
"persistent" : true,
"template" : {
"description" : "qs-classic Security Rule (IP Network) Egress",
"tags" : [ "qs-classic" ],
"flowDirection" : "egress",
"acl" : "{{qs-classic_AccessControlList:name}}",
"enabledFlag" : true,
"name" : "/Compute-601138841/me@yahoo.com/qs-classic_SecurityRule_Egress"
},
"label" : "qs-classic_SecurityRule_Egress",
"orchestration" : "/Compute-601138841/me@yahoo.com/qs-classic",
"type" : "SecurityRule",
"name" : "/Compute-601138841/me@yahoo.com/qs-classic/f10f9f0d-9577-4f83-9b81-e1cc9d8bc9df"
}, {
"account" : "/Compute-601138841/default",
"desired_state" : "inherit",
"description" : "",
"persistent" : true,
"template" : {
"ipAddressPool" : "/oracle/public/public-ippool",
"name" : "/Compute-601138841/me@yahoo.com/qs-classic_IP_eth0_public"
},
"label" : "qs-classic_IP_eth0_public",
"orchestration" : "/Compute-601138841/me@yahoo.com/qs-classic",
"type" : "IpAddressReservation",
"name" : "/Compute-601138841/me@yahoo.com/qs-classic/ac8dbfc5-d560-47e1-8d33-aeb54a0cc4c8"
}, {
"account" : "/Compute-601138841/default",
"desired_state" : "inherit",
"description" : "",
"persistent" : true,
"template" : {
"description" : "qs-classic Security Rule (IP Network)",
"tags" : [ "qs-classic" ],
"flowDirection" : "ingress",
"acl" : "{{qs-classic_AccessControlList:name}}",
"enabledFlag" : true,
"secProtocols" : [ "/oracle/public/ssh" ],
"dstVnicSet" : "{{qs-classic_VnicSet:name}}",
"name" : "/Compute-601138841/me@yahoo.com/qs-classic_SecurityRule"
},
"label" : "qs-classic_SecurityRule",
"orchestration" : "/Compute-601138841/me@yahoo.com/qs-classic",
"type" : "SecurityRule",
"name" : "/Compute-601138841/me@yahoo.com/qs-classic/0ddfc441-7855-47a4-856f-15c400265975"
}, {
"account" : "/Compute-601138841/default",
"desired_state" : "inherit",
"description" : "",
"persistent" : true,
"template" : {
"appliedAcls" : [ "{{qs-classic_AccessControlList:name}}" ],
"description" : "qs-classic Virtual NIC Set",
"name" : "/Compute-601138841/me@yahoo.com/qs-classic_VnicSet",
"tags" : [ "qs-classic" ]
},
"label" : "qs-classic_VnicSet",
"orchestration" : "/Compute-601138841/me@yahoo.com/qs-classic",
"type" : "VirtualNicSet",
"name" : "/Compute-601138841/me@yahoo.com/qs-classic/f7841713-54d1-4c9a-a6cb-32e3e84c753f"
}, {
"account" : "/Compute-601138841/default",
"desired_state" : "inherit",
"description" : "",
"persistent" : true,
"template" : {
"enabledFlag" : true,
"description" : "qs-classic Access Control List",
"name" : "/Compute-601138841/me@yahoo.com/qs-classic_AccessControlList",
"tags" : [ "qs-classic" ]
},
"label" : "qs-classic_AccessControlList",
"orchestration" : "/Compute-601138841/me@yahoo.com/qs-classic",
"type" : "Acl",
"name" : "/Compute-601138841/me@yahoo.com/qs-classic/2f2915ca-5047-45bb-8875-0ae183a6425f"
}, {
"account" : "/Compute-601138841/default",
"desired_state" : "inherit",
"description" : "",
"persistent" : true,
"template" : {
"ipAddressPool" : "/oracle/public/cloud-ippool",
"name" : "/Compute-601138841/me@yahoo.com/qs-classic_IP_eth0_cloud"
},
"label" : "qs-classic_IP_eth0_cloud",
"orchestration" : "/Compute-601138841/me@yahoo.com/qs-classic",
"type" : "IpAddressReservation",
"name" : "/Compute-601138841/me@yahoo.com/qs-classic/e3f78f72-3eca-4dd1-a054-fdd3fee8a51b"
}, {
"account" : "/Compute-601138841/default",
"desired_state" : "inherit",
"description" : "",
"persistent" : false,
"template" : {
"networking" : {
"eth0" : {
"vnic" : "/Compute-601138841/me@yahoo.com/qs-classic_eth0",
"ipnetwork" : "/Compute-601138841/default",
"is_default_gateway" : true,
"nat" : [ "network/v1/ipreservation:{{qs-classic_IP_eth0_public:name}}", "network/v1/ipreservation:{{qs-classic_IP_eth0_cloud:name}}" ],
"vnicsets" : [ "{{qs-classic_VnicSet:name}}" ]
}
},
"name" : "/Compute-601138841/me@yahoo.com/qs-classic",
"boot_order" : [ 1 ],
"storage_attachments" : [ {
"volume" : "{{qs-classic_storage_1:name}}",
"index" : 1
} ],
"label" : "qs-classic",
"shape" : "oc3",
"imagelist" : "/oracle/public/OL_7.2_UEKR4_x86_64",
"sshkeys" : [ "/Compute-601138841/me@yahoo.com/qs-classic" ]
},
"label" : "qs-classic_instance",
"orchestration" : "/Compute-601138841/me@yahoo.com/qs-classic",
"type" : "Instance",
"name" : "/Compute-601138841/me@yahoo.com/qs-classic/instance"
} ],
"desired_state" : "active"
}

New Video : Online Relocation of a Pluggable Database

Hemant K Chitale - Fri, 2018-06-08 23:09
I have published a new YouTube Video: Online Relocation of a Pluggable Database.

.
.
.
Categories: DBA Blogs
