Feed aggregator

Oracle Warehouse Management Cloud Update Helps Businesses Scale Logistics to Meet Multi-Channel Demand

Oracle Press Releases - Tue, 2017-04-18 07:00
Press Release
Oracle Warehouse Management Cloud Update Helps Businesses Scale Logistics to Meet Multi-Channel Demand
Momentum of Cloud WMS Solution Touted in Top Industry Analyst Reports

Redwood Shores, Calif.—Apr 18, 2017

Oracle today announced new supply chain cloud functionality that helps businesses master challenging warehouse fulfillment issues and outpace their competition. Version 8 of Oracle Warehouse Management Cloud (formerly the LogFire Warehouse Management System) delivers the latest functional, user experience, and integration enhancements, providing a comprehensive view of inventory from the supplier to the distribution center (DC) to store shelves.

With Forrester Research reporting that the total cloud market is expected to top $236 billion by 2020, and Transparency Market Research reporting that the cloud WMS market is expected to expand from $1.2 billion in 2015 to $4.1 billion by 2024, Oracle is well-positioned to help companies on their journey to adopt SCM cloud solutions.

With Version 8, Oracle continues to lead the industry in modernizing application release cycles by leveraging a 100 percent cloud architecture to offer access to frequent solution improvements, similar to the easy updates of leading consumer applications.

“LogFire has conclusively proven the viability and compelling value proposition of their cloud-based WMS. The Oracle WMS Cloud is a no-compromise solution built from day one for the cloud. It enables companies to thrive in today’s dynamic, interconnected, omni-channel fulfillment economy,” said Jon Chorley, CSO & group vice president, SCM product strategy at Oracle. “Version 8 continues our dedication to providing customers with the very latest logistics execution capabilities, which now also gives them the ability to leverage other compelling cloud solutions from Oracle.”

Oracle Warehouse Management Cloud recently garnered attention for its offering in two different industry analyst reports. Both reports lauded Oracle’s position and commented on increased cloud WMS adoption across industries. The first is the IDC research on WMS in the Cloud, and the second is the 2017 Gartner Magic Quadrant for Warehouse Management Systems, which placed Oracle in the Leaders Quadrant.

Chorley continued, “We believe that 2017 is the year cloud computing in Warehouse Management Systems will go mainstream. Our positioning in the Gartner and IDC reports affirms that businesses are comfortable transitioning their fulfillment operations to the cloud. We’re taking every opportunity to introduce the value of the cloud to our customers to help them grow their businesses.”

At Oracle’s recent major supply chain conference, the Modern Supply Chain Experience in San Jose, California, Oracle users saw this technology first hand. Customers packed into sessions to learn more about the future of their industry and how the cloud will transform everything from sourcing and manufacturing to transportation, warehousing, and even store shelves.

For more information, please see the Gartner Magic Quadrant [1] and IDC’s WMS in the Cloud report [2].

For additional information on Oracle Supply Chain Management (SCM) Cloud, visit Facebook, Twitter or the Oracle SCM blog.

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

1. Gartner, “2017 Magic Quadrant for Warehouse Management Systems,” by C. Dwight Klappich, Simon Tunstall, February 13, 2017. Oracle has acquired distribution rights for this report, expiration March 2018.
2. IDC, “WMS in the Cloud,” by John Santagate, January 2017. Oracle has acquired distribution rights for this report, expiration March 2018.

Contact Info
Joann Wardrip
Oracle
+1.650.607.1343
joann.wardrip@oracle.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Joann Wardrip

  • +1.650.607.1343

Getting Started with Oracle Developer Samples

Are you just joining Oracle's transformation journey to the cloud? Are you just getting used to working with the technology leader? Having trouble getting started? Oracle is developing,...

We share our skills to maximize your revenue!
Categories: DBA Blogs

3 Ways to Make Paperwork Less Difficult

WebCenter Team - Tue, 2017-04-18 06:54

Authored by: Sarah Gabot, Partner Marketing Manager, HelloSign

Let’s face it: paperwork, regardless of whether it’s on paper or in the cloud, is no fun. It still takes time and effort to prepare, and if you’re getting it signed, you’re often left waiting for the document to return. 

Believe it or not, you can make paperwork even simpler with the help of an eSignature tool. 

We’ve listed out three ways you can make paperwork less painful. You can immediately begin applying these to the HelloSign for Oracle Content and Experience Cloud integration today. 

Avoid manually reviewing documents with data validation 
Human error happens. People accidentally input the wrong information or make typos when filling out forms. That’s why HelloSign created data validation. 

Data validation is a feature that prevents users from entering incorrect information. For example, if you require the user to type in his or her social security number, you know that only nine numbers are acceptable. No letters. Exactly nine numbers. 

Using this feature, when the documents return, there will be fewer errors to comb through and less back and forth between you and the signer. 

Increase signer completion rates with mobile-friendly signing
Nearly everybody has a smartphone these days. Being able to handle tasks on-the-go on a smartphone has become a requirement for most people. Signing documents is no exception. 

HelloSign’s signing experience is mobile-friendly. Instead of having to pinch on a small screen, the signer experience is intuitive with responsive pages. There’s also progress tracking that shows how many total (and remaining) required fields are left. 

A more mobile-friendly experience means that your signers are more inclined to complete the document faster. 

Save time by auto-filling documents. 
For certain documents, there are fields that you, the document preparer, can pre-fill. For example, if you have a templated document like an offer letter, you can pre-fill fields in the document (e.g., job title, salary) in HelloSign for Oracle CEC. 

It’s a quick way to prepare standardized documents without having to recreate them every time. 

Try it out yourself! 
See for yourself how you can simplify paperwork flows using Oracle Content and Experience Cloud and HelloSign. To learn how to get started, email our sales team at oracle-sales@hellosign.com or speak to your Oracle Account rep.

OGG: Patch 17030189 is required on your Oracle mining database for trail format RELEASE 12.2

Yann Neuhaus - Tue, 2017-04-18 06:20

Another GoldenGate 12.2 one: Some days ago I had this in the GoldenGate error log:

2017-04-12 14:56:08  WARNING OGG-02901  Oracle GoldenGate Capture for Oracle, extimch.prm:  Replication of UDT and ANYDATA from redo logs is not supported with the Oracle compatible parameter setting. Using fetch instead.
2017-04-12 14:56:08  ERROR   OGG-02912  Oracle GoldenGate Capture for Oracle, extimch.prm:  Patch 17030189 is required on your Oracle mining database for trail format RELEASE 12.2 or later.

Seemed pretty obvious that I was missing a patch.

Headed over to MOS and searched for the patch mentioned:


Hm, only two hits, and I was neither on Exadata nor on the 12.1.0.1 database release. After digging around a bit more in various MOS notes there was one (Doc ID 2091679.1) which finally mentioned a workaround. When you install GoldenGate 12.2 you get a script by default in the GoldenGate Home which is called “prvtlmpg.plb”. Looking at the script:

oracle@xxxxx:/u01/app/ogg/ch_src/product/12.2.0.1.160823/ [xxxxx] ls prvtlmpg.plb
prvtlmpg.plb
oracle@xxxxx:/u01/app/ogg/ch_src/product/12.2.0.1.160823/ [xxxxx] strings prvtlmpg.plb
WHENEVER SQLERROR EXIT
set verify off 
set feedback off
set echo off
set serveroutput on
column quotedMiningUser new_value quotedMiningUser noprint
column quotedCurrentSchema new_value quotedCurrentSchema noprint
variable status number
prompt
prompt Oracle GoldenGate Workaround prvtlmpg
prompt
prompt This script provides a temporary workaround for bug 17030189.
prompt It is strongly recommended that you apply the official Oracle 
prompt Patch for bug 17030189 from My Oracle Support instead of using
prompt this workaround.
prompt
prompt This script must be executed in the mining database of Integrated
prompt Capture. You will be prompted for the username of the mining user.
prompt Use a double quoted identifier if the username is case sensitive
prompt or contains special characters. In a CDB environment, this script
prompt must be executed from the CDB$ROOT container and the mining user
prompt must be a common user.
prompt
prompt ===========================  WARNING  ==========================
prompt You MUST stop all Integrated Captures that belong to this mining
prompt user before proceeding!
prompt ================================================================

Really? You get a script to work around a known issue by default? Let’s try:

SQL> @prvtlmpg.plb

Oracle GoldenGate Workaround prvtlmpg

This script provides a temporary workaround for bug 17030189.
It is strongly recommended that you apply the official Oracle
Patch for bug 17030189 from My Oracle Support instead of using
this workaround.

This script must be executed in the mining database of Integrated
Capture. You will be prompted for the username of the mining user.
Use a double quoted identifier if the username is case sensitive
or contains special characters. In a CDB environment, this script
must be executed from the CDB$ROOT container and the mining user
must be a common user.

===========================  WARNING  ==========================
You MUST stop all Integrated Captures that belong to this mining
user before proceeding!
================================================================

Enter Integrated Capture mining user: GGADMIN

Installing workaround...                                                                                         
No errors.                                                                                                       
No errors.
No errors.                                                                                                       
Installation completed.                                                                                          
SQL>                                                                                                             

And finally the extract started fine. Interesting… There seems to be a patch for the 11.2.0.4.7 DB PSU in development but nothing else for the moment. Even the latest PSU for 11.2.0.4 does not seem to include the patch.

 

Cet article OGG: Patch 17030189 is required on your Oracle mining database for trail format RELEASE 12.2 est apparu en premier sur Blog dbi services.

Oracle Audit Trail Add Program Name

The program name attribute (V$SESSION.PROGRAM) is not by default passed to Oracle’s audit logs. It can be optionally included. To do so, apply Patch 7023214 on the source database. After the patch is applied, the following event needs to be set:

ALTER SYSTEM SET
           EVENT='28058 trace name context forever'
           COMMENT='enable program logging in audit trail' SCOPE=SPFILE;

The table below summarizes the key session attributes (V$SESSION) that are and are not passed to Oracle auditing:

Oracle Audit Trails

Session Attribute    Description                                    Traditional           Fine Grained
(V$SESSION)                                                         Auditing (SYS.AUD$)   Auditing (SYS.FGA_LOG$)
-----------------    ---------------------------------------------  --------------------  -----------------------
CLIENT_IDENTIFIER    End user username                              CLIENTID              CLIENTID
CLIENT_INFO          Concatenated application log string            Not passed            Not passed
MODULE               ABAP program, module, application component    Not passed            Not passed
                     or service
ACTION               Business action being executed, page, code     Not passed            Not passed
                     event, location within program
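
For context, these attributes are set by the application session itself. Here is a minimal PL/SQL sketch (the identifier, module, and action values are illustrative) of how an application populates them:

-- Sketch: populating the V$SESSION attributes summarized above
-- (values are illustrative; per the table, only CLIENT_IDENTIFIER
--  is passed to SYS.AUD$ and SYS.FGA_LOG$)
BEGIN
   DBMS_SESSION.SET_IDENTIFIER('jsmith');                    -- CLIENT_IDENTIFIER
   DBMS_APPLICATION_INFO.SET_CLIENT_INFO('batch run 42');    -- CLIENT_INFO
   DBMS_APPLICATION_INFO.SET_MODULE(
      module_name => 'AP_INVOICES',                          -- MODULE
      action_name => 'POST');                                -- ACTION
END;
/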

 

If you have questions, please contact us at info@integrigy.com

-Michael Miller, CISSP-ISSMP

Auditing, Oracle Database, Oracle Audit Vault
Categories: APPS Blogs, Security Blogs

Slow performance of a single table under various DML operations

Tom Kyte - Tue, 2017-04-18 01:46
Suppose I have a table called account_detail of customer account details, and this table contains 100,000,000 rows. Suppose we fire a select query on this table while, at the same time, insert, update, or delete statements may or may not be fired by staff...
Categories: DBA Blogs

PL SQL Bulk Insertion with a Mapping

Tom Kyte - Tue, 2017-04-18 01:46
I have a table Trans containing around 10 billion records and having an index on a column "customer_id". Table structure: FCT_Trans customer_id varchar2(20), trans_date date, trans_type varchar2(10), Cust_name varchar2(100), etc. I have another ...
Categories: DBA Blogs

Trigger

Tom Kyte - Tue, 2017-04-18 01:46
Hi Team, I have to create a trigger which will record the old value as well as the new value in case of an insert. The situation is that I have one application, and on the frontend whenever some change happens the xyz_id gets changed. At the backend, ...
Categories: DBA Blogs

Migrating huge tables with blobs

Tom Kyte - Tue, 2017-04-18 01:46
Hello Tom, we are migrating from Informix to Oracle and we have this problem: we have to unload tables with millions of records, each record has a blob, and the blobs are unloaded into blob files of nearly 2 GB each. 170 thousand blobs are in one blob fil...
Categories: DBA Blogs

Loading data to CLOB column

Tom Kyte - Tue, 2017-04-18 01:46
Hello, I have 2 tables T1 and T2: <code> CREATE TABLE "T1" ("DATE_M" DATE, "ID" VARCHAR2(20), "ADDR" VARCHAR2(17), "VER" VARCHAR2(50), "MODEL" VARCHAR2(10), "ADD_I" VARCHAR2(10), "SN" VARCHAR2(15), "MODE" VARCHAR2(5), ...
Categories: DBA Blogs

Storing historic records

Tom Kyte - Tue, 2017-04-18 01:46
Hi, I have a requirement to write a view on a table containing historic price information for different products. The 1st history record (when sorted in ascending order of date) for a product should have the GRANT_DATE as the effective_date...
Categories: DBA Blogs

How to make dashboard using Pointbase database

Nilesh Jethwa - Mon, 2017-04-17 16:05

Instant Visibility. A Pointbase dashboard visually summarizes all the important metrics you have selected to track, to give you a quick-and-easy overview of where everything stands. With real-time Pointbase SQL reporting, it's a live view of exactly how your marketing campaign is performing.

  • Better Decision Making
  • Gain Competitive Advantage
  • Enhance Collaboration
  • Spotting potential problems
  • Merge with data from Excel Dashboards
  • Live SQL against database
  • No need for Data-warehouse or ETL
  • Leverage the speed and stability of your powerful database.

Read more at http://www.infocaptor.com/ice-database-connect-dashboard-to-pointbase-sql

How to find the object that caused ORA-08103 error

Bobby Durrett's DBA Blog - Mon, 2017-04-17 14:50

A developer told me that two package executions died with ORA-08103 errors and he didn’t know which object caused the errors.

I found two trace files that had the following contents:

*** SESSION ID:(865.1201) 2017-04-17 10:17:09.476
OBJD MISMATCH typ=6, seg.obj=21058339, diskobj=21058934, dsflg=100000, dsobj=21058339, tid=21058339, cls=1

*** SESSION ID:(595.1611) 2017-04-17 10:17:35.395
OBJD MISMATCH typ=6, seg.obj=21058340, diskobj=21058935, dsflg=100000, dsobj=21058340, tid=21058340, cls=1

Bug 13844883 on Oracle’s support site gave me the idea to look up the object id for the diskobj part of the trace as the current object id. So, I needed to look up 21058934 and 21058935. I used this query to find the objects:

select * from dba_objects where DATA_OBJECT_ID in
(21058934,
21058935);

This pointed to two index partitions that had been rebuilt while the package was running. I’m pretty sure this caused the ORA-08103 error. So, if you get an ORA-08103 error, find diskobj in the trace file and look it up as DATA_OBJECT_ID in dba_objects.
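
If you have a batch of trace files to check, a one-liner along these lines pulls out the distinct diskobj values (a sketch; adjust the glob to your diag trace directory):

# sketch: extract distinct diskobj values from ORA-08103 trace files
grep -h "OBJD MISMATCH" $ORACLE_BASE/diag/rdbms/*/*/trace/*.trc |
  sed 's/.*diskobj=\([0-9]*\).*/\1/' | sort -u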

Bobby

Categories: DBA Blogs

HCM Cloud R12 - 3 Cool Things

Floyd Teter - Mon, 2017-04-17 12:34
Just to give y'all a taste for HCM Cloud R12 as it rolls out, here are 3 new features I find really cool.

1.  Home Page with Quick Actions 


The coolness here comes from being able to easily initiate an action without requiring the user to have any knowledge of the application structure, navigation, or work area organization.  Simply find what you want to do and do it.  And, for the security geeks out there, access control is based on functional security for roles.

2.  Personalized Email Notifications


This is a feature folks have been requesting for some time:  personalizing email notifications.  You can apply your brand and preferred content as well as...wait for it...define custom templates for life cycle events.  And the scope of approval/rejections and requests for more information has been expanded for R12.

3.  UX Consistency Across Devices


For a long time, we've been working toward user experience ("UX") consistency across devices: the idea that the cloud is a platform that works the same way regardless of the device used for access.  We've nailed that concept in R12.  Look, feel, and work processes across devices are consistent.  Your desktop, your laptop, your tablet, your phone...use what you want wherever you are.  The UX will remain the same.

So there you have it...3 cool things about R12.  You have others?  Tell us about them.  Find the comments.

Managing Oracle Cloud IaaS - The API way

Marcelo Ochoa - Mon, 2017-04-17 10:47
My last post shows how to deploy, for example, a Docker Swarm cluster at Oracle Cloud IaaS services.
This time I'll show you how to do the same without using the web interface; I am an old-fashioned sysadmin and I love scripting and console tools ;)
The idea is to create a five-node Docker Swarm cluster in five steps:

  • Deploy storage (Disks)
  • Deploy instances (VMs)
  • Deploy Docker
  • Deploy Docker Machine
  • Deploy Swarm

Oracle provides a complete set of URL endpoints to manage all the infrastructure you need; here is a sample of the functionality:

  • Accounts
  • ACLs
  • Images
  • IP Address
  • Orchestrations
  • Security
  • Snapshots
  • Storage
  • Virtual NICs
  • VPN

as you can see, it covers virtually all your needs for managing the cloud infrastructure.
Before starting, you need some basic information to manage the cloud through the API.

  • Username/Password cloud access
  • Identity domain

The above information is usually included in the welcome mail you receive when you register a cloud account; the documentation gives a full explanation of how to deal with it.
Also, using the Compute UI, I uploaded a public SSH key named ubnt.pub and imported an Ubuntu 16.10 compute image from the Cloud Marketplace, allowing SSH access to my compute instances.

Once you have the user/password/identity-domain/API-endpoint information, you are able to log in using the curl Linux command-line tool, which is all that is required to run my shell examples; you can see them online at my GitHub account.
Step one - Deploy storage (create disks for the VMs)
The deploy-storage.sh script basically creates a boot disk for each VM and a data disk to store the Docker repository (aufs-backed storage on top of an ext4 file system).
Here is an example calling the script with the proper arguments:
[mochoa@localhost es]$ export API_URL="https://api-z999.compute.us0.oraclecloud.com/"
[mochoa@localhost es]$ export COMPUTE_COOKIE="Set-Cookie: nimbula=eyJpZGVudGl0eSI6ICJ7XCJyZWFsbVwiOiBcImNvbXB1dGUtdXM2LXoyOFwiLCBcInZhbHVlXCI6IFwie1xcXCJjdXN0b21lclxcXCI6IFxcXCJDb21wdXRlLWFjbWVjY3NcXFwiLCBcXFwicmVhbG1cXFwiOiBcXFwiY29tcHV0ZS11czYtejI4XFxcIiwgXFxcImVudGl0eV90eXBlXFxcIjogXFxcInVzZXJcXFwiLCBcXFwic2Vzc2lvbl9leHBpcmVzXFxcIjogMTQ2MDQ4NjA5Mi44MDM1NiwgXFxcImV4cGlyZXNcXFwiOiAxNDYwNDc3MDkyLjgwMzU5MiwgXFxcInVzZXJcXFwiOiBcXFwiL0NvbXB1dGUtYWNtZWNjcy9zeWxhamEua2FubmFuQG9yYWNsZS5jb21cXFwiLCBcXFwiZ3JvdXBzXFxcIjogW1xcXCIvQ29tcHV0ZS1hY21lY2NzL0NvbXB1dGUuQ29tcHV0ZV9PcGVyYXRpb25zXFxcIiwgXFxcIi9Db21wdXRlLWFjbWVjY3MvQ29tcHV0ZS5Db21wdXRlX01vbml0b3JcXFwiXX1cIiwgXCJzaWduYXR1cmVcIjogXCJRT0xaeUZZdU54SmdjL3FuSk16MDRnNmRWVng2blY5S0JpYm5zeFNCWXJXcVVJVGZmMkZtdjhoTytaVnZwQVdURGpwczRNMHZTc2RocWw3QmM0VGJpSmhFTWVyNFBjVVgvb05qd2VpaUcyaStBeDBPWmc3SDJFSjRITWQ0S1V3eTl6NlYzRHd4eUhwTjdqM0w0eEFUTDUyeVpVQWVQK1diMkdzU1pjMmpTaHZyNi9ibU1CZ1Nyd2M4MUdxdURBMFN6d044V2VneUF1YVk5QTUxZmxaanJBMGVvVUJudmZ6NGxCUVVIZXloYyt0SXZVaDdUcGU2RGwxd3RSeFNGVVlQR0FEQk9xMExGaVd1QlpaU0FTZVcwOHBZcEZ2a2lOZXdPdU9LaU93dFc3VkFtZ3VHT0E1Yk1ibzYvMm5oZEhTWHJhYmtsY000UVE1LzZUMDJlZUpTYVE9PVwifSJ9; Path=/; Max-Age=1800"
[mochoa@localhost es]$ ./deploy-storage.sh "$COMPUTE_COOKIE" "$API_URL"
The COMPUTE_COOKIE value is generated as described in the documentation section "Step 5: Get an Authentication Cookie"; a sketch of that login call is shown below. API_URL is the value shown on the Compute console UI dashboard.
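
A minimal sketch of that login call, assuming the documented /authenticate/ endpoint (the user path and password are placeholders for your own identity-domain credentials):

# Sketch: obtain the authentication cookie (placeholder credentials)
[mochoa@localhost es]$ curl -i -X POST \
      -H "Content-Type: application/oracle-compute-v3+json" \
      -d '{"user": "/Compute-acmeccs/user@example.com", "password": "MyPassword"}' \
      "${API_URL}authenticate/"
# the cookie comes back in the Set-Cookie: nimbula=... response header;
# export that whole header value as COMPUTE_COOKIE, as shown above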
The shell script basically parses a file of cloud hosts with this syntax:
oc1 129.144.12.125 es_ingest
oc2 129.144.12.234 es_data
oc3 129.144.12.229 es_data
oc4 129.144.12.74 es_master
oc5 129.144.12.140 es_master
During storage operations only the first column is used: the script creates a 10GB boot disk named boot_[instance_name] of type storage/default with boot enabled, and a repository disk named repo_[instance_name] of 45GB and type storage/latency, designed for fast data access. A sketch of the underlying REST call is shown below.
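
For reference, a hedged sketch of the storage-volume creation call the script wraps (the endpoint and fields follow the Compute Classic REST API; the identity-domain path in the name is a placeholder):

# Sketch: create the 45GB repository volume for instance oc1 (placeholder names)
[mochoa@localhost es]$ curl -X POST \
      -H "Content-Type: application/oracle-compute-v3+json" \
      -H "Cookie: nimbula=..." \
      -d '{"name": "/Compute-acmeccs/user@example.com/repo_oc1",
           "size": "45G",
           "properties": ["/oracle/public/storage/latency"]}' \
      "${API_URL}storage/volume/"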
After a few seconds you can see the storage created as is shown below:
Step two - Create instances (VMs)
Once your storage volumes are all online, the script deploy-nodes.sh will create your VMs; here is a command-line example. Note that the COMPUTE_COOKIE must still be valid: the cookie expires after 1800 seconds, and once it has expired you have to repeat the login call.
deploy-nodes.sh also uses the first column of the cloud.hosts file to name your instances and to locate the proper storage; a sketch of the underlying launch-plan call follows the output below.
[mochoa@localhost es]$ ./deploy-nodes.sh "$COMPUTE_COOKIE" "$API_URL"
-----------------
Creating Nodes...
-----------------
creating oc1 node...
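
Under the hood, instance creation goes through a launch plan. Here is a hedged sketch of such a call (the field names follow the Compute Classic launchplan API as I understand it; the instance name, shape, and SSH key path are placeholders):

# Sketch: launch instance oc1 attached to its two volumes (placeholders throughout)
[mochoa@localhost es]$ curl -X POST \
      -H "Content-Type: application/oracle-compute-v3+json" \
      -H "Cookie: nimbula=..." \
      -d '{"instances": [{
             "name": "/Compute-acmeccs/user@example.com/oc1",
             "label": "oc1",
             "shape": "oc3",
             "boot_order": [1],
             "storage_attachments": [
               {"volume": "/Compute-acmeccs/user@example.com/boot_oc1", "index": 1},
               {"volume": "/Compute-acmeccs/user@example.com/repo_oc1", "index": 2}],
             "sshkeys": ["/Compute-acmeccs/user@example.com/ubnt"]}]}' \
      "${API_URL}launchplan/"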
after a few minutes all your instances will be ready, as shown below:
 
at this point you can see the actual public IPs assigned to the instances; you have to edit the cloud.hosts file using this information.
Step three - Deploy Docker software
deploy-docker.sh uses the first two columns of the cloud.hosts file: the instance name and the public IP for SSH access.
Unlike the previous scripts, it requires your private SSH key (associated with the public one uploaded via the Compute UI web pages); here is a sample call:
[mochoa@localhost es]$ ./deploy-docker.sh /home/mochoa/Documents/Scotas/ubnt
----------------------------
Deploying Docker to nodes...
----------------------------
Deploying oc1 node with docker software ...
# Host 129.144.12.125 found: line 285
/home/mochoa/.ssh/known_hosts updated.
Original contents retained as /home/mochoa/.ssh/known_hosts.old
The shell script mainly pulls a script named oracle-cloud-node-conf.sh which does all the post-installation steps to get the Ubuntu 16.10 OS updated and Docker installed; it also prepares an ext4 partition on /dev/xvdc (the disk named repo_[instance_name] at instance creation time) and finally reboots the instance to pick up the updated kernel. A rough sketch of those steps is shown below.
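
A minimal sketch of what such a post-install script typically does (these are standard Ubuntu commands, assumed for illustration; the real oracle-cloud-node-conf.sh in the GitHub repository is authoritative):

# Sketch: assumed post-install steps, run as root on each node
apt-get update && apt-get -y dist-upgrade              # bring Ubuntu 16.10 up to date
apt-get -y install docker.io                           # install Docker
mkfs.ext4 /dev/xvdc                                    # format the 45GB repo disk
echo "/dev/xvdc /var/lib/docker ext4 defaults 0 2" >> /etc/fstab
mount /var/lib/docker                                  # Docker storage (aufs on ext4) lives here
reboot                                                 # boot into the updated kernel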
Step four - Deploy Docker Machine software (cloud side and local side)
Docker Machine is designed to manage your Docker cloud instances from the command line. It supports custom drivers such as VirtualBox or AWS; for Oracle Cloud there isn't a driver, but you can register your cloud instances as generic instances managed over SSH.
deploy-machines.sh also uses the first two columns of the cloud.hosts file and needs your SSH private key file; here is a sample call:
[mochoa@localhost es]$ ./deploy-machines.sh /home/mochoa/Documents/Scotas/ubnt
-------------------------------------
Deploying docker-machine for nodes...
-------------------------------------
Creating oc1 docker-machine ...
Running pre-create checks...
Creating machine...
(oc1) Importing SSH key...
Waiting for machine to be running, this may take a few minutes...
Step five - Init Docker Swarm
The last step also uses the first two columns of your cloud.hosts file and receives only one argument, which defines the instance that acts as the Swarm master node.
deploy-swarm.sh uses the docker-machine(s) created in the previous step; here is an example call:
[mochoa@localhost es]$ ./deploy-swarm.sh oc5
----------------------------
Deploying swarm for nodes...
----------------------------
At this point your Docker Swarm cluster is ready to use; the sketch below shows the core of what such a script does.
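
For reference, a hedged sketch of the swarm initialization such a script performs (standard docker and docker-machine commands; node names follow the cloud.hosts file):

# Sketch: init the swarm on the master and join the workers (illustrative)
MASTER=oc5
MASTER_IP=$(docker-machine ip $MASTER)
docker-machine ssh $MASTER "docker swarm init --advertise-addr $MASTER_IP"
TOKEN=$(docker-machine ssh $MASTER "docker swarm join-token -q worker")
for node in oc1 oc2 oc3 oc4; do
  docker-machine ssh $node "docker swarm join --token $TOKEN $MASTER_IP:2377"
done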
Testing with Elastic Search images
As in my previous post, Deploying an ElasticSearch cluster at Oracle Cloud, a Swarm cluster can easily be tested with an Elasticsearch cluster. I modified the script deploy-cluster.sh to use the cloud.hosts information, label the Swarm nodes using the third column of the cloud.hosts file, and finally build my custom Docker images and start the cluster; here is a sample usage:
[mochoa@localhost es]$ ./deploy-cluster.sh oc5
----------------------------
Deploying swarm for nodes...
----------------------------
-----------------------------------------
Building private ES image on all nodes...
-----------------------------------------
Building ES at oc1 node ...
The oc5 argument is the instance defined in the previous step as the Swarm master node; the script also leaves a Swarm Visualizer running on port 8080 of the oc5 instance and the Cerebro ES monitoring tool on port 9000.
As you can see, once your scripts are ready, the cloud infrastructure can be up and running in minutes using the Cloud API.


Things To Do, Places To Be If Content Is Your Thing

WebCenter Team - Mon, 2017-04-17 07:27

‘Tis the season to be busy. No, I am not talking about the tax season but we do have a lot of things cooking up on the Content side of the house this spring. You will be hearing a lot about Oracle Cloud solutions for Content and Experience over the next few days and coming weeks.

Here’s a quick snapshot of things to do and places to be, to have the front row seating (so to say, not literally, of course) for all the upcoming activities:

Thursday, April 20
10 a.m. PST / 1 p.m. EST

Live Tweet Chat: Is headless Content Management System signaling the end of Web Content Mgmt?

Follow and participate using #contentdgtl

Don’t sit on the sidelines on this one. Hear what the industry is saying about the direction of content management and participate in the live one-hour conversation on Twitter. Simply use one of the tweet chat platforms to follow the conversation on #contentdgtl and chime in (more on tweet chat logistics here). Hear and be heard.

Tuesday, April 25th – Thursday, April 27th, Modern Customer Experience Conference, Las Vegas, NV

If you are planning to be in Las Vegas for the Modern CX Conference, don’t miss catching up with our executives, customers and product experts and see our solutions in action! Here are a few sessions featuring customers, industry thought leaders, executives and product experts to keep note of:

  • Customer Experience is the new Battleground: A Customer Panel on Business Transformation [KEY1314]
    Wednesday, Apr 26 | 8:30 a.m. | Mandalay Bay Ballroom
  • Content: The Linchpin for Consistent Experiences in an Omni-channel World [THT1269] 
    Wednesday, Apr 26 | 1:00 p.m. | CX Hub Theater

    Thursday, Apr 27 | 1:00 p.m. | CX Hub Theater
  • Transforming Global Operations for Sales and Marketing: The Atradius Story [BRK1069]
    Wednesday, Apr 26 | 4:00 p.m. | CX Sales Theater 1
  • Drive Omni-channel Content Operations with Oracle Content and Experience Cloud  [THT1327]
    Wednesday, Apr 26 | 6:40 p.m. | Modern Marketing Theater
  • Deliver Rich and Consistent Omni-channel Experiences [BRK1190]
    Thursday, Apr 27 | 2:00 p.m. | Reef A
  • Bring Proven Digital Marketing Strategies to Highly Regulated Industries [BRK1105]
    Thursday, Apr 27 | 3:15 p.m. | Breakers K and L

We will also have “Omni-Channel Content and Experience” kiosks in both CX and Modern Marketing Exhibition areas so do come check out the solutions in action.

Tuesday, May 2nd
10:00 a.m. PDT/ 1:00 p.m. EDT

Live Webcast: Introducing Oracle Content and Experience Cloud

Join Oracle executive David Le Strat for a webcast on May 2 at 10 a.m. PDT to learn about Oracle Content and Experience Cloud, a Digital Experience platform that drives omni-channel content management and delivers engaging experiences to your customers, partners, and employees.

Register here:  http://bit.ly/2mILLPi

We will continue to share exciting updates and information in the meantime, but do mark your calendars so you can catch and participate in all of the above-mentioned events. You know why it is important for you and me to participate in these? Well, here’s one reason:


Interana

DBMS2 - Mon, 2017-04-17 05:10

Interana has an interesting story, in technology and business model alike. For starters:

  • Interana does ad-hoc event series analytics, which they call “interactive behavioral analytics solutions”.
  • Interana has a full-stack analytic offering, including:
    • Its own columnar DBMS …
    • … which has a non-SQL DML (Data Manipulation Language) meant to handle event series a lot more fluently than SQL does, but which the user is never expected to learn because …
    • … there also are BI-like visual analytics tools that support plenty of drilldown.
  • Interana sells all this to “product” departments rather than marketing, because marketing doesn’t sufficiently value Interana’s ad-hoc query flexibility.
  • Interana boasts >40 customers, with annual subscription fees ranging from high 5 figures to low 7 figures.

And to be clear — if we leave aside any questions of marketing-name sizzle, this really is business intelligence. The closest Interana comes to helping with predictive modeling is giving its ad-hoc users inspiration as to where they should focus their modeling attention.

Interana also has an interesting twist in its business model, which I hope can be used successfully by other enterprise software startups as well.

  • For now, at no extra charge, Interana will operate its software for you as a managed service. (A majority of Interana’s clients run the software on Amazon or Azure, where that kind of offering makes sense.)
  • However, presumably in connection with greater confidence in its software’s ease of administration, Interana will move this year toward unbundling the service as an extra-charge offering on top of the software itself.

The key to understanding Interana is its DML. Notes on that include:

  • Interana’s DML is focused on path analytics …
    • … but Interana doesn’t like to use that phrase because it sounds too math-y and difficult.
    • Interana may be the first company that’s ever told me it’s focused on providing a better nPath. :)
  • Primitives in Interana’s language — notwithstanding the company’s claim that it never ever intended to sell to marketing departments — include familiar web analytics concepts such as “session”, “funnel” and so on. (However, these are being renamed to more neutral terms such as “flow” in an upcoming version of the product.)
  • As typical example questions or analytic subjects, Interana offered:
    • “Which are the most common products in shopping carts where time-to-checkout was greater than 30 minutes?”
    • “Exactly which steps in the onboarding process result in the greatest user frustration?”
  • The Interana folks and I agree that Splunk is the most recent example of a new DML kicking off a significant company.
  • The most recent example I can think of in which a vendor hung its hat on a new DML that was a “visual programming language” is StreamBase, with EventFlow. That didn’t go all that well.
  • To use Founder/CTO Bobby Johnson’s summary term, the real goal of the Interana language is to describe a state machine, specifically one that produces (sets of) sequences of events (and the elapsed time between them).

Notes on Interana speeds & feeds include:

  • Interana only promises data freshness up to micro-batch latencies — i.e., a few minutes. (Obviously, this shuts them out of most networking monitoring and devops use cases.)
  • Interana thinks it’s very important for query response time to max out at a low number of seconds. If necessary, the software will return approximate results rather than exact ones so as to meet this standard.
  • Interana installations and workloads to date have gotten as large as:
    • 1-200 nodes.
    • Trillions of rows, equating to 100s of TBs of data after compression/ >1 PB uncompressed.
    • Billions of rows/events received per day.
    • 100s of 1000s of (very sparse) columns.
    • 1000s of named users.

Although Interana’s original design point was spinning disk, most customers store their Interana data on flash.

Interana architecture choices include:

  • They’re serious about micro-batching.
    • If the user’s data is naturally micro-batched — e.g. a new S3 bucket every few minutes — Interana works with that.
    • Even if the customer’s data is streamed — e.g. via Kafka — Interana insists on micro-batching it.
  • They’re casual about schemas.
    • Interana assumes data arrives with some kind of recognizable structure, via JSON, CSV or whatever.
      • Interana observes, correctly, that log data often is decently structured.
        • For example, if you’re receiving “phone home” pings from products you originally manufactured, you know what data structures to expect.
        • Interana calls this “logging with intent”.
      • Interana is fine with a certain amount of JSON (for example) schema change over time.
      • If your arriving data truly is a mess, then you need to calm it down via a pass through Splunk or whatever before sending it to Interana.
    • JSON hierarchies turn into multi-part column names in the usual way.
    • Interana supports one level of true nesting, and one level only; column values can be “lists”, but list values can’t be lists themselves.

Finally, other Interana tech notes include:

  • Compression is a central design consideration …
    • … especially but not only compression algorithms designed to deal with great sparseness, such as run-length encoding (RLE).
    • Dictionary compression, in a strategy that is rarer than I once expected it to be, uses a global rather than shard-by-shard dictionary. The data Interana expects is of low-enough cardinality for this to be the better choice.
    • Column data is sorted. A big part of the reason is of course to aid compression.
    • Compression strategies are chosen automatically for each segment. Wholly automatically, I gather; you can’t tune the choice manually.
  • As you would think, Interana technically includes multiple data stores.
    • Data first hits a write-optimized store. Unlike the case of Vertica, this WOS never is involved in answering queries.
    • Asynchronously, the data is broken into columns, and banged to “disk”.
    • Asynchronously again, the data is sorted.
    • Queries run against sorted data, sorting recent blocks on-the-fly if necessary.
  • Interana lets you shard different replicas of the data according to different shard keys.
  • Interana is proud of the random sampling it does when serving approximate query results.
Categories: Other

Welcome to M|17, part 2

Yann Neuhaus - Mon, 2017-04-17 03:57

Welcome to the second day of MariaDB's first user conference.
On the 12th at 09:00 the first-ever experimental MariaDB Associate certification exam started, and I was glad to be among the first to participate.
This exam was offered free of charge to all registered attendees.
As I wrote above, it was really experimental, because all candidates faced many problems.
Certification
First, as this exam was proctored, the authentication process was very, very slow, essentially due to the overloaded network.
Once done, we were all expecting multiple-choice questions as in almost all other certifications; instead, we had to perform real-world database administration tasks on a remote Linux box where a MariaDB server was installed.
The following skills were tested:
Server configuration
Security
Users and Roles
Schema Operations
Query Performance
Backup and Restore
The exam duration was 90 minutes, but when you are facing network breaks and slowness, it's really short.
To pass the exam you need 48 points out of a total of 60, i.e. 80%.
One thing you must not forget when you are finished is to restart the MariaDB server, otherwise all your server configuration answers are lost.
They kindly warned us before we started, but at the end there was no alert and communication abruptly stopped.
This certification will definitely be available online in one or two months.
After lunch, which was as the day before a big buffet but more exotic, my decision was to go to the session of Ashraf Sharif from Severalnines
Step-By-Step: Clustering with Galera and Docker Swarm
I was really happy to see him as we often collaborated for several ClusterControl support cases. He was happy too
Unfortunately for him, he had to speed up, because 45 minutes was not enough for such a vast topic.
It was even quite a challenge as he had more than 140 slides and a demo
Several keynotes were then given to close this two-day event in the conference center.
Again the air-conditioning was too cool and this time I got sick
Gunnar Hellekson, director of Product Management for Red Hat Enterprise Linux, started with Open Source in a Dangerous World.
He discussed mainly how we can leverage the amazing innovation coming out of open source communities while still plotting a journey with secure, stable and supported open source platforms, illustrating with some examples of customers and organizations that use open source not just to innovate but to add competitive advantage.
The last keynote was given by Michael Widenius himself: Everything Old is New: the Return of Relational.
As the database landscape is changing, evolving very fast, and is no longer the property of a few vendors such as Oracle, IBM or Microsoft, he is convinced that even if NoSQL may work for a subset of use cases, open source relational databases are delivering more and more capabilities for NoSQL use cases at a rapid pace.

As a conclusion to MariaDB's first user conference, my overall impression is positive: it was well organized, all the staff were enthusiastic and open, and we could meet and talk with a lot of different people.
So: a sweet, juicy, well-dosed workshop, some high-level sessions to bring sweetness and acidity into perfect harmony, 3 or 4 spicy keynotes to enhance the taste of the event spirit; put all the ingredients into a cocktail shaker, shake, and you obtain the delicious and unforgettable M|17 cocktail.

 

Cet article Welcome to M|17, part 2 est apparu en premier sur Blog dbi services.
