Feed aggregator

Improving Google Crawl Rate Optimization

Nilesh Jethwa - Thu, 2017-12-07 16:12

An SEO strategy has several components, one of which is commonly referred to as crawling, performed by tools often called spiders. When a website is published on the Internet, it is indexed by search engines like Google to determine its relevance. The site is then ranked on the search engine, with a higher ranking translating into higher visibility potential for its primary keywords.

In its indexing process, a search engine must be able to crawl through the website in full, page by page, so that it can determine the site’s digital value. This is why it’s important for all elements of the page to be crawlable, or else there might be pages that the search engine won’t be able to index. As a result, those pages won’t be displayed as relevant results when someone searches for a relevant keyword.

Search engines like Google work fast. A website can be crawled and indexed within minutes of publishing. So one of your main goals is to see to it that your site can be crawled by these indexing bots or spiders. In addition, the easier your site is to crawl, the more points the search engine will add to your overall score for ranking.

There are different methods that you can try to optimize your crawl rate and here are some of them:

Read more at https://www.infocaptor.com/dashboard/how-to-improve-google-crawl-rate-optimization

Errors when downloading a file on page submit in Oracle Application Express 5.1 or later...

Joel Kallman - Thu, 2017-12-07 15:59
Recently, Sharon Kennedy from our team approached me for some help with file download in Oracle Application Express (APEX).  Sharon is the primary developer of Oracle Live SQL (among many of her other responsibilities), and she wanted to initiate a file download in a page process, after page submission.  Since I've done this 100 times in APEX applications, should be easy, right?

Back in 2014, I wrote a short blog post showing how to generate a link to download a file from a BLOB stored in a table.  But this problem was slightly different.  The application flow was:

  1. In Oracle Live SQL Administration, an administrator would click the button "Download Oracle Content"
  2. The page would then be submitted, and a PL/SQL page process would fire, which would query all of the static scripts and tutorials from Live SQL, zip them up using APEX_ZIP, and initiate a file download.

However, when the button was clicked, the page would be submitted, no file download would be initiated, and the following error was displayed on the page:


Error: SyntaxError: Unexpected token r in JSON at position 0



After spending more than an hour debugging the Live SQL application, I resorted to a simpler test case.  I created a trivial application with a button on the first page, which would submit and invoke the PL/SQL page process:

declare
  l_file_blob     blob;
  l_file_name     apex_application_files.filename%type;
  l_file_mimetype apex_application_files.mime_type%type;
begin
  select blob_content, mime_type, filename
    into l_file_blob, l_file_mimetype, l_file_name
    from apex_application_files
   where id = 2928972027711464812;
  -- Emit the HTTP headers for the file download
  sys.owa_util.mime_header( l_file_mimetype, false );
  sys.htp.p('Content-Disposition: attachment; filename="' || l_file_name || '"');
  sys.htp.p('Content-length: ' || sys.dbms_lob.getlength( l_file_blob ));
  sys.owa_util.http_header_close;
  -- Stream the BLOB and stop page processing
  sys.wpg_docload.download_file( l_file_blob );
  apex_application.stop_apex_engine;
end;


With my test case, I encountered exactly the same meaningless error message: "Error: SyntaxError: Unexpected token r in JSON at position 0".

I finally gave up and contacted Patrick Wolf on the APEX product development team, who helped me solve this problem in one minute.  Granted...Patrick was both the creator of the problem and the creator of the solution!

To resolve this problem:

  1. Open the page in Page Designer in Application Builder
  2. Edit the page attributes
  3. In the Advanced section of the page properties on the right hand side of Page Designer, change "Reload on Submit" to "Always" (changing it from "Only for Success" to "Always")
That's it!



Setting "Reload on Submit" to "Always" will POST the page and render the result using the behavior as it was in APEX 5.0 and earlier.  In APEX 5.1, if Reload on Submit is set "Only for Success" (the default), it will use the new optimized page submission process, and expect a specifically formatted JSON result returned from the APEX engine.  Obviously, when I employ a page process which overrides the HTP buffer and emit binary content (instead of a well-formed JSON result), the libraries on the page don't know how to deal with that, and thus, results in this obtuse "Unexpected token r..." message.

Secure Oracle E-Business Suite 12.2 with Allowed Redirects

Steven Chan - Thu, 2017-12-07 13:37

A redirect is an HTTP response with status code "302 Found" and is a common method for redirecting a URL. Client redirects are a potential attack vector. The Oracle E-Business Suite 12.2.4+ Allowed Redirects feature allows you to define a whitelist of allowed redirects for your Oracle E-Business Suite 12.2 environment. Allowed Redirects is enabled by default with Oracle E-Business Suite 12.2.6.

When the Allowed Redirects feature is enabled, redirects to sites that are not configured in your whitelist are not allowed. This feature provides defense against unknown and potentially damaging sites. This is an example of an attack that the Allowed Redirect feature will prevent if properly configured:

Your users will see an error message if a redirect is blocked by Allowed Redirects:

Note: Allowed Redirects only blocks navigation to external sites that happens via client redirects. It is not intended to prevent other methods of accessing external sites.

Categories: APPS Blogs

GoldenGate, SOA Admin, OAM & Apps DBA + FREE Training This Week

Online Apps DBA - Thu, 2017-12-07 01:57

  In this Week, you will find:
  1. Oracle GoldenGate for DBAs, Apps DBAs and Cloud Migration
     1.1 [FREE Live Webinar] Learn Oracle GoldenGate What, Why & How
     1.2 Oracle GoldenGate 12c: Troubleshooting using LogDump Utility
  2. Concurrent Managers: Overview & Concepts Oracle EBS R12 for Apps DBAs
  3. For SOA & FMW Admins: Oracle SOA Suite Administration: […]

The post GoldenGate, SOA Admin, OAM & Apps DBA + FREE Training This Week appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Grouping Id

Tom Kyte - Thu, 2017-12-07 01:06
Hi Tom, I never really understood the usage of the GROUPING_ID function (as documented on OTN). I heard it avoids using multiple GROUPING functions. Can you please illustrate with a small example? Thanks
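
For illustration, a minimal sketch of the kind of query involved, using the classic SCOTT.EMP columns (deptno, job, sal) as an assumed example table; GROUPING_ID simply packs the individual GROUPING flags into one number:

-- Hypothetical example; deptno/job/sal are the classic SCOTT.EMP columns.
-- GROUPING_ID(deptno, job) returns one number whose bits are the GROUPING()
-- flags of its arguments, so a single column replaces several GROUPING calls.
select deptno,
       job,
       sum(sal)                 as total_sal,
       grouping(deptno)         as g_dept,   -- 1 when deptno is rolled up
       grouping(job)            as g_job,    -- 1 when job is rolled up
       grouping_id(deptno, job) as gid       -- equals g_dept * 2 + g_job
  from emp
 group by cube (deptno, job)
 order by gid, deptno, job;
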
Categories: DBA Blogs

How To Benefit From SEO Audit

Nilesh Jethwa - Wed, 2017-12-06 16:37

Businesses need to capitalize on the growing online market if they want to succeed in modern commerce. The thing about Internet marketing is that there are a number of things that have to be addressed to ensure that sites are performing well and actually exist as assets for companies that use them.

One of the things that online businesses need to ensure is that they run an SEO audit every now and then. What the audit does is give them insight into how their websites are performing, from their current search engine standing to their effectiveness as online marketing tools.

It’s important that business sites provide information and remain relevant. With the SEO audit, companies can determine which particular components need improvement and which ones are functioning correctly. Everything from content quality to backlinking to indexing is assessed through this process and this is why it’s something that can’t be discounted from the equation.

Unbeknownst to most people, an SEO audit doesn’t only look into the performance of on-page activities. It also assesses any off-page activities that a company might be engaged in now or might have engaged in previously. When it comes to the latter, a good example would be the assessment of the performance, reliability, and value of third-party inbound links.

Read more at https://www.infocaptor.com/dashboard/the-benefits-of-an-seo-audit

Announcing Open Source Jenkins Plugin for Oracle Cloud Infrastructure

OTN TechBlog - Wed, 2017-12-06 15:12

Jenkins is a continuous integration and continuous delivery application that you can use to build and test your software projects continuously. Jenkins OCI Plugin is now available on Github and it allows users to access and manage Oracle Cloud Infrastructure resources from Jenkins. A Jenkins master instance with Jenkins OCI Plugin can spin up slaves (Instances) on demand within the Oracle Cloud Infrastructure, and remove the slaves automatically once the Job completes.

After installing the Jenkins OCI Plugin, you can add an OCI Cloud option and a Template with the desired Shape, Image, Domain, etc. The Template will have a Label that you can use in your Jenkins Job. Multiple Templates are supported. The Template options include Labels, Domains, Credentials, Shapes, Images, Slave Limits, and Timeouts.

Below you will find instructions for building and installing the plugin, which is available on GitHub: github.com/oracle/jenkins-oci-plugin

Installing the Jenkins OCI Plugin

The following section covers compiling and installing the Jenkins OCI Plugin.

Plugins required:
  • credentials v2.1.14 or later
  • ssh-slaves v1.6 or later
  • ssh-credentials v1.13 or later
Compile and install OCI Java SDK:

Refer to OCI Java SDK issue 25. Tested with Maven versions 3.3.9 and 3.5.0.

Step 1 – Download and install the OCI Java SDK

$ git clone https://github.com/oracle/oci-java-sdk
$ cd oci-java-sdk
$ mvn compile install

Step 2 – Compile the Plugin hpi file

$ git clone https://github.com/oracle/jenkins-oci-plugin
$ cd jenkins-oci-plugin
$ mvn compile hpi:hpi

Step 3 – Install hpi

  • Option 1 – Manage Jenkins > Manage Plugins > Click the Advanced tab > Upload Plugin section, click Choose File > Click Upload
  • Option 2 – Copy the downloaded .hpi file into the JENKINS_HOME/plugins directory on the Jenkins master
Restart Jenkins and “OCI Plugin” will be visible in the Installed section of Manage Plugins.

For more information on configuring the Jenkins Plugin for OCI, please refer to the documentation on the GitHub project. And if you have any issues or questions, please feel free to contact the development team by submitting through the Issues tab.

Related content

Benefits Of Transportation and Logistic Dashboards

Nilesh Jethwa - Wed, 2017-12-06 14:45

Transportation and Logistics Dashboard and KPI benefits

Currently, the industry of transportation and logistics is gaining momentum regarding the use of dashboards. Increasingly, non-technical companies involved in this sector are seeing the advantages of implementing data-driven platforms in their marketing goals.

The most notable change is the substantial potential, as well as the interest, in moving from small cities to “smart” cities. For instance, startups are co-creating valuable products with transportation and logistics dashboard providers through modern solutions like TransportBuzz.

Dashboards for Transportation and Logistics

Dashboards or data-driven platforms are a modern way of simplifying how organizations control and manage their access to assets like services and data. Usually, that leads to the following advantages:

  • More revenue channels.
  • Increased brand awareness and wider reach.
  • External sources that facilitate open innovation.
  • Better operational efficiency.

Read more at http://www.infocaptor.com/dashboard/

How to reduce the size of an LVM partition formatted with xfs filesystem on CentOS7?

Yann Neuhaus - Wed, 2017-12-06 10:17

DISCLAIMER: I know there are other solutions to do this

Pre-requisites:
– a virtual machine (or not) with CentOS7 installed
– a free disk or partition

I use a VBox machine and I added a new hard disk (sdb below), on which I will create a 5 GiB partition

We list the disks and partitions to check that our new hard disk has been added.

[root@deploy ~]$ lsblk
NAME                       MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                          8:0    0   20G  0 disk
├─sda1                       8:1    0    1G  0 part /boot
└─sda2                       8:2    0   19G  0 part
  ├─cl-root                253:0    0   21G  0 lvm  /
  └─cl-swap                253:1    0    2G  0 lvm  [SWAP]
sdb                          8:16   0   10G  0 disk

Good, we can continue..

Let’s partition the disk using fdisk

[root@deploy ~]$ fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x76a98fa2.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-20971519, default 2048): 
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-20971519, default 20971519): +5G
Partition 1 of type Linux and of size 5 GiB is set

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Now, we need to inform the kernel that the partition table has changed. To do that, either we reboot the server or we run partprobe

[root@deploy ~]$ partprobe /dev/sdb1
[root@deploy ~]$

We create a physical volume

[root@deploy ~]$ pvcreate /dev/sdb1
Physical volume "/dev/sdb1" successfully created.
[root@deploy ~]$ pvs
  PV         VG Fmt  Attr PSize  PFree
  /dev/sda2  cl lvm2 a--  19.00g       0
  /dev/sdb1     lvm2 ---   5.00g    5.00g
  /dev/sdc2  cl lvm2 a--   5.00g 1020.00m

We create a volume group

[root@deploy ~]$ vgcreate  vg_deploy /dev/sdb1
  Volume group "vg_deploy" successfully created

We check that the volume group was created properly

[root@deploy ~]$ vgdisplay vg_deploy
  --- Volume group ---
  VG Name               vg_deploy
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               5.00 GiB
  PE Size               4.00 MiB
  Total PE              1279
  Alloc PE / Size       0 / 0
  Free  PE / Size       1279 / 5.00 GiB
  VG UUID               5ZhlvC-lpor-Ti8x-mS9P-bnxW-Gdtw-Gynocl

Here, I set the size of the logical volume in PE (Physical Extents). One PE represents 4.00 MiB, so 1000 extents give 1000 x 4 MiB = 4000 MiB, i.e. about 3.91 GiB, which matches the LV Size shown below.

We create a logical volume on our volume group

[root@deploy ~]$ lvcreate -l 1000 -n lv_deploy vg_deploy
  Logical volume "lv_deploy" created.

We have a look at how our logical volume “lv_deploy” looks

[root@deploy ~]$ lvdisplay /dev/vg_deploy/lv_deploy
  --- Logical volume ---
  LV Path                /dev/vg_deploy/lv_deploy
  LV Name                lv_deploy
  VG Name                vg_deploy
  LV UUID                2vxcDv-AHfB-7c2x-1PM8-nbn3-38M5-c1QoNS
  LV Write Access        read/write
  LV Creation host, time deploy.example.com, 2017-12-05 08:15:59 -0500
  LV Status              available
  # open                 0
  LV Size                3.91 GiB
  Current LE             1000
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:3

Let’s create our file system on the new logical volume

[root@deploy ~]$ mkfs.xfs  /dev/vg_deploy/lv_deploy
meta-data=/dev/vg_deploy/lv_deploy isize=512    agcount=4, agsize=256000 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=1024000, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

We now create a new directory “mysqldata” for example

[root@deploy ~]$ mkdir /mysqldata

We add the new entry for our new logical volume

[root@deploy ~]$ echo "/dev/mapper/vg_deploy-lv_deploy     /mysqldata      xfs     defaults      0 0" >> /etc/fstab

We mount it

[root@deploy ~]$ mount -a

We check the filesystem is mounted properly

[root@deploy ~]$ df -hT
Filesystem                      Type      Size  Used Avail Use% Mounted on
/dev/mapper/cl-root             xfs        21G  8.7G   13G  42% /
devtmpfs                        devtmpfs  910M     0  910M   0% /dev
tmpfs                           tmpfs     920M     0  920M   0% /dev/shm
tmpfs                           tmpfs     920M  8.4M  912M   1% /run
tmpfs                           tmpfs     920M     0  920M   0% /sys/fs/cgroup
/dev/sda1                       xfs      1014M  227M  788M  23% /boot
tmpfs                           tmpfs     184M     0  184M   0% /run/user/0
/dev/loop2                      iso9660   4.3G  4.3G     0 100% /media/iso
/dev/mapper/vg_deploy-lv_deploy xfs       3.9G   33M  3.9G   1% /mysqldata

We add some files to the /mysqldata directory (a for loop will help us)

[root@deploy mysqldata]$ for i in 1 2 3 4 5; do dd if=/dev/zero  of=/mysqldata/file0$i bs=1024 count=10; done
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.000282978 s, 36.2 MB/s
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.000202232 s, 50.6 MB/s
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.000255617 s, 40.1 MB/s
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.000195752 s, 52.3 MB/s
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.000183672 s, 55.8 MB/s
[root@deploy mysqldata]$ ls -l
total 60
-rw-r--r--. 1 root root 10240 Dec  5 08:28 file01
-rw-r--r--. 1 root root 10240 Dec  5 08:28 file02
-rw-r--r--. 1 root root 10240 Dec  5 08:28 file03
-rw-r--r--. 1 root root 10240 Dec  5 08:28 file04
-rw-r--r--. 1 root root 10240 Dec  5 08:28 file05

NOW the interesting part is coming because we are going to reduce our /mysqldata filesystem
But first let’s make a backup of our current /mysqldata FS

[root@deploy mysqldata]$ yum -y install xfsdump
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile

Bad news! We cannot reduce an xfs partition directly, so we need:
– to back up our filesystem
– to umount the filesystem && delete the logical volume
– to re-partition the logical volume with an xfs FS
– to restore our data

Backup the file system

[root@deploy mysqldata]$ xfsdump -f /tmp/mysqldata.dump /mysqldata
xfsdump: using file dump (drive_simple) strategy
xfsdump: version 3.1.4 (dump format 3.0) - type ^C for status and control

 ============================= dump label dialog ==============================

please enter label for this dump session (timeout in 300 sec)
 -> test
session label entered: "test"

 --------------------------------- end dialog ---------------------------------

xfsdump: level 0 dump of deploy.example.com:/mysqldata
xfsdump: dump date: Tue Dec  5 08:36:20 2017
xfsdump: session id: f010d421-1a34-4c70-871f-48ffc48c29f2
xfsdump: session label: "test"
xfsdump: ino map phase 1: constructing initial dump list
xfsdump: ino map phase 2: skipping (no pruning necessary)
xfsdump: ino map phase 3: skipping (only one dump stream)
xfsdump: ino map construction complete
xfsdump: estimated dump size: 83840 bytes

 ============================= media label dialog =============================

please enter label for media in drive 0 (timeout in 300 sec)
 -> test
media label entered: "test"

 --------------------------------- end dialog ---------------------------------

xfsdump: creating dump session media file 0 (media 0, file 0)
xfsdump: dumping ino map
xfsdump: dumping directories
xfsdump: dumping non-directory files
xfsdump: ending media file
xfsdump: media file size 75656 bytes
xfsdump: dump size (non-dir files) : 51360 bytes
xfsdump: dump complete: 5 seconds elapsed
xfsdump: Dump Summary:
xfsdump:   stream 0 /tmp/mysqldata.dump OK (success)
xfsdump: Dump Status: SUCCESS

Then, we unmount the filesystem and delete the logical volume

[root@deploy ~]$ umount /mysqldata/

[root@deploy ~]$ df -hT
Filesystem          Type      Size  Used Avail Use% Mounted on
/dev/mapper/cl-root xfs        21G  8.7G   13G  42% /
devtmpfs            devtmpfs  910M     0  910M   0% /dev
tmpfs               tmpfs     920M     0  920M   0% /dev/shm
tmpfs               tmpfs     920M  8.4M  912M   1% /run
tmpfs               tmpfs     920M     0  920M   0% /sys/fs/cgroup
/dev/sda1           xfs      1014M  227M  788M  23% /boot
tmpfs               tmpfs     184M     0  184M   0% /run/user/0
/dev/loop2          iso9660   4.3G  4.3G     0 100% /media/iso

[root@deploy ~]$ lvremove /dev/vg_deploy/lv_deploy
Do you really want to remove active logical volume vg_deploy/lv_deploy? [y/n]: y
  Logical volume "lv_deploy" successfully removed

We recreate the logical volume with a lower size (from 1000 PE to 800 PE)

[root@deploy ~]$ lvcreate -l 800 -n lv_deploy vg_deploy
WARNING: xfs signature detected on /dev/vg_deploy/lv_deploy at offset 0. Wipe it? [y/n]: y
  Wiping xfs signature on /dev/vg_deploy/lv_deploy.
  Logical volume "lv_deploy" created.

We build the XFS filesystem

[root@deploy ~]$ mkfs.xfs /dev/mapper/vg_deploy-lv_deploy
meta-data=/dev/mapper/vg_deploy-lv_deploy isize=512    agcount=4, agsize=204800 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=819200, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

We remount the filesystem

[root@deploy ~]$ mount -a
[root@deploy ~]$
[root@deploy ~]$
[root@deploy ~]$ df -hT
Filesystem                      Type      Size  Used Avail Use% Mounted on
/dev/mapper/cl-root             xfs        21G  8.7G   13G  42% /
devtmpfs                        devtmpfs  910M     0  910M   0% /dev
tmpfs                           tmpfs     920M     0  920M   0% /dev/shm
tmpfs                           tmpfs     920M  8.4M  912M   1% /run
tmpfs                           tmpfs     920M     0  920M   0% /sys/fs/cgroup
/dev/sda1                       xfs      1014M  227M  788M  23% /boot
tmpfs                           tmpfs     184M     0  184M   0% /run/user/0
/dev/loop2                      iso9660   4.3G  4.3G     0 100% /media/iso
/dev/mapper/vg_deploy-lv_deploy xfs       3.2G   33M  3.1G   2% /mysqldata

We list the content of /mysqldata directory

[root@deploy ~]$ ls -l /mysqldata
total 0

Let’s restore our data

[root@deploy ~]$ xfsrestore -f /tmp/mysqldata.dump /mysqldata
xfsrestore: using file dump (drive_simple) strategy
xfsrestore: version 3.1.4 (dump format 3.0) - type ^C for status and control
xfsrestore: searching media for dump
xfsrestore: examining media file 0
xfsrestore: dump description:
xfsrestore: hostname: deploy.example.com
xfsrestore: mount point: /mysqldata
xfsrestore: volume: /dev/mapper/vg_deploy-lv_deploy
xfsrestore: session time: Tue Dec  5 08:36:20 2017
xfsrestore: level: 0
xfsrestore: session label: "test"
xfsrestore: media label: "test"
xfsrestore: file system id: 84832e04-e6b8-473a-beb4-f4d59ab9e73c
xfsrestore: session id: f010d421-1a34-4c70-871f-48ffc48c29f2
xfsrestore: media id: 8fda43c1-c7de-4331-b930-ebd88199d0e7
xfsrestore: using online session inventory
xfsrestore: searching media for directory dump
xfsrestore: reading directories
xfsrestore: 1 directories and 5 entries processed
xfsrestore: directory post-processing
xfsrestore: restoring non-directory files
xfsrestore: restore complete: 0 seconds elapsed
xfsrestore: Restore Summary:
xfsrestore:   stream 0 /tmp/mysqldata.dump OK (success)
xfsrestore: Restore Status: SUCCESS

Our data are back

[root@deploy ~]$ ls -l /mysqldata/
total 60
-rw-r--r--. 1 root root 10240 Dec  5 08:28 file01
-rw-r--r--. 1 root root 10240 Dec  5 08:28 file02
-rw-r--r--. 1 root root 10240 Dec  5 08:28 file03
-rw-r--r--. 1 root root 10240 Dec  5 08:28 file04
-rw-r--r--. 1 root root 10240 Dec  5 08:28 file05

Hope this helps :-)

 

The article How to reduce the size of an LVM partition formatted with xfs filesystem on CentOS7? first appeared on Blog dbi services.

Oracle Open Sources Kubernetes Tools for Serverless Deployment and Intelligent Multi-Cloud Management

Oracle Press Releases - Wed, 2017-12-06 09:00
Press Release
Oracle Open Sources Kubernetes Tools for Serverless Deployment and Intelligent Multi-Cloud Management New tools expand open, integrated and enterprise-grade container native application development platform with no cloud lock-in

KubeCon, Austin, Texas—Dec 6, 2017

Oracle today announced that it is open sourcing the Fn project Kubernetes Installer and Global Multi-Cluster Management, two projects designed to help developers build the next generation of container native applications leveraging Kubernetes. Both projects are integrated with the Oracle Container Native Application Development Platform, providing developers with an integrated and enterprise-grade platform to build, deploy and operate applications.

The Fn project Installer follows the recent open-sourcing of the Fn project and allows developers to leverage serverless capabilities on any Kubernetes environment, including within Oracle’s new managed Kubernetes service, Oracle Container Engine Cloud Service. Oracle also open sourced a technical preview of Global Multi-Cluster Management, a new set of distributed cluster management features for Kubernetes federation that intelligently manages planet scale applications that are hybrid, multi-region and multi-cloud. With this set of capabilities, customers can quickly build and auto-scale global applications or spot clusters on-demand and enable cloud migrations and hybrid scenarios.

“There continue to be significant concerns by developers looking into serverless development that cloud providers are leading them into a lock-in situation and away from industry standards,” said Mark Cavage, vice president of software development at Oracle. “The Oracle Container Native Application Development Platform, along with the new tools introduced today, are built on top of Kubernetes and provide an open source based, community driven, and thus, cloud-neutral, integrated container native technology stack that prevents cloud lock-in while enabling the flexibility of true hybrid and multi-cloud deployments.”

Kubernetes, led by the Cloud Native Computing Foundation, is the container management and orchestration platform for modern application workloads on which the development community has standardized, as it provides the means to manage clusters that span multiple clouds. By working on top of Kubernetes, the Oracle Container Native Application Development Platform and its serverless tools address the next cloud challenge of applying application-aware decision logic to container native deployments based on rules such as cost, regional affinity, performance, quality of service and compliance.

“As Veritone continues to enhance its next-generation Veritone Developer application for AI data product, application and engine providers, it was essential for us to avoid cloud lock-in and to work with standards-based tools, such as Oracle’s, that enable true hybrid cloud management,” said Al Brown, senior vice president of engineering at Veritone. “An open standards-based approach is an important guarantee that our developer community can continue to build the AI applications of tomorrow without worrying about infrastructure or vendor lock-in.”

Both the Fn project and the Oracle Container Native Application Development Platform were announced shortly after Oracle joined the Cloud Native Computing Foundation in September.

Contact Info
Alex Shapiro
Oracle PR
+1.415.608.5044
alex.shapiro@oracle.com
Kristin Reeves
Blanc & Otus
+1.925.787.6744
kristin.reeves@blancandotus.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle. 

Talk to a Press Contact

Alex Shapiro

  • +1.415.608.5044

Kristin Reeves

  • +1.925.787.6744

Kubernetes, Serverless, and Federation – Oracle at KubeCon 2017

OTN TechBlog - Wed, 2017-12-06 09:00

Today at the KubeCon + CloudNativeCon 2017 conference in Austin, TX, the Oracle Container Native Application Development team open sourced two new Kubernetes related projects which we are also demoing here at the show.  First, we have open sourced an Fn Installer for Kubernetes. Fn is an open source serverless project announced this October at Oracle OpenWorld.  This Helm Chart for Fn enables organizations to easily install and run Fn on any Kubernetes deployment including on top of the new Oracle managed Kubernetes service Oracle Container Engine (OCE). 
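
As a rough sketch of what installing that chart looks like (the repository location and release name below are assumptions, not taken from this post - check the Fn project documentation for the exact steps), using the Helm 2 CLI current at the time:

$ git clone https://github.com/fnproject/fn-helm.git && cd fn-helm   # assumed location of the Fn Helm chart
$ helm dep build fn                # pull the chart's dependencies
$ helm install --name my-fn fn     # deploy Fn onto the currently configured Kubernetes cluster
$ kubectl get pods                 # verify that the Fn pods come up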

Second, we have open sourced Global Multi-Cluster Management, a new set of distributed cluster management features for Kubernetes federation that intelligently manages highly distributed applications – “planet-scale” if you will - that are multi-region, hybrid, or even multi-cloud.  In a federated world, many operational challenges emerge - imagine how you would manage and auto-scale global applications or deploy spot clusters on-demand.  For more info, make sure to check out the Multi-Cluster Ops in a Hybrid World session by Kire Filipovski and Vitaliy Zinchenko on Thursday December 7 at 3:50pm!

Pushing Ahead: Keep it Open, Integrated and Enterprise-Grade

Customers are seeking an open, cloud-neutral, and community-driven container-native technology stack that avoids cloud lock-in and allows them to run the same stack in the public cloud as they run locally.  This was our vision when we launched the Container Native Application Development Platform at Oracle OpenWorld 2017 in October.

 

Since then Oracle Container Engine was in the first wave of Certified Kubernetes platforms announced in November 2017, helping developers and dev teams be confident that there is consistency and portability amongst products and implementations.  

So, the community is now looking for the same assurances from their serverless technology choice: make it open and built in a consistent way to match the rest of their cloud native stack.  In other words, make it open and on top of Kubernetes.  And if the promise of an open-source based solution is to avoid cloud lock-in, the next logical request is to make it easy for DevOps teams to operate across clouds or in a hybrid mode.  This lines up with the three major “asks” we hear from customers, development teams and enterprises: their container native platform must be open, integrated, and enterprise-grade:

  • Open: Open on Open

Both the Fn project and Global Multi-Cluster Management are cloud neutral and open source. Doubling down on open, the Fn Helm Chart enables the open serverless project (Fn) to run on the leading open container orchestration platform (Kubernetes).  (Sure beats closed on closed!)  The Helm Chart deploys a fully functioning cluster of Fn (github.com/fnproject/fn) on a Kubernetes cluster using the Helm (helm.sh) package manager.

  • Integrated: Coherent and Connected

Delivering on the promise of an integrated platform, both the Fn Installer Helm Charts and Global Multi-Cluster Management are built to run on top of Kubernetes and thus integrate natively into Oracle’s Container Native Platform.  While having one of everything works in a Home Depot or Costco, it’s no way to create an integrated, effortless application developer experience – especially at scale across hundreds if not thousands of developers across an organization.  Both the Fn installer and Global Multi-Cluster Management will be available on top of OCE, our managed Kubernetes service.

  • Enterprise-Grade: HA, Secure, and Operationally Aware

With the ability to deploy Fn to an enterprise-grade Kubernetes service such as Oracle Container Engine you can run serverless on a highly-available and secure backend platform.  Furthermore, Global Multi-Cluster Management extends the enterprise platform to multiple clusters and clouds and delivers on the enterprise desire for better utilization and capacity management. 

Production operations for large distributed systems is hard enough in a single cloud or on-prem, but becomes even more complex with federated deployments – such as multiple clusters applied across multi-regions, hybrid (cloud/on-prem), and multi-cloud scenarios.  So, in these situations, DevOps teams need to deploy and auto-scale global applications or spot clusters on-demand and enable cloud migrations and hybrid scenarios.

With Great Power Comes Great Responsibility (and Complexity)

So, with the power of Kubernetes federation comes great responsibility and new complexities: how to deal with the challenge of applying application-aware decision logic to container native deployments.  Thorny business and operational issues could include cost, regional affinity, performance, quality of service, and compliance.  When DevOps teams are faced with managing multiple Kubernetes deployments they can also struggle with multiple cluster profiles, deployed on a mix of on-prem and public cloud environments.  These are basic DevOps questions that are hard to answer:

  • How many clusters should we operate?
    • Do we need separate clusters for each environment?
    • How much capacity do we allocate for each cluster?
  • Who will manage the lifecycle of the clusters?
  • Which cloud is best suited for my application?
  • How do we avoid cloud lock-in?
  • How do we deploy applications to multiple clusters?

The three open source components that make up Global Multi-Cluster Management are: (1) Navarkos (which means Admiral in Greek) enables a Kubernetes federated deployment to automatically manage multi-cluster infrastructure and manage clusters in response to federated Kubernetes application deployments; (2) Cluster Manager provides lifecycle management for Kubernetes clusters using a Kubernetes federation backend; and (3) the Federated Ingress Controller is an alternative implementation of federated ingress using external DNS.

Global Multi-Cluster Management works with Kubernetes federation to solve these problems in several ways:

  • Creates Kubernetes clusters on demand and deploys apps to them (only when there is a need)
    • Clusters can be run on any public or private cloud platform
    • Runs the application matching supply and demand
  • Manages cluster consistency and cluster life-cycle
    • Ingress, nodes, network
  • Control multi-cloud application deployments
    • Control applications independently of cloud provider
  • Application-aware clusters
    • Clusters are offline when idle
    • Workloads can be auto-scaled
    • Provides the basis to help decide where apps run based on factors that could include cost, regional affinity, performance, quality of service and compliance

Global Multi-Cluster Management ensures that all of the Kubernetes clusters are created, sized and destroyed only when there is a need for them based on the requested application deployments.  If there are no application deployments, then there are no clusters. As DevOps teams deploy various applications to a federated environment, then Global Multi-Cluster Management makes intelligent decisions if any clusters should be created, how many of them, and where.  At any point in time the live clusters are in tune with the current demand for applications, and the Kubernetes infrastructure becomes more application and operationally aware.

See Us at Booth G8, Join our Sessions, & Learn More at KubeCon + CloudNativeCon 2017

Come see us at Booth G8 and meet our engineers and contributors!  As a local Austin native (and for the rest of the old StackEngine team) we’re excited to welcome you all (y’all) to Austin.  Make sure to join in to “Keep Cloud Native Weird.”    And be fixin’ to check out these sessions:

 

Naming of archivelog files with non existing top level archivelog directory

Yann Neuhaus - Wed, 2017-12-06 06:33

In Oracle 12.2, an archive log directory is accepted even if the top-level directory does not exist:

oracle@localhost:/u01/app/oracle/product/12.2.0/dbhome_1/dbs/ [DMK] ls -l /u02/oradata/DMK/
 total 2267920
 drwxr-xr-x. 2 oracle dba        96 Dec  6 05:36 arch ...

Now the database accepts this non-existing archivelog destination:

SQL> alter system set log_archive_dest_3='LOCATION=/u02/oradata/DMK/arch/arch2';
System altered.

But not this:

SQL> alter system set log_archive_dest_4='LOCATION=/u02/oradata/DMK/arch/arch2/arch4';
 alter system set log_archive_dest_4='LOCATION=/u02/oradata/DMK/arch/arch2/arch4'
 *
 ERROR at line 1:
 ORA-02097: parameter cannot be modified because specified value is invalid
 ORA-16032: parameter LOG_ARCHIVE_DEST_4 destination string cannot be translated
 ORA-07286: sksagdi: cannot obtain device information.
 Linux-x86_64 Error: 2: No such file or directory

The log file format is set as follows:

SQL> show parameter log_archive_format;
NAME                                 TYPE        VALUE
 ------------------------------------ ----------- ------------------------------
 log_archive_format                   string      %t_%s_%r.dbf
 SQL>

 

Now let’s see what the archive log files look like in log_archive_dest_3:

oracle@localhost:/u01/app/oracle/product/12.2.0/dbhome_1/dbs/ [DMK] ls -l /u02/oradata/DMK/arch/arch2*
 -rw-r-----. 1 oracle dba 3845120 Dec  6 05:36 /u02/oradata/DMK/arch/arch21_5_960106002.dbf

So Oracle just prepends the name of the non-existing top-level directory to the archive log filename.

 

The article Naming of archivelog files with non existing top level archivelog directory first appeared on Blog dbi services.

Docker-Swarm: One manager, two nodes with Alpine Linux

Dietrich Schroff - Tue, 2017-12-05 16:32
After creating an Alpine Linux VM inside VirtualBox and adding Docker (chosen because of the small disk footprint: Alpine Linux 170MB | with Docker 280MB), I performed the following steps to create a Docker swarm:
  • cloning the VM twice
  • assigning a static IP to the manager node
  • creating new MACs for the network interface cards on the nodes


Then I followed the tutorial https://docs.docker.com/engine/swarm/swarm-tutorial/create-swarm/ but without running the docker-machine commands, because I have 3 VMs and do not want to run the nodes on top of Docker.

manager:
alpine:~# docker swarm init --advertise-addr 192.168.178.46
Swarm initialized: current node (wy1z8jxmr1cyupdqgkm6lxhe2) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-3b7f69d3wgty0u68oab8724z07fkyvgc0w8j37ng1l7jsmbghl-0yfr1eu5u66z8pinweisltmci 192.168.178.46:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

nodes:
#     docker swarm join --token SWMTKN-1-3b7f69d3wgty0u68oab8724z07fkyvgc0w8j37ng1l7jsmbghl-0yfr1eu5u66z8pinweisltmci 192.168.178.46:2377
This node joined a swarm as a worker.
And then a check on the master:
alpine:~# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS
wy1z8jxmr1cyupdqgkm6lxhe2 *   alpine              Ready               Active              Leader
pusf5o5buetjqrsmx3kzusbyt     node01              Ready               Active             
io3z3b6nf8xbzkyzjq6sa7cuc     node02              Ready               Active             
Run a first job:
alpine:~# docker service create --replicas 1 --name helloworld alpine ping 192.168.178.1
rsn6igby4f6d7uuy8eny7sbfb
overall progress: 1 out of 1 tasks
1/1: running  
verify: Service converged
But on my manager I get no output for "docker ps". This is because the service is not running here:
alpine:~# docker service ps helloworld
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE           ERROR               PORTS
wrrobalt4oe7        helloworld.1        alpine:latest       node01              Running             Running 2 minutes ago                      
Node 1 shows:
node01:~# docker ps
CONTAINER ID        IMAGE               COMMAND                CREATED             STATUS              PORTS               NAMES
40c5e9b2ffbc        alpine:latest       "ping 192.168.178.1"   3 minutes ago       Up 3 minutes                            helloworld.1.wrrobalt4oe7mrbhxjlweuxgk
If I do a kill on the ping process, it is immediately restarted:
node01:~# ps aux|grep ping
 2457 root       0:00 ping 192.168.178.1
 2597 root       0:00 grep ping
node01:~# kill 2597
node01:~# ps aux|grep ping
 2457 root       0:00 ping 192.168.178.1
 2600 root       0:00 grep ping
A scale up is no problem:
alpine:~# docker service create --replicas 2 --name helloworld alpine ping 192.168.178.1
3lrdqdpjuqml6creswdcqpn2p
overall progress: 2 out of 2 tasks
1/2: running   [==================================================>]
2/2: running   [==================================================>]
verify: Service converged
alpine:~# docker service ps helloworld
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE           ERROR               PORTS
616scw68s8bv        helloworld.1        alpine:latest       node02              Running             Running 8 seconds ago                      
n8ovvsw0m4id        helloworld.2        alpine:latest       node01              Running             Running 8 seconds ago                      
And a shutdown of node02 is no problem:
alpine:~# docker service ps helloworld
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE                ERROR               PORTS
bne2enbkabfo        helloworld.1        alpine:latest       alpine              Ready               Ready 2 seconds ago                             
616scw68s8bv         \_ helloworld.1    alpine:latest       node02              Shutdown            Running 17 seconds ago                          
n8ovvsw0m4id        helloworld.2        alpine:latest       node01              Running             Running about a minute ago          


After a switch-off of node01, both services are running on the remaining master:
alpine:~# docker service ps helloworld
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE                ERROR               PORTS
bne2enbkabfo        helloworld.1        alpine:latest       alpine              Running             Running about a minute ago                      
616scw68s8bv         \_ helloworld.1    alpine:latest       node02              Shutdown            Running about a minute ago                      
pd8dfp4133yw        helloworld.2        alpine:latest       alpine              Running             Running 2 seconds ago                           
n8ovvsw0m4id         \_ helloworld.2    alpine:latest       node01              Shutdown            Running 2 minutes ago              
So failover is working.
But failback does not occur. After switching node01 on again, the service remains on the manager:
alpine:~# docker service ps helloworld
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE               ERROR                         PORTS
bne2enbkabfo        helloworld.1        alpine:latest       alpine              Running             Running 4 minutes ago                                    
616scw68s8bv         \_ helloworld.1    alpine:latest       node02              Shutdown            Running 4 minutes ago                                    
pd8dfp4133yw        helloworld.2        alpine:latest       alpine              Running             Running 2 minutes ago                                    
n8ovvsw0m4id         \_ helloworld.2    alpine:latest       node01              Shutdown            Failed about a minute ago   "task: non-zero exit (255)"  
alpine:~# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS
wy1z8jxmr1cyupdqgkm6lxhe2 *   alpine              Ready               Active              Leader
pusf5o5buetjqrsmx3kzusbyt     node01              Ready               Active             
io3z3b6nf8xbzkyzjq6sa7cuc     node02              Down                Active             
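
Side note (not part of the original test, and the exact behavior depends on the Docker version in use): a common way to make the scheduler re-spread the tasks after node01 rejoins is to force a service update, e.g.:

alpine:~# docker service update --force helloworld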


Last thing: How to stop the service?
alpine:~# docker service rm  helloworld
helloworld
alpine:~# docker service ps helloworld
no such service: helloworld
Remaining open points:
  • Is it possible to do a failback or to limit the number of instances of a service per node?
  • How to do this with a server application?
    (load balancer needed?)
  • What happens if the manager fails / is shut down?

What are the benefits of Manufacturing Dashboards?

Nilesh Jethwa - Tue, 2017-12-05 16:15

Today in the US economy, the major players in the manufacturing industry are electronics, automobiles, steel, consumer goods, and telecommunications. As they offer more and more advanced products, including tablets and smartphones, these technological advancements significantly influence consumer lifestyles.

Along with these changes, the global manufacturing industry is currently embracing a new key player called metrics-based manufacturing. This is the latest trend that industries need to consider in their sales funnel. So, what does this mean?

Read more at http://www.infocaptor.com/dashboard/manufacturing-dashboards-what-are-their-benefits

JET Composite Component in ADF Faces UI - Deep Integration

Andrejus Baranovski - Tue, 2017-12-05 12:21
Oracle JET team doesn't recommend or support integrating JET into ADF Faces. This post is based on my own research and doesn't reflect best practices recommended by Oracle. If you want to try the same - do it at your own risk.

All this said, I still think finding ways of further JET integration into ADF Faces is important. The next step would be to implement an editable JET-based grid component and integrate it into ADF to improve the fast data entry experience.

Today's post focuses on read-only JET Composite Component integration into ADF Faces. I would recommend reading my previous posts on a similar topic; today I'm using methods described in these posts:

1. JET Composite Component - JET 4.1.0 Composite - List Item Action and Deferred Loading

2. JET and ADF integration - Improved JET Rendering in ADF

You can access source code for ADF and JET Composite Component application in my GitHub repository - jetadfcomposite.

Let's start with the UI. I have implemented an ADF application with regions. One of the regions contains the JET Composite. There is an ADF Query which sends its result into the JET Composite. There is integration between the JET Composite and ADF - when a link is clicked in the JET Composite, the ADF form is refreshed and displays the current row corresponding to the selected item. The list on the left is rendered from a series of JET components; each component implements one list item:


As you can see, there are two types of calls:

1. The ADF Query sends its result to the JET Composite. ADF -> JET call
2. The JET Composite forces the ADF form to display row data for the selected item. JET -> ADF call

Very important to mention: the JET Composite gets data directly from ADF Bindings; there is no REST layer here. This simplifies the JET implementation in ADF Faces significantly.

What is the advantage of using a JET Composite in ADF Faces? Answer: improved client-side performance. For example, this component allows expanding an item. Such an action in a pure ADF Faces component would produce a request to the server, while in JET it happens on the client, since the processing is done in JS:
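
A minimal sketch of how such a client-side expand could look inside the composite's viewModel (names are made up for illustration; this is not the actual component code):

// Purely client-side expand/collapse - no request is sent to the server.
define(['knockout'], function (ko) {
  function ListItemModel(context) {
    var self = this;
    self.expanded = ko.observable(false);   // expand state lives on the client
    self.toggleExpand = function () {
      self.expanded(!self.expanded());      // flip the flag; Knockout re-renders the item
    };
  }
  return ListItemModel;
});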


There is no call to the server made when item is expanded:


Out of the box, the JET Composite works well with ADF Faces geometry management. In this example, the JET Composite is located inside an ADF Panel Splitter. When the Panel Splitter is resized, the JET Composite UI is nicely resized too, since it is responsive out of the box. This is another advantage of using a JET Composite in an ADF Faces UI:


When the "Open" link is clicked in the JET Composite, a JS call is made and, through an ADF server listener, we update the current row in ADF to the corresponding data. This shows how we can send events from the JET Composite to ADF Faces:


It works to navigate to another region:


And come back - the JET content is displayed fine even after ADF Faces PPR was executed (a simple trick is required for this to work, see below). If we explore the page source, we will see that each JET Composite element is stamped in HTML within the ADF Faces HTML structure:


The great thing is that a JET Composite which runs in JET doesn't require any changes to run in ADF Faces. In my example, I only added a hidden ID field value to the JET Composite, to be able to pass it to ADF to set the current row later:


I should give a couple of hints regarding infrastructure. It is not convenient to copy the JET Composite code directly into the ADF application. It is more convenient to wrap the JET code into a JAR and attach it to ADF that way. To achieve that, I would recommend creating an empty Web project in JDEV, copying the JET Composite code there (into the public_html folder) and building a JAR out of it:


Put all JET content into JAR:


If the Web project is located within the main ADF app, make sure to use Working Sets and filter it out, to avoid it being included into the EAR during the build process:


Now you can add JAR with JET into ADF app:


In order for the JET HTML/JS resources to be accessible from the JAR file, make sure to add the required config into the main ADF application's web.xml file. Add the ADF resources servlet, if it is not added already:


Add a servlet mapping; this will allow loading content from the ADF JAR library:


To load resources such as JSON, CSS, etc. from the ADF JAR, add the ADF library filter and list all extensions to be loaded from the JAR:


Add FORWARD and REQUEST dispatcher filter mapping for ADF library filter from above:


As I mentioned above, the JET Composite is rendered directly with ADF Bindings data, without calling any REST service. This simplifies the JET Composite implementation in ADF Faces. It is simply rendered through an ADF Faces iterator. The JET Composite properties are assigned ADF Faces EL expressions to get data from ADF Bindings:


JET is not compatible with the ADF PPR request/response cycle. If JET content is included in an ADF PPR response, the context gets corrupted and is no longer displayed. To overcome this, we re-draw the JET content if it was included in a PPR response. This doesn't reload the JET modules, but simply re-draws the UI. In my example, the ADF Query sends a PPR request to the area where the JET Composite renders its result. I have overridden the query listener:


Other places where PPR is generated for the JET Composite are the tab switch and the More link, which loads more results. All these actions are overridden to call methods in the bean:


The reDrawJet method is invoked, which calls a simple utility method to invoke the JS function that actually re-draws the JET UI:


The JET UI re-draw happens in a JS function, which cleans the Knockout.js nodes and reapplies the current JET model bindings:
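
For reference, such a re-draw function is conceptually along these lines (a sketch with assumed element and model names, not the exact code from the sample application):

// Re-draw the JET composite area after it was touched by an ADF PPR response.
function reDrawJetUI() {
  require(['knockout'], function (ko) {
    var jetNode = document.getElementById('jetCompositeContainer'); // assumed container id
    ko.cleanNode(jetNode);                           // drop the Knockout bindings applied to this subtree
    ko.applyBindings(window.jetViewModel, jetNode);  // reapply the current JET model bindings
  });
}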


The JET -> ADF call is made through a JET Composite event. This event is assigned a JS function implemented in the ADF Faces context. This allows calling JS located in ADF Faces without changing the JET Composite code. I'm using a regular ADF server listener to initiate the JS -> server-side call:
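
Conceptually, the JS function assigned to the composite event queues a custom event on a generic ADF button (mentioned below); a sketch with assumed component ids and event names, the real values come from the sample application:

// Called from the JET Composite event; forwards the selected key to the ADF server listener.
function handleJetSelection(event, ui) {
  var button = AdfPage.PAGE.findComponentByAbsoluteId('r1:0:hiddenSelectionButton'); // assumed id
  AdfCustomEvent.queue(button, 'jetSelectionEvent',   // must match the serverListener type on the button
                       { rowKey: ui.rowKey },         // payload read by the server listener
                       true);                         // partial submit
}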


ADF server listener is attached to generic button in ADF Faces:


The ADF server listener does its job and applies the received key to set the current row in ADF, which automatically triggers the ADF form to display the correct data:

NetSuite Adds Three New Partners Seeking to Drive Growth with Cloud ERP

Oracle Press Releases - Tue, 2017-12-05 08:00
Press Release
NetSuite Adds Three New Partners Seeking to Drive Growth with Cloud ERP Apps Associates, BTM Global and iSP3 Expand Existing Oracle Relationships with NetSuite Practices

SAN MATEO, Calif.—Dec 5, 2017

Oracle NetSuite, one of the world’s leading providers of cloud-based financials / ERP, HR, Professional Services Automation (PSA) and omnichannel commerce software suites, today announced the addition of new partners to NetSuite’s Solution Provider Program and Alliance Partner Program. Apps Associates, BTM Global and iSP3 are expanding their longtime Oracle partner relationships to add NetSuite practices to their portfolio. Partnering with NetSuite allows them to help organizations in a range of industries improve operational efficiency, business agility, customer focus and data-driven decision-making. As NetSuite partners, the three technology consulting and implementation firms are equipped to meet growing demand for cloud ERP as the pace of business accelerates and organizations look to graduate from standalone applications that can limit the ability to innovate and scale. The new partners benefit from added capacity to grow their areas of expertise and client base, as well as flexibility to offer proprietary add-on solutions and increase both margin and recurring revenue.

“Our three new partners add a diversity of industry focus and expertise with a common commitment to helping clients thrive in the cloud,” said Craig West, Oracle NetSuite Vice President of Alliances and Channels. “We’re delighted to collaborate with them in delivering agile cloud solutions that help our mutual customers scale, innovate and grow.”

Apps Associates, a Platinum Level Member of Oracle PartnerNetwork, Extends Offerings as NetSuite Solution Provider

Apps Associates (www.appsassociates.com), an 800-person global technology and business services firm based in Acton, Mass., is a Platinum level member of the Oracle PartnerNetwork. Apps Associates provides services across the Oracle product line for ERP, HCM, Analytics, Cloud, Integration, Database and other technologies. Founded in 2002, Apps Associates has grown its business across the US, Europe and India with a client base in life sciences, medical devices, manufacturing, distribution, financial services, retail and healthcare. Apps Associates also partners with AWS, Salesforce, Dell Boomi, TIBCO and MuleSoft. In teaming up with NetSuite, Apps Associates furthers its mission of working with customers to enable business improvement by streamlining business processes using advanced technology. The firm will handle NetSuite financials / ERP, HR, PSA, CRM and ecommerce in response to growing customer demand for flexible cloud business management software.

“When NetSuite became part of the Oracle family of products, it was a natural opportunity to extend our offerings and bring a holistic solution to our customers,” said Scott Shelko, NetSuite Practice Manager at Apps Associates. “Apps Associates brings scale and discipline with our NetSuite practice, so our customers have the value of a full lifecycle partner that really understands the platform.”

BTM Global Expands its Oracle Retail Solutions Portfolio as NetSuite Alliance Partner

BTM Global (www.btmglobal.com), headquartered in Minneapolis and Ho Chi Minh City, Vietnam, helps retailers compete in a fast-changing world with systems integration and development services. Its clients range from regional chains to the world’s most recognized brands, including Red Wing Shoes, True Value, World Kitchen and Perry Ellis International. Founded by veterans of RETEK, a software company acquired by Oracle, BTM Global has deep expertise in Oracle Retail solutions that address retailers’ needs around store-based solutions, merchandising, business intelligence and point-of-service systems, notably EMV. 

BTM Global notes that more retailers, especially in the small and midmarket space, are interested in agile, cloud-based solutions that can be rapidly deployed and extended into new markets. BTM Global’s new partnership with NetSuite expands its commitment to providing its clients with a diversity of expertise, resulting in more creative and bolder solutions for retailers’ technology challenges. BTM Global will provide its core services – development, implementation, support and strategic technology planning – to retailers leveraging SuiteCommerce, NetSuite’s ecommerce platform, along with CRM and back-office functionality. BTM Global is also a Gold level member of Oracle PartnerNetwork.

"We’re very happy to be able to offer NetSuite customers our unique, well-rounded expertise for their integration and development needs," said Tom Schoen, President at BTM Global. "NetSuite offers a proven omnichannel commerce solution on a unified cloud platform with access to real-time data to improve business agility."

iSP3 Builds on JD Edwards Relationship as a NetSuite Solution Provider

iSP3 (www.isp3.ca), a technology consulting firm that focuses on implementations of Oracle JD Edwards software, is a Platinum level member of Oracle PartnerNetwork that serves companies in the mining, building and construction, oil and gas, and government industries, as well as general business. Founded as a three-person firm in 2001 and based in Vancouver, Canada, iSP3 also has offices in Toronto and Seattle. The firm has consultants across North America and Latin America, and has completed more than 100 projects in the Americas as well as Eastern Europe and Asia. In recent years, the 23-person iSP3 has seen rising demand among both prospects and clients for cloud-based ERP. Now, as a NetSuite partner, iSP3 can meet that demand with NetSuite’s leading cloud ERP platform while providing the agility and scale its clients need to grow. Beyond ERP, NetSuite’s integrated capabilities for ecommerce, CRM, HR, PSA and other functions also give iSP3 the opportunity to expand its business into new industries and business processes.

“NetSuite’s cloud-based platform is a very good fit for iSP3 and our clients,” said William Liu, Senior Partner at iSP3. “The ability to deploy NetSuite’s unified system in a short period of time to gain ROI very quickly is extremely attractive.”

Launched in 2002, the NetSuite Solution Provider Program is the industry’s leading cloud channel partner program. Since its inception, NetSuite has been a leader in partner success, breaking new ground in building and executing on the leading model to make the channel successful with NetSuite. A top choice for partners who are building new cloud ERP practices or expanding an existing practice to meet the demand for cloud ERP, NetSuite has enabled partners to transform their business model and fully capitalize on the revenue growth opportunity of the cloud. The NetSuite Solution Provider Program delivers benefits that include highly attractive margins and support ranging from business planning, sales, marketing and professional services enablement to training and education. For more information about the NetSuite Solution Provider Program, visit www.netsuite.com/partners.

Contact Info
Christine Allen
Oracle NetSuite
603-743-4534
PR@netsuite.com
About NetSuite Alliance Program

The NetSuite Alliance Partner program provides business transformation consulting services as well as integration and implementation services that help customers get even more value from their NetSuite software. Alliance Partners are experts in their field and have a deep and unique understanding of NetSuite solutions. NetSuite provides Alliance Partners with a robust set of resources, certified training, and tools, enabling them to develop expertise around specific business functions, product areas, and industries so they can efficiently assist customers, differentiate their practices, and grow their business. For more information, please visit http://www.netsuite.com/portal/partners/alliance-partner-program.shtml.

About Oracle NetSuite

Oracle NetSuite pioneered the Cloud Computing revolution in 1998, establishing the world’s first company dedicated to delivering business applications over the internet. Today, it provides a suite of cloud-based financials / Enterprise Resource Planning (ERP), HR and omnichannel commerce software that runs the business of companies in more than 100 countries. For more information, please visit http://www.netsuite.com.

Follow NetSuite’s Cloud blog, Facebook page and @NetSuite Twitter handle for real-time updates.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

No journal messages available before the last reboot of your CentOS/RHEL system?

Yann Neuhaus - Tue, 2017-12-05 02:20

As you probably noticed, RedHat as well as CentOS switched to systemd with version 7 of their operating system release. This also means that instead of looking at /var/log/messages you are supposed to use journalctl to browse the messages of the operating system. One issue with that is that, by default, messages from before the last reboot of your system will not be available, which is probably not what you want.

Let's say I started my RedHat Linux system just now:

Last login: Tue Dec  5 09:12:34 2017 from 192.168.22.1
[root@rhel7 ~]$ uptime
 09:14:14 up 1 min,  1 user,  load average: 0.33, 0.15, 0.05
[root@rhel7 ~]$ date
Die Dez  5 09:14:15 CET 2017

Asking for any journal logs before that will not show anything:

[root@rhel7 ~]$ journalctl --help  | grep "\-\-since"
  -S --since=DATE          Show entries not older than the specified date
[root@rhel7 ~]$ journalctl --since "2017-12-04 00:00:00"
-- Logs begin at Die 2017-12-05 09:13:07 CET, end at Die 2017-12-05 09:14:38 CET. --
Dez 05 09:13:07 rhel7.localdomain systemd-journal[86]: Runtime journal is using 6.2M (max allowed 49.6M, trying to 
Dez 05 09:13:07 rhel7.localdomain kernel: Initializing cgroup subsys cpuset
Dez 05 09:13:07 rhel7.localdomain kernel: Initializing cgroup subsys cpu
Dez 05 09:13:07 rhel7.localdomain kernel: Initializing cgroup subsys cpuacct

Nothing for yesterday, which is bad. The issue here is the default configuration:

[root@rhel7 ~]$ cat /etc/systemd/journald.conf 
#  This file is part of systemd.
#
#  systemd is free software; you can redistribute it and/or modify it
#  under the terms of the GNU Lesser General Public License as published by
#  the Free Software Foundation; either version 2.1 of the License, or
#  (at your option) any later version.
#
# Entries in this file show the compile time defaults.
# You can change settings by editing this file.
# Defaults can be restored by simply deleting this file.
#
# See journald.conf(5) for details.

[Journal]
#Storage=auto
#Compress=yes
#Seal=yes
#SplitMode=uid
#SyncIntervalSec=5m
#RateLimitInterval=30s
#RateLimitBurst=1000
#SystemMaxUse=
#SystemKeepFree=
#SystemMaxFileSize=
#RuntimeMaxUse=
#RuntimeKeepFree=
#RuntimeMaxFileSize=
#MaxRetentionSec=
#MaxFileSec=1month
#ForwardToSyslog=yes
#ForwardToKMsg=no
#ForwardToConsole=no
#ForwardToWall=yes
#TTYPath=/dev/console
#MaxLevelStore=debug
#MaxLevelSyslog=debug
#MaxLevelKMsg=notice
#MaxLevelConsole=info
#MaxLevelWall=emerg

“Storage=auto” means that the journal will only be persistent if this directory exists (it does not in the default setup):

[root@rhel7 ~]$ ls /var/log/journal
ls: cannot access /var/log/journal: No such file or directory

As soon as this is created and the service is restarted the journal will be persistent and will survive a reboot:

[root@rhel7 ~]$ mkdir /var/log/journal
[root@rhel7 ~]$ systemctl restart systemd-journald.service
[root@rhel7 ~]$ ls -al /var/log/journal/
total 4
drwxr-xr-x.  3 root root   46  5. Dez 09:15 .
drwxr-xr-x. 10 root root 4096  5. Dez 09:15 ..
drwxr-xr-x.  2 root root   28  5. Dez 09:15 a473db3bada14e478442d99da55345e0
[root@rhel7 ~]$ ls -al /var/log/journal/a473db3bada14e478442d99da55345e0/
total 8192
drwxr-xr-x. 2 root root      28  5. Dez 09:15 .
drwxr-xr-x. 3 root root      46  5. Dez 09:15 ..
-rw-r-----. 1 root root 8388608  5. Dez 09:15 system.journal
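
A quick way to verify this after the next reboot is to list the boots the journal knows about and to query the previous one, for example:

[root@rhel7 ~]$ journalctl --list-boots
[root@rhel7 ~]$ journalctl -b -1
[root@rhel7 ~]$ journalctl --since "2017-12-04 00:00:00"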

Of course you should also have a look at the other parameters that control the size and rotation of the journal.
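
For example, a minimal journald.conf sketch (not the configuration used in this post; the values are just placeholders) that makes the journal persistent and caps its size and retention could look like this:

[Journal]
Storage=persistent
# cap the total disk space the persistent journal may use
SystemMaxUse=500M
# discard entries that are older than one month
MaxRetentionSec=1month

With Storage=persistent, systemd-journald creates /var/log/journal itself if it is missing, so this is an alternative to creating the directory by hand. As before, restart systemd-journald.service after editing the file.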


The article No journal messages available before the last reboot of your CentOS/RHEL system? first appeared on Blog dbi services.

https://medium.com/oracledevs

Senthil Rajendran - Mon, 2017-12-04 23:02

Oracle Developers

Aggregation of articles from Oracle engineers, Developer Champions, partners, and the developer community on all things Oracle Cloud and its technologies.


Oracle Cloud Conference @Bangalore

Senthil Rajendran - Mon, 2017-12-04 21:21

Attending Oracle Cloud Conference @Bangalore 2017

Dynamic SQL to get the value of the column which is formed by concatenating two strings.

Tom Kyte - Mon, 2017-12-04 18:06
Hi Team, I have a query like this: I will get the column name at run time, something like IF condition 1 then Column A. IF condition 2 then Column B. IF condition 3 then Column C. IF condition 4 then Column D. Once i get to know whi...
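
A minimal sketch of the usual approach (not from the thread; the table and column names here are made up): resolve the column name with a CASE expression, validate it, and then fetch the value with EXECUTE IMMEDIATE, keeping the key as a bind variable:

declare
  l_condition number := 2;      -- hypothetical input that selects the column
  l_id        number := 42;     -- hypothetical key value
  l_col       varchar2(30);
  l_value     varchar2(4000);
begin
  -- map the runtime condition to a column name
  l_col := case l_condition
             when 1 then 'COL_A'
             when 2 then 'COL_B'
             when 3 then 'COL_C'
             else 'COL_D'
           end;
  -- only the validated column name is concatenated; the key value stays a bind
  execute immediate
    'select ' || sys.dbms_assert.simple_sql_name(l_col) ||
    ' from my_table where id = :1'
    into l_value
    using l_id;
  sys.dbms_output.put_line(l_col || ' = ' || l_value);
end;
/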
Categories: DBA Blogs
